Pipegate

The joys of home ownership never stop giving, and I’ve been having some fun with my old nemesis: plumbing.

A few weeks back we got a rather curt letter from Wellington Water/Wellington City Council (WCC) advising us that they had detected a leak on our property at an unknown location and that they would fine us large amounts if it wasn’t rectified within 14 days. The letter gave no other useful information on how this was detected or how a home owner should go about finding said leak.

 

After following up via phone, it turns out they’ve been doing acoustic listening to the pipes and based on the audio taken at several different times they’re pretty certain there was a leak *somewhere*.

After doing some tests with our plumber, we were able to rule out the house being at fault. However, that left the 60m water pipe up to the street, an even bigger headache to replace than the under-house plumbing given it’s probably buried under concrete and trees.

The most likely cause of any leak for us is Duxquest plumbing, a known defective product from the 70s/80s. Thankfully all the Duxquest inside the house has been removed by previous owners, but we were very concerned that our main water pipe could also be Duxquest (turns out they used it for the main feeds as well).

We decided to dig a new trench ourselves to save money by not paying for a plumber’s expensive time spent digging trenches, and (strategically) started at the house end where there are the most joins in the pipe.

It’s going to be a long day…

Or maybe not – is that water squirting out of the ground??!?

So we got lucky very early on. We started digging right by the toby at the house, given any split was more likely to be towards the house and it’s also easier to dig there than at the other end, which is buried in concrete.

The ground on the surface wasn’t damp or wet, so we had no idea the leak was right below where we would start digging. It looks like a lot of the ground around the front of the house is sand/gravel infill, which resulted in the water draining away underground rather than coming to the surface. That being said, with the size of the leak I’m pretty amazed that it wasn’t a mud-bath at the surface.

Fffffff duxquest!!

The leak itself is in the Duxquest black joiner/branch pipe which comes off the main feed before the toby. It seems someone decided it would be a great idea to feed the garden pipes of the house from a fork *before* the main toby, so it can’t be turned off easily. That fork is also exactly where it split, meaning we couldn’t tell whether the leak was in this extension or the main pipe.

The thick grey pipe is the main water feed that goes to the toby (below the white cap to the right) and thankfully this dig confirms that it’s not Duxquest but more modern PVC which shouldn’t have any structural issues long term.

Finding the leak so quickly was good, but this still left me with a hole in the ground that would rapidly fill with water whenever the mains was turned back on. And being a weekend, I didn’t particularly want to have to call out an emergency plumber to seal the leak…

The good news is that the joiner used has the same screw fitting as a garden tap, which made it very easy to “cap” it by attaching a garden hose for the weekend!

Unscrewed

Hmm that looks oddly like a garden tap screw…

When number 8 wire doesn't suit, use pipe!

Huzzah!

 

Subsequently I’ve had the plumber come and replace all the remaining Duxquest under the house with modern PVC piping and copper joiners to eliminate a repeat of this headache. I also had the toby moved so it now sits before the split, making it possible to isolate the 60m water main to the house, which will make things a lot easier if we ever have a break in future.

You too, could have this stylish muddy hole for only $800!

 

I’m happy we got the leak fixed, but WCC made this way harder than it should have been. To date all my interactions with WCC have been quite positive (local government being helpful, it’s crazy!), but their council-owned entity Wellington Water leaves a lot to be desired when it comes to communication.

Despite being in communication with the company that detected the leak and giving updates on our repairs, we continued to get threatening form letters detailing all the fines in store for us. Then, when we finally completed the repairs, we had zero further communication or even acknowledgment from them.

At least it’s just fixed now and I shouldn’t have any plumbing issues to worry about for a while… in theory.


Welly

Been getting out and enjoying Wellington lately, it should be a great summer!


My IAM policy is correct, but awscli isn’t working?

I ran into a weird issue recently where a single AWS EC2 instance was failing to work properly with its IAM role for EC2. Whilst the role allowed access to the DescribeInstances action, awscli would repeatedly claim it didn’t have permission to perform that action.

For a moment I was thinking there was some bug/issue with my box and was readying the terminate instance button, but decided to check out the --debug output to see if it was actually loading the IAM role for EC2 or not.

$ aws ec2 describe-instances --instance-ids i-hiandre --region 'ap-southeast-2' --debug
2015-11-16 11:53:20,576 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: env
2015-11-16 11:53:20,576 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: assume-role
2015-11-16 11:53:20,576 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: shared-credentials-file
2015-11-16 11:53:20,576 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: config-file
2015-11-16 11:53:20,576 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: ec2-credentials-file
2015-11-16 11:53:20,576 - MainThread - botocore.credentials - DEBUG - Looking for credentials via: boto-config
2015-11-16 11:53:20,576 - MainThread - botocore.credentials - INFO - Found credentials in boto config file: /etc/boto.cfg

Turns out in my case, there was a /etc/boto.cfg file with a different set of credentials – and since Boto gives preference to on-disk configuration over IAM Roles for EC2, it resulted in the wrong access credentials being used.

The --debug param is a handy one to remember if you’re having weird permissions issues and want to check exactly which credentials are being used and where they’re coming from.
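If you hit something similar, a quick sanity check is to look for stray on-disk credential files and confirm which identity awscli actually ends up resolving (a rough sketch; note the sts get-caller-identity subcommand is only available in newer awscli releases):

# Look for on-disk credential/config files that take precedence over the IAM role
ls -l /etc/boto.cfg ~/.boto ~/.aws/credentials ~/.aws/config 2>/dev/null

# Confirm which identity awscli resolves to (instance role ARN vs static user keys)
aws sts get-caller-identity

# If a stray file is the culprit, move it out of the way and re-test
sudo mv /etc/boto.cfg /etc/boto.cfg.disabled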


The Proprietary Renaissance

noun, re·nais·sance often attributive
\ˌre-nə-ˈsän(t)s, -ˈzän(t)s, -ˈsäⁿs, -ˈzäⁿs, ˈre-nə-ˌ,
chiefly British ri-ˈnā-sən(t)s\

: a situation or period of time when there is a new interest in
something that has not been popular in a long time

: a period of new growth or activity

- Merriam-Webster

We are entering a Proprietary Software Renaissance.

It didn’t happen overnight. Hackers didn’t turn on their sticker-covered laptops to find them suddenly running the latest proprietary release from Redmond. GNU/Linux wasn’t outlawed by the NSA. Mailing list flames are as strong as ever.

But somewhere in the past several years we started living in a future where the prospects of proprietary systems seem a lot brighter than those of Free Software/Open Source[note]”Free Software” was always a bit of a scary label to use around companies, despite the fact that Free-as-in-Freedom does not have to equal Free-as-in-Beer, so the term “Open Source” was developed to explain the idea of having software with the source code available under some kind of share-alike and modification-allowed license. But whilst software can be both Free Software and Open Source software, there are differences. The major difference is that Open Source lowers the importance of Freedom and instead focuses on producing technically high quality software in a collaborative fashion. Because I’m more concerned about Freedoms in this post, I’m using the term Free Software instead of Open Source[/note] platforms.

This isn’t saying that Free Software isn’t successful. It’s been immensely successful; without Free platforms like BSD, GNU and Linux, our current world would look quite different.

Certainly the success and development of the internet and modern technology companies have been driven predominantly by these Free platforms, which enabled them to break past some of the existing proprietary gatekeepers in the process. And it’s not limited to new-age companies; even the traditional corporates have adopted Free Software in their businesses. Red Hat claims 90% of Fortune 500 companies rely on Red Hat’s open source products.[note]It’s important to note that commercial success of Free Software comes more under the label of Open Source software, which is marketed more as providing an escape from vendor lock-in and granting the ability to customise for your business, rather than the pro-community, pro-freedom aspects of Free Software.[/note]

Because of the freedom to innovate, Free Software has succeeded amazingly well in the technology sector as businesses adopt components to make their products more reliable or more competitive.

 

But it’s failed with average users

Not due to bad technology. Not due to lack of proponents pushing awareness of Free Software. The awareness of Free Software is out there and advocacy is still strong. Even the US government has been helping raise awareness of the importance of Free Software lately.

But Freedoms mean little to those who have them until they’re lost. Many people won’t ever exercise their right to protest, but as soon as that right is removed, that right becomes more important than life itself.

Users don’t see how Free Software has given them the free and open internet that they currently have rather than captive portals controlled by a select few large American corporations. Users don’t see how the cost and quality of their favourite device has been improved thanks to Free Software. Users don’t see how bad the situation would have been if we were stuck in a world where a single vendor had majority control.

For a long time there was the belief that the success of the GNU/Linux platform in the server space would start to translate to the desktop sector and we would see GNU/Linux systems on sale from mainstream vendors.

But this never really eventuated. A few companies, Dell included, toyed with very select GNU/Linux-enabled product ranges, but never offered it in a serious way across their fleet.

Part of the problem was that despite its zero-cost licensing, the choice of GNU/Linux made computers more expensive for the vendors, since they could no longer profit from the much-hated bloatware they preloaded the computers with, which subsidised the licensing costs from Microsoft[note]Higher support costs would also be likely, but this would have resolved itself with time if the market share grew to a level where the support burden could be spread across enough customers.[/note]

And consumers voted with their wallets – Windows was “free”[note]As in really bad beer[/note] when buying an OEM machine thanks to the bloatware subsidies making it cheaper than GNU/Linux, it ran more consumer applications and games, and it was a more recognised brand. Sure, it restricted your freedoms and was technologically inferior, but if you just cared about running apps and not about freedom, it was the more logical choice.

Sadly consumers don’t see a Free Software device and think “this is a better quality device since it respects more of my freedoms”; they look at the cost of the device and the apps available for it and make a decision accordingly.

And at the end of the day, most users[note]Readers of this blog probably excluded.[/note] aren’t interested in running an operating system. They want to run applications that enable them to get something done, and there’s no difference for them whether that application is running on GNU/Linux, BSD, MacOS or Windows.[note]Stability and quality of the OS can play a big part if it’s too poor, but to Microsoft’s credit, Windows has come a long way in both security and stability since the Windows Vista days. I’d consider all 3 major platforms at a “good enough” stage for most users, where it’s not going to be a factor any more.[/note]

 

We succeeded in the mobile space…

The embedded and mobile computing space played out a bit differently compared to the desktop sector. The traditional proprietary mobile platforms were extremely expensive both in licensing and in regards to development tools. With the lack of a simple major platform (like Windows on desktop) the application market was fragmented, so consumers had no specific reason to choose one OS over another. Free Software wasn’t really a player in any measurable sense, but most of the proprietary platforms were in an equally bad situation.

The mobile application market didn’t really start developing properly until Apple released iOS[note]Of course it’s important to note that whilst Apple’s iOS uses a lot of Free Software at its core, it’s a fundamentally un-free platform – just less restrictive and more accessible than many of the proprietary platforms before it.[/note] and Google released Android – suddenly there were two massive players with platforms that anyone could easily join and develop for.

iOS took a premium slice of the market and Android is busy mopping up all the other potential customers. And because the app ecosystem keeps growing, it’s harder and harder to compete with; there’s barely any room left for a third player like Microsoft, and even less for others like Blackberry to get a stake in the market.

It’s not without a slice of irony that Microsoft (who dominated the desktop space for so long due to their app ecosystem) is now being thrashed in the mobile space for the exact same reason by its competitors. But this is the way of the technology world: consumers don’t really care about the OS, they care about the device, the apps and the price. And Microsoft has found itself late in the game without enough apps – and good hardware and pricing isn’t enough.

 

 

… but we’re losing Freedoms there still

Of the three major mobile players – Apple, Google and a distant third, Microsoft – only Google’s Android operating system is considered Free Software. And whilst Android has the vast majority of market share currently, if either Apple or Microsoft make gains it will probably come at the cost of Free platform market share.

“Free” is also an interesting word to describe Android. Many would consider it a poster child for Free Software success, but actually it’s more a textbook example of how companies use Free Software to get a head start in the market and then, once they have obtained market share, start to introduce lock-in.

Sure there’s Free Software in the core, but all the value-added bits are being built as proprietary components, tying users and apps to the Google ecosystem. It’s more like MacOS: sure, the core is free, but you need all the proprietary bits to make it actually useful.

It’s somewhat telling that the only Android distributions are either blessed by Google and carry Google’s proprietary software additions (Google Play services, the Play Store, Google Search, etc) or tend to be grassroots hacker-based projects.

The only really notable non-Google Android fork is Amazon’s FireOS, which runs standard Android apps but has none of the proprietary Google ecosystem. Of course it’s hardly an altruistic project from Amazon – they simply want to profit from consumers themselves rather than giving Google a slice of the pie.

What FireOS does show, however, is how hard it is to really innovate and customise Android – you can do whatever you want until you do anything that Google doesn’t want, like not shipping their proprietary apps or competing with them on products like search. As soon as that happens, you’re locked out of the application store ecosystem and it is very, very hard to make up that market share.

Amazon is a huge player and developers can pretty much just port over their apps without any real work, but they’re still lagging behind the growth of Google’s app store. A free-er Android fork is a nice idea, but based on how Amazon is faring, the lack of applications makes it a lot harder than one might expect – the simple barrier of not having access to the app store becomes a massive competitive disadvantage.

Over time it’s likely that more and more of Android will be replaced by various proprietary components. Google doesn’t need to have Android as a Free platform any more now that they’ve obtained the bulk of the market share – they can tell their device manufacturers how they want Android to be packaged to best suit Google’s business objectives. The previously Free Software apps that shipped with the OS can be left to stagnate and in their place, better but proprietary Google versions can be shipped.

And this is just the Android OS and how it’s been treated by its parent. Device manufacturers are even worse, with companies shipping devices with binary drivers[note]It’s argued that this is actually a violation of the Linux kernel’s GPLv2 license, but it’s a common daily practice.[/note] and firmware update DRM that prevent users from loading on their own builds of Android, resulting in devices that are outdated with no hope of updating only 6 months after leaving the shop.

This was never the dream for anyone wanting to run a Free Software mobile device. And whilst Android has left us better off than we would be in an iOS/Windows Mobile/Blackberry/Symbian dominated world, there’s a long way to go.

 

 

Freedom funds Fragmentation

If an onslaught from companies wanting to create proprietary walled gardens wasn’t enough, we have our own community to contend with. Unlike a company, there isn’t a single overarching chain of command in the Free Software community that dictates the path that software will evolve along, so developments take place organically based on specific preferences.

This has worked very well in many cases – one of the upsides of not having a single forced vision is that if that vision turns out to be wrong, the whole movement isn’t damaged. But it also causes fragmentation, seen very visibly in the state of the various Free Software desktops such as KDE, GNOME and XFCE, and the countless GNU/Linux distributions essentially trying to do the same thing rather than pooling resources to create a single robust platform.

A company like Microsoft or Apple can dictate a particular vision and see the whole company move towards it. They don’t tend to make good vs bad decisions any better than communities do, but the lack of fragmentation is often more important than making the technically perfect decision, since at least everyone is focused on the one direction.

The rise of MacOS, especially among Free and Open Source developers, is a particularly interesting example. On a technical level, GNU/Linux is a better operating system – vastly better driver support, no tie to a particular hardware vendor, a huge selection of applications easily available via the package managers (think app store on steroids) and arguably more stability at the kernel level.

Yet despite this, more and more developers and engineers are now using MacOS. The fact that the community has adapted technologies from Free operating systems and brought them to the MacOS platform, such as with Brew and Cask, rather than working to give the Free Software platforms the stability and desktop maturity they were missing, is a bit of a damning statement.

I believe the reason for this is that Apple has a strong vision of what the desktop OS should be like, and whilst it’s far from perfect, it delivers a single consistent high quality standard. You don’t need to worry about that weird bug that only occurs on Ubuntu 14.04 under GNOME, nor do you have to figure out as a vendor how to package an app for several different competing package management systems. And you don’t spend months arguing over init systems.

In short, it makes life simple for people. And when Freedom isn’t a factor in your decisions, MacOS provides the stability and simplicity that GNU/Linux doesn’t, whilst still providing a platform suitable for running the tools loved by engineers, such as the various GNU utilities.[note]It’s worth noting that Windows could have been this platform, but Microsoft basically stopped innovating – MacOS isn’t really any more Free than Windows, but it is better engineered thanks to its BSD roots. Microsoft could fix this if they tried, and with the new direction under Satya Nadella, who knows what could happen.[/note]

Attending a conference of IT engineers, even something pro-Free like the excellent linux.conf.au, there is a visible sea of Apple devices running MacOS.[note]Whilst there are a few who run Free Software OSes on their Apple hardware, the majority of these Apple users tend to be running MacOS – because it meets their application needs.[/note] This is a really clear failing of the Free Software desktop/laptop: if we can’t even get our own supportive community excited and dedicated to running a Free platform, how do we expect to encourage anyone else?

Given this fragmentation and lack of singular vision, I can’t see Free Software making any significant leaps in adoption in the traditional desktop/laptop market.[note]Which is suffering enough as it is with the onslaught of mobile devices and the fact that something that fits in your hand has more than enough capabilities for a user who just wants to browse the web and send some emails. Why buy a traditional computer in this world?[/note]

There’s potentially space for one really strong GNU/Linux distribution to stake a place in the power-user/engineer’s toolbox as the desktop of choice, but we’d have to get over our fragmentation problem first and work together on a strong enough common platform, and that seems unlikely. And I can’t see it unseating Microsoft Windows or MacOS as the non-power user’s platform of choice. It’s more likely we’ll see Android or ChromeOS take a bigger slice of the desktop market than traditional GNU/Linux distributions as mobile and desktop computing converge.

 

 

But Free Software on the server is still strong right?

Sure. For now.

I foresee GNU/Linux slotting into much more of a commodity/utility space (some might say it already has) and fulfilling the role of cloud glue – the thing that allows your hypervisor to actually run your application. Whether it’s to run some containers, or as the backbone of a server-less compute platform[note]We *really* need a better name for this. It’s cloud all over again…[/note], the operating system is going to take a much more background role in the coming decade of compute.

Most of us already treat our hardware as an appliance and don’t pay too much attention to it. Do you even still remember how much RAM your cellphone has? Ten years ago it would have been unthinkable for any self-respecting geek to not be able to recite the spec of every component in their computer, but we’ve reached a point where it doesn’t matter any more. 16GB of RAM? 32GB of RAM? Either way it runs your browser, Twitter client and chat client perfectly well. It just doesn’t matter.

The same is becoming true for operating systems and distributions. It doesn’t matter if you’re running Ubuntu Linux or CentOS Linux, because either way your Java/Rails/Node/HipsterLang app is going to run on it. Either way your configuration manager is going to be able to configure it. Either way it runs on your favourite cloud provider. Either way there’s a licensing cost of $0. It’s a commodity, like water.[note]Yeah OK, this analogy fails if you’re living in one of the worse-off parts of the world where fresh water isn’t a free human right, but if you care about the ethics of software, you probably have it pretty good.[/note]

The heyday of the GNU/Linux distribution has come and gone. Other than specific requirements around platform lifecycles and package version variations, the mainstream distributions are all pretty much interchangeable.

Even support and patch lifecycles, the golden reasons for commercial GNU/Linux distributions, are becoming less and less appealing every year. Traditional support isn’t so useful when the cloud vendor provides excellent support and recommendations around running the platform.

I’ve raised AWS cases before reporting kernel/VM issues and had their enterprise support go as far as sending me links about bugs in the version of the Linux kernel shipped by Ubuntu, and then sending me an actual patched version ready to install. And of course I could choose to use Amazon Linux and get support for the full stack from hypervisor to application if I wanted to reduce the risk of having a third-party vendor’s software.

Any competent DevOps team can debug and resolve most user-space issues themselves, which tends to be something you need to do to get any kind of vendor support in the first place anyway, otherwise it’s a pretty painful exercise. So what’s the point of a GNU/Linux vendor in this new world? It seems redundant when the support is available from another layer.

 

But this is still good news for Free Platforms right? Even if we care less about them and selection reduces, Freedom still prevails?

Maybe not.

My concern is that once the OS becomes a commodity, PaaS is the next most logical step. If you don’t care about the OS and just want your app to run, why not use a PaaS product? Why not consider server-less computing? You clearly don’t run the OS for the sake of the OS[note]See I’m the kind of geek who would run a GNU/Linux, or BSD or Minix box purely because it’s beautiful engineering and infrastructure itself is beautiful, but we’re a rare breed of crazy motherfuckers and we need to remember the rest of the world (whilst crazy) isn’t this crazy. Users want apps, not OSes. [/note].

PaaS platforms are seriously worth considering for new applications and make complete business sense, especially if you aren’t invested in large scale traditional IT infrastructure operations. No sysadmin team? Mix in a few DevOps-focused engineers with your full-stack[note]Full stack is a myth and in reality specialization has its purpose, but I do favour teams that are multi-skilled rather than single focus. “My specialty is Java, but I’m not bad with Node.JS and a Linux box” is a much better hire than “I don’t need to care about anything but Java”. “Full Stack” is probably a better (even if incorrect) term than “person who does lots of computer stuff and is useful”.[/note] development teams and hit the ground running with PaaS.

But PaaS is a very good platform for lock-in.

It’s not always – a PaaS that takes a Java JAR file and runs it is (by its very nature) going to be quite portable and has minimal lock-in, as it’s just a wrapper around the JVM offering a deployer and some management tools.

As customers, we find this sort of PaaS useful, but we’ve kind of solved the deployment and management problem already with configuration management, containers, cloud APIs etc. We demand more from our providers – what will you do to make my life as a developer easier? How can I integrate with new products/platforms faster?

We call out, beg for, even demand additional API integration and functionality to make our lives easier. And those features tend to be proprietary and vendor-specific by their very nature, as they expose hooks into vendor-specific platforms and tools. Any application built on such a PaaS becomes more expensive and difficult to port between providers, let alone to a self-hosted setup.

Once you’re running on a PaaS like this, suddenly it doesn’t matter what OS is running underneath. Whilst the GNU/Linux OS might benefit from some patches upstreamed by vendors building PaaS products on the platform, changes that provide a specific competitive advantage to your provider might not necessarily make their way back upstream. Many Free Software licenses like GPLv2 don’t require the source code be shared for a hosted service.[note]This is why the GPLv3 and AGPL exist, to plug gaps in the modern way applications are run, but lots of key software is on more permissive licenses.[/note]

Your app could even end up on a completely proprietary platform, maybe some kind of Unikernel built for a single purpose, entirely proprietary and custom to a single provider. How long until we see someone defining a language unique to their platform? Apple defined their own language recently; a custom language for a specific cloud provider isn’t infeasible as more application workloads move into PaaS systems.

Microsoft can’t make Windows Server as good as GNU/Linux for all-round purposes, but they can easily make it a formidable force in the PaaS world where you don’t care about wanting the bash shell, or needing SSH or the various GNU utilities that make server administration on GNU/Linux so enjoyable. If Azure runs an app for less money and just as fast as Amazon, do you care what platform is underneath?

I don’t believe the GNU/Linux server is going to disappear overnight, or even in the next decade. What I do believe, is that we’ll see a change in the way we consider server-side programming and an increasing adoption of proprietary PaaS platforms built into the major public cloud platforms. And the more we adopt these PaaS platforms, the greater the cost to Free Software in the form of reduced innovation going back into our Free Platforms and the lock-in incurred by users of these PaaS platforms.

 

Any positives or is the future simply bleak?

I don’t know if it’s a positive or negative, but it’s worth noting that the proprietary companies of 2015 are a somewhat different beast from the companies of the ’90s.

In our current (dystopian?) reality, services are available by the second, minute, hour or month, and common functionality is a race-to-the-bottom commodity.[note]If you’re a VPS/cloud provider and not one of the top 3, I’d be worried right now. You can’t compete on pricing with their scale.[/note] There are usually no massive licensing costs to start using a platform; the modern software companies just need a credit card added to an account, or a small fee (like Apple’s $99 developer fee), to get you started with their technologies.

This is a positive in that at least these proprietary technologies and platforms are more fairly accessible to anyone and not as restricted as in the past. But it also makes the products and platforms of these companies extra dangerous, because they’re nowhere near as unreasonable as the proprietary villains of old. And whilst there are plenty of members of the Free Software community attracted to Free Software because of the Freedom aspect, we cannot afford to ignore the fact that a huge amount of user adoption and developer contribution occurred simply because it was zero-cost and easily accessible.

If Microsoft Windows Server had been a $5 download with no stupid incomprehensible license model and the ability to download via a click of a button, would we have had such strong GNU/Linux adoption today? Or would a lot more innovation have occurred on Windows instead?[note]It’s a model that Microsoft seems to be playing with for their mobile operating system recently. They made the decision to make Windows zero-cost for small devices, putting it on an equal footing with Android, which they hope will pay off by making its adoption more attractive to device manufacturers, and in turn to application developers.[/note]

I fear that the extremely low barrier to entry of current proprietary technologies is driving us towards a less Free world. And what’s scary this time is that the proprietary offerings look good in terms of polish, price and accessibility.

An over-priced or restrictive product makes it easy to promote the merits of Free Software as an alternative, but a proprietary solution that’s cheap enough, technically as good or better, and just as accessible as Free Software is a much harder sell.

And whilst I believe Freedom should be enough to drive choice on its own merits, it’s a hard sell. Companies are worried about building solutions as fast as possible and saving costs, not Freedom.[note]Yes, there’s a benefit in Freedom to companies in that it offers no lock-in, but companies are exceptionally poor at analysing the cost of lock-in and it’s very hard to allocate numbers against it in a business case.[/note] Most users are worried about whether they can Facebook on their phone, not whether their browser is Free as in Freedom. Hell, it’s hard enough to get people to care if a boatload of refugees sinks off the coast next to them, let alone the specifics of morality and freedom of software. We suck at understanding the consequences of anything that doesn’t impact us directly in the next 5 minutes.

 

Can we fix this?

I don’t see an easy solution to addressing the issues raised in this post. To some degree I believe we’re going to have to accept an increased amount of proprietary technology in our computing environments, and the reality of the majority of the world’s computing capacity being in the hands of a few select public cloud providers seems unavoidable given the economies of scale.

What we can do is try to ensure that we choose Free Software solutions as much as possible, or at least solutions that contribute back into Free Software.

And when we have to use proprietary solutions, avoid ones that encourage lock-in by using generic, interchangeable solutions rather than vendor-specific ones. You may use a proprietary cloud provider[note]An argument could be made that a non-free cloud provider is akin to using non-free hardware and we’ve just replaced one evil with another, but open hardware is a whole other rant for another day.[/note] but run a Free operating system on top of it, knowing that you can always select another virtual machine provider – the virtual machine layer being a commodity.

I strongly believe that for a truly free society, we need Free Software to allow us to prevent being controlled or restricted by any one particular company, organisation or government.

And whilst a lot of people don’t always buy this argument for Free Software, you can’t deny that our history as a species isn’t particularly good at respecting individual freedoms, even in traditionally “free” societies. Whilst politics tends to change at a pace a bit slower than technology, change it does – and today’s bastion of freedom isn’t always going to be the same in 40 years’ time.

I’m not going to say don’t use any proprietary solutions – to compete in the current world you need to accept you’re going to have to use them at some level. The current technology scene basically requires you to use a public cloud provider to stay competitive in the market and use various proprietary technologies. But make careful choices and don’t lock yourself in needlessly. And contribute to Free Software where you can, even if it’s just that small internal app you wrote that nobody would ever find useful.

If you’re a Free Software supporter and proponent – awesome, please keep doing so, the world needs more of you. But if you’re lacking direction I’d suggest focusing your energies around privacy and ensuring we can have private computing no matter the platform behind it.

We are in a world where our friends and families will use proprietary platforms no matter what we say or offer as an alternative. As computer geeks we tend to shy away from politics and “patch” anything we think is dumb with technology. Data retention laws? “Meh, my communications are fully encrypted”. Remote snooping? “Doesn’t matter, Free Software means no backdoors[note]Well in theory; some members of the infosec community will disagree…[/note]”. We are smart, smarter than the people who write laws, smarter than the average user.

But we’re not so smart that we’ve been able to make a platform that is so much better than our proprietary competitors and bring Freedom to the majority of computer users. We’re not so smart that we’ve figured out how to make GPG email work smoothly for everyone. We’re not so smart that we were able to prevent the NSA spying on millions of Americans.

We need to realise that ignoring what’s happening in the real world has real impact for those not able to use our free platforms (for whatever reason) and we need to focus some attention on helping and protecting society as a whole, not just our own technology community.

A purely Free Software world would be perfect. But a world where we have a mix of Free and proprietary solutions is a lot more palatable if we know companies and governments have very tight laws and technological restrictions around what they can and cannot do.

I’d like to know for certain that my friend’s iPhone isn’t recording what we’re saying. I’d like to make sure someone in a religiously charged, restricted country can safely express themselves to their friends without fear of the secret police. I’d like to know that governments can’t spy on political adversaries. I’d like to know that the police aren’t scanning people’s private data to dig up dirt and misdemeanors to prosecute them with. I wish these were theoretical examples, but they happen all the time – and those are just the ones we know about.

And I’d like to know that these freedoms are secure not simply because the laws are written to say “don’t do it”, but rather because we make it impossible for proprietary software companies to even have that capability to give to a corrupt or malicious government, using technology such as client-side encryption[note]Interestingly, Apple has been showing some positive moves in this space recently, although they have a long way to go.[/note] and OTR.

I don’t believe it’s possible to make a fully Free Software world. I wish it were, and I would love that so much. But based on current trends we are going to enter a world where proprietary technologies and massive cloud companies play a much bigger part, whether we like it or not. And we should work to make that world as good as we can for all involved.

 


Your cloud pricing isn’t webscale

Thankfully in 2015 most (but not all) proprietary software providers have moved away from the archaic ideology of software being licensed by the CPU core – a concept that reflected the value and importance of systems back when you were buying physical hardware, but rendered completely meaningless by cloud and virtualisation providers.

Taking its place came the subscription model, popularised by Software-as-a-Service (or “cloud”) products. The benefits are attractive – regular income via customer renewal payments, flexibility for customers wanting to change the level of product or number of systems covered, and no CAPEX headaches in acquiring new products to use.

Clients win, vendors win, everyone is happy!

Or maybe not.

 

Whilst the horrible price-by-CPU model has died off, a new model has emerged – price by server. This model assumes that the more servers a customer has, the bigger they are and the more we should charge them.

The model makes some sense in a traditional virtualised environment (think VMWare) where boxes are sliced up and a client runs only as many as they need. You might only have a total of two servers for your enterprise application – primary and DR – each spec’ed appropriately to handle the max volume of user requests.

But the model fails horribly when clients start proper cloud adoption. Suddenly that one big server gets sliced up into 10 small servers which come and go by the hour as they’re needed to meet demand.

DevOps techniques such as configuration management suddenly turn the effort of running dozens of servers into the same as running a single machine; there’s no longer any reason to constrain yourself to one box.

It gets worse if the client decides to adopt microservices, where each application gets split off into its own server (or container, aka Docker/CoreOS). And it’s going to get very weird when we start using compute-less computing more, with services like Lambda and Hoist, because who knows how many server licenses you need to run an application that doesn’t even run on a server that you control?

 

Really the per-server model for pricing is as bad as the per-core model, because it no longer has any reflection on the size of an organisation, the amount they’re using a product and, most importantly, the value they’re obtaining from the product.

So what’s the alternative? SaaS products tend to charge per-user, but the model doesn’t always work well for infrastructure tools. You could be running monitoring for a large company with 1,000 servers but only have 3 user accounts for a small sysadmin team, which doesn’t really work for the vendor.

Some products can charge based on volume or API calls, but even this is risky. A heavy microservice architecture would result in a large number of HTTP calls between applications, so you can hardly say an app with 10,000 req/min is getting 4x the value compared to a client with a 2,500 req/min application – it could be all internal API calls.

 

To give an example of how painful the current world of subscription licensing is with modern computing, let’s conduct a thought exercise and have a look at the current pricing model of some popular platforms.

Let’s go with creating a startup. I’m going to run a small SaaS app in my spare time, so I need a bit of compute, but also need business-level tools for monitoring and debugging so I can ensure quality as my startup grows and get notified if something breaks.

First up I need compute. Modern cloud compute providers *understand* subscription pricing. Their models are brilliantly engineered to offer a price point for everyone. Whether you want a random jump box for $2/month or a $2000/month massive high compute monster to crunch your big-data-peak-hipster-NoSQL dataset, they can deliver the product at the price point you want.

Let’s grab a basic Digital Ocean box. Well actually, let’s grab 2, since we’re trying to make a redundant SaaS product. But we’re a cheap-as-chips startup, so let’s grab 2x $5/mo boxes.


Ok, so far we’ve spent $10/month for our two servers. And whilst Digital Ocean is pretty awesome, our code is going to be pretty crap since we used a bunch of high/drunk (maybe both?) interns to write our PHP code. So we should get a real-time application monitoring product, like Newrelic APM.


Woot! Newrelic have a free tier, which is great news for our SaaS application – but actually it’s not really that useful: it can’t do much tracing and only keeps 24 hours of history. Certainly not enough to debug anything more serious than my WordPress blog.

I’ll need the pro account to get anything useful, so let’s add a whopping $149/mo – but actually make that $298/mo since we have two servers. Great value really. :-/

 

Next we probably need some kind of paging for on-call when our app blows up horribly at 4am, like it undoubtedly will. PagerDuty is one of the popular market leaders currently with a good reputation, so let’s roll with them.


Hmm, I guess that $9/mo isn’t too bad, although it’s essentially what I’m paying ($10/mo) for the compute itself. Except that it’s kinda useless, since it covers the USA and their friendly neighbour only and excludes us down under. So let’s go with the $29/mo plan to get something that actually works. $29/mo is a bit much for a $10/mo compute box really, but hey, it looks great next to NewRelic’s pricing…

 

Remembering that my SaaS app is going to be buggier than Windows Vista, I should probably get some error handling set up. That $298/mo Newrelic APM doesn’t include any kind of good error handler, so we should also go and get another market leader, Raygun, for our error reporting and tracking.


For a small company this isn’t bad value really, given you get 5 different apps and any number of muppets working with you can get on board. But it’s still looking ridiculous compared to my $10/mo compute cost.

So what’s the total damage:

Compute: $10/month
Monitoring: $371/month

Ouch! Now maybe as a startup I’ll write off that extra money as an investment into getting a good quality product, but it’s a far cry from the days when someone could launch a new product on a shoestring budget in their spare time from their uni laptop.

 

Let’s look at the same thing from the perspective of a large enterprise. I’ve got a mission critical business application and it requires a 20 core machine with 64GB of RAM. And of course I need two of them for when Java inevitably runs out of heap because the business let muppets feed garbage from their IDE directly into the JVM and expected some kind of software to actually appear as a result.

That compute is going to cost me $640/mo per machine – so $1280/mo total. And all the other bits, Newrelic, Raygun, PagerDuty? Still that same $371/mo!

Compute: $1280/month
Monitoring: $371/month

It’s not hard to imagine that the large enterprise is getting much more value out of those services than the small startup and can clearly afford to pay for that in relation to the compute they’re consuming. But the pricing model doesn’t make that distinction.

 

So given that we now know that per-core pricing is terrible, per-server pricing is terrible and (at least for infrastructure tools) per-user pricing is terrible, what’s the solution?

“Cloud Spend Licensing” [1]

[1] A term I’ve just made up, but sounds like something Gartner spits out.

With Cloud Spend Licensing, the amount charged reflects the amount you spend on compute – this is a much more accurate indicator of the size of an organisation and value being derived from a product than cores or servers or users.

But how does a vendor know what this spend is? This problem is solving itself thanks to compute consumers starting to cluster around a few major public cloud players, the top three being Amazon (AWS), Microsoft (Azure) and Google (Compute Engine).

It would not be technically complicated to implement support for these major providers (and maybe a smattering of smaller ones like Heroku, Digital Ocean and Linode) to use their APIs to suck down service consumption/usage data and figure out a client’s compute spend in the past month.
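As a rough sketch of the idea (assuming a billing API such as AWS Cost Explorer, which postdates this post, and a purely hypothetical vendor licensing endpoint), the integration could be as simple as:

# Sum last month's compute spend via the cloud provider's billing API
# (AWS Cost Explorer shown here; other providers would need their own equivalent)
SPEND=$(aws ce get-cost-and-usage \
  --time-period Start=2015-10-01,End=2015-11-01 \
  --granularity MONTHLY \
  --metrics UnblendedCost \
  --query 'ResultsByTime[0].Total.UnblendedCost.Amount' \
  --output text)

# Report the figure to the vendor's (hypothetical) licensing API, which bills a
# percentage of compute spend rather than a per-server or per-core fee
curl -s -X POST "https://api.vendor.example.com/v1/license/usage" \
  -H "Authorization: Bearer $VENDOR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d "{\"monthly_compute_spend_usd\": ${SPEND}}"

The point isn’t the specific API – it’s that a single spend figure scales naturally with the customer, unlike counting cores, servers or users.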

For customers who can’t (still on VMWare?) or don’t want to provide this data, there can always be a fallback to a more traditional pricing model, whether it be cores, servers or some other negotiation (“enterprise deal”).

 

 

How would this look?

In our above example, for our enterprise compute bill ($1280/mo) the equivalent amount spent on the monitoring products was 23% for Newrelic, 3% for Raygun and 2.2% for PagerDuty (total of 28.2%). Let’s make the assumption this pricing is reasonable for the value of the products gained for the sake of demonstration (glares at Newrelic).

When applied to our $10/month SaaS startup, the bill for these products would be an additional $2.82/month. This may seem so cheap that there will be an incentive to set a minimum price, but it’s vital to avoid doing so:

  • $2.82/mo means anyone starting up a new service uses your product. Because why not, it’s pocket change. That uni student working on the next big thing will use you. The receptionist writing her next mobile app success in her spare time will use you. An engineer in a massive enterprise will use you to quickly POC a future product on their personal credit card.
  • $2.82/mo might only just cover the cost of the service, but you’re not making any profit if they couldn’t afford to use it in the first place. The next best thing to profit is market share – provided that market share has a conversion path to profit in future (something some startups seem to forget, eh Twitter?).
  • $2.82/mo means IT pros use your product on their home servers for fun and then take their learning back to the enterprise. Every one of the providers above should have a ~ $10/year offering for IT pros to use and get hooked on their product, but they don’t. Newrelic is the closest with their free tier. No prizes if you guess which product I use on my personal servers. Definitely no prizes if you guess which product I can sell the benefits of the most to management at work.

 

But what about real earnings?

As our startup grows and gets bigger, it doesn’t matter if we add more servers or upsize the existing ones to bigger servers – the amount we pay for the related support applications is always proportionate.

It also caters for the emerging trend of running systems for limited hours or using spot prices – clients and vendors don’t have to worry about figuring out how it fits into the pricing model; instead, the scale of your compute consumption sets the price.

Suddenly that $2.82/mo becomes $56.40/mo (28.2% of a $200/mo compute bill) when the startup gets successful and starts running a few computers with actual specs. One day it becomes $371 when they’re running $1280/mo of compute like the big enterprise. And it goes up from there.

 

I’m not a business analyst and “Cloud Spend Licensing” may not be the best solution, but goddamn there has to be a more sensible approach than believing someone will spend $371/mo for their $10/mo compute setup. And I’d like to get to that future sooner rather than later please, because there’s a lot of cool stuff out there that I’d like to experiment with more in my own time – and that’s good for both myself and vendors.

 

Other thoughts:

  • “I don’t want vendors to see all my compute spend details” – This would be easily solved by cloud providers exposing the right kind of APIs for this purpose, eg “grant vendor XYZ the ability to see sum compute cost per month, but no details on what it is”.
  • “I’ll split my compute into lots of accounts and only pay for services where I need it to keep my costs low” – Nothing different to the current situation where users selectively install agents on specific systems.
  • “This one client with an ultra efficient, high profit, low compute app will take advantage of us.” – Nothing different to the per-server/per-core model then, other than the minimum spend. Your client probably deserves the cheaper price as a reward for not writing the usual terrible inefficient code people churn out.
  • “This doesn’t work for my app” – This model is very specific to applications that support infrastructure; I don’t expect to see it suddenly being used for end-user products/services.

Not all routing is equal

Ran into an interesting issue with my Routerboard CRS226-24G-2S+ “Cloud Router Switch” which is basically a smart layer 3 capable switch running Mikrotik’s RouterOS.

Whilst its specs mean it’s intended for switching rather than routing, given it has the full Mikrotik RouterOS on it, it’s entirely possible to drop a port out of the switching hardware and use it to route traffic – in my case, between the LAN and WAN connections.

Routerboard’s website rates its routing capabilities at between 95.9 and 279 Mbits, and in my own iperf tests before putting it into action I was able to do around 200 Mbits of routing. With only 40/10 Mbits WAN performance, this would work fine for my needs until we finally get UFB (fibre-to-the-home) in 2017.
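For anyone wanting to run the same kind of test, a plain iperf TCP run between hosts on either side of the routed link is enough (the address below is just an example):

# On a host behind the router, start the iperf server
iperf -s

# From a host on the other side of the routed link, push traffic through the
# router for 30 seconds using 4 parallel TCP streams and report the throughput
iperf -c 192.168.1.10 -t 30 -P 4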

However, between this test and putting it into production it ended up with a lot more firewall rules, including NAT, and when doing some work on the switch I noticed that the CPU was often hitting 100% – which is never good for your networking hardware.

I wondered how much impact that maxed out CPU could be having on my WAN connection, so I used the very non-scientific Ookla Speedtest with the CRS doing my routing:


After stripping all the routing work from the CRS and moving it to a small Routerboard 750 ethernet router, I’ve gained a few additional Mbits of performance:


The CRS and the Routerboard 750 both feature a MIPS 24Kc 400MHz CPU, so there’s no effective difference between the devices; in fact the switch is arguably faster, as it’s a newer generation chip and has twice the memory, yet it performs worse.

The CPU usage that was formerly pegging at 100% on the CRS dropped to around 30% on the 750 when running these tests, so there’s clearly something going on in the CRS that’s giving it a handicap.

The overhead of switching should be minimal in theory since it’s handled by dedicated hardware, however I wonder if there’s something weird like the main CPU having to give up time to handle events/operations for the switching hardware.

So yeah, a bit annoying – it’s still an awesome managed switch, but it would be nice if they dropped the (terrible) “Cloud Router Switch” name and just sold it for what it is: a damn nice layer 3 capable managed switch, not a router (unless they give it some more CPU so it can get that job done as well!).

For now the dedicated 750 as the router will keep me covered, although it will cap out at 100 Mbits, both in terms of wire speed and routing capability, so I may need to get a higher-specced router come UFB time.


More Puppet Stuff

I’ve been continuing to migrate to my new server setup and Puppetising along the way; the outcome is yet more Puppet modules:

  1. The puppetlabs-firewall module performs very poorly with large rulesets. To work around this with my geoip/rirs module, I’ve gone and written puppet-speedychains, which generates iptables chains outside of the one-rule, one-resource Puppet logic. This allows me to load thousands of rules in a matter of seconds vs hours using the standard module (a rough sketch of the idea follows after this list).
  2. If you’re doing Puppet for any more than a couple of users and systems, at some point you’ll want to write a user module that takes advantage of virtual users to make it easy to select which systems should have a specific user account on them. I’ve open sourced my (very basic) virtual user module as a handy reference point, including examples on how to use Hiera to store the user information.
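To illustrate the speedy-chains idea from point 1 (a minimal sketch of the concept only, not the module’s actual code or interface): render all the rules for a dedicated chain into a single file and load it with one iptables-restore call, rather than issuing one iptables command per rule.

# Render the whole chain into one file (addresses here are just examples)
cat > /tmp/geoblock.rules <<'EOF'
*filter
:GEOBLOCK - [0:0]
-A GEOBLOCK -s 192.0.2.0/24 -j DROP
-A GEOBLOCK -s 198.51.100.0/24 -j DROP
-A GEOBLOCK -j RETURN
COMMIT
EOF

# Load the chain in a single pass; --noflush avoids wiping the rest of the
# existing ruleset while the new chain is applied
iptables-restore --noflush < /tmp/geoblock.rules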

Additionally, I’ve been working on Pupistry lightly, including building a version that runs on the ancient Ruby 1.8.7 versions found on RHEL/CentOS 5 & 6. You can check out this version in the legacy branch currently.

I’m undecided about whether or not to merge this into the main branch, since although it works fine on newer Ruby versions, I’m not sure whether it could limit me significantly in future, so it might be best to keep the legacy branch as a special thing for ancient versions only.


Finding & purging Puppet exported resources

Puppet exported resources is a pretty awesome feature – essentially it allows information from one node to be used on another to affect the resulting configuration. We use this for clever things like having nodes tell an Icinga/Nagios server what monitoring configuration should be added for them.

Of course, like everything in the Puppet universe, it’s not without its catches – the biggest issue I’ve run into is that if you make a mistake and generate bad exported resources, it can be extremely hard to find which node is responsible and take action.

For example, recently my Puppet runs started failing on the monitoring server with the following error:

Error: Could not retrieve catalog from remote server: Error 400 on SERVER: A duplicate resource was found while collecting exported resources, with the type and title Icinga2::Object::Service[Durp Service Health Check] on node failpet1.nagios.example.com

The error is my fault, I forgot that exported resources must have globally unique names across the entire fleet, so I ended up with 2x “Durp Service Health Check” resources.

The problem is that it’s a big fleet and I’m not sure which of the many durp hosts is responsible. To make it more difficult, I suspect they’ve been deleted, which is why the duplication clash isn’t clearing by itself now that I’ve fixed it.

Thankfully we can use the PuppetDB command line tools on the Puppet master to search the DB for the specific resource and find out which hosts are exporting it:

# puppet query nodes \
--puppetdb_host puppetdb.infrastructure.example.com \
"(@@Icinga2::Object::Service['Durp Service Health Check'])"

durphost1312.example.com
durphost3436.example.com

I can then purge all their data with:

# puppet node deactivate durphost1312.example.com
Submitted 'deactivate node' for durphost1312.example.com with UUID xxx-xxx-xxx-xx

In theory deleted hosts shouldn’t have old data in PuppetDB, but hey, sometimes our decommissioning tool has bugs… :-/


MacOS won’t build anything? Check xcode license

One of the annoyances of the MacOS platform is that whilst there’s a nice powerful UNIX underneath, there’s a rather dumb layer on top that does silly things like preventing the app store password from being saved, or, as I found the other day, disabling parts of the build system if the license hasn’t been accepted.

When you first setup MacOS to be useful, you need to install xcode’s build tools and libraries either via the app store, or with:

sudo xcode-select --install

However it seems that if xcode gets updated via one of the routine updates, it can require the license to be re-accepted, and until that happens, it disables various parts of the build system.

I found the issue when I suddenly lost the ability to install native ruby gems, eg:

Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension.

 /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/bin/ruby extconf.rb
checking for BIO_read() in -lcrypto... *** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of necessary
libraries and/or headers. Check the mkmf.log file for more details. You may
need configuration options.

Provided configuration options:
 --with-opt-dir
 --without-opt-dir
 --with-opt-include
 --without-opt-include=${opt-dir}/include
 --with-opt-lib
 --without-opt-lib=${opt-dir}/lib
 --with-make-prog
 --without-make-prog
 --srcdir=.
 --curdir
 --ruby=/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/bin/ruby
 --with-puma_http11-dir
 --without-puma_http11-dir
 --with-puma_http11-include
 --without-puma_http11-include=${puma_http11-dir}/include
 --with-puma_http11-lib
 --without-puma_http11-lib=${puma_http11-dir}/
 --with-cryptolib
 --without-cryptolib
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/mkmf.rb:434:in `try_do': The compiler failed to generate an executable file. (RuntimeError)
You have to install development tools first.
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/mkmf.rb:513:in `block in try_link0'
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/tmpdir.rb:88:in `mktmpdir'
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/mkmf.rb:510:in `try_link0'
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/mkmf.rb:534:in `try_link'
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/mkmf.rb:720:in `try_func'
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/mkmf.rb:950:in `block in have_library'
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/mkmf.rb:895:in `block in checking_for'
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/mkmf.rb:340:in `block (2 levels) in postpone'
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/mkmf.rb:310:in `open'
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/mkmf.rb:340:in `block in postpone'
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/mkmf.rb:310:in `open'
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/mkmf.rb:336:in `postpone'
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/mkmf.rb:894:in `checking_for'
 from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/mkmf.rb:945:in `have_library'
 from extconf.rb:6:in `block in <main>'
 from extconf.rb:6:in `each'
 from extconf.rb:6:in `find'
 from extconf.rb:6:in `<main>'


Gem files will remain installed in /var/folders/py/r973xbbn2g57sr4l_fmb9gtr0000gn/T/bundler20151009-29854-mszy85puma-2.14.0/gems/puma-2.14.0 for inspection.
Results logged to /var/folders/py/r973xbbn2g57sr4l_fmb9gtr0000gn/T/bundler20151009-29854-mszy85puma-2.14.0/gems/puma-2.14.0/ext/puma_http11/gem_make.out
An error occurred while installing puma (2.14.0), and Bundler cannot continue.
Make sure that `gem install puma -v '2.14.0'` succeeds before bundling.

The solution is quite simple:

sudo xcodebuild -license

Why Apple thinks their build tools are so important that they require their own license to be accepted every so often is beyond me.


Puppet modules

I’m in the middle of doing a migration of my personal server infrastructure from a 2006-era colocation server onto modern cloud hosting providers.

As part of this migration, I’m rebuilding everything properly using Puppet (I use it heavily at work, so it’s a good fit here) with the intention of being able to complete server builds without requiring any manual effort.

Along the way I’m finding gaps where the available modules don’t quite cut it or nobody seems to have done it before, so I’ve been writing a few modules and putting them up on GitHub for others to benefit/suffer from.


puppet-hostname

https://github.com/jethrocarr/puppet-hostname

Trying to do anything consistent with host naming is always fun, since every organisation or individual has their own special naming scheme and approach to the problem.

I decided to take a different approach. Essentially every cloud provider will give you some source of information that could be used to name your instance, whether it’s the AWS instance ID or a VPS provider passing through the name you gave the machine at creation. Given I want to treat my instances like cattle, an automatic, soulless generated name is perfect!

Where they fall down is that they don’t tend to set up the FQDN properly. I’ve seen a number of solutions to this, including user data setup scripts, but I’m trying to avoid putting anything in user data that isn’t 100% critical and stick to my Pupistry bootstrap, so I wanted to set my FQDN via Puppet itself.

(It’s even possible to set the hostname itself if desired: you can use logic such as tags or other values passed in as facts to define what role a machine has, and then generate/set a hostname entirely within Puppet.)

Hence puppet-hostname provides a handy way to easily set FQDN (optionally including the hostname itself) and then trigger reloads on name-dependent services such as syslog.
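
Stripped right down, the underlying idea looks something like this; the fact, domain and resource names are illustrative rather than the module’s actual interface:

# Derive the desired FQDN from something the provider gives us for free,
# eg the EC2 instance ID fact, then apply it if it isn't already set.
$desired_fqdn = "${::ec2_instance_id}.cattle.example.com"

exec { 'set-fqdn':
  command  => "hostname ${desired_fqdn}",
  unless   => "test \"$(hostname -f)\" = '${desired_fqdn}'",
  path     => ['/bin', '/usr/bin'],
  provider => shell,
}

# Make sure the name resolves locally so hostname -f keeps working.
host { $desired_fqdn:
  ensure       => present,
  ip           => $::ipaddress,
  host_aliases => [$::hostname],
}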

None of this is revolutionary, but it’s nice getting it into a proper structure instead of relying on yet-another-bunch-of-userdata that’s specific to my systems. The next step is to look into having it execute functions to do DNS changes on providers like Route53 so there’s no longer any need for user data scripts being run to set DNS records at startup.


puppet-rirs

https://github.com/jethrocarr/puppet-rirs

There are various parts of my website that I want to be publicly reachable, such as the WordPress login/admin sections, but at the same time I don’t want them accessible to any muppet with a bot trying to break their way in.

I could put up a portal of some kind, but that breaks things like apps that want to talk to those endpoints, since they can’t handle the extra authentication steps. What I can do is set up a GeoIP rule that restricts access to those sections to the countries I’m actually in (generally just NZ or AU), which dramatically reduces the amount of noise and attack attempts sent my way, especially given most of the attacks come from the more questionable countries or service providers.

I started doing this with mod_geoip2, but it’s honestly a buggy POS and really doesn’t work properly if you have both IPv4 and IPv6 connections (one or the other is OK). Plus it doesn’t help me with applications that support IP ACLs but don’t offer a specific GeoIP plugin.

So instead of using GeoIP, I’ve written a custom Puppet function that pulls down the IP assignment lists from the various Regional Internet Registries and generates per-country IP/CIDR lists for both IPv4 and IPv6.

I then use those lists to populate configurations like Apache, but it’s also quite possible to use them for other purposes such as iptables firewalling, since the generated lists can be turned into Puppet resources. To keep performance sane, I cache the processed output for 24 hours and merge any contiguous assignment blocks.
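
Usage ends up looking something like the sketch below. To be clear, the function name and arguments are illustrative assumptions rather than the exact puppet-rirs API, and the template path is made up:

# Hypothetical function call: all IPv4/IPv6 networks assigned to NZ and AU.
$trusted_networks = rir_country_networks(['nz', 'au'])

# A template loops over $trusted_networks emitting "Require ip <cidr>"
# lines inside a <Location /wp-admin> block, then Apache gets reloaded.
file { '/etc/httpd/conf.d/wp-admin-acl.conf':
  ensure  => file,
  content => template('profiles/wp-admin-acl.conf.erb'),
  notify  => Service['httpd'],
}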

Basically, it’s GeoIP for Puppet with support for anything Puppet can configure. :-)


puppet-digitalocean

https://github.com/jethrocarr/puppet-digitalocean

Provides a fact which exposes details from the Digital Ocean instance API about the instance – similar to how you get values automatically about Amazon EC2 systems.


puppet-initfact

https://github.com/jethrocarr/puppet-initfact

The great thing about the open source world is how we can never agree, so we end up with a proliferation of tools doing the same job. Even init systems are not immune, with anything that intends to run on the major Linux distributions needing to support systemd, Upstart and SysVinit, at least for the next few years.

Unfortunately the way I see most Puppet module authors “deal” with this is that they simply write an init config/file that suits their distribution of choice and conveniently forget the other distributions. The number of times I’ve come across Puppet modules that claim support for Red Hat and Amazon Linux but only ship an Upstart file… >:-(

Part of the issue is that it’s a pain to even figure out what distribution should be using what type of init configuration. So to solve this, I’ve written a custom Fact called “initsystem” which exposes the primary/best init system on the specific system it’s running on.

It operates in two modes – there is a curated list for specific known systems, with a fallback to automatic detection when we don’t have a curated result handy.
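
Consuming the fact in a module then looks something like this; the exact values the fact returns (eg 'systemd', 'upstart') and the file paths here are illustrative:

# Pick the right init configuration based on the initsystem fact.
case $::initsystem {
  'systemd': {
    file { '/etc/systemd/system/myapp.service':
      source => 'puppet:///modules/myapp/myapp.service',
    }
  }
  'upstart': {
    file { '/etc/init/myapp.conf':
      source => 'puppet:///modules/myapp/myapp.upstart',
    }
  }
  default: {
    file { '/etc/init.d/myapp':
      source => 'puppet:///modules/myapp/myapp.init',
      mode   => '0755',
    }
  }
}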

It supports (or should) all major Linux distributions & derivatives, plus FreeBSD and MacOS. Pull requests for others are welcome – it could do with more BSD support, plus maybe even Windows support if you’re feeling brave.


puppet-yas3fs

https://github.com/pcfens/puppet-yas3fs/commit/27af462f1ce2fe0610012a508236062e65017b5f

Not my module, but I recently submitted a PR to it (subsequently merged) which introduces support for a number of different distributions using my initfact module, so it should now run on most distributions rather than just Ubuntu.

If you’re not familiar with yas3fs, it’s a FUSE driver that turns S3+SNS+SQS into a shared filesystem between multiple servers. It’s ideal for legacy applications that demand state on disk but don’t require high I/O performance. I’m in the process of doing a proof-of-concept with it and it looks like it should work OK for low-activity sites such as WordPress, although with no locking I’d advise against putting MySQL on it anytime soon :-)


These modules can all be found on GitHub, as well as the Puppet Forge. Hopefully someone other than myself finds them useful. :-)
