
Introducing Pupistry

I’ve recently been working to migrate my personal infrastructure from a very conventional and ageing 8-year-old colocation server to a new cloud-based approach.

As part of this migration I’m simplifying what I have down to the fewest possible services and offloading a number of them to best-of-breed cloud SaaS providers.

Of course I’m still going to have a few servers for running various applications where it makes the most sense, but ideally it will be just a handful of small virtual machines plus development machines that I can spin up on demand using cloud providers like AWS or Digital Ocean, paying only for what I use.


The Puppet Master Problem

To make this manageable I needed to use a configuration management system such as Puppet to automate the whole build process of new servers (and make it fast!). But running Puppet goes against my plan of as-simple-as-possible, as it means running another server (the Puppet master). I could have gone with something like Ansible, but I dislike the agent-less approach and prefer having a proper agent and being able to build boxes automatically, such as when using autoscaling.

So I decided to run Puppet masterless. It’s completely possible to run Puppet against local manifest files and have it apply them, but there’s the annoying issue of how to get the Puppet manifests onto the servers in the first place… That tends to be left as an exercise for the reader; there are various collections of hacks floating around on the web, and major organisations seem to grow their own homespun tooling to address it.
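
For anyone who hasn’t seen it, a masterless run is really just puppet apply pointed at a local copy of your manifests and modules – something like the following sketch, where the paths are purely illustrative:

# Assumes the manifests and modules have already landed on the box somehow
puppet apply --modulepath=/etc/puppet/modules /etc/puppet/manifests/site.pp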

Just getting a well-functioning masterless Puppet setup took far longer than it should have, which seems silly given that everyone doing masterless Puppet has to repeat the same steps over and over again.

User data is another case of stupidity, with every organisation writing their own variation of what is basically the same thing – some lines of bash to get a newly launched Linux instance from nothingness to running Puppet and applying that organisation’s manifests. There’s got to be a better way.
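
To illustrate, most of these homegrown user data scripts boil down to something like the following sketch – the package manager, repository URL and paths here are all placeholders:

#!/bin/bash
# Hypothetical user data: take a fresh Linux instance from nothing to a Puppet run
yum install -y git puppet                                  # or apt-get on Debian/Ubuntu
git clone https://git.example.com/puppet.git /etc/puppet   # placeholder repo URL
puppet apply --modulepath=/etc/puppet/modules /etc/puppet/manifests/site.pp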


The blessing and challenges of r10k

It gets even more complex when you take the use of r10k into account. r10k is a Puppet workflow tool that makes it easy to include various upstream Puppet modules and pin them to specific versions. It supports branches, so you can do clever things like tell one server to apply a new branch to test a change you’ve made before rolling it out to all your servers. In short, it’s fantastic, and if you’re not using it with Puppet… you should be.
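
If you haven’t seen it before, the heart of r10k is the Puppetfile, which lists the modules you want and pins each one to a version or branch. A rough sketch – the module names, versions and URLs are purely illustrative:

# Write a Puppetfile and have r10k fetch everything it references
cat > Puppetfile <<'EOF'
mod 'puppetlabs/ntp', '3.3.0'                      # pinned Forge module
mod 'profiles',
  :git => 'https://github.com/example/puppet-profiles.git',
  :ref => 'testing'                                # pin to a branch to trial a change
EOF
r10k puppetfile install                            # installs the modules into ./modules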

However using r10k does mean you need access to all the git repositories that are being included in your Puppetfile. This is generally dealt with by having the Puppet master run r10k and download all the git repos using a deployer key that grants it access to the repositories.

But this doesn’t work so well when you have to set up deployer access keys on every machine so that each one can read every one of your git repositories. And if a machine is ever compromised, the key needs to be changed for every repo and every server again, which is hardly ideal.

r10k’s approach of letting you assemble various third party Puppet modules into a (hopefully) coherent collection of manifests is very powerful – grab modules from the Puppet Forge, from GitHub or from some other third party; r10k doesn’t care, it makes it all work.

But this has the major failing of essentially limiting your security to the trustworthiness of all the third parties you select.

In some cases the author is relatively unknown and could suddenly decide to start including malicious content, and in other cases the security of the platform providing the modules is at risk (e.g. the Puppet Forge doesn’t require any two-factor auth for module authors), so a malicious attacker could attack the platform in order to compromise thousands of machines.

Some organisations fix this by still using r10k but always forking any third party modules before using them. The downside is the increased manual overhead of regularly checking the upstream repos for updates and pulling them into the forks. It’s worth it for a big enterprise, but not worth the hassle for my few personal systems.

The other issue, aside from security, is that if any one of these third party repos ever fails to download (e.g. the repo was deleted), your server will fail to build. Nobody wants to find that someone deleted the GitHub repo you rely on just minutes before your production host autoscaled and failed to start up. :-(


Pupistry – the solution?

I wanted to fix the lack of a consistent, robust approach to masterless Puppet and provide a good way to use r10k with it, so in my limited spare time over the past month I’ve been working on Pupistry. (Pupistry? puppet + artistry == Pupistry! Hopefully my solution is better than my naming “genius”…)

Pupistry is a solution for implementing reliable and secure masterless puppet deployments by taking Puppet modules assembled by r10k and generating compressed and signed archives for distribution to the masterless servers.

Pupistry builds on the functionality offered by the r10k workflow, but rather than requiring site-specific custom bootstrap and workflow mechanisms, it executes r10k, assembles the combined modules and then generates a compressed artifact file. It then optionally signs the artifact with GPG and finally uploads it into an Amazon S3 bucket along with a manifest file.
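
Conceptually the build side is not much more than the following hand-rolled approximation – this is not Pupistry’s actual CLI, just the idea, and the bucket name and manifest format are placeholders:

# Rough approximation of the build/push flow
r10k puppetfile install                          # assemble all the modules locally
tar -czf artifact.tar.gz manifests/ modules/     # bundle everything into a single artifact
gpg --armor --detach-sign artifact.tar.gz        # optional signature (artifact.tar.gz.asc)
echo "version: $(sha256sum artifact.tar.gz | cut -d' ' -f1)" > manifest.yaml
aws s3 cp artifact.tar.gz s3://example-pupistry-bucket/
aws s3 cp artifact.tar.gz.asc s3://example-pupistry-bucket/
aws s3 cp manifest.yaml s3://example-pupistry-bucket/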

Each masterless Puppet machine then runs Pupistry, which checks for a new version of the manifest file. If there is one, it downloads the new artifact, does an optional GPG validation, and then applies it by running Puppet. Pupistry ships with a daemon, which means you get the same convenience as a standard Puppet master & agent setup and don’t need dodgy cronjobs everywhere.
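
The agent side is conceptually along these lines – again just a sketch of the idea, not the real internals:

# Rough approximation of each agent run
aws s3 cp s3://example-pupistry-bucket/manifest.yaml .    # check whether a newer artifact exists
# ...and only if the version differs from the locally cached copy:
aws s3 cp s3://example-pupistry-bucket/artifact.tar.gz .
aws s3 cp s3://example-pupistry-bucket/artifact.tar.gz.asc .
gpg --verify artifact.tar.gz.asc artifact.tar.gz          # optional GPG validation
tar -xzf artifact.tar.gz -C /etc/puppet
puppet apply --modulepath=/etc/puppet/modules /etc/puppet/manifests/site.pp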

To make life even easier, Pupistry will even spit out bootstrap files for your platform which set up each server from scratch to install, configure and run Pupistry, so you don’t need to write line after line of poorly tested bash to get your machines online.

It’s also FAST. It can check for a new manifest in under a second, much faster than a Puppet master run or running r10k directly on the masterless server.

Because Pupistry is artifact based, you can be sure your servers will always build, since all the Puppet code is packaged up – which is great for autoscaling. You’ll still want to use a tool like Packer to create an OS image with Pupistry pre-loaded, to remove the dependency on RubyGems and the risk of a newer version of Pupistry failing.


Try it!

https://github.com/jethrocarr/pupistry

If this sounds up your street, please take a look at the documentation on the GitHub page above, and also the introduction tutorial I’ve written on this blog, to see what Pupistry can do and how to get started with it.

Pupistry is naturally brand new and at MVP stage, so if you find bugs please file an issue in the tracker. It’s also worth checking the tracker for any other known issues with Pupistry before getting started with it in production (because you’re racing to put this brand new unproven app into production right?).

Pull requests for improved documentation, bug fixes or new features are always welcome, as is beer. :-)

I intend to keep developing this for myself as it solves my masterless Puppet needs really nicely, but I’d love to see it become a more popular solution that others use instead of spinning up some home-grown weirdness again and again.

I’ve put some time into making it easy to use (I hope) and also written bootstrap scripts for most popular Linux distributions and FreeBSD, but I’d love feedback, good and bad. If you’re using Pupistry and love it, let me know! If you tried Pupistry but it had some limitation/issue that prevented you from adopting it, let me know what it was – I might be able to help. Better yet, if you find a blocker to using it, fix it and send me a pull request. :-)

Puppet 3 & 4 on FreeBSD

I’ve been playing with FreeBSD recently and wanted to run Puppet on it to provide my machines with configuration.

Unfortunately I faced the following error when I first ran Puppet with a simple manifest that tries to install NTP using the puppetlabs/ntp module:

Warning: Found multiple default providers for package: pip, gem; using pip
Error: Could not set 'present' on ensure: Could not locate the pip command. at modules/ntp/manifests/install.pp

Error: /Stage[main]/Ntp::Install/Package[net/ntp]/ensure: change from absent to present failed: Could not set 'present' on ensure: Could not locate the pip command. at modules/ntp/manifests/install.pp

This collection of errors is pretty interesting – we can see that Puppet seems to understand that it’s running on FreeBSD and that the package is called “net/ntp”, but it’s trying to install it with pip, which is the Python package manager :-/

It turns out that Puppet versions up to and including 4.0.0 [1] lack support for the PkgNg package manager used in FreeBSD 10.x, and since Puppet doesn’t know how to use it, it defaults to one of the only two providers left over – pip and gem. Not really the smartest approach…

There are two ways to fix this right now:

  1. An independently packaged provider for PkgNg has been written and is available at xaque208/puppet-pkgng. It can be included in any existing Puppet 3/4 deployment, and you can define it as the default provider for all packages globally (see the sketch after this list).
  2. Pull the commit from Puppet that adds PkgNg support (thanks to the author of the above project). The downside is that you then have to build your own gem and install it.
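
For option 1, making the new provider the default is just a resource default in your top-level manifest. A rough sketch, assuming the xaque208/puppet-pkgng module is already on your modulepath:

cat >> manifests/site.pp <<'EOF'
# Make pkgng the default provider for all Package resources on this box
Package { provider => 'pkgng' }
EOF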

Longer term you can either:

  1. Wait for Puppet to release a stable version that actually works. I’m guessing it will be in 4.0.0 +1 (so 4.0.1?) but I’m not familiar with their approach to releasing new feature additions. See [1] for more information.
  2. Hope that the FreeBSD developers can backport the commit that adds PkgNg support to Puppet 4. I’ve actually started chatting with one of the FreeBSD developers, who’s been kind enough to have a look at upgrading their package from Puppet 3 to Puppet 4, but it’s not as simple as it should be thanks to the terrible way Puppetlabs has packaged Puppet 4 (lots of hard-coded /opt paths everywhere), so it may take some time.


Additionally, there is an issue if you choose to install Puppet 4 via RubyGems: when you attempt to run it, you will get the following error:

Error: Cannot create /opt/puppetlabs/puppet/cache; parent directory /opt/puppetlabs/puppet does not exist
Error: /File[/var/log/puppetlabs/puppet]/ensure: change from absent to directory failed: Cannot create /var/log/puppetlabs/puppet; parent directory /var/log/puppetlabs does not exist

This is because the gem is not properly packaged and doesn’t play nicely with FreeBSD’s filesystem structure. You can hack in a “fix” with:

mkdir -p /opt/puppetlabs/puppet/cache
mkdir -p /var/log/puppetlabs/puppet

It’s far from the right way of properly fixing this; the best solution right now, if you value OS purity, is to stick with the FreeBSD-packaged Puppet 3 version and add the xaque208/puppet-pkgng module.
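
Something along these lines should do it – note that the package name and module path are assumptions, so check what your ports tree and puppet.conf actually use:

pkg install -y puppet                               # FreeBSD-packaged Puppet 3.x
git clone https://github.com/xaque208/puppet-pkgng.git \
    /usr/local/etc/puppet/modules/pkgng             # drop the provider onto the modulepath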


[1] So at time of writing, *no* stable Puppet version supports PkgNg yet, but given that it’s been merged into the master git branch for Puppet, I have to *presume* that it’s going into 4.0.1… I hope :-/

FreeBSD in the cloud

This weekend I was playing around with FreeBSD in order to add support for it to Pupistry. Although I generally use Linux exclusively, it’s fun to play around with other platforms now and then – a bit like going on vacation. Plus building support for other platforms ensures that I’m writing code that’s more portable.

FreeBSD is probably the most popular BSD in use, and it’s the only one available from the Amazon Web Services (AWS) Marketplace and offered as a supported platform by Digital Ocean alongside their Linux offerings.

However as popular as FreeBSD is, it pales in comparison to Linux, which means that it doesn’t get as much love and things don’t work quite as seamlessly with these cloud providers. In my process of testing FreeBSD with both providers I ran into some interesting feature differences and annoyances.


FreeBSD on Digital Ocean

I started with Digital Ocean first – I love them since they’re a nice, simple, cheap cloud provider for personal stuff. There’s not much need for the AWS enterprise feature set when I’m building personal machines, and paying the price of a coffee for a month of compute sure is nice.

They provide a FreeBSD 10.1 image via the usual droplet creation screen. I have to give Digital Ocean credit for such a nice, clean, simple interface – limiting user selection does make it much more approachable for people, something Apple always understood with their products.


As always Digital Ocean is pretty speedy, bringing up a machine within a minute or so. Once ready, log in as the freebsd user and you can just sudo to root.

Digital Ocean provides a pretty recent image with pkg already installed and ready to go, although you’ll want to run the update process to get the latest patches.
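
The update process itself is just the usual FreeBSD housekeeping, something along these lines:

sudo freebsd-update fetch install        # patch the base system
sudo pkg update && sudo pkg upgrade -y   # bring installed packages up to date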

Overall it’s great – so naturally there is a catch. Digital Ocean doesn’t yet support user data with their droplets, so whilst you can fill in the user data field, it won’t actually get executed.

This is pretty annoying for anyone wanting to automate a large number of machines, since it means you have to SSH to each of them to get them provisioned. I’ve raised a question on their community forum about this issue, but I wouldn’t expect a quick fix, since the upstream bsd-cloudinit project they use hasn’t implemented support yet either.

It’s not going to be the end of the world for most people, but it could be a barrier if you’re wanting to roll out a fleet of BSD boxen.

The best feature from Digital Ocean is actually their documentation – with the launch of FreeBSD on their platform, they’ve produced some excellent tutorials and guides that are useful to both Linux gurus and noobs alike.

Finally their native IPv6 support extends to FreeBSD, so your machines can join the internet of the 21st century from day one.


FreeBSD on Amazon Web Services (AWS)

Next I spun up an instance in Amazon Web Services (AWS) which is the granddaddy of cloud providers and provides an impressive array of functionality, although this comes at a cost premium over Digital Ocean’s tight pricing.

It’s actually been the first time in a long while that I’ve built a machine via the AWS web console – normally at work we build all of our systems via CloudFormation – and it was an interesting experience to see the usability difference between AWS’s setup pages and Digital Ocean’s.

The fact that the launch wizard has 7 different screens says a lot, and I suspect AWS is at risk of having its consumer user base eaten by the likes of Linode and Digital Ocean – but when a consumer pays $5.00 a month and an enterprise customer pays $300,000 a month, I suspect AWS isn’t going to be too worried.

Launching a FreeBSD instance is not really any different to launching a Linux one: you just need to search for “freebsd” in the AWS Marketplace to find the AMI and launch as normal.



Once launched, things get more interesting. Digital Ocean’s FreeBSD instance came up in around 1 minute, which is standard for their systems – but AWS took a whopping 8-10 minutes to launch the AMI to the point where I could log in via SSH!

Digging into the startup log reveals why – it seems the AWS AMI (Amazon Machine Image) for FreeBSD launches the instance, then runs a prolonged upgrade (freebsd-update fetch install), before doing a subsequent reboot and only then starting SSH.

Whilst I appreciate the good default security posture this provides, there are a few issues with it:

  1. It differs from most other AWS images, which deal with patching by having new images built semi-frequently and leaving the patching in between up to the admin’s choice.
  2. During the whole process the admin can’t log in, which causes some confusion. I initially assumed the AMI images were broken, after reviewing my security groups and seeing no reason why I shouldn’t be able to log in immediately.
  3. You can’t trust the AMI images to be a solid unchanging base, which means you need to be very wary if doing autoscaling. Not only is 10 minutes a bit too slow for autoscaling, the risk of an instance not coming up due to changes in the latest update is always something to watch out for. If doing autoscaling with these images, you’ll need to consider baking your own pre-updated AMI (more on that below).
  4. It caused me no end of frustration when trying to test user data, since I had to wait 10 minutes each time to get a confirmation of my test!

The last point brings me to user data – the good news is that Amazon correctly supports user data for FreeBSD machines, so you can paste in your tcsh script (not bash remember!) and it will get invoked at launch time.

The downside is that the user data handling of FreeBSD is a lot more fragile than on the Linux images. Generally with Linux, the OS boots (including SSH) and then runs the user data, so if the user data breaks or hangs or does anything other than expected, you can still log in and debug. Since FreeBSD runs the user data before starting up SSH, if something goes wrong you have no way to easily log in and debug. And given the differences between tcsh and bash, plus annoying commands that default to expecting user input on non-interactive ptys, chances are you’ll have more than one attempt that results in a machine getting stuck at launch.

The ultimate fix is that you’ll probably have to use Packer if you’re using FreeBSD in any serious way on AWS, to get the startup performance to an acceptable level.

Finally, remember that on AWS you need to log in as the ec2-user and then su - to become root.


Which one?

If you’re interested in FreeBSD and want to pick a provider to play around with, the choice seems pretty simple to me – Digital Ocean. They’ve got the better pricing (~$5/month vs $15/month), and their ridiculously simple dashboard coupled with the excellent documentation they’ve assembled makes it really attractive for anyone new to the *nix or cloud space. Plus they’ve bothered to invest in IPv6, which I appreciate.

However if you’re doing business/enterprise systems and want user data, autoscaling or the benefit of automating entire stacks with Cloud Formation, then you will probably find AWS the more attractive offering purely due to the additional functionality offered by that platform. Just be prepared to spend a bit of time baking your own AMI to allow you to skip the overhead of having to wait for updates to apply for each instance you bring up.

Neither provider’s FreeBSD experience is quite as slick as their Linux offerings, however hopefully they improve on these deficiencies over time – there’s not much needed to get the experience up to the same level as the Linux distributions, and it’s nice having a different type of Unix to play with for a change.