Easy IKEv2 VPN for mobile devices (inc iOS)

I recently obtained an iPhone and needed to connect it to my VPN. However my existing VPN server was an OpenVPN installation, which works nicely on traditional desktop operating systems and Android, but the iOS client is a bit more questionable, having last been updated in September 2014 (pre-iOS 9).

I decided to look into what the “proper” VPN option for iOS would be, in order to get something supported by the OS as smoothly as possible. Last time I looked, the space was full of wonderful horrors like PPTP (not actually encrypted!!) and L2TP/IPSec (configuration hell), so I had always avoided it like the plague.

However as of iOS 9, Apple has implemented support for IKEv2 VPNs, which offers an interesting new option. What made it particularly attractive for me is that I can support every device I have with the one VPN standard:

  • IKEv2 is built into iOS 9 and MacOS El Capitan.
  • IKEv2 is built into Windows 10.
  • Works on Android with a third party client (hopeful for native integration soonish?).
  • Naturally works on GNU/Linux.

Whilst I love OpenVPN, being able to use the stock OS features instead of a third party client is always nice, particularly on mobile where power management and background task behaviour can be interesting.

IKEv2 on mobile also has some other nice features, such as MOBIKE, which makes it very seamless when switching between different networks (like the cellular to WiFi dance we do constantly with phones/tablets). This is something that OpenVPN can’t do – whilst it’s generally fast and reliable at establishing a connection, a change in the network means issuing a reconnect; it doesn’t just move the current connection across.

 

Given that I run GNU/Linux servers, I went for one of the popular IPSec solutions available on most distributions – StrongSwan.

Unfortunately whilst its technical capabilities are excellent, its documentation isn’t great. The best way to describe it is that every option is documented, but which options you need and why you’d want to use them? Not so much. The “left” vs “right” style of configuration is also a right pain to work with – it’s not a format that reads nicely and clearly.

Trying to find clear instructions and working example configurations for doing IKEv2 with iOS devices was also difficult, and there are some real traps for young players, such as generating SHA1 certs instead of SHA2 when using the tools with their defaults.

The other fun part is that I also wanted my iOS device set up properly to:

  1. Use certificate based authentication, rather than PSK.
  2. Only connect to the VPN when outside of my house.
  3. Remain connected to the VPN even when moving between networks, etc.

I found the best way to make it work was to use Apple Configurator to generate a .mobileconfig file for my iOS devices that bundles all my VPN settings and certificates into an easy-to-import package, and (critically) allows me to define options that are not selectable by end users, such as on-demand VPN establishment.

 

After a few nights of messing around and cursing the fact that all the major OS vendors haven’t just implemented OpenVPN, I managed to get a working connection. To spare others the same pain, I considered writing a guide – but it’s actually a really complex setup, so instead I decided to write a Puppet module (clone from GitHub or install from Puppet Forge) which does the following heavy lifting for you:

  • Installs StrongSwan (on a Debian/derived GNU/Linux system).
  • Configures StrongSwan for IKEv2 roadwarrior style VPNs.
  • Generates all the CA, cert and key files for the VPN server.
  • Generates each client’s certs for you.
  • Generates a .mobileconfig file for iOS devices, giving you a single import of all the configuration, certs and on-demand rules without needing a Mac to run Apple Configurator.

This means you can save yourself all the heavy lifting and set up a VPN with as little as the following Puppet code:

class { 'roadwarrior':
  manage_firewall_v4 => true,
  manage_firewall_v6 => true,
  vpn_name           => 'vpn.example.com',
  vpn_range_v4       => '10.10.10.0/24',
  vpn_route_v4       => '192.168.0.0/16',
}

roadwarrior::client { 'myiphone':
  ondemand_connect       => true,
  ondemand_ssid_excludes => ['wifihouse'],
}

roadwarrior::client { 'android': }

The above example sets up a routed VPN using 10.10.10.0/24 as the VPN client range and routes the 192.168.0.0/16 network behind the VPN server back through the tunnel. (Note that I haven’t added masquerading options yet, so your gateway has to know to route the vpn_range back to the VPN server.)
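If your gateway happens to be a Linux box, a static route along these lines is one way to handle that – a rough sketch only, where 192.168.0.5 is purely an illustrative address for the VPN server:

# On the LAN gateway (not the VPN server): send traffic destined for the
# VPN client range via the VPN server's internal address (example address).
ip route add 10.10.10.0/24 via 192.168.0.5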

The example then defines two clients – “myiphone” and “android”. In the .mobileconfig file generated for the “myiphone” client, it will specifically generate rules that cause the VPN to maintain a constant connection, except when connected to a WiFi network called “wifihouse”.

The certs and .mobileconfig files are helpfully placed in /etc/ipsec.d/dist/ for your rsync’ing pleasure, including a few different formats to help load them onto fussy devices.
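For example, something like this pulls the generated client files down to a workstation for distribution (hostname and destination path are placeholders):

# Copy the generated certs/.mobileconfig files off the VPN server.
rsync -av vpn.example.com:/etc/ipsec.d/dist/ ~/vpn-clients/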

 

Hopefully this module is useful to some of you. If you’re new to Puppet but want to take advantage of it, you could always check out my introduction to Puppet with Pupistry guide.

If you’re unsure about my Puppet modules or prefer other config management systems (or *gasp* none at all!), the Puppet module should be fairly readable and easy enough to translate into your own commands to run.

There are a few things I still want to do – I haven’t yet done IPv6 configuration (which I’ll fix since I run a dual-stack network everywhere) and I intend to add a masquerade firewall feature for those struggling with routing properly between their VPN and LAN.

I’ve been using this configuration for a few weeks on a couple of iOS 9.3.1 devices and it’s been working beautifully, especially with the on-demand configuration, which unfortunately I haven’t been able to do on any other devices (like Android or MacOS) yet. The power consumption overhead seems minimal, but of course your mileage may vary.

It would be good to test with Windows 10 and as many other devices as possible. I don’t intend for this module to support non-roadwarrior type configs (eg site-to-site linking) to keep things simple, but I’m happy to merge any PRs that make it easier to connect more mobile devices or branch routers back to a main VPN host. Also happy to merge PRs for more GNU/Linux distribution support – currently it only supports Debian/Ubuntu, but it shouldn’t be hard to add others.

If you’re on Android, this VPN will work for you, but you may find the OpenVPN client better and more flexible, since the Android client doesn’t have the same level of on-demand functionality that iOS has built in. You may also find OpenVPN a better option if you’re regularly using restrictive networks that only allow “HTTPS” out, since it can be run on TCP/443, whereas StrongSwan IKEv2 runs on UDP ports 500 and 4500.


Ubiquiti UniFi video lack of SSL/TLS validation

Posting this here since I filed a disclosure with Ubiquiti on Feb 28th 2016 and have had no acknowledgment other than being told to be patient. Two months of not even looking at what is quite a serious issue isn’t acceptable to me.

I do really like the Unifi Video product (hardware + software) so it’s a shame it’s let down by poor transport security and slow addressing of security issues by the vendor. I intend to write up a proper review soon, but it was more important to get this report out first.

My mitigation recommendation is that you only communicate with your Unifi Video systems via a secure encrypted VPN (eg IKEv2 or OpenVPN) until such time as Ubiquiti takes this seriously and patches their shit.


28th Feb 2016 – Disclosure of issue via HackerOne (#119121).

There is an SSL/TLS certificate validation flaw in the Unifi Video application for Android and iOS, where it silently accepts any self-signed certificate served by the Unifi Video server, allowing a malicious third party to intercept data.

Versions of software used:

  • Unifi Video 3.1.2 (server)
  • Android app 1.1.3 (Build 153)
  • iOS app 1.1.7 (Build 1.1.48)

Impact
Any man-in-the-middle attacker could intercept customers using Unifi Video from mobile devices by replacing the secure connection with their own self-signed certificate, capturing the login password and all video content, and gaining the ability to view any cameras at their leisure in future.

Steps to reproduce:

  1. Perform clean installation of Unifi Video server.
  2. Connect to the web interface via a browser. It presents a self-signed cert, so you have to accept it.
  3. Connect to NVR via the Android app. No cert acceptance needed.
  4. Connect to NVR via the iOS app. No cert acceptance needed.
  5. Erase the previously generated keystore on the server with: echo -n "" > /usr/lib/unifi-video/data/keystore
  6. Restart the server with: /etc/init.d/unifi-video restart
  7. The server is now running with a new cert. You can validate this by refreshing the browser session – it will require re-acceptance of the new self-signed certificate and will show the new generation time & fingerprint.
  8. Launch the Android app. Reconnect to the previously connected NVR. No warning/validation/acceptance of the new self-signed cert is requested.
  9. Launch the iOS app. Reconnect to the previously connected NVR. Again, no warning/validation/acceptance of the new self-signed cert is requested.
  10. Go get some gin and cry :-(

Comments
Whilst I can understand an engineer may have decided to have the mobile apps always accept a cert the first time they see it, to simplify setup for customers who will predominantly have a self-signed cert on the Unifi Video server, they must not accept subsequent certificate changes without warning the user. Failing to do so allows a MITM attack on any insecure network.

I’d recommend a revised workflow such as:

  1. User connects to a new NVR for the first time. Certificate is accepted silently (or better, shows the fingerprint, aka SSH style).
  2. Mobile app stores the cert fingerprint against the NVR it connected to.
  3. Cert gets changed – whether intentionally by user, or unintentionally by attacker.
  4. Mobile apps warn that the NVR’s cert fingerprint has changed and that this could be dangerous/malicious. User has option of selecting whether they trust this new certificate or whether they do not wish to connect. This is the approach that web browsers take with changed self-signed certificates.

This would prevent silent MITM attacks, whilst still allowing a cert to be updated/changed intentionally.
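To illustrate the pinning idea (this is not the Unifi Video apps’ actual logic – the hostname, port and stored fingerprint below are hypothetical), a client can record the certificate fingerprint on first connect and refuse to proceed if it later changes:

# Fetch the SHA-256 fingerprint of the cert currently served by the NVR
# (nvr.example.com:7443 is a placeholder) and compare it against the one
# pinned at first connection; bail out if it has changed.
PINNED="AB:CD:EF:..."   # example value stored at first connect
CURRENT=$(echo | openssl s_client -connect nvr.example.com:7443 2>/dev/null \
  | openssl x509 -noout -fingerprint -sha256 | cut -d= -f2)
if [ "$CURRENT" != "$PINNED" ]; then
  echo "WARNING: NVR certificate has changed - possible MITM!" >&2
  exit 1
fi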


 

Communication with Ubiquiti:

12th March 2016 Jethro Carr

hi Ubiquiti,

Can I please get an update – do you confirm there is an issue and have a timeframe for resolution?

regards,
Jethro

15th March 2016 Ubiquiti Response

Thank you for submitting this issue to us, and we apologize for the delay. Since launching with HackerOne we have seen many issues submitted, and we are currently working on reducing our backlog. We appreciate your patience and we’ll be sure to update you as soon as we have more information.

Thanks and good luck in your future bug hunting.

24th April 2016 Jethro Carr

hi Ubiquiti,

I’ll be disclosing publicly on 29th of April due to no action on this report after two months.

regards,
Jethro

26th April 2016 Ubiquiti Response

Thank you for submitting this issue to us, and we apologize for the delay.

We’re still reviewing this issue and we appreciate your patience. We’ll be sure to update you as soon as we have more information.

Thanks and good luck in your future bug hunting.

 

 


Upcycling 32-bit Mac Minis

The first generation Intel Apple Mac Mini (Macmini1,1) has a special place as the best bang-for-buck system that I’ve ever purchased.

Purchased for around $1k NZD in 2006, it did a stint as a much more sleep-friendly server after I started my first job and was living at my parents’ house. It then went on to become my primary desktop for a couple of years in conjunction with my laptop. And finally it transitioned into a media centre and spent a number of years driving the TV and handling long running downloads. It even survived getting sent over to Sydney and running non-stop in the hot blazing hell inside my apartment there.


My long term relationship on the left and a more recent stray I obtained second hand. Clearly mine takes after its owner and hasn’t seen the sun much.

Having now reached its 10th birthday, it’s started to show its age. Whilst it handles 720p content without an issue, it’s now hit and miss whether 1080p H264 content will play without unacceptable jitter.

It’s previously undergone a few upgrades. I bumped it from the original 512MB RAM to 2GB (the max) years ago and it’s had its 60GB hard drive replaced with a more modern 500GB model. But neither of these will help much with the video decoding performance.

 

Given we had recently obtained something that the people at Samsung consider a “Smart” TV, I decided to replace the Mac Mini with the Plex client running natively on the TV and recycle the Mac Mini into a new role as a small server, potentially replacing a much more power-hungry AMD Phenom II system that performs fairly basic storage and network tasks.

Unfortunately this isn’t as simple as it sounds. The first gen Intel Mac Minis arrived on the scene just a bit too soon for 64-bit CPUs and so are packing the original Intel Core Solo or Intel Core Duo (1 or 2 cores respectively), which aren’t clocked particularly high and are only 32-bit capable.

Whilst GNU/Linux *can* run on this, supported versions of MacOS X certainly can’t. The last MacOS version supported on these devices is Mac OS X 10.6.8 “Snow Leopard” 32-bit, and the majority of app developers for MacOS have decided to set their minimum supported platform at 64-bit MacOS X 10.7.5 “Lion” so they can drop the old 32-bit stuff – this includes the popular Chrome browser, which now only provides 64-bit builds. Basically OS X Snow Leopard is the Win XP of the MacOS world.

Even running 32-bit GNU/Linux can be an exercise in frustration. Some distributions now only ship 64-bit builds, and proprietary software vendors don’t always bother releasing 32-bit builds of their apps, limiting what you can run on them.

 

On the plus side, this earlier generation of Apple machines predates Apple’s decision to start soldering everything together, which means not only can you replace the RAM, storage, drives and WiFi card, you can also replace the CPU itself since it’s socketed!

I found a great writeup at iFixit which covers the process of replacing the CPU with a newer model.

Essentially you can replace the CPUs in the Macmini1,1 (2006) or Macmini2,1 (2007) models with any chip compatible with Intel Socket M, the highest spec model available being the Intel Core 2 Duo 2.33 Ghz T7600.

At ~$60 NZD for the T7600, it was a bit more than I wanted to spend on a decade-old CPU. However moving down slightly to the T7400, the second hand price drops to around ~$20 NZD per CPU with international shipping included. And at 2.16GHz it’s no slouch, especially compared to the original 1.5GHz single core CPU.

It took a while to get here – I used this seller after the first seller never delivered the item and refunded me when asked about it. One of my CPUs also arrived with a bent pin, so there were some rather cold-sweat moments straightening the tiny pin with a screwdriver. But I guess this is what you get for buying decade-old CPUs from a mysterious internet trader.

I'm naked!

I was surprised at the lack of dust inside the unit given its long life – even the fan duct was remarkably dust-free.

The replacement is a bit of a pain – you have to strip the Mac Mini right down and take the motherboard out – but it’s not the hardest upgrade I’ve ever had to do; dealing with cheap $100 cut-your-hand-open PC cases was much nastier than the well designed internals of the Mac. The only really tricky bit is the removal and refitting of the heatsink, which worked best with a second person helping remove the plastic pegs.

I did it using a regular putty knife, needle-nose pliers, Phillips & flat-head screwdrivers and one Torx screwdriver to deal with a single T10 screw that differs from the rest in the unit.

Moment of truth...

Recommend testing these things *before* putting the main case back together – they’re a pain to open back up if it doesn’t work on the first run.

The end result is an upgrade from a 1.5GHz single core 32-bit CPU to a 2.16GHz dual core 64-bit CPU – whilst it won’t hold up against a modern i7, it will certainly be able to crunch video and server tasks quite happily.

 

The next problem was getting an OS on there.

This CPU upgrade opens up new options for MacOS fans: if you hack the installer a bit you can get MacOS X 10.7.5 “Lion” on there, which gives you a 64-bit OS that can still run much of the current software available. You can’t go past Lion however, since support for the Intel GMA 950 GPU was dropped in later versions of MacOS.

Given I want them to run as servers, GNU/Linux is the only logical choice. The only issue was booting it… it seems they don’t support booting from USB flash drives.

These Mac Minis really did fall into a generational gap. Modern enough to have EFI and no legacy ports, yet old enough to be 32-bit and lack support for booting from USB. I wasn’t even sure if I would be able to boot 64-bit Linux with a 32-bit EFI…

 

Given it doesn’t boot from USB and I didn’t have any firewire devices lying around to try booting from, I fell back to the joys of optical media. This was harder than it sounds given I don’t have any media and barely any working drives, but my colleague thankfully dug up a couple of old CD-Rs for me.

They're basically floppy disks...

“Daddy are those shiny things floppy disks?”

I also quickly remembered why we all moved on from optical media. My first burn appeared to succeed but crashed trying to load the bootloader. And then refused to eject. Actually, it’s still refusing to eject, so there’s a Debian 8 installer that might just be stuck in there until its dying days… The other unit’s optical drive didn’t work at all, so I couldn’t even go through the pain of swapping hardware around to get a working combination.

 

Having exhausted the option of an old-school CD-based GNU/Linux install, I started digging into ways to boot from another partition on the machine’s hard drive and found a project called rEFInd.

This awesome software is an alternative boot manager for EFI. It differs slightly from a boot loader: in a traditional BIOS -> Boot Loader -> OS world, rEFInd is the equivalent of a custom BIOS offering better boot functionality than the OEM vendor’s firmware.

It works by installing itself into a small FAT partition that lives on the hard disk – it’s probably the easiest low-level tool I’ve ever installed – download, unzip, and run the installer from either MacOS or Linux.

./unsuckefi

Disturbingly easy from the existing OS X installation
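For reference, the whole install is roughly the following, run from the existing OS. The version number is illustrative and the bundled script is named refind-install in recent releases (older ones called it install.sh), so check what your download actually contains:

# Grab the rEFInd binary zip from the project site, unpack it and run the
# bundled installer (needs root; the version number is just an example).
unzip refind-bin-0.10.3.zip
cd refind-bin-0.10.3
sudo ./refind-install      # or ./install.sh in older releases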

Once installed, rEFInd kicks in at boot and offers the ability to boot from USB flash drives, in addition to the hard drive itself!

Legacy

The USB flash installer has been detected as “Legacy OS from whole disk volume”.

Yusss!

Yusss, Debian installer booted from USB via rEFInd!

A typical Debian installation followed; the only thing I was careful about was not deleting the 209.7MB FAT filesystem used by EFI – I didn’t want to find out what deleting that would mean on a box that was hard enough to boot as it is…

This is an ugly partition table

The small < 1MB free space between the partitions here irks me so much, I blame MacOS for aligning the partitions weirdly.

Once installed, rEFInd detected Linux as the OS on the hard drive and booted into GRUB, and from there the usual Linux boot process worked fine.


Launch the penguins!

This spec sheet violates the manufacturer's warranty!

Final result – 2GB RAM, 64-bit CPU, delicious delicious GNU/Linux x86_64

I can confirm that both 32-bit and 64-bit Debian work nicely on this box (I installed 32-bit first by mistake) – so even without doing the CPU upgrade, if you want to get a bit more life out of these early unsupported Mac Minis, they’ll happily run a 32-bit Debian desktop, letting you enjoy wonders like a properly patched browser and operating system.

Not all other distributions will work – Ubuntu for example doesn’t include EFI support in its 32-bit installer, which will probably cause some headaches. You should be OK with any of the major 64-bit distributions since they tend to always support EFI.

 

The final joy I ran into is that when I set up the Mac Mini as a headless box, it didn’t boot… it just turned on and never appeared on the network.

Seems that the Mac Minis (even the later unibody generation) have some genius firmware that disables the GPU hardware if no screen is attached, which then messes up most operating systems on it.

The easy fix is to hack together a fake VGA load by connecting a 100Ω resistor between pins 2 and 7 of a DVI-to-VGA adaptor (such as the one that ships with the Mac Mini).


I need to make a tidier/better version of this, but it works!

No idea what engineer thought this was a good feature, but thankfully it’s an easy and cheap fix, especially since I have a box littered with these now-useless adaptors.

 

The end result is that I now have 2x 64-bit first gen Mac Minis running Debian GNU/Linux for a cost of around $20NZD and some time dismantling/reassembling them.

I’d recommend these small Mac Minis for server purposes, but the NZ second hand prices are still a bit too high for their age to buy one specifically for this… Once they start going below $100 they’d make reasonable alternatives to something like the Intel NUC or Raspberry Pi for small serving tasks.

The older units aren’t necessarily problem free either. Whilst the build quality is excellent, after 10 years things don’t always work right. Both of my optical drives no longer function properly and one of the Mac Minis has a faulty RAM slot, limiting it to 1GB instead of the usual 2GB.

And of course at 10 years old, who knows how much longer they’ll run for – but it’s been a good run so far, so here’s to another 10 years (hopefully)! The real limiting factor long term is going to be the 1GB/2GB RAM.


Node.js deployments at Fairfax with Code Deploy, Codeship and 12factor

This week I presented at the Node.js Wellington meetup about the tooling we have set up at Fairfax for running Node.js microservices.

Essentially we have a workflow that uses Codeship for CI/CD and AWS Code Deploy for deployment. Our apps follow the principles of the Twelve-Factor App, making each service simple and consistent to deploy.

This talk covers the reasons for this particular approach, the technologies used and offers a look at our stack including infrastructure and the deployment pipeline.

Whilst this talk is Node.js specific, we use the same technology for both Node.js and Java microservices and will shortly be standardising our Ruby applications on this approach as well.


AWS Cost Control at Fairfax

Earlier this month I was invited to speak at the AWS Wellington User Group about how we’ve been handling cost control at Fairfax, including our use of spot pricing. I’ve now processed the video and got a recording up online for anyone interested in watching.

The video isn’t great since we took it in dim light using a cellphone and a webcam in a red lit bar, but the audio came through pretty good.

 


How much swap should I use on my VM?

Lately a couple people have asked me about how much swap space is “right” for their servers – especially in the context of running low spec machines like AWS t2.nano/t2.micro or Digital Ocean boxes with low allocations like 1GB or 512MB RAM.

The old fashioned advice was always “your swap space should be double your RAM” but this doesn’t actually make a lot of sense any more. Really swap should be considered a tool of last resort – a hack even – to squeeze a bit more performance out of systems and should be used sparingly where it makes sense.

I tend to look after two different types of systems:

  1. Small systems running a specific dedicated service (eg microservices). These systems might do nothing more than run Nginx/Apache or something like PHP-FPM or Unicorn with a few workers. They typically have 512MB-1GB of RAM.
  2. Big heavy servers running heavy weight applications, typically Java. These systems will be configured with large memory allocations (eg 16GB) and be configured to allocate a specific amount of memory to the application (eg 10GB Java Heap) and to keep the rest free for disk cache and background apps.

The latter doesn’t need swap. There’s no time I would ever want my massive apps getting pushed into swap for a couple reasons:

  • Performance of these systems is critical. We’ve paid good money to allocate them specific amounts of memory which is essentially guaranteed – we know how much the heap needs, how much disk cache we need and how much to allocate to the background apps.
  • If something does go wrong and starts consuming too much RAM, rather than having performance degrade as the server tries to swap, I want it to die – and die fast. If Puppet has decided it wants 7 GB of RAM, I want the OOM killer to step in and slaughter it. If I have swap, I risk everything on the server being slowed down as it moves tasks into the horribly slow (even on SSD) swap space.
  • If you’re paying for 16GB of RAM, why do you want to try and get an extra 512MB out of some swap space? It’s false economy.

For this reason, our big boxes are all swapless. But what about the former example, the small microservice type boxes, or your small personal VPS type systems?

Like many things in IT, “it depends”.

If you’re running stateless clusters, provided that the peak usage fits within the memory allocation, you don’t need swap. In this scenario, your workload is sized appropriately and if anything goes wrong due to an unexpected issue, the machine will either kill the errant process or die and get removed from the pool entirely.

I run a lot of web app workers this way – for example a 1GB t2.micro can happily run 4 Ruby Unicorn workers averaging around 128MB each, plus have space for Puppet, monitoring and delayed jobs. If something goes astray, the process gets killed and the usual automated recovery processes handle things.

However you may need some swap if you’re running stateful systems (pets) where it’s better for them to go slow than to die entirely, or if you’re running a system where the peak usage won’t fit within the memory allocation due to tight budget constraints.

For an example of tight budget constraints – I run this blog on a small machine with only 512MB RAM. With an allocation this small, there’s just not enough memory to run applications like Apache and also be able to handle the needs of background daemons and Puppet runs which can use several hundred MB just by themselves.

The approach I took was to create a small swap volume and size the Apache worker count so that the maximum number of workers at their average size would just fit within the real memory allocation. Any background or system tasks, however, would have to fight over the swap space.
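As a back-of-the-envelope example of that sizing (the numbers here are illustrative rather than measurements from my box):

# With 512MB RAM, ~150MB reserved for the OS/daemons and Apache workers
# averaging ~45MB each, roughly 8 workers fit in real memory - size
# MaxRequestWorkers (or your MPM's equivalent) accordingly.
awk 'BEGIN { ram=512; reserved=150; worker=45; printf("max workers ~ %d\n", int((ram-reserved)/worker)) }'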

Screenshot: monitoring graphs for the 512MB server showing significant swap usage but near-zero disk I/O.

What you can see from the above is that I’m consuming quite a bit of swap – but my disk I/O is basically nothing. That’s because most of what’s in swap on this machine isn’t needed regularly and the active workload, i.e. the apps actually using/freeing RAM constantly, fit within the available amount of real memory.

In this case, using swap allows me to get better value for money than using the next size up of machine – I’m paying just enough to run Apache and squeezing the management tools and background jobs onto the otherwise underutilized SSD storage. This means I can spend $5 to run this blog, vs $10. Excellent!

In respect to sizing, I’m running with 1GB of swap on a 512MB RAM server, which is in line with the traditional “twice your RAM” approach. That being said, I wouldn’t extend past this – even if the system had more RAM (eg 2GB), you should only ever use swap as a hack to squeeze a bit more out of a system. Basically, don’t assume swap should scale linearly as memory scales.

Given I’m running on various cloud/VPS environments, I don’t have a traditional swap partition – instead I create an image file on the root filesystem and format it as swap space – I use a third party Puppet module (https://forge.puppetlabs.com/petems/swap_file) to do this:

swap_file::files { 'default':
  ensure       => present,
  swapfile     => '/tmp/swapfile',
  swapfilesize => '1000 MB',
}

The performance impact of using a swap file on top of a filesystem is almost nothing, and it dramatically simplifies the management and allocation of swap space. Just make sure you’re not using tmpfs for that /tmp path, or you’ll find the memory benefit isn’t as good as it seems.
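If you’d rather not pull in a Puppet module just for this, the manual equivalent is only a handful of commands (path and size are illustrative – and as noted above, avoid a tmpfs-backed /tmp):

# Create a 1GB swap file on the root filesystem and enable it.
fallocate -l 1G /swapfile    # or: dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab    # make it persist across reboots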


/tmp mounted as tmpfs on CentOS

After a recent reboot of my CentOS servers, I’ve inherited an issue where the server comes up with /tmp mounted using tmpfs. tmpfs is a memory-based volatile filesystem and has its uses, but others like myself may have servers with very little free RAM and plenty of disk, and prefer a traditional disk-backed volume.

Screenshot: the tmp.mount unit, with a comment noting that the service can be disabled.

As a service it should be possible to disable this as per the above comment… except that it already is – the following shows the service disabled on both my server and also by default by the OS vendor:

Screenshot: systemctl output showing tmp.mount disabled both locally and in the vendor preset.

The fact I can’t disable it appears to be a bug. The RPM changelog references 1298109 and implies it’s fixed, but the ticket seems to still be open, so more work may be required… it looks like any service defining “PrivateTmp=true” triggers it (such as ntp, httpd and others).

Whilst the developers figure out how to fix this properly, the only sure way I found to resolve the issue is to mask the tmp.mount unit with:

systemctl mask tmp.mount

Here’s something to chuck into your Puppet manifests that does the trick for you:

exec { 'fix_tmpfs_systemd':
  path    => ['/bin', '/usr/bin'],
  command => 'systemctl mask tmp.mount',
  unless  => 'ls -l /etc/systemd/system/tmp.mount 2>&1 | grep -q "/dev/null"',
}

This properly survives reboots and is supposed to survive systemd upgrades.
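A quick sanity check that the mask has taken effect (and, after the next reboot, that /tmp is back on disk rather than tmpfs):

systemctl is-enabled tmp.mount    # should now report "masked"
findmnt /tmp                      # after a reboot, should no longer show a tmpfs mount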


No linux.conf.au this year

So 2016 marks 10 years since my very first linux.conf.au in 2006. That’s so long ago it even predates the start of this blog!

Sadly I’m not heading to LCA 2016 in Geelong this year. There are a number of reasons for this, but in general I think I just needed a break. I want LCA to be something I’m really excited about again rather than it feeling like a yearly habit, so a year away is probably going to help.

This isn’t a criticism of the 2016 team – it looks like they’re putting together an excellent conference and it should be very enjoyable for everyone attending this year. It’s going to suck not seeing a bunch of friends whom I normally see, but I hope to see everyone at 2017 in Hobart.


Secure Hiera data with Masterless Puppet

One of the biggest limitations with masterless Puppet is keeping Hiera data secure. Hiera is a great way of separating site-specific information (like credentials) from your Puppet modules without making a huge mess of your sites.pp. In a traditional Puppet master environment, this works well since the Puppet master controls access to the Hiera data and ensures that client servers only have access to credentials that apply to them.

With masterless puppet this becomes difficult since all clients have access to the full set of Hiera data, which means your webserver might have the ability to query the database server’s admin password – certainly not ideal.

Some solutions like Hiera-eyaml can still be used, but they require setting up different keys for each server (or group of servers), which is a pain with masterless Puppet, especially when you have one value you wish to encrypt for several different servers.

To solve this limitation for Pupistry users, I’ve added a feature called “HieraCrypt” in Pupistry version 1.3.0 that allows the hieradata directory to be encrypted and filtered to specific hosts.

HieraCrypt works by generating a cert on each node (server) with the pupistry hieracrypt --generate parameter and saving the output into your puppetcode repository at hieracrypt/nodes/HOSTNAME. This output includes an x509 cert made against the host’s SSH RSA host key and a JSON array of the facter facts on that host that correlate to values inside the hiera.yaml file.
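On each node that looks roughly like the following – a sketch only, since exactly how you capture the output and get it into the repo is up to you:

# Run on the node itself, then commit the resulting file into the
# puppetcode repository (path/hostname handling is illustrative).
pupistry hieracrypt --generate > hieracrypt/nodes/$(hostname)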

When you run Pupistry on your build workstation, it parses the hiera.yaml file for each environment and generates a list of matching files per node. It then encrypts these files and creates an encrypted package for each node that only that node can decrypt.

For example, if your hiera.yaml file looks like:

:hierarchy:
  - "environments/%{::environment}"
  - "nodes/%{::hostname}"
  - common

And your hieradata directory looks like:

hieradata/
hieradata/common.yaml
hieradata/environments
hieradata/nodes
hieradata/nodes/testhost.yaml
hieradata/nodes/foobox.yaml

When Pupistry builds the artifact, it will include the common.yaml file for all nodes; however the testhost.yaml file will only be included for node “testhost” and, of course, foobox.yaml will only be available on node “foobox”.

The selection of matching files is then encrypted against each host’s certificate and bundled into the artifact. The end result is that whilst all nodes have access to the same artifact, nodes can only decrypt the Hiera files relating to them. Provided you set up your Hiera structure properly, you can make sure your webserver can’t access your database server credentials and vice versa.

 


HowAlarming

The previous owners of our house left us with a reasonably comprehensive alarm system wired throughout the house; however, like many alarm systems currently in homes, it required an analogue phone line to call back to any kind of monitoring service.

To upgrade the alarm to an IP module via the monitoring company would be at least $500 in parts and seemed to consist of hooking the phone line to essentially a VoIP ATA adaptor which can phone home to their service.

As a home owner I want it internet connected so I can do self-monitoring, control it remotely, and integrate it with IP-based camera systems. Most of the conventional alarm companies seem to offer none of these things, or only very expensive sub-standard solutions.

To make things worse, their monitoring services are also pretty poor. Most of the companies I spoke to would receive an alarm, then call me to tell me about it/check with me and only then send someone out to investigate. The existing alarm company the previous owner was using didn’t even offer a callout security service!

Spark (NZ incumbent telco) recently brought out a consumer product called Morepork (as seen on stuff!) which looks attractive for your average non-techie consumer, but I’m not particularly keen to tie myself to Spark’s platform and it is very expensive, especially when considering I have to discard an existing functional system and start from scratch. There are also some design weaknesses, like the cameras being mains-dependent, which I don’t consider acceptable given how easy it is to cut power to a house.

So I decided that I’d like to get my existing alarm IP connected, but importantly, I wanted to retain complete control over the process of generating an alert and delivering it to my phone, so that it’s as fast and as reliable as possible.

Not only did I want to avoid the human factor, I’m also wary of the proprietary technologies used by most of the alarm companies’ off-the-shelf solutions. I have strong doubts about the security of a number of offerings, not to mention their lifespan (oh sorry, that alarm is EOL, no new mobile app for you) and the level of customisation/integration offered (oh, you want to link your alarm with your camera motion detection? Sorry, we don’t support that).

 

I did some research on my alarm system and found it’s one of the DSC PowerSeries range; DSC is a large Canadian company operating globally. The good thing about them being a large global player is that there’s a heap of reference material about their products online.

With a quick search I was able to find user guides, installer guides, programming guides and more. They also include a full wiring diagram inside the alarm control centre, which is exceptionally useful since it essentially explains how you can connect any kind of sensor yourself, which can save a whole heap of money compared to paying an alarm company to do the installation.

Spagettie

I wish all my electronic devices came with documentation this detailed.

The other great thing about this alarm is that since DSC is so massive, there’s an ecosystem of third party vendors offering components for it. Searching for third party IP modules, I ran into this article where the author purchased an IP module from a company known as Envisalink and used its third party API to write custom code to get alarm events and issue commands.

A third party API sounded perfect, so I purchased the EnvisaLink EVL-4 for $239 NZD delivered and did the installation myself. In theory the installation is easy – just a case of powering down the alarm (not touching any 240V hard-wired mains in the process) and connecting it via the 4-wire keypad bus.

In my case it ended up being a bit more complex since the previous owner had helpfully never given me any of the master/installer alarm codes, so I ended up doing a factory reset of the unit and re-programming it from scratch (which means all the sensors, etc), which takes about a day to figure out and do the first time. The plus side is that this gave me complete control over the unit, and I was able to do things like deprogram the old alarm company’s phone number to stop repeated failed callout attempts.

Once connected, the EnvisaLink unit was remarkably hassle-free to set up – it grabbed a DHCP lease, connected to the internet and phoned home to the vendor’s free monitoring service.

Installed with pretty LEDs!

EnvisaLink unit installed at the top above the alarm control circuit. A++ for LED ricing guys!

 

The EnvisaLink hardware is a great little unit and the third party programmer’s interface is reasonably well documented and works without too much grief. Unfortunately the rest of the experience with the company selling it isn’t particularly good. Specifically:

  • Their website places the order by emailing their accounts mailbox. How do I know? Because they printed the email including my credit card number in full and sent it as the packing slip on its journey across the world. Great PCI compliance, guys!
  • They show the product as working with Android, iPhone and Blackberry. They carefully avoid saying it has native apps; they actually mean it has a “smart phone” optimized version, which is as terrible as it sounds.
  • I can’t enable alerts on their service since their signup process keeps sending my email a blank validation code. So I had an alarm that couldn’t alarm me via their service.
  • No 2FA when logging into the alarm website, so you could brute force the login and then disable the alarm remotely… or set it off if you just want to annoy the occupants.

I haven’t dug into the communications between the unit and its vendor. I sure hope they’re SSL/TLS secured and don’t allow the unit to be remotely exploited or upgraded, but I’m not going to chance it. Even if they’ve properly encrypted and secured comms between the unit and their servers, the security is limited to the best practices of the company and its software, which already look disturbingly weak.

Thankfully my only requirement for the module is its third party API so I can integrate with my own systems, which means I can ignore all these issues and put it on its own little isolated VLAN where it can’t cause any trouble or talk to anything but my server.

 

 

So having sorted out the hardware and gotten the alarm onto the network, I now needed some software that would at least meet the basic alerting requirements I have.

There’s an existing comprehensive Java/Android-based product (plainly labeled as “DSC Security Server”) which looks very configurable, but I specifically wanted something open source to make sure the alarm integration remained maintainable long term, and to use Google push notifications for instant alerting on both Android (which supports long-running background processes) and iOS (which does not – hence you must use push notifications via APNS).

I ended up taking advantage of some existing public code for handling the various commands and error responses from the Envisalink/DSC alarm combination, but reworked it a bit so that I now have a modular system consisting of “alarm integrators”, which exchange information/events with the alarm system, and “alarm consumers”, which decide what to do with the events generated. These all communicate via a simple beanstalk queue.

This design gives ultimate simplicity – each program is not much more than a small script and there’s a standard documented format for anyone who wants to add support for other alarm integrators or consumers in future. I wanted it kept simple – the sort of thing you could dump onto a Raspberry Pi and have anyone with basic scripting skills debug and adjust.

I’ve assembled these programs into an open source package I’m calling “HowAlarming” – hopefully it will be useful for anyone in future with the same alarm system, or wanting a foundation for building their own software for other alarms (or even their own alarms).

 

 

The simplest way to get alerts from the system would be to send SMS using one of the many web-based SMS services, but I wanted something I could extend to support receiving images from the surveillance system in future, and maybe also sending commands back.

Hence I’ve written a companion Android app which receives messages from HowAlarming via push notifications and maintains an event log and the current state of the alarm.


UX doesn’t get much better than this.

It’s pretty basic, but it offers the MVP that I require. It took about a day to hack together, not having done any Android or Java before – thankfully Android Studio makes the process pretty easy, with lots of hand-holding and easy integration with the simulators and native devices.

To be honest, if I can hack something together in a day, having never done any native app development before, and it’s better than many of the offerings from the alarm companies currently around, they need to be asking themselves some hard questions. At the very least, they should get someone to write apps that can pull their customers’ alarm state from their existing phone-home infrastructure – there’s probably good money to be made selling upgrades to existing customers on non-IP era alarms, given the number of installations out there.

 

So far my solution is working well for me. It’s not without its potential problems – for example, alarm communications are now at the mercy of a power/internet outage, whereas previously it could call out as long as the phone line was intact. However this is easily fixed with my UPS and 3G failover modem – the 3G actually makes it better than before.

 

The other potential issue is that I don’t know how insurance would classify this kind of self-monitoring. I have mine declared as “un-monitored” to avoid any complications, but if your insurance conditions require monitoring, I’m unsure whether a home-grown solution would meet those requirements (even if it is better than 90% of the alarm companies). So do your research and check your contracts & terms.
