Access Route53 private zones cross account

Using Route53 private zones can be a great way to maintain a private internal zone for your server infrastructure. However, sometimes you may need to share this zone with another VPC in the same or in another AWS account.

The first situation is easy – a Route53 zone can be associated with any number of VPCs within a single AWS account using the AWS console.

The second is more tricky but is doable by creating a VPC association authorization request in the account with the zone, then accepting it from the other account.

# Run against the account with the zone to be shared.
aws route53 \
create-vpc-association-authorization \
--hosted-zone-id abc123 \
--vpc VPCRegion=us-east-1,VPCId=vpc-xyz123 

# Run against the account that needs access to the private zone.
aws route53 \
associate-vpc-with-hosted-zone \
--hosted-zone-id abc123 \
--vpc VPCRegion=us-east-1,VPCId=vpc-xyz123 \
--comment "Example Internal DNS Zone"

# List authori(z|s)ations once done
aws route53 \
list-vpc-association-authorizations \
--hosted-zone-id abc123

This doesn’t even require VPC peering since it works behind the scenes, with the associated zone now being resolvable using the default VPC DNS server in each VPC that has been associated.

Note that the one catch is that this does not help you if you’re linking to a non-VPC environment, such as an on-prem data centre connected via IPsec VPN or Direct Connect. Even though you can route to the VPC and the systems inside it, the AWS DNS resolver for the VPC will refuse requests from IP space outside of the VPC itself.

So the only option is to have an EC2 instance acting as a DNS forwarder inside the VPC – it is reachable from the linked data centre, and since it sits inside the VPC, it can use the resolver.
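For what it’s worth, here’s a minimal sketch of such a forwarder using dnsmasq on a Debian/Ubuntu EC2 instance. The zone name and IP addresses are example assumptions – the VPC resolver sits at the VPC network address +2.

# Install dnsmasq on the forwarder instance inside the VPC.
apt-get install -y dnsmasq

# Forward queries for the private zone to the VPC resolver (x.x.x.2) and
# listen on the instance's VPC address so the data centre can query it.
cat > /etc/dnsmasq.d/internal-zone.conf << EOF
server=/example.internal/10.0.0.2
listen-address=10.0.0.50
EOF

systemctl restart dnsmasq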


FailberryPi – Diverse carrier links for your home data center

Given the number of internet-connected things I now rely on at home, I’ve been considering redundant internet links for a while. And thanks to the affordability of 3G/4G connectivity, it’s easier than ever to have a completely diverse carrier at extremely low cost.

I’m using 2degrees, which has a data SIM sharing service that allows me to have up to 5 other devices sharing the one data plan, so it literally costs me nothing to have the additional connection available 24×7.

My requirements were to:

  1. Handle the loss of the wired internet connection.
  2. Ensure that I can always VPN into the house network.
  3. Ensure that the security cameras can always upload footage to AWS S3.
  4. Ensure that the IoT house alarm can always dispatch events and alerts.

I ended up building three distinct components to create a failover solution that supports flipping between my wired (VDSL) and wireless (3G) connections:

  1. A small embedded GNU/Linux system that can bridge a USB 3G modem and an ethernet connection, with smarts to recover from various faults (like a crashed 3G stick).
  2. A dynamic DNS solution, since my mobile telco certainly isn’t going to give me a static IP address, but I need inbound traffic.
  3. A DNS failover solution so I can redirect inbound requests (eg home VPN) to the currently active endpoint automatically when a failure has occurred.

 

The Hardware

I considered using a Mikrotik with USB for the 3G link – it is a supported feature, but I decided to avoid this route since I would need to replace my perfectly fine router with one that has a USB port, plus I know from experience that USB 3G modems are fickle beasts that would likely need some scripting to work around various issues.

For the same reason I excluded the various 3G/4G router products available that take a USB modem and then provide ethernet or WiFi. I’m very dubious about how fault-tolerant these products are (or how secure, if consumer routers are anything to go by).

I started off the project using a very old embedded GNU/Linux board and a 3G USB modem I had in the spare parts box. Unfortunately, whilst I did eventually recycle this hardware into a working setup, the old embedded board had a very poor USB controller and was throttling my 3G connection to around 512kbps. :-(

Initial approach – Not a bomb, actually an ancient Gumstix Verdex with 3G modem.

So I started again, this time using the very popular Raspberry Pi 2B hardware as the base for my setup. This is actually the first time I’ve played with a Raspberry Pi and I really enjoyed the experience.

The requirements for the router are extremely low – move packets between two interfaces, dial a modem and run some scripts. It feels wasteful using a whole Raspberry Pi with its 1GB of RAM and quad-core ARM CPU, but they’re so accessible and affordable that it’s not worth the time messing around with more obscure embedded boards.

Pie ingredients

It took me all of 5 mins to assemble and boot an OS on this thing and have a full Debian install ready for work. For this speed and convenience I’ll happily pay a small price premium for the Raspberry Pi over some random embedded vendor with a much more painful install and upgrade process.

Baked!

It’s important to get a good power supply – 3G/4G modems tend to consume the full 500mA available to them from a USB port. I kept getting under-voltage warnings (the red light on the Pi turns off) with the 2.1 Amp phone charger I was using. I ended up buying the official 2.5 Amp Raspberry Pi charger, which powers the Raspberry Pi 2 + the 3G modem perfectly.

I bought the smallest (& cheapest) class 10 Micro SDHC card possible – 16GB. Of course this is way more than you actually need for a router; 4GB would have been plenty.

The ZTE MF180 USB 3G modem I used is a tricky beast on Linux, thanks to the kernel initially seeing it as a SCSI CDROM drive, which masks the USB modem features. Whilst Linux has usb_modeswitch shipping as standard these days, I decided to completely disable the SCSI CDROM feature as per this blog post to avoid the issue entirely.

 

The Software

The Raspberry Pi I was given (thanks Calcinite! 🍻) had a faulty GPU so the HDMI didn’t work. Fortunately the Raspberry Pi doesn’t let a small issue like no display hold it back – it’s trivial to flash an image to the SD card from another machine and boot a headless installation.

  1. Download Raspbian minimal/lite (Debian + Raspberry Pi goodness).
  2. Install the image to the SD card using the very awesome Etcher.io (think “safe dd” for noobs) as per the install instructions – I did this from my iMac.
  3. Enable SSH as per instructions: “SSH can be enabled by placing a file named ssh, without any extension, onto the boot partition of the SD card. When the Pi boots, it looks for the ssh file. If it is found, SSH is enabled, and the file is deleted. The content of the file does not matter: it could contain text, or nothing at all.”
  4. Login with username “pi” and password “raspberry”.
  5. Change the password immediately before you put it online!
  6. Upgrade the Pi and enable automated updates in future with the commands below – see the sketch after this list for the remaining headless config:
    apt-get update && apt-get -y upgrade
    apt-get install -y unattended-upgrades
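Rounding out steps 3 and 6 above, here’s a minimal sketch of the headless prep. The boot partition path is an assumption based on how macOS mounts the SD card, and the apt.conf.d snippet is the standard Debian knob for making unattended-upgrades actually run daily.

# On the machine used to flash the SD card: enable SSH by creating the
# empty "ssh" file on the boot partition (step 3).
touch /Volumes/boot/ssh

# On the Pi itself: make unattended-upgrades apply updates daily (step 6).
cat > /etc/apt/apt.conf.d/20auto-upgrades << EOF
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
EOF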

The rest is somewhat specific to your setup, but my process was roughly:

  1. Install apps needed – wvdial for establishing the 3G connection via AT commands + PPP, iptables-persistent for firewalling, libusb-dev for building hub-ctrl and jq for parsing JSON responses.
    apt-get install -y wvdial iptables-persistent libusb-dev jq
  2. Configure a firewall. This is very specific to your network, but you’ll want both IPv4 and IPv6 rules in /etc/iptables/rules.*. Generally you’d want something like the following (see the sketch after this list for example rules):
    1. Masquerade (NAT) traffic going out of the ppp+ and eth0 interfaces.
    2. Permit forwarding traffic between the interfaces.
    3. Permit traffic in on port 9000 for the health check server.
  3. Enable IP forwarding (net.ipv4.ip_forward=1) in /etc/sysctl.conf.
  4. Build hub-ctrl. This utility allows the power cycling of the USB controller + attached devices in the Raspberry Pi, which is extremely useful if your 3G modem has terrible firmware (like mine) and sometimes crashes hard.
    wget https://raw.githubusercontent.com/codazoda/hub-ctrl.c/master/hub-ctrl.c
    gcc -o hub-ctrl hub-ctrl.c -lusb
  5. Build pinghttpserver. This is a tiny C-based webserver which we can use to check if the Raspberry Pi is up (Can’t use ICMP as detailed further on).
    wget -O pinghttpserver.c https://gist.githubusercontent.com/jethrocarr/c56cecbf111af8c29791f89a2c30b978/raw/9c53f66fbed609d09652b8c4ceff0194876c05a3/gistfile1.txt
    make pinghttpserver
  6. Configure /etc/wvdial.conf. This will vary by the type of 3G/4G modem and also the ISP in use. One key value is the APN that you use. In my case, I had to set it to “direct” to ensure I got a real public IP address with no firewalling, instead of getting a CGNAT IP, or a public IP with inbound firewalling enabled. This will vary by carrier!
    [Dialer Defaults]
    Init1 = ATZ
    Init2 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
    Init3 = AT+CGDCONT=1,"IP","direct"
    Stupid Mode = 1
    Modem Type = Analog Modem
    Phone = *99#
    Modem = /dev/ttyUSB2
    Username = { }
    Password = { }
    New PPPD = yes
  7. Edit /etc/ppp/peers/wvdial to enable “defaultroute” and “replacedefaultroute” – we want the wireless connection to always be the default gateway when connected!
  8. Create a launcher script and (once tested) call it from /etc/rc.local at boot. This will start up the 3G connection at boot and launch the various processes we need. (This could be nicer as a collection of systemd services, but damnit, I was lazy, OK?) It also handles rebooting and power cycling the USB bus if problems are encountered, as an attempt at automated recovery.
    wget -O 3g_failover_launcher.sh https://gist.githubusercontent.com/jethrocarr/a5dae9fe8523cf74d30a065d77d74876/raw/57b5860a9b3f6a048b02b245f3628ee60ea766dc/3g_failover_launcher.sh
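As referenced in step 2, here’s a minimal sketch of the sort of rules involved (it also covers the sysctl from step 3). The interface names and health check port match this setup, but treat it as a starting point to adapt rather than a complete ruleset.

# Enable IP forwarding immediately (also set in /etc/sysctl.conf).
sysctl -w net.ipv4.ip_forward=1

# Masquerade (NAT) traffic heading out of the 3G (ppp+) and ethernet interfaces.
iptables -t nat -A POSTROUTING -o ppp+ -j MASQUERADE
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Permit forwarding between the LAN and the 3G connection.
iptables -A FORWARD -i eth0 -o ppp+ -j ACCEPT
iptables -A FORWARD -i ppp+ -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

# Permit inbound connections to the pinghttpserver health check.
iptables -A INPUT -p tcp --dport 9000 -j ACCEPT

# Persist the rules so iptables-persistent loads them at boot.
iptables-save > /etc/iptables/rules.v4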

At this point, you should be left with a Raspberry Pi that gets a DHCP lease on its eth0, dials up a connection with your wireless telco and routes all traffic it receives on eth0 to the ppp interface.

In my case, I set up my Mikrotik router with a default gateway route to the Raspberry Pi and the ability to fail over based on distance weightings. If the wired connection drops, the Mikrotik will shovel packets at the Raspberry Pi, which will happily NAT them to the internet.

 

The DNS Failover

The work above got me an outbound failover solution, but it’s no good for inbound traffic without a failover DNS record that flips between the wired and wireless connections for the VPN to target.

Because the wireless link would be getting a dynamic IP address, the first requirement was a dynamic DNS service. There are various companies around offering free or commercial products for this, but I chose to use a solution built around AWS Lambda that can be granted access directly to my DNS hosted inside Route53.

AWS have a nice reference dynamic DNS solution available here that I ended up using (sadly it doesn’t use the Serverless framework, so there’s a bit more point-and-click setup than I’d like, but hey).

Once configured and a small client script installed on the Raspberry Pi, I had reliable dynamic DNS running.
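The client side is just a small script run periodically on the Pi. The following is purely a hypothetical sketch of the shape of such a client – the endpoint URL, parameters and shared secret are placeholders, not the actual API of the AWS reference solution.

#!/bin/bash
# Hypothetical dynamic DNS client - endpoint, hostname and secret are placeholders.
ENDPOINT="https://api.example.com/prod/ddns"
HOSTNAME="failberry.example.com"
SECRET="changeme"

# Discover the current public IP of the wireless connection.
CURRENT_IP=$(curl -s https://checkip.amazonaws.com)

# Ask the Lambda-backed API to upsert the Route53 record.
curl -s "${ENDPOINT}?hostname=${HOSTNAME}&ip=${CURRENT_IP}&secret=${SECRET}"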

The last bit we need is DNS failover. The solution I used was the native AWS Route53 Health Check feature, where AWS adjusts a DNS record based on the health of monitored endpoints.

I set up a CNAME with the wired connection as the “primary” and the wireless connection as the “secondary”. The DNS CNAME will always point to the primary/wired connection, unless its health check fails, in which case the CNAME will point to the secondary/wireless connection. If both fail, it fails safe to the primary.
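A rough sketch of what this looks like via the CLI – the hosted zone ID, record names and health check details below are placeholder assumptions, and the same needs repeating with Failover=SECONDARY for the wireless side.

# Create an HTTP health check against the pinghttpserver on the wired endpoint.
aws route53 create-health-check \
--caller-reference failberry-wired-01 \
--health-check-config '{"Type": "HTTP", "FullyQualifiedDomainName": "wired.example.com", "Port": 9000, "ResourcePath": "/", "RequestInterval": 30, "FailureThreshold": 3}'

# Create the primary (wired) failover record, attached to that health check.
aws route53 change-resource-record-sets \
--hosted-zone-id abc123 \
--change-batch '{"Changes": [{"Action": "UPSERT", "ResourceRecordSet": {
  "Name": "vpn.example.com", "Type": "CNAME", "TTL": 60,
  "SetIdentifier": "wired-primary", "Failover": "PRIMARY",
  "HealthCheckId": "HEALTH-CHECK-ID-FROM-ABOVE",
  "ResourceRecords": [{"Value": "wired.example.com"}]}}]}'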

The small webserver (pinghttpserver) that we built earlier is used to measure connectivity – the Route53 Health Check feature unfortunately lacks support for ICMP connectivity tests, hence the need to write a tiny server for checking accessibility.

This webserver runs on the Raspberry Pi, but I do a dst port NAT to it on both the wired and wireless connections. If the Pi should crash, the connection will always fail safe to the primary/wired connection since both health checks will fail at once.

There is a degree of flexibility to the Route53 health checks. You can use a CloudWatch alarm instead of the HTTP check if desired. In my case, I’m using a Lambda I wrote called “lambda-ping” (creative, I know) which does HTTP “pings” to remote endpoints and records the response code plus latency. (Annoyingly it’s not possible to do ICMP pings with Lambda either, since the containers that Lambdas execute inside of lack the CAP_NET_RAW kernel capability, hence the “ping-like” behaviour.)

lambda-ping in action

I use this since it gives me information on more than just my failover internet links (eg my blog, other sites, etc) and acts as my Pingdom / New Relic Synthetics alternative.

 

Final Result

After setting it all up and testing, I’ve installed the Raspberry Pi into the comms cabinet. I was a bit worried that all the metal casing would create a Faraday cage, but it seems to be working OK (I also placed it so that the 3G modem sticks out of the cabinet surrounds).

So far so good, but if I get spotty performance or other issues I might need to consider locating the FailberryPi elsewhere where it can get clear access to the cell towers without disruption (maybe sealed ABS box on the roof?). For my use case, it doesn’t need to be ultra fast (otherwise I’d spend some $ and upgrade to 4G), but it does need to be somewhat consistent and reliable.

Installed on a shelf in the comms cabinet, along side the main Mikrotik router and the VDSL modem

So far it’s working well – the outbound failover could do with some tweaking to better handle partial failures (eg VDSL link up, but no international transit), but the failover for the inbound works extremely well.

A few remaining considerations/recommendations for anyone considering a setup like this:

  1. If using the one telco for both the wireless and the wired connection, you’re still at risk of a common fault taking out both services since most ISPs will share infrastructure at some level – eg the international gateway. Use a completely different provider for each service.
  2. Using two wired ISPs (eg fibre with VDSL failover) is probably a bit pointless – they’re probably both going back to the same exchange or along the same conduit, waiting for a single backhoe to take them both out at once.
  3. It’s kind of pointless if you don’t put this behind a UPS, otherwise you’ll still be offline when the power goes out. Strongly recommend having your entire comms cabinet on UPS so your wifi, routing and failover all continue to work during outages.
  4. If you failover, be careful about data usage. Your computers won’t know they’re on an expensive mobile connection with limited data and they’ll happily download updates, Steam games, backups, etc…. One approach is using a firewall to whitelist select systems only for failover (eg IoT devices, alarm, cameras) and leaving other devices like laptops blocked to prevent too much bill shock.
  5. Partial ISP outages are still a PITA. Eg, if routing is broken to some NZ ISPs, but international is fine, the failover checks from ap-southeast-2 won’t trigger. Additional ping scripts could help here (eg check various ISP gateways from the Pi), but that’s getting rather complex and tries to solve a problem that’s never completely fixable.
  6. Just buy a Raspberry Pi. Don’t waste time/effort trying to hack some ancient crap together – it wastes far too much time and often falls flat. And don’t use an old laptop/desktop, there’s too much to fail on them like fans, HDDs, etc. The Pi is solid embedded electronics.
  7. Remember that your Pi is essentially a server attached to the public internet. Make sure you configure firewalls and automatic patching and any other hardening you deem appropriate for such a system. Lock down SSH to keys only, IP restrict, etc.

Easy APT repo in S3

When running a number of Ubuntu or Debian servers, it can be extremely useful to have a custom APT repo for uploading your own packages, or third party packages that lack their own good repositories to subscribe to.

I recently found a nice Ruby utility called deb-s3 which allows easy uploading of dpkg files into an S3-hosted APT repository. It’s much easier than messing around with tools like reprepro and having to s3 cp or sync files up from a local disk into S3.

One main warning: This will create a *public* repo by default since it works out-of-the-box with the stock OS and (in my case) all the packages I’m serving are public open source programs that don’t need to be secured. If you want a *private* repo, you will need to use apt-transport-s3 to support authenticating with S3 to download files and configure deb-s3 for private upload.

Install like any other Ruby Gem:

gem install deb-s3

Adding packages is easy. First make sure your aws-cli is working OK and an S3 bucket has been created, then upload with:

deb-s3 upload \
--bucket example \
--codename codename \
--preserve-versions \
mypackage.deb

You can then add the repo to a Ubuntu or Debian server with:

# We trust HTTPS rather than GPG for this repo - but you can configure
# GPG signing if you prefer.
cat > /etc/apt/sources.list.d/myrepo.list << EOF
deb [trusted=yes] https://example.s3.amazonaws.com codename main
EOF

# and ensure you update the package info on the server
apt-get update

Alternatively, here’s an example of how to add the repo with Puppet:

apt::source { 'myrepo':
 comment        => 'This is our own APT repo',
 location       => 'https://example.s3.amazonaws.com',
 release        => $::os["distro"]["codename"],
 repos          => 'main',
 allow_unsigned => true, # We don't GPG sign, HTTPS only
 notify_update  => true, # triggers apt-get update
}

StuffMe? Or Just Stuffed?

The big news today is that the NZ Commerce Commission has declined the NZME/Fairfax merger. There’s plenty of coverage from the various news companies around NZ, but does it really matter? Either way NZME and Fairfax are stuffed – merger or no merger – unless they can actually create a viable business. The merger would only have changed how long the decline would have lasted.

The fundamental issue is that both Fairfax and NZME were built on the cash cow that (was) classified advertising and print advertising. Classifieds were lost long ago to the likes of TradeMe (although you can argue that Fairfax let that one slip away) and advertising (of both print and digital form) has been on a steady decline as readers move to a range of other media (social networking, online advertising, TV advertising, etc).

If Fairfax and NZME want to survive they can’t fix their business model by simply cutting headcount to reduce costs, or trying to diversify into unrelated ventures like fibre internet or a daily deals website. They need a fundamental redesign of their business and I doubt that either company is going to be prepared or brave enough to build a senior leadership team that can make this happen.

So what should a media company in 2017 struggling to survive do? Adopt a playbook from the technology and startup world. Cut all the noise out and focus on what the core product should be. Simplify. Dump legacy. And stop operating a structure that suits massive enterprises.

So what does this mean? What should these companies do?

  1. Firstly recognise you’ll never be the massive financial behemoths that media companies were back in the golden era. The money just isn’t there with readers’ and advertisers’ attention now split across so many media channels and consumables. Instead of trying to regain glory days, focus on being a leaner company with smaller revenues, but making good profits and keeping the important role of a free press going.
  2. Print is dead. We all know it. It’s just a question of time until the revenue still made from print advertising and subscriptions can no longer cover its production costs. So treat it as a legacy product. Stop investing in it. Do the absolute bare minimum to keep it ticking over until the end. And when that end comes, be ruthless. Kill it.
  3. Move your best people away from any legacy projects – you’re incurring massive lost opportunity costs by preventing them from working on more long term investments.
  4. Avoid the side ventures. You’re not an investment company. The only skill/resource a media company can bring to other industries is free advertising. And then you’re devaluing your advertising product offering by flooding it with your own ads.
  5. Strip the company overhead. You can no longer be a big corporate with layers of management. You need to be a small lean business. Remove layers of management that don’t directly create more value.
  6. Drop Outbrain. It isn’t worth the money, it’s a drain on your reading experience and website quality and ruins any quality aspirations that you have. Promote your own stories instead, expose hidden evergreen content and give it longer life, thus getting more value out of producing that content in the first place.
  7. Does the massive cost of serving your video content pay for itself? Enterprise transcoding tech and data transfer is ridiculously expensive, especially for small NZ players who are buying data by the hundreds of terabytes rather than hundreds of petabytes. It may be that you’re paying more to have control of your own videos than you actually make from all the revenue around them. Think like a startup – upload all your video into Youtube and take advantage of their ad revenue sharing system. And yes it may only be 50% or so revenue share, but it’s 50% + getting all your data transit for free + getting high-def 4K-capable video serving infrastructure. And your journalists know how to use Youtube and the built-in editing tools. Hell, they can upload to it directly from a phone in the field.
  8. Stop writing your own in-house CMS solutions or buying awful not-fit-for-purpose CMS solutions sold by companies with no understanding of the media and news website business and technical requirements. Build something light on top of open source bones (never underestimate WordPress with a theme and some plugins) or buy a solution that’s specifically designed to meet media requirements like Arc (which is what NZME is doing).
  9. As an extension of the above – you can’t afford to build everything you need. You’re a tiny NZ local news provider, you need to focus on your core business and find ready-to-use off-the-shelf (or off-the-github) solutions.
  10. If you keep finding that you “just have to build our own tech since nothing else does what we want”, ask yourself the question of whether it’s your business workflow at fault – it’s generally cheaper to change processes than to write all your own technology.

And the big one – kill your mobile website. 99% of the smart phones being used are running either Android or iOS. Sorry to the people out there running Windows Mobile or FirefoxOS, you simply don’t have the statistics to justify any kind of investment into your needs. Building and maintaining a mobile website is a hugely wasteful investment to cater to 1% of users.

Instead pour all the mobile budget into developing beautiful apps for Android and iOS that customers actually want and enjoy using – they shouldn’t feel sad that the mobile site has gone, they should be elated that the app experience is so good.

By pushing mobile traffic exclusively to apps, suddenly some interesting capabilities reveal themselves:

  1. You can deliver tailored push messages with breaking news and updates that are actually relevant to the user’s interests – and measure this using an off-the-shelf push message analytics platform (like my current employer offers).
  2. Paywall introduction suddenly becomes trivial. Android and iOS both support in-app purchases. What could be easier than paying $4.99 for one month of full access to all the premium stories and an ad-free experience? You don’t even need to invest in payment and paywall infrastructure, it’s built right into the goddamn operating system. Unhappy about Apple and Google taking 15-25%? Doing nothing means taking a 100% cut. And it costs a bloody fortune building reliable and secure payment and subscription infrastructure yourself, don’t think you can do it cheaper unless you really know what you’re doing. And the biggest issue with your own platform is getting users motivated to actually get that credit card out of their wallet. With in-app purchases, it’s trivial since Apple/Google already have their card details – they just need a thumbprint to authorise it.
  3. Some people will always be unwilling to part with any amount of cash for subscriptions. That’s fine. Offer up the main headlines and the low cost wire and/or soft-content stories (some might call this “click-bait”) for free and use advertising to drive revenue – just don’t expect it to ever equal print.
  4. And that advertising – suddenly it’s controlled in-app and no longer subject to a browser plugin blocking it. And since you dumped unrelated side ventures, you’re now reducing the volume of advertising so the ads that you DO run are more pronounced, with better engagement. And if you can drive higher engagement, you can get a higher price. Offer an advertising product focused on premium quality, not quantity. And using app location targeting you can do very, very precise local advertising campaigns.
  5. Advertising is an interesting one actually, since so often sales think “banner ads” – but it doesn’t have to (and maybe shouldn’t) be just traditional impression or click-through advertising. It’s now trivial to set up an online store with a service like Shopify and just drop their SDK into your own app to sell real world items directly from the phone. Don’t just advertise tickets to that show, SELL the tickets to that show, directly from your app. And offer quality sponsored content – advertorials are awful and should die, but good quality sponsored content relevant to the reader’s interests has quite successful engagement rates – TheSpinoff is an NZ example doing this quite well. Stuff does it pretty well too when it’s not just promoting their fibre product (stuff bran?) or Neighbourly endlessly.

And don’t think that this mobile-app native strategy will necessarily alienate older print-loving subscribers. Travel through any international airport and every other elderly traveller has an iPad in their hand. They’re the ultimate old-person computer. Simple, easy to use and featuring built-in text size zooming for the reader who struggles to keep up with the font size of the print edition. A quality app experience is actually better than a newspaper ever was. And iPads are everywhere in the older generations now, they *understand* it in a way that they never did with traditional computers or mobile phones.

I do think a good outcome is achievable in the media space, but only if media companies like Fairfax and NZME can apply their funding and learn to actually innovate. There is a space for quality media and content, especially with decent nation-wide coverage – you don’t get that with the international news sources or the smaller players.

During my time working at Fairfax I met many hard-working and passionate people in the company who strongly care about the role of media in society. The editorial team I saw works hard, cares about what they do and can produce some great content. If they can couple it with a proper business plan, the right technology choices and be prepared to make some hard decisions, there is some hope for them. I just fear that it won’t happen.

 

I worked for Fairfax in the technology team for 4 years in both AU and NZ. I now work for a push messaging and analytics company. This post is my personal opinion and does not necessarily represent the views of my former and/or current employer. It could represent the views of a future employer, but only if you’re a media company that actually wants me to come and apply technology-driven innovation to your business.


Detectatron

I recently installed security cameras around my house which are doing an awesome job of recording all the events that take place around the house and grounds (generally of the feline variety).

Unfortunately the motion capture tends to be overly trigger happy and I end up with heaps of recordings of trees waving, clouds moving or insects flying past. It’s not a problem from a security perspective as I’m not missing any events, but it makes it harder to check the feed for noteworthy events during the day.

I decided I’d like to write some logic for processing the videos being generated, so I wrote a proof of concept that sucks video out of the Ubiquiti Unifi Video server and then analyses it with Amazon Web Services’ new AI product “Rekognition” to identify interesting videos worthy of note.

What this means is that I can now filter out all the noise from my motion recordings by doing image recognition and flagging the specific videos that feature events I consider interesting, such as footage featuring people or cats doing crazy things.
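Detectatron itself is a Java service, but purely as an illustration of the kind of call involved (and assuming frames have been extracted from the video as stills in S3 – the bucket and key names are placeholders), label detection via the CLI looks roughly like:

# Ask Rekognition what it can see in a single extracted frame.
aws rekognition detect-labels \
--image '{"S3Object": {"Bucket": "example-footage", "Name": "frames/cam1-frame42.jpg"}}' \
--min-confidence 75

# A response containing labels like "Person" or "Cat" marks the source video
# as interesting.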

I’ve got a 20 minute talk about this system which you can watch below, introducing its capabilities and how I’m using the AWS Rekognition service to solve this problem. The talk was for the Wellington AWS Users Group, so it focuses a bit more on the AWS aspects of Rekognition and AWS architecture rather than the Unifi video integration side of things.

The software I wrote has two parts – “Detectatron”, the backend Java service that processes each video and stores it in S3, and the connector I wrote for integration with the Unifi Video service. These can be found at:

https://github.com/jethrocarr/detectatron
https://github.com/jethrocarr/detectatron-connector-unifi

The code quality is rather poor right now – insufficient unit tests, bad structure and in need of a good refactor, but I wanted to get it up sooner rather than later… since perfection is always the enemy of just shipping something.

Note that whilst I’ve only added support for the product I use (Ubiquiti’s Unifi Video), I’ve designed it so that it’s pretty trivial to build other connectors for other platforms. I’d love to see contributions like connectors for Zone Minder and other popular open source or commercial platforms.

If you’re using Unifi Video, my connector will automatically mark any videos it deems as interesting as locked videos, for easy filtering using the native Unifi Video apps and web interface.

It also includes an S3 upload feature – given that I integrated with the Unifi Video software, it was a trivial step to extend it to also upload every video the system records into S3 within a few seconds for off-site retention. This performs really well – my on-prem NVR really struggled to keep up with uploads when using inotify + awscli to upload footage, but using my connector and Detectatron it has no issues keeping up with even high video rates.


Surveillance State “at home” Edition

A number of months ago I purchased a series of Ubiquiti UniFi video surveillance cameras. These are standard IP ethernet cameras and use a free (as-in-beer) server agent that runs happily on GNU/Linux to manage the recording and motion detection, which makes them a much more attractive offering than other proprietary systems that use their own specific NVRs.

When I first got them I hooked them up in the house to test, with the intention of installing them properly on the outside of the house. This plan got delayed somewhat when we adopted two lovely kittens, which immediately removed any incentive I had to actually install the cameras properly since it was just too much fun watching the cats rather than keeping an eye out for axe murderers roaming the property.

I had originally ordered the 720p model, but during this time of kitten watching, Ubiquiti brought out a new 1080p “g3” model which provides better resolution as well as offering a much nicer looking and easier to install form factor – so I now have a mix of both generations.

The following video shows some footage taken from the older 720p model:

During this test phase we also captured the November 2016 Wellington earthquake on the cameras using a mix of both generation of camera:

Finally with the New Year break, I got the time and motivation to get back up into the attic and install the cameras properly. This wasn’t a technically challenging task – mostly just a case of running cabling, but it’s a right PITA due to the difficulty of moving around in my attic thanks to heaps of water pipes, electrical wires, data wires and joists all hidden under a good foot or two of insulation.

 

 

On the plus side, the technical requirements for the cameras are pretty simple. Each camera is a Power-over-Ethernet (PoE) device, which means it gets both data and power via a single cable, which makes installation simple – no mains electrical wiring, just need to get a single cat6 cable to wherever you want the camera to sit. The camera then connects to the switch and of course the server running the included software.

I am aware of some vendors selling wireless cameras that use WiFi with a battery that needs to be recharged every so often. I can see the use and appeal for renters, but as a home owner, a hard wired system is going to be much easier and more reliable in the long term.

Ubiquiti sell the camera either with or without a PoE adaptor. Using the included PoE adaptor means you can connect them to essentially any existing switch, but if installing a number of cameras this can create a cable management nightmare. I’d strongly recommend a PoE switch if installing more than 5 cameras, even taking into account their higher cost.

A PoE switch suddenly didn’t seem like such an expensive investment…

The easiest installation was the remote shed camera. Conveniently the shed has mains electrical wiring, but I needed to install a wireless AP to connect back to the house as running ethernet out there is just a bit too difficult.

I used Ubiquiti’s airGW-LR product, which is a low cost access point designed to clip to their standard PoE supply. The end result is a really tidy setup – a single power supply for both devices, and both mounted on a robust bracket for easy installation.

720p camera + airGW + PoE supply

The house cameras were a bit more work. It took me roughly a day to run cabling through the attic – my house isn’t easy to move in the roof or floor space so it takes longer than some others. Also tip – it’s much easier running cabling *before* the insulation is installed, so if you’re thinking of doing both, install the ethernet in advance.

High ceilings and a small attic entrance is just the start of the hassles of running cabling.

The annoying moment when you drill into a stud and end up with a hole that needs filling again. (with solid hardwood walls and ceilings, stud finders don’t work well at my place)

Once the cable run had been completed, I crimped the outside ends with RJ45 connectors for the cameras and then proceeded to take apart the existing patch panel, which also required removing most of the gear in the comms cabinet to free up room to work.

Couple tips for anyone else doing this:

  • I left plenty of excess cable on my ethernet runs. This allowed me to crimp the camera end whilst standing comfortably on the ground, then when I installed the camera I just pushed all the excess up into the attic. Ethernet cable is cheap compared to one’s time messing around up at the tops of ladders.
  • The same applies at the patch panel – make sure to leave enough slack to allow you to easily take the patch panel off and work on it in the future – you can see from the picture below I have a good length spare that comes out of the wall.
  • Remember to wire the RJ45 connectors and the patch panel to the same standard – I managed to do T568B at the camera end and T568A at the patch panel on my first attempt.
  • Test each cable as you complete the wiring. Because of this I caught the above issue on the first camera and it saved me a lot of pain in future. A cheap ethernet tester can be found online for ~$10 and is worth having in your tool kit.

Down to only 4/24 ports free on the patch panel! I expect the last 4 will be consumed by WiGig/802.11ad in future, since it will require an AP per-room in order to get high performance. I might even need a second patch panel in future… good thing I bought the large wall mounted cabinet.

 

With the cabling done, I connected all the PoE adaptors. These are a bit of a PITA if you’re using a rack – you could get a small rackmount shelf with holes and cable tie down, but I went for cable tying them to the outside of the cabinet.

I also colour coded the output from the PoE adaptors. You need to be careful with passive PoE adaptors, you can potentially damage computers and network equipment if you connect them to the adaptor by mistake so I used the colour coding to make it very clear what cables are what.

Finished cabling installation. About as tidy as I can get it in here without moving to using custom length patch cables…. but crimping 30+ patch cables by hand isn’t my idea of a good time.

 

Having completed the cabling and putting together the networking gear and PoE adaptors, I could finally install the cameras themselves. This isn’t particularly hard, basically just need to be able to screw something to the side of the house and then aim the camera in the right position.

The older 720p model is the most annoying to install as it requires adjusting everything using an allen key, plus the cable must be exposed with a drip loop. It’s also more of an eyesore which is a mixed bag – you get better deterrence aspect, but it can look a bit ugly on the house.

The newer model is more aesthetically pleasing, but it’s possible some people might not realise it’s a camera which could be a downside for deterrence.

That being said, they look OK when installed on the house – certainly no worse than the ugly alarm and sensor lights you get on many houses. I even ended up putting one inside to give me complete visibility of the hallway linking every room in the house and it’s not much more visible than a large alarm PIR sensor.

Some additional features worth noting:

  • All the cameras have built in IR, which means they provide decent footage, even at night time. The cameras switch an IR filter on/off automatically as required.
  • All the cameras have built in microphones. Whilst they capture a lot of background wind noise, they’re also quite good at picking up conversations even when outside – it’s a handy tool for gathering intel on any unwanted guests.

 

With all the hardware completed, onto the software. Ubiquiti supply their server software free-of-charge. It’s easy enough to download and install, but if you have Puppetised your home server (of course you have right?) I have a Puppet module here for you.

 

Generally I’ve found the software solution (including the iOS mobile app) to be pretty good, but there are two main issues to be aware of with it:

  1. First is that the motion detection is pretty dumb and works on percentage of image changed. This means windy areas with lots of greenery get lots of unwanted recordings made. It doesn’t cause technical issues, but it does make for a noisy set of recordings – don’t expect it to *only* record events of note, you’ll get all the burglars and axe murderers, but also every neighbourhood cat and the nearby trees on windy days. Oh, and at night time you get lots of footage of moths when they fly close to the camera with the IR night vision on.

  2. Second is that I found a software bug in the mobile apps where they did not validate SSL certs properly and got a very poor response from Ubiquiti. That being said one of their reps recently claimed they’ve hired more security staff to deal with their poor responsiveness, so let’s see what happens on this front.

 

 

One feature which is strangely absent is support for automatically uploading recordings to a cloud storage service. It’s not possible for everyone, but if you’re on a fast connection (eg VDSL, UFB) it’s worth uploading all recordings to something like Amazon S3 so that an attacker can’t subsequently break in and remove the recording hardware.

My approach was setting up lsyncd to listen to inotify events from Linux every time a video file is written to disk and then quickly copy that file up into Amazon S3 where it remains for a prolonged period.
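lsyncd configuration is its own (Lua-based) format, so purely to illustrate the idea in shell, here’s an equivalent inotifywait-based sketch – the paths and bucket name are placeholders, and it assumes inotify-tools and awscli are installed.

#!/bin/bash
# Watch the NVR recording directory and copy each completed video into S3.
# Paths and bucket are placeholders; the real setup uses lsyncd instead.
WATCH_DIR="/srv/unifi-video/videos"
BUCKET="s3://example-cctv-archive"

inotifywait -m -r -e close_write --format '%w%f' "$WATCH_DIR" |
while read -r FILE; do
    aws s3 cp "$FILE" "$BUCKET/$(basename "$FILE")"
done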

If you can’t achieve this due to poor internet performance, your best bet is to put the video recording server in a difficult to find and/or access location, sufficient to prevent the casual intruder from finding it. If you have a proper monitored alarm system they shouldn’t be lingering long enough to find it.

 

Stability seems good. I’ve been running these cameras since April and have never had the server agent or the cameras crash or fail to record. I’m using a Mac Mini for the camera server but you can always buy an embedded black-box NVR solution from Ubiquiti themselves. If you’re on a budget, a second hand Mac Mini or Intel NUC might be better value for money – just make sure it’s 64bit, not an older gen 32bit device.

 


AirPods

Being tethered to one’s device via a cable has long been an annoyance and with Apple finally releasing their AirPods product, I decided to take the risk of a first adopter of a Gen1 product and ordered a pair.

I seemed to be lucky and had mine arrive on 19th December – I suspect Apple allocated various amounts of stock per region and NZ didn’t have the same level of competition for the early shipments as the US did. Looks like the wait time is now around 6 weeks for new orders.

Fortunately AirPods do not damage my unstoppable sex appeal.

There’s been a heap of reviews online, but I wanted to write a bit about them myself, because frankly, they’re just brilliant and probably the best purchase I’ve made this year.

So why are they so good?

  • Extremely comfortable – I found them a lot lighter than I expected them to be and they stay in my ears properly without any discomfort or looseness. This is with the caveat that the old wired EarPods also fitted me well, I’m sure that this won’t be the case for everyone.
  • Having music automatically pause when you remove them is just awesome. It’s an example of the Apple of old making intuitive tech that you don’t need to think about controlling, because it just does what it should. Take AirPods out? Probably don’t want that track to keep playing…
  • The battery life and charging is pretty good. I’m getting the stated 4-5 hours life per charge and of course the carry case gives you another 24 hours or so of charge.
  • The freedom of not being attached by a cord is extremely liberating. At one point I forgot I was still connected to my iMac and ended up going for a walk down to the other end of the house before it disconnected. I could easily see myself forgetting they’re in and getting into the shower one day by mistake.
  • The quality is “good enough”. Sure, you can get better audio through other products, but when it comes to earbuds, you’re going to compromise the quality in exchange for form factor and portability. For the sort of casual listening that I’m doing, it meets my expectations happily enough.
  • Easy switching between devices, something that traditional bluetooth products tended to do pretty badly.
  • Connectivity seems pretty strong. I’ve never had the audio drop out whilst listening, in fact when I left them in and walked around the house, I had audio going through the walls.

They’re not perfect – not that I was expecting them to be given it’s a first-gen product from Apple which generally always means a few teething issues:

  • Sometimes the AirPods simply aren’t discoverable by my iMac, and even when they are, the connection method can be hit and miss – sometimes I can just click on the volume icon and select them, other times I have to go into the Bluetooth menu and select from there. Personally I’ll blame MacOS/iMac for this, I’ve had other Bluetooth headaches with it in the past and I suspect it’s just not that well tested and implemented for anything more than the wireless keyboards/mice they ship with them. For example, other than the AirPods issues, my iMac often fails to see my iPhone or iPad when they’re in the near vicinity to do iOS handoff.
  • At one point a phone call decided to drop from the AirPods and revert to the on-phone speaker without any clear reason/cause.
  • I managed to end up with one AirPod unpairing itself from the phone, so I had mono audio until I re-selected AirPods again from the phone’s output menu.

The “dental floss” charging and storage case is pretty clever as well, although I find it a bit odd that Apple didn’t emboss it with the Apple logo like they normally do for all their other products.

That being said, these issues are not frequent and I expect them to be improved with software updates over time.

If I had any feature changes I’d like to see for Gen2, it would be for Apple to:

  1. Insert a touch sensor on the AirPods to allow changing of volume by swiping up/down on the AirPod. That being said, using the phone volume rocker or the keyboard to change volume isn’t a big issue. The AirPods also have a “double-tap” detection feature that can either launch Siri or Play/Pause music.
  2. Bring up the waterproofing to a level that allows their use in the shower. Whilst there are reports on the internet of AirPods surviving washing machine cycles already, I’d love a version that’s properly rated for water exposure that could truly go anywhere.

 

Are they worth the NZ $269? I think so – sure, I’d be happy if they could drop the price and include a pair with every iPhone sold, but I think it’s a remarkable effort of technology miniaturisation that’s resulted in a high quality product that produces a fantastic user experience. That generally doesn’t come cheap and I feel that given the cost of other brand name wireless audio products, AirPods are reasonably priced.

I’d maybe re-consider buying them if I was using a non-Apple ecosystem (which limits the nice pairing features, although they should work with any Bluetooth device) or if I was only going to use them with MacOS rather than iOS devices. With less need for the nice pairing and portability features, third party offerings become a bit more attractive.


Macbook Pro 2016

Having recently changed jobs (Fairfax/Stuff -> Sailthru/Carnival), the timing worked out so that I managed to get one of the first new USB-C 2016 Macbook Pros. A few people keep asking me about the dongle situation, so figured I’d just blog about the machine.

Some key things to keep in mind:

  • I don’t need to attach much in the way of USB devices. Essentially I want my screen and input devices when docked at the office, but I have no SD cards and don’t generally swap anything with USB flash drives.
  • My main use case is pushing bits to/from the cloud. Eg web browser, terminals, some IDE usage. Probably the heaviest task I’d throw at it would be running something like IntelliJ or Xcode.
  • I value portability more than performance.

If Apple still made cinema displays, this would be a fully Apple H/W stack

 

Having used it for about 1 month now, it’s a brilliant unit. Probably the biggest things I love about it are:

  • The weight – at 1.37Kg, it’s essentially the same weight as the 13″ Macbook Air, but packs a lot more grunt. And having come from the 15″ Macbook Pro, it’s a huge size and weight reduction, yet still extremely usable.
  • USB-C. I know some people are going to hate the new connector, but this is the first laptop that literally only requires a single connector to dock – power, video, data – one plug.
  • The larger touch pad is a nice addition. And even with my large man hands, I haven’t had any issues when typing, Apple seems to have figured out how to do palm detection properly.
  • It looks and feels amazing, loving the space gray finish. The last generation Macbooks were beautiful machines, but this bumps it up a notch.

The new 13″ is so slim and light, it fits perfectly into my iPad Pro 12″ sleeve. Don’t bother buying the sleeves intended for the older 13″ models, they’re way too big.

One thing to note is that the one I have is the entry level model. This brings a few differences over the other models:

  • This model is the only one to lack the new Touchbar. In my case, I use the physical ESC key a lot and don’t have a lot of use for the gimmick. I’d have preferred if Apple had made the Touchbar an optional addition for all models so any level machine could opt in/out.
  • As the entry level model, it features only 2x USB-C/Thunderbolt-3 ports. All Touchbar enabled models feature 4x. If you are like me and only want to dock, the 2x port limit generally isn’t a biggie since you’ll have one spare port, but it will be an issue if you want to drive multiple displays. If you intend to attach 2+ external displays, I’d recommend getting the model with 4x ports.
  • All the 13″ models feature Intel graphics. The larger 15″ model ships with dual Intel and AMD graphics that swap based on activity and power usage. Now this does mean the 13″ is slower at graphics, but I’m also hearing anecdotally that some users of the 15″ are having graphics stability issues with the new AMD drivers – I’ve had no stability issues of any kind with this new machine.
  • The 2.0Ghz i5 isn’t the fastest CPU. That being said, I only really notice it when doing things like compiles (brew, Xcode, etc) which my 4Ghz i7 at home would crunch through much faster. As compiling things isn’t a common requirement for my work, it’s not an issue for me.

It’s not without its problems of course – “donglegate” is an issue, but the extent of the issue depends on your requirements.

On the plus side, the one adaptor you won’t have to buy is headphones – all models still include the 3.5mm headphone jack. One caveat however: it’s now purely analogue audio, as the built-in TOSLINK port has been abandoned.

Whilst there are a huge pile of dongles available, I’d say the essential two dongles you must have are:

  • The USB-C to USB adaptor. If you ever need to connect USB devices when away from desk, you’ll want this one in your bag.
  • The USB-C Digital A/V adaptor. Unless you are getting a native USB-C screen, this is the only official Apple adaptor that supports a digital display. This specific adaptor provides 1x USB2, 1x HDMI and 1x USB-C for charging.

I have some concerns about the digital A/V adaptor. Firstly, I’m not sure whether it can drive a 4K panel, eg if it’s HDMI 2.0 or not. I’m driving a 25″ Dell U2515H at 2560×1440 at 60Hz happily, but haven’t got anything higher resolution to check with.

It also feels like it’s not going to tolerate a whole lot of flexing and unflexing, so I’ll be a bit wary about its longevity if travelling with it to connect to things all over the place.

The USB-C Digital AV adaptor. At my desk I have USB and HDMI feeding into the LCD (which has its own USB hub) and power coming from the Apple-supplied USB-C charger.

Updating and rebooting for a *dongle update*? The future is bleak.

Oh and if you want a DisplayPort version – there isn’t an official one. And this is where things get a little crazy.

For years all of Apple’s laptops have shipped with combined Thunderbolt 1/2 and Mini DisplayPort ports. These ports take either device, but are technically different protocols that share a single physical socket. The new Macbook Pro doesn’t have any of these sockets. And there’s no USB-C to Mini DisplayPort adaptor sold by Apple.

Apple does sell the “Thunderbolt 3 (USB-C) to Thunderbolt 2 Adaptor” but this is distinctly different to the port on the older laptops, in that it only supports Thunderbolt 2 devices – there is no support for Mini DisplayPort, even though the socket looks the same.

So this adaptor is useless for you, unless you legitimately have Thunderbolt 2 devices you wish to continue using – but these tend to be a minority of the Apple user base whom purchased things like disk arrays or the Apple Cinema Display (which is Thunderbolt, not Mini DisplayPort).

If you want to connect directly to a DisplayPort screen, there are third party cables available which will do so – just remember they will consume a whole USB-C port and not provide data and power. So adding 2x screens using these sorts of adaptors to the entry level Macbook isn’t possible since you’ll have no data ports and no power left! The 4x port machines make it more feasible to attach multiple displays and use the remaining ports for other use cases.

The other option is one of the various third party USB-C/Thunderbolt-3 docks. I’d recommend caution here however – there are a number on the market that don’t work properly with MacOS (made for Windows boxes) and a lot of crap “first to market” type offerings that aren’t really any good.

 

My recommendation is that if you buy one of these machines, you should ideally make the investment in a new native USB-C 4K or 5K panel at the same time. Apple recommend two different LG models (the UltraFine 4K and 5K) which look pretty good.

There is no such thing as the Apple Cinema Display any more, but these would be their logical equivalent now. These screens connect via USB-C, power your laptop (so no need to spend more on a charger, you can use the one that ships with the laptop as your carry around one) and features a 3x USB-C hub in the back of the screen.

If you’re wanting to do multiple displays, note that there are some limits:

  • 13″ Macbook Pros can drive a single 5K panel or 2x 4K panels.
  • 15″ Macbook Pros can drive two 5K panels or 4x 4K panels.

Plus remember if buying the entry level 13″, having two screens would mean no spare ports at all on the unit – so it would be vital to make sure the screens can power the machine and provide additional ports.

Also be aware that just because the GPU can drive this many panels, doesn’t mean it can drive them particularly well – don’t expect any 4K gaming for example. My high spec iMac 5k struggles at times to drive its one panel using an AMD Radeon card, so I’m dubious about the Intel chipset in the new Macbooks being able to drive 2x 4K panels.

 

 

 

So recommendations:

If you need maximum portability, I’d still recommend going for the Macbook 13″ Pro over the Macbook 12″ Retina. It’s slightly heavier (1.37kg vs 0.92kg) and slightly more expensive (NZ $2499 vs $2199), but the performance is far better and the portability is almost the same. The other big plus is that the USB-C in the Macbook Pro is also a Thunderbolt-3 port, which gives you much better future proofing.

If you need a solid work horse for a DevOps engineer, the base Macbook Pro 13″ model is fine. It’s a good size for carrying around for oncall and 16GB of RAM with a Core i5 2.0Ghz is perfectly adequate for local terminals, IDEs and browsers. Anything needing more grunt can get pushed to the cloud.

No matter what model you buy, bump it to 16GB RAM. 8GB isn’t going to cut it long term and since you can’t expand later, you’ll get better lifespan by going to max RAM now. I’d rate this more worthwhile than buying a better CPU (don’t really need it for most workloads) or more SSD (can never get enough SSD anyway, so just overflow into iCloud).

If you somehow can’t live with only 16GB of RAM and need 32GB you’re kind of stuck. But this is a problem across most portable lines from competitors currently – 32GB RAM is too power hungry with the current gen CPUs and memory. If you need that much memory locally you’ll have to look at the iMac 5k (pretty nice) or the Mac Pro series (bit dated/overly expensive) to get it on a Mac.

 

So is it a good machine? I think so. I feel the main problem is that the machine is ahead of the rest of the market, which means lots of adaptors and pain until things catch up and everything is USB-C. Apple themselves aren’t even ready for this machine – their current flagship iPhone still ships with a cable for the older USB-A connector rather than USB-C, which leads to an amusing situation where the current gen iPhone and current gen Macbook Pro can’t be connected without first purchasing a dongle.


Quake 2016-11-14

We had a pretty large quake last night – everyone OK here, but a bit of a shock. The cats weren’t too happy either and after fleeing the house at high speed they spent most of the night hiding outside unwilling to come back in.

Still getting considerable number of aftershocks, some of them quite strong. Minimal damage thankfully. Really happy we bolted the TV to the wall a while back with a really solid bracket.

We were dangerously close to having the iMac fly off the desk however, I think the only thing that saved it was that it was trying to go in the opposite direction of the power cord. Surprised that the speakers stayed upright since they aren’t bolted to the floor, but they did OK.

A very expensive accident narrowly averted



DevOpsDays NZ 2016

I recently spoke at the inaugural DevOpsDays NZ in Wellington. The team who put together the conference did an amazing job and it’s one of the few conferences that I’ve really enjoyed recently. If they put together a subsequent conference next year, I recommend attending if possible.

I presented about our DevOps practices and tooling at Fairfax Media / stuff.co.nz, which you can find in the recording below:

 

Whilst the vast majority of the content of the conference was really good, the following were clear standouts to me that I recommend watching:

You can find these (and other) presentations from the conference on this Youtube page.
