Installing Cloud Pipes

One of the essential upgrades for the house has been the installation of computer network data cabling throughout the house. Whilst some helpful individual went to the effort of installing phone jacks in almost every room, an analogue phone line in every room isn’t that useful to me in 2014, so I decided to get the place upgraded with something a little more modern.

A few people have asked me why I didn’t go entirely WiFi instead – granted, it’s a valid question given that most devices now come with WiFi and no wired ethernet (curse you, Apple MacBooks), but I figured there are still some good reasons to run cables through the house:

  1. WiFi still needs copper cables to backhaul data, and even if mesh networking evolves to be good enough to eliminate the backhaul links, there’s no wireless power yet, so PoE is damn handy.
  2. With ports in every room, I can always plug in more APs in any room to get better coverage. Could be very handy if new tech like WiGig takes off, which may require an access point in each room due to poor performance through walls.
  3. Current WiFi tech is acceptable for transferring today’s HD video content over the network, but it’s not going to handle ultra-high-def content like 4K footage very well.
  4. The Cat6 cabling I’ve installed should be capable of up to 10Gbit speeds. It’s going to take us a while to get 10Gbit with wireless.

Only time will tell if I was as foolish as those who installed coaxial cabling for their 10Mbit networks before they got bitten by the uptake of Cat5, but I suspect it should be useful for another 10-20 years at least. After that, who knows….

Since I’m putting the cabling into existing clad rooms, I lack the convenience of a newly built or renovated property where I can simply run cables at my leisure and put the plasterboard up afterwards. On the plus side, unlike a modern house, mine has a number of gaps from items like old brick chimneys that have been removed and a number of walls that lack the conventional 2×4 horizontal studs, which offers some places where cables can be dropped from ceiling to floor uninterrupted.

With this in mind, I gathered my tools, geared up with confidence, and set off on my cabling adventure.

Open wide

“Trust me, I know what I’m doing”

Only one problem – a quick look up into the attic confirmed for me that “yes, I certainly do dislike heights” and “yes, I also really do dislike confined spaces that are dark and smell funny”.

To overcome these problems, I recruited the services of my future father-in-law, who has a lot of experience running TV antenna cabling and so is pretty comfortable moving around an attic and drilling holes into walls. Thankfully he agreed to assist me (thanks Neville!!) with climbing up and around the attic, which allowed me to move on to the next step – getting the cables in.

It's high up here :-/

3.5 metre ceilings mean you need to get comfortable with working at the top of a ladder

I decided that I wanted 4x cables into the lounge, 4x into the back bedroom/office, and then 2x into the other 3 bedrooms. I pondered just putting 4x everywhere, but had a whole bunch of 2x plates and I figure the bedrooms aren’t likely to host a small server farm any time soon. Again, one of those things where I might be cursing myself in the future, or might not ever be an issue.

The small bedroom was the easiest – being part of the original house, there were no studs from floor to ceiling, so we could simply drop the cables right down and cut a hole at the bottom for the ports. Easy!

The older walls never needed studs, since they’re lined with about 10mm of solid sarking timber made from native Rimu hardwood. These planks hold the building together tightly and eliminate the need for the more modern practice of horizontal studs in the walls. The upside of this sarking is that the place is solid and you can put weight-bearing screws into almost any part of the wall, and screw the faceplates for power and data directly into the wall without needing flush boxes. The downside is that the hole saw takes some effort to cut through this native hardwood, and it also means that WiFi penetration between the rooms isn’t great – in fact, when the WiFi access point was sitting in a cupboard before it got properly installed, I struggled to maintain connections to some locations due to the thick walls.

Hello data!

If you look carefully, you can see the thickness of the walls with the plasterboard + sarking.

The back rooms – bedroom, office and lounge – were a bit more challenging. Due to structural blockages like support beams and horizontal studs in the younger office and bedroom renovation, it wasn’t simply a case of dropping the cable down from the roof into each room. Instead, we ran the cables through the roof and then down together in a single bundle, through the space that was once occupied by the brick chimney, to get them from the roof to under the house. Once under, we were able to run them to the required rooms and pop the cables back up by drilling a hole into the middle of each wall from under the house.

Cloud Pipes!


The master bedroom took some creativity, but we found that the hall cupboard backing onto the bedroom was an easy-to-target location and dropped the cables down there, before doing some “creative re-construction” of the cupboard walls to get the cables down and through to the other side in the bedroom.

Even this coat cupboard needs to be GigE connected


Running the cables was a three-step task. First we ran a single cable, feeding it out of the reel until it reached the desired location. This is required to determine the length of cabling needed – although if we were willing to waste cable and coil the excess in the roof, we could have made generous estimates, skipped this step, and just run a draw wire from the start.

We then removed the cable, pulling a draw wire through after it. In my case we used some old Cat5 cable as the draw wire, since it’s nice and tough. Once the original cable was recovered, we cut the required number of lengths exactly the same, then created bundles of cable as needed.

We used electrical tape to bind them together (since it doesn’t add any thickness to the bundles, unlike cable ties, which get stuck in holes) and made sure to stagger the cables so that there wasn’t a single large flat end trying to get through the holes.

Once the bundles are ready, it’s just a case of attaching each one to the draw wire and pulling the draw wire through, with the new bundle following behind. Make sure you’ve drilled your holes big enough to fit the whole bundle!

 

The easiest installation was that of the roof-mounted WiFi access point (Ubiquiti UniFi UAP-AC). Since it just needed a single cable run along the attic to the patch panel, we simply drilled a hole up into the attic and fed the cable straight from the reel – no need to mess around with draw wires.

I suspect we could have done this for all the rooms in the house just as easily, so if you decide you want to invest in WiFi APs in every room rather than wired ethernet ports, you’d have a much easier time putting the APs up.

Leave plenty of length so you don't have to crimp a cable above your head

Drill hole, feed cable, doesn’t get much simpler than this.

Rather than a socket, the roof cable is terminated with an RJ45 plug which connects directly into the back of the AP, which then fits snugly against the roof, hiding all the cabling.

The end result looks quite tidy. I was worried about the impact on the character ceilings, and part of me did feel bad putting the drill through them, so I took care to keep the holes to the absolute minimum to ensure they could be patched without too much grief in future.

All done!

The blue-sun god we worship for internet access.

The blue square on the AP is visible in the dark, but doesn’t light up the area. It’s bright enough that I’d think carefully about putting it in a bedroom unless there’s a way to turn it off in software.

 

Whilst the roof mount AP had an RJ45 port due to space constraints and aesthetics, all the room cables have been terminated at proper RJ45 jacks.

Hardware

PDL Cover, Grid (sold together), Third Party RJ45 keystone and PDL RJ45 clip.

My house is primarily PDL 500/600 series faceplates, which means I ended up sourcing the same PDL products for the data faceplates. The faceplates aren’t too outrageous at about $6 each (2-port / 4-port), but PDL charge a whopping $13.80 for their RJ45 data inserts. Given that I installed 14 sockets, that would cost $193.20 for the ports, which is crazy… I could buy several entire patch panels for that.

Fortunately the RJ45 inserts themselves (the keystones) are made by a number of vendors – you just need the PDL keystone clip to join any third-party keystone to the PDL faceplates.

Hence I sourced 14x of the PDL clips at $1.38 each ($19.32 total) and then sourced 3x 5-packs of keystones on TradeMe from a seller “digitalera” for $9 per pack ($27). This brought my total spend for the RJ45 data inserts to $46.32… much, much cheaper!
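The saving is easy to sanity-check with a throwaway shell calculation using the prices above:

```shell
# Cost comparison for 14 RJ45 inserts (prices in NZD, from the quotes above)
official=$(awk 'BEGIN { printf "%.2f", 14 * 13.80 }')   # 14x PDL RJ45 inserts
clips=$(awk 'BEGIN { printf "%.2f", 14 * 1.38 }')       # 14x PDL keystone clips
keystones=$(awk 'BEGIN { printf "%.2f", 3 * 9 }')       # 3x 5-packs of keystones
diy=$(awk "BEGIN { printf \"%.2f\", $clips + $keystones }")
echo "PDL inserts: \$$official vs clips + third-party keystones: \$$diy"
```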

Result

The finished product.

All these cables terminate back in a small cupboard in the hallway, into a 24-port patch panel. There’s a bit of space left over on it for future expansion as it occurs.

Cabling!

All patched and all ready to go

The patch panel is installed in a wall-mount 9RU 19″ comms cabinet. I went for a 9RU, 30cm-deep model, since it offers plenty of space for PDUs and cabling, and can also fit lots of models of switches and routers with room to spare.

Data & Power sorted!


Cable management with this setup has been a little tricky. Usually I’d prefer a cable management bar for the patch panel and another for each switch – an arrangement that is expensive on space, but the easiest and tidiest.

Unfortunately I quickly found that putting cable management bars in a roof-height cabinet limits visibility too much – they block the view of the switch ports and patch panel labels, since you can’t look at the cabinet face-on, only upwards from the top of a ladder.

The approach I’ve ended up with is therefore a little unconventional, with a cable management bar at the very top of the rack, and cabling going up into it from the patch panel and then back down to the switch.

The downside of this approach is that cables cross the patch panel to get to the switch (arghgh, I know!), but the upside is that I can still see all the other switch ports and patch panel ports and it’s still quite readable. I’ll understand if I’m kicked out of the data centre cabling perfection club, however.

There’s still 5RU spare for some shelves for devices like a VDSL modem or a dedicated router, but given that the Mikrotik CRS226-24G-2S+RM RouterOS-based switch can do almost everything I need, including almost 200Mbit/s of routing capability, there’s no plan to add much more in there.

Currently the power and server data runs come down to the floor, but next time I have an electrician on-site doing work, I’ll get a mains socket installed up by the cabinet to save a cable run, and potentially add a very shallow rackmount UPS to run the cabinet.

Finished! For now...

Cabling and equipment installed!

The final step was making sure everything actually worked. For that I used a $5 cable tester I picked up off TradeMe – it has nothing on a fancy brand like Fluke, which can measure the length of cable runs and identify the pin-out in use, but for a casual home installation it was great!

Remote control

Testing the cabling jobs – the meter runs through each wire in order so you can detect incorrectly punched cables or incorrect arrangements of the wires at either end.

 

I had most of the tools needed on hand already. If you’re tempted to do similar, you’re going to need the following:

  1. A decent electric drill.
  2. A hole saw (goes into your drill, makes big holes in walls). You need this to make the opening for your wall plates with enough room to sit all the RJ45 modules into the wall.
  3. Regular drill bits if you’re going up through the ceiling into the roof for WiFi APs – just need something large enough for a Cat6 cable and no more.
  4. An auger drill bit if you want to drill holes suitable for running bundles of cables through solid wood beams. Having a bit big enough to fit all the cables in your bundle plus a bit of slack is good.
  5. A punch-down tool – this is what you use to connect each wire in the patch panel and RJ45 wall modules. It’s worth buying a reasonable-quality one; I had a very cheap (~$5) unit which barely survived to the end of the build, since you tend to put quite a bit of force on them. The cheap tool’s cutter was so bad I ended up using a separate wire cutter to get the job done, so don’t make my mistake – get something good.
  6. A good-quality crimping tool – this will allow you to terminate RJ45 plugs (needed if you want to terminate to a plug, rather than a socket, for roof-mount access points), and they also tend to include a cable stripper perfectly sized to strip the outer jacket of Cat5/6 cable. Again, don’t scrimp on this tool – I have a particularly solid model which has served me really well.
  7. Needle nose pliers or wire cutters – you need something to cut out the solid plastic core of the Cat6 cable. You can do it in the crimping tool, but often the wire cutter or pliers are just easier to use.

And of course materials:

  1. A reel of Cat6 Ethernet. Generally comes in 305m boxes.
  2. A roll of black electrical tape, you’ll want to use this to attach guide cables, and to bundle cables together without adding size to the cabling runs.
  3. Cable ties are useful once you get cables into position and want tight permanent bundling of cables.
  4. RJ45 plugs if you are terminating to a plug.
  5. RJ45 modules and related wall plate hardware.
  6. Pipe/saddle clips can be useful for holding up cables in an orderly fashion under the house (since they’re designed for pipes, they’re big enough to fit cable bundles) and they’re great for avoiding cables running across the dirt.

Note that whilst there are newer standards like Cat 6a and Cat7 for 10 GigE copper, Cat6 is readily available in NZ and is rated to do 10GigE to a max of 35-50m runs, generally well within the max length of any run you’ll be doing in a suburban house.


Settling In

This blog has been a little quiet lately, mostly thanks to Lisa and I being busy adjusting to the joys of home ownership with our new house we moved into in mid-September!


I’m a trustworthy, reputable resident of Wadestown now!

It’s been pretty flat-out and a number of weeks have already passed us by very quickly. We had anticipated the increase in expenditure that comes with owning a property, but the amount of time it consumes is quite incredible – and given that the property hasn’t had a whole lot of love for the past 5 years or so, there’s certainly a backlog of tasks that need doing.

There’s also the unexpected “joys” that come with ownership, like the burst water pipe on our first day in the new house, or the one hob on the cooker that appears to like leaking gas when it’s used, or the front door lock that broke after a few weeks of use. For the first time ever, I almost miss having a landlord to complain to – however, the enjoyment of putting a power drill through your first wall without requiring permission cannot be overstated either.

 

Amusingly, despite becoming home owners, it’s actually been the outdoors that’s been occupying most of my time, with large masses of plant life that have crept over the sheds, the paths and into the roof gutters. I cleared 8 wheelbarrows of soil and plant material off the upper path the other day and it’s barely made a dent.

Rediscovering the lower pathway slowly...


So far I’ve been mostly concerned with the low-level plants; I haven’t even begun to look at the wall of trees and ferns around us – a lot of them are great and we will keep them, but a few certainly need some pruning back to make them a bit tamer and let a bit more light into the property.

Ferns in the mist. Pretty kiwi as bru.


I’ve been discovering the awesome range of power tools that exist these days – tools have come a long way from the days of my father’s corded drill. I’ve now got drills, sanders and even a weedeater/line trimmer which all share the same cordless battery pack!


Got 99 problems but wires ain’t one. Cordless freedom baby!

I’ve had to learn some new skills, like how to use a saw or how to set a post in the ground. Of course I cheated a bit by using ready-to-pour fastcrete, but hey, I’m a lazy Gen Y-er who wants the fastest, easiest way to make something work. ;-)

Hole digging

Harder than it looks. Stupid solid clay ground :-(

I also have two sheds that I need to do up. The first is in pretty good shape and just needs some minor fixes and a paint job – it’s even got power already wired up, so you can plug in your tools and go :-)

The second shed is in a far worse state and pretty much needs complete stripping down and repairing, including a whole new floor and getting rid of an almost metre-high pile of detritus that has collected around the back of it over the past 100 years. Helpfully, some trees also decided to plant themselves and grow right next to it as well.

The older shed, pretty but somewhat unusable without some hard work.

The older shed and upper pathway after tidying up the overgrowth.

 

The house is thankfully in a better state than the garden and sheds, although there is certainly a lot of work needed in the form of overdue maintenance and improvements. The house was built in 1914 (its 100th birthday this year!), but thankfully despite the age of the property, the hardest and most essential modernisation has been done for us already.

There’s been a complete replacement of the electrical systems with modern cabling, and both the structure and interior are in good shape, with the original Totara piles having been replaced and whatever scrim wall linings previously existed having been swapped for plasterboard.

Most of the interior decor is playing it safe with neutral coloured walls, carpet and curtains and the native timber exposed on the doors and skirting. However there are a few garish items remaining from an earlier era where style wasn’t as important, like the lovely maroon tiled fireplace or the cork flooring in the kitchen :-/

The Lounge: Where 2014 meets with 1970 head on.


 

Generally the property is nice and everyone who comes over describes it as lovely – but of course nobody tells you if your baby is ugly, so it’s entirely possible everyone is questioning our tastes behind our backs… But give it time, we have a lot of plans for this place that are yet to be actioned!

Our primary task right now is dragging our 20th century house into the 21st century with a few modern requirements like data cabling, heating and decent lighting.

Oddly enough I’ve already started on the data side of things, getting Cat6 ethernet cable run through the house to all the living spaces and roof mounting a WiFi AP and installing a proper comms cabinet. Priorities!

The next major issue is heating – the house has an old wood fire and an old unflued gas heater, both of which look pretty dubious. We’ve left them alone and have been using a few recently installed panel heaters, but we need to consider a more powerful whole-house solution, like a modern gas fireplace, to handle the cold Wellington winters.

Power drills! Holes in walls! This is what home ownership is all about.


In addition to heaters, we also need to fix up the shocking lack of insulation that is common with New Zealand properties. Whilst we have roof insulation already, the floor needs insulating and at some point there is going to be a very expensive retro-fit double glazing cost we need to investigate as well.

 

Aside from these immediate priorities, there’s the question of changes to the layout. The biggest annoyance for us right now is that the kitchen/dining space and the lounge are two separate rooms with a bedroom in-between, which doesn’t really suit modern open plan living so we are pondering the cost of knocking out a wall and re-arranging things to create a single open plan living area.

Additionally, we have a really small bathroom, yet we have a massive laundry about twice the size just through the wall. Considering the laundry has almost nothing but a single lonely washing machine in it, it’s a prime candidate for being annexed as a massive new bathroom.

The tiny wooden cabin bathroom.

The tiny “wooden cabin” bathroom. If it wasn’t for the skylight and our character 12 foot ceilings, it would be really dark and tiny in there. :-/

We are also thinking about how we can improve the outdoor area, which is a bit weirdly organised, with a large patio detached from the house and the back deck being a tiny strip that can’t really fit much. We’re already pondering extending the deck out further, then along the full length of the house, so we can join up with the lower patio and make it a more usable space.


World’s tiniest deck, not exactly that useful…

Of course all these improvements require a fair bit of capital, which is one thing we don’t have much of right now thanks to the home loan, so it’s going to take some careful budgeting and time to get to where we want to be. For now, we are just enjoying having the place and plotting…..

 

Aside from the garden and sorting out house improvements, the other major time sink has been unpacking. We didn’t exactly have heaps of stuff, given that we just had limited bits stored at each other’s parents’ houses, so it’s pretty scary how much has emerged and arrived at our new house. I think everyone was kind of glad to get our junk out of their houses at long last, although I’m sure my parents will miss the file server buzzing away 24×7.

It’s been a bit of a discovery of lots of stuff we didn’t realise we had – I literally have a small data centre’s worth of tech gear, including rackmount PDUs, routers, switches and other items.


I know what you’re thinking “Oh how typical of Jethro, boobs on a box” – but this one ISN’T MINE, it came out of Lisa’s parents house which kind of disturbs me deeply.

This is probably the biggest negative of home ownership for me – I hate owning stuff. And owning a house is a sure way to accumulate stuff very, very quickly.

Owning a house means you have space to just “store that in the cupboard for now”. Being a couple in a large 4 bedroom home means there’s a lot of space and little pressure to use it, so it’s very easy for us to end up with piles of junk that actually doesn’t serve a purpose and not feel forced to clean it out.

I came back from AU with two suitcases, and I could probably have culled that down to as little as one suitcase given the chance. There’s a huge amount of tech gear I’m considering offloading, and Lisa has a massive pile of childhood stuff to make some hard decisions about – because as hard as it is to get rid of things, I think both of us are keen to avoid ending up in the same hoarding situation as our parents.

Of course some stuff can’t be avoided. I’ve spent a small fortune at Bunnings recently obtaining tools and materials to do repairs and other DIY for the house, so there’s a lot of additions to the “stuff I have to own but hate having to own” pile.

We also needed to purchase all new furniture since we had essentially nothing after returning from Sydney. I don’t mind buying a few quality pieces, but sadly it seems impossible to buy a house load of furniture without also obtaining an entire shed worth of cardboard and polystyrene packaging that we need to dispose of. Sorry environment! :-(

Trapped by packaging.


We’ve gotten through most of the unpacking, but there’s still a lot of sorting and finding homes for things left to do.

I’m looking forwards to getting to the point where I can just enjoy the house and the space we have. It should be fantastic during summer, especially for entertaining guests, with its large backyard, patio and sunny afternoons – and I’m really looking forwards to having a proper home office setup again for my geeking needs.

Oh how I've missed a home office!

Got my home office! If only I had money for computer upgrades left :-(

 

So that’s an update on where we are at for now. It’s going to be a busy year I think with a lot of time spent doing up the place, and I’ll have plenty more blog posts to come on the various adventures along the way. I suspect many of them are going to be quite low-tech compared to the usual content of this blog, but maybe I’ll wake up and suddenly decide that home automation is an immediate vital task I need to complete. ;-)

If you want some more pictures of the house, there’s a copy of all the real estate agent’s listing photos on my Pinterest account, taken by an actual competent photographer. The plan is to try to take pictures along the way as we progress with our improvements to the property, to see the progress we’ve been making.


First thoughts and tips on EL 7

Generally one RHEL/CentOS/Scientific Linux (aka EL) release isn’t radically different to another. However, the introduction of EL 7 is a bit of a shake-up: it introduces systemd, which means a new init system and new ways of reading logs, plus it drops some older utilities you may rely on and introduces new defaults.

I’m currently going through and upgrading some of my machines, so I’ve prepared a few tips for anyone familiar with EL 4/5/6 who is getting started with the move to EL 7.

 

systemd

The big/scary change introduced by RHEL 7 is systemd – love it or hate it, either way it’s here to stay. The good news is that an existing RHEL admin can keep doing most of their old tricks and existing commands.

Red Hat’s “service” command still works and now hooks into either legacy init scripts or new systemd units. And rather than forcing everyone onto the new binary logging format, RHEL 7 logs messages both to the traditional syslog plain text files and to the new binary journal that you can access via journalctl – so your existing scripts and grep recipes will work as expected.

Rather than write up a whole bunch about systemd, I recommend you check out this blog post by CertDepot which details some of the commands you’ll want to get familiar with. The Fedora wiki is also useful and details stuff like enabling/disabling services at startup time.
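As a quick cheat-sheet – not exhaustive, and using sshd purely as an example service – the old and new commands map roughly like this:

```shell
# EL 6 style (still works via compatibility wrappers):
service sshd status
chkconfig sshd on

# EL 7 native systemd equivalents:
systemctl status sshd
systemctl enable sshd      # start at boot
systemctl restart sshd

# Logs: the traditional plain text files are still there...
tail /var/log/messages
# ...or you can query the binary journal directly:
journalctl -u sshd --since today
```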

I found the transition pretty easy and some of the new tricks like better integration between output logs and init are nice changes that should make Linux easier to work with for new users longer term thanks to better visibility into what’s going on.

 

Packages to Install

The EL minimum install lacks a few packages that I’d consider key, you may also want to install them as part of your base installs:

  • vim-enhanced – No idea why this doesn’t ship as part of the minimum install; as a vim user, it’s very, very frustrating not having it.
  • net-tools – this provides the traditional ifconfig/route/netstat family of network tools. Whilst EL has taken the path of trying to force people onto the newer iproute tools, there are still times you may want the older ones, such as for running older shell scripts that haven’t been updated yet.
  • bind-utils – Like tools like host or nslookup? You’ll want this package.
  • mailx – Provides the handy mail command for when you’re debugging your outbound mail.
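All four can be pulled in with a single command on top of the minimum install:

```shell
yum install -y vim-enhanced net-tools bind-utils mailx
```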

 

Networking

Firstly, be aware that your devices might no longer be simply named ethX, as devices are now named based on their type and role. Generally this is an improvement, since the names should line up more with the hardware on big systems for easier identification, and you can still change the device names if you prefer something else.

Changing the hostname will cause some confusion for long-time RHEL users – rather than a line in /etc/sysconfig/network, the hostname is now configured in /etc/hostname like on other distributions.
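You can edit /etc/hostname directly, but EL 7 also ships hostnamectl, which handles it for you (example hostname shown):

```shell
# Set the static hostname (updates /etc/hostname for you)
hostnamectl set-hostname myserver.example.com

# Review the current static/transient/pretty hostnames
hostnamectl status
```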

The EL 7 minimum installation now includes NetworkManager as standard. Whilst I think NetworkManager is a fantastic application, it doesn’t really have any place on my servers where I tend to have statically configured addresses and sometimes a few static routes or other trickiness like bridges and tunnels.

You can disable NetworkManager (and instead use the traditional static “network” service) by running the following commands:

systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl restart network

Red Hat have documentation on doing static network configuration, although it is unfortunately weak on the IPv6 front.
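For reference, a minimal dual-stack static configuration looks something like the following – all addresses below are example values, adjust to suit your network (ifcfg files are shell-style key=value fragments):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none              # static IPv4, no DHCP
IPADDR=10.8.5.10
PREFIX=24
GATEWAY=10.8.5.1
DNS1=10.8.5.1
IPV6INIT=yes                # static IPv6 on the same interface
IPV6ADDR=2001:db8:5::10/64
IPV6_DEFAULTGW=2001:db8:5::1
```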

Most stuff is the same as older versions, but the approach to configuring static routes bit me. On EL 5 you configured a /etc/sysconfig/network-scripts/route-ethX file to define both the IPv4 and IPv6 routes that should be created when that interface comes up. With EL 7 you now need to split the IPv4 and IPv6 routes apart, otherwise you just get a weird error when you bring the interface up.

For example, previously on an EL 5 system I would have had something like:

# cat /etc/sysconfig/network-scripts/route-eth1
10.8.0.0/16 via 10.8.5.2 dev eth1
2001:db8:1::/48 via 2001:db8:5::2 dev eth1
#

Whereas you now need something like this:

# cat /etc/sysconfig/network-scripts/route-eth1
10.8.0.0/16 via 10.8.5.2 dev eth1
#

# cat /etc/sysconfig/network-scripts/route6-eth1
2001:db8:1::/48 via 2001:db8:5::2 dev eth1
#

Hopefully your environment is not creative enough to need static routes around the place, but hey, someone out there might always be as crazy as me.

 

Firewalls

EL 7 introduces FirewallD as the default firewall application – it offers some interesting sounding features for systems that frequently change networks such as mobile users, however I’m personally quite happy and familiar with iptables rulesets for my server systems which don’t ever change networks.

Fortunately the traditional raw iptables approach is still available. Red Hat dragged their existing iptables/ip6tables service scripts over into systemd, so you can still save your firewall rules into /etc/sysconfig/iptables and /etc/sysconfig/ip6tables respectively.

# Disable firewalld:
systemctl disable firewalld
systemctl stop firewalld

# Install the iptables service scripts
yum install -y iptables-services
systemctl enable iptables
systemctl enable ip6tables
systemctl start iptables
systemctl start ip6tables

 

LAMP Stack

  • Apache has been upgraded from 2.2 to 2.4. Generally things are mostly the same, but some modules have been removed which might break some of your configuration if you take a lift+shift approach.
  • MySQL has been replaced by MariaDB (community developed fork) which means the package names and service have changed, however all the mysql command line tools still exist and work fine.
  • PHP has been upgraded to 5.4.16, which is already a little dated – over the lifespan of EL 7 it’s going to feel very dated very quickly, so I hope Red Hat puts out some php55 or php56 packages in future releases for those who want to take advantage of the latest features.
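The most common lift+shift breakage I’d expect from the Apache upgrade is the access control change in 2.4 – the old 2.2 Order/Allow directives now require the mod_access_compat module, or better, conversion to the new Require syntax:

```
# Apache 2.2 style (fails on 2.4 unless mod_access_compat is loaded):
<Directory "/var/www/html">
    Order allow,deny
    Allow from all
</Directory>

# Apache 2.4 equivalent:
<Directory "/var/www/html">
    Require all granted
</Directory>
```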

 

Other Resources

  1. If you haven’t already, check out Red Hat’s release notes – they detail heaps of new features and additions to the platform.
  2. To learn more about the changes from previous releases, check out Red Hat’s Migration Guide as a starter.
  3. My guide to running EL 7 on EL 5 as a Xen guest for those of you running older Xen hypervisors.

MacOS TTY limit

I’m currently trialling the use of MacOS as a primary workstation on my work laptop. I’m probably a bit of a power user, and MacOS isn’t all that happy with some of the things I throw at it.

Generally my activities tend to involve a vast number of terminals – one day I suddenly started getting the following error when trying to create new sessions inside of iTerm2:

Unable to Fork iTerm cannot launch the program for this session.

Turns out I had managed to exhaust the number of tty sessions configured by default in the Darwin kernel (127 max). Thankfully as per this helpful error report it’s generally pretty easy to resolve:

# Change the current value for the running kernel
sudo sysctl -w kern.tty.ptmx_max=255

# Add the following to /etc/sysctl.conf to make it permanent:
kern.tty.ptmx_max=255

I am liking the fact that although some of what I do is a bit weird for MacOS, at least there is a UNIX underneath it that you can still poke to make things happen :-)


Ruby Net::HTTP & Proxies

I ran into a really annoying issue today with Ruby and the Net::HTTP class when trying to make requests out via the restrictive corporate proxy at the office.

The documentation states that “Net::HTTP will automatically create a proxy from the http_proxy environment variable if it is present.” however I was repeatedly seeing my connections fail and a tcpdump confirmed that they weren’t even attempting to transit the proxy server.

Turns out that this proxy traversal only takes place if Net::HTTP is invoked as an object; if you invoke one of its methods directly, it ignores the proxy environment variables entirely.

The following example application demonstrates the issue:

#!/usr/bin/env ruby

require 'net/http'

puts "Your proxy is #{ENV["http_proxy"]}"

puts "This will work with your proxy settings:"
uri       = URI('https://www.jethrocarr.com')
request   = Net::HTTP.new(uri.host, uri.port)
response  = request.get(uri)
puts response.code

puts "This won't:"
uri = URI('https://www.jethrocarr.com')
response = Net::HTTP.get_response(uri)
puts response.code

Which will give you something like:

Your proxy is http://ihateproxies.megacorp.com:8080
This will work with your proxy settings:
200
This won't:
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/net/http.rb:878:in `initialize': No route to host - connect(2) (Errno::EHOSTUNREACH)
    from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/net/http.rb:878:in `open'
    from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/net/http.rb:878:in `block in connect'
    from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/timeout.rb:52:in `timeout'
    from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/net/http.rb:877:in `connect'
    from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/net/http.rb:862:in `do_start'
    from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/net/http.rb:851:in `start'
    from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/net/http.rb:582:in `start'
    from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/net/http.rb:477:in `get_response'
    from ./proxyexample.rb:18:in `<main>'

Very annoying!


Create MacOS Mavericks Installer

Whilst Apple’s hardware has a clever feature where you can re-install the operating system directly from the internet (essentially a netboot install from Apple’s servers), it’s not always suitable if you need to install a machine offline or via a slow/expensive connection.

Fortunately Apple provides Mavericks as a .dmg file download which you can get from the App Store – whilst that .dmg itself isn’t bootable (sadly), you can use a binary tool Apple provides inside it to generate installer media onto a USB drive.

Firstly download the Mavericks installer from the App Store:


Proprietary Evil. Shiny shiny proprietary evil.

Then format a USB drive (at least 8GB) to have a single partition of type “Mac OS Extended (Journaled)”, with a partition name of “InstallMe”.

Now you’ll either have a Mavericks installer inside your applications directory, or on your desktop as a dmg file. If on the desktop, mount the dmg. Once done, in your terminal you can run the installer application to generate an installer:

sudo /Applications/Install\ OS\ X\ Mavericks.app/Contents/Resources/createinstallmedia --volume /Volumes/InstallMe --applicationpath /Applications/Install\ OS\ X\ Mavericks.app --nointeraction

(Replace /Applications with the path to the mounted dmg if installing from inside that).

You’ll see some output as it writes to the USB stick; it can take a while if your USB stick isn’t that fast.

Erasing Disk: 0%... 10%... 20%... 100%...
Copying installer files to disk...
Copy complete.
Making disk bootable...
Copying boot files...
Copy complete.
Done.

Once done, you can reboot and by holding down option you can select the USB stick to install from.

Thanks to this forum post for posting the original answer – there are a lot of long convoluted processes mentioned on the web, but this is by far the easiest one out of all the options I found.


Installing EL7 onto EL5 Xen hosts

With RedHat recently releasing RHEL 7 (and CentOS promptly getting their rebuild out the door shortly after), I decided to take the opportunity to start upgrading some of my ageing RHEL/CentOS (EL) systems.

My personal co-location server is a trusty P4 3.0Ghz box running EL 5 for both host and Xen guests. Xen has lost some popularity in favour of HVM solutions like KVM, however it’s still a great hypervisor and can run Linux guests really nicely even on hardware as old as mine that lacks HVM CPU extensions.

Considering that EL 5, 6 and 7 are all still supported by RedHat, I would expect that installing EL 7 as a guest on EL 5 should be easy – and to be fair to RedHat it mostly is, the installation was pretty standard.

Like EL 5 guests, EL 7 guests can be installed entirely from the command line using the standard virt-install command – for example:

$ virt-install --paravirt \
 --name MyCentOS7Guest \
 --ram 1024 \
 --vcpus 1 \
 --location http://mirror.centos.org/centos/7/os/x86_64/ \
 --file /dev/lv_group/MyCentOS7Guest \
 --network bridge=xenbr0

One issue I had is that the installer no longer prompts for the network information used to download the rest of the installer, and instead assumes you have a DHCP server – an assumption that isn’t always correct. If you want to force it to use a static address, append the following parameters to the virt-install command:

 -x 'ip=192.168.1.20 netmask=255.255.255.0 dns=8.8.8.8 gateway=192.168.1.1'

The installer will proceed and give you an option to either use VNC to get a graphical installer, or to accept the more basic/limited text mode installer. In my case I went with the text mode installer; generally this is fine for average installations, except that it doesn’t give you a lot of control over partitioning.

Installation completed successfully, but I was not able to subsequently boot the new guest, with an error being thrown about pygrub being unable to find the boot partition.

# xm create -c vmguest
Using config file "./vmguest".
Traceback (most recent call last):
  File "/usr/bin/pygrub", line 774, in ?
    raise RuntimeError, "Unable to find partition containing kernel"
RuntimeError: Unable to find partition containing kernel
No handlers could be found for logger "xend"
Error: Boot loader didn't return any data!
Usage: xm create <ConfigFile> [options] [vars]

 

Xen works a little differently than VMWare/KVM/VirtualBox in that it doesn’t try to emulate hardware unnecessarily in paravirtualised mode, so there’s no BIOS. Instead Xen ships with a tool called pygrub, that is essentially an application that implements grub and goes through the process of reading the guest’s /boot filesystem, displaying a grub interface using the config in /boot, then when a kernel is selected grabs the kernel and associated information and launches the guest with it.
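For reference, the guest configuration file that wires in pygrub looks something like this sketch (the name and device paths are from my example above – substitute your own):

```
# /etc/xen/vmguest
name       = "vmguest"
bootloader = "/usr/bin/pygrub"
memory     = 1024
vcpus      = 1
disk       = ['phy:/dev/lv_group/MyCentOS7Guest,xvda,w']
vif        = ['bridge=xenbr0']
```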

Generally this works well – certainly you can boot any of your EL 5 guests with it, as well as other Linux distributions with Xen paravirtualised compatible kernels (it’s merged into upstream these days).

However RHEL has moved on a bit since 2007, adding a few new tricks, such as replacing Grub with Grub2 and moving from the typical ext3 boot partition to an xfs boot partition. These changes confuse the much older utilities written for Xen, leaving pygrub unable to read the boot loader data and launch the guest.

The two main problems come down to:

  1. EL 5 can’t read the xfs boot partition created by default by EL 7 installs. Even if you install the optional xfs packages provided by centosplus/centosextras, you still can’t read the filesystem due to the version of xfs being too new for it to comprehend.
  2. The version of pygrub shipped with EL 5 doesn’t have support for Grub2. Well, technically it’s supposed to according to RedHat, but I suspect they forgot to merge in fixes needed to make EL 7 boot.

I hope that RedHat fix this deficiency soon – presumably there will be RedHat customers wanting to do exactly what I’m doing who will apply some pressure for a fix – however until then, if you want to get your shiny new EL 7 guests installed, I have a bunch of workarounds for those who are not faint of heart.

 

For these instructions, I’m assuming that your guest is installed to /dev/lv_group/vmguest, however these instructions should work equally for image files or block devices.

Firstly, we need to check the state of the /boot partition – we need to make sure it is an ext3 volume, or convert it if not. If you installed via the limited text mode installer, it will be an xfs partition; however if you installed via VNC, you might be able to change the type to ext3 and avoid the next few steps entirely.

We use kpartx -a and -d respectively to expose the partitions inside the block device so we can manipulate the contents. We then use the good ol’ file command to check what type of filesystem is on the first partition (which is presumably boot).

# kpartx -a /dev/lv_group/vmguest
# file -sL /dev/mapper/vmguestp1
/dev/mapper/vmguestp1: SGI XFS filesystem data (blksz 4096, inosz 256, v2 dirs)
# kpartx -d /dev/lv_group/vmguest

Being xfs, we’re probably unable to do much – if we install xfsprogs (from centos extras), we can verify it’s unreadable by the host OS:

# yum install xfsprogs
# xfs_check /dev/mapper/vmguestp1
bad sb version # 0xb4b4 in ag 0
bad sb version # 0xb4a4 in ag 1
bad sb version # 0xb4a4 in ag 2
bad sb version # 0xb4a4 in ag 3
WARNING: this may be a newer XFS filesystem.
#

Technically you could fix this by upgrading the kernel, but EL 5’s kernel is a weird monster that includes all manner of patches for Xen that were never included into upstream, so it’s not a simple (or even feasible) operation.

We can convert the filesystem from xfs to ext3 by using another newer Linux system. First we need to export the boot volume into an image file:

# dd if=/dev/mapper/vmguestp1 | bzip2 > /tmp/boot.img.bz2

Then copy the file to another host, where we will unpack it and recreate the image file with ext3 and the same contents.

$ bunzip2 boot.img.bz2
$ mkdir tmp1 tmp2
$ sudo mount -t xfs -o loop boot.img tmp1/
$ sudo cp -avr tmp1/* tmp2/
$ sudo umount tmp1/
$ mkfs.ext3 boot.img
$ sudo mount -t ext3 -o loop boot.img tmp1/
$ sudo cp -avr tmp2/* tmp1/
$ sudo umount tmp1
$ rm -rf tmp1 tmp2
$ mv boot.img boot-new.img
$ bzip2 boot-new.img

Copy the new file (boot-new.img) back to the Xen host server and replace the guest’s /boot volume with it.

# kpartx -a /dev/lv_group/vmguest
# bzcat boot-new.img.bz2 > /dev/mapper/vmguestp1
# kpartx -d /dev/lv_group/vmguest

 

Having fixed the filesystem, Xen’s pygrub will be able to read it, however your guest still won’t boot. :-( On the plus side, it throws a more useful error showing that it could access the filesystem, but couldn’t parse some data inside it.

# xm create -c vmguest
Using config file "./vmguest".
Using <class 'grub.GrubConf.Grub2ConfigFile'> to parse /grub2/grub.cfg
Traceback (most recent call last):
  File "/usr/bin/pygrub", line 758, in ?
    chosencfg = run_grub(file, entry, fs)
  File "/usr/bin/pygrub", line 581, in run_grub
    g = Grub(file, fs)
  File "/usr/bin/pygrub", line 223, in __init__
    self.read_config(file, fs)
  File "/usr/bin/pygrub", line 443, in read_config
    self.cf.parse(buf)
  File "/usr/lib64/python2.4/site-packages/grub/GrubConf.py", line 430, in parse
    setattr(self, self.commands[com], arg.strip())
  File "/usr/lib64/python2.4/site-packages/grub/GrubConf.py", line 233, in _set_default
    self._default = int(val)
ValueError: invalid literal for int(): ${next_entry}
No handlers could be found for logger "xend"
Error: Boot loader didn't return any data!

At a glance, it looks like pygrub can’t handle the special variables/functions used in the EL 7 grub configuration file, however even if you remove them and simplify the configuration down to the core basics, it will still blow up.

# xm create -c vmguest
Using config file "./vmguest".
Using <class 'grub.GrubConf.Grub2ConfigFile'> to parse /grub2/grub.cfg
WARNING:root:Unknown image directive load_video
WARNING:root:Unknown image directive if
WARNING:root:Unknown image directive else
WARNING:root:Unknown image directive fi
WARNING:root:Unknown image directive linux16
WARNING:root:Unknown image directive initrd16
WARNING:root:Unknown image directive load_video
WARNING:root:Unknown image directive if
WARNING:root:Unknown image directive else
WARNING:root:Unknown image directive fi
WARNING:root:Unknown image directive linux16
WARNING:root:Unknown image directive initrd16
WARNING:root:Unknown directive source
WARNING:root:Unknown directive elif
WARNING:root:Unknown directive source
Traceback (most recent call last):
  File "/usr/bin/pygrub", line 758, in ?
    chosencfg = run_grub(file, entry, fs)
  File "/usr/bin/pygrub", line 604, in run_grub
    grubcfg["kernel"] = img.kernel[1]
TypeError: unsubscriptable object
No handlers could be found for logger "xend"
Error: Boot loader didn't return any data!
Usage: xm create <ConfigFile> [options] [vars]

Create a domain based on <ConfigFile>

At this point it’s pretty clear that pygrub won’t be able to parse the configuration file, so you’re left with two options:

  1. Copy the kernel and initrd file from the guest to somewhere on the host and set Xen to boot directly using those host-located files. However, kernel updates in the guest then become a pain.
  2. Backport a working pygrub to the old Xen host and use that to boot the guest. This requires no changes to the Grub2 configuration and means your guest will seamlessly handle kernel updates.

Because option 2 is harder and more painful, I naturally chose to go down that path, backporting the latest upstream Xen pygrub source code to EL 5. It’s not quite vanilla – I had to make some tweaks to rip out a couple of newer features that were breaking it on EL 5 – so I’ve packaged up my version of pygrub and made it available in both source and binary formats.

Download Jethro’s pygrub backport here

Installing this *will* replace the version installed by the Xen package – this means an update to the package on the host will undo these changes. I thought about installing it to another path or making an RPM, but my hope is that Red Hat get their Xen package fixed and make this whole blog post redundant, so I haven’t invested that level of effort.

Copy to your server and unpack with:

# tar -xkzvf xen-pygrub-6f96a67-JCbackport.tar.gz
# cd xen-pygrub-6f96a67-JCbackport

Then you can build the source into a python module and install with:

# yum install xen-devel gcc python-devel
# python setup.py build
running build
running build_py
creating build
creating build/lib.linux-x86_64-2.4
creating build/lib.linux-x86_64-2.4/grub
copying src/GrubConf.py -> build/lib.linux-x86_64-2.4/grub
copying src/LiloConf.py -> build/lib.linux-x86_64-2.4/grub
copying src/ExtLinuxConf.py -> build/lib.linux-x86_64-2.4/grub
copying src/__init__.py -> build/lib.linux-x86_64-2.4/grub
running build_ext
building 'fsimage' extension
creating build/temp.linux-x86_64-2.4
creating build/temp.linux-x86_64-2.4/src
creating build/temp.linux-x86_64-2.4/src/fsimage
gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fPIC -I../../tools/libfsimage/common/ -I/usr/include/python2.4 -c src/fsimage/fsimage.c -o build/temp.linux-x86_64-2.4/src/fsimage/fsimage.o -fno-strict-aliasing -Werror
gcc -pthread -shared build/temp.linux-x86_64-2.4/src/fsimage/fsimage.o -L../../tools/libfsimage/common/ -lfsimage -o build/lib.linux-x86_64-2.4/fsimage.so
running build_scripts
creating build/scripts-2.4
copying and adjusting src/pygrub -> build/scripts-2.4
changing mode of build/scripts-2.4/pygrub from 644 to 755

# python setup.py install

Naturally I recommend reviewing the source code and making sure it’s legit (you do trust random blogs, right?), but if you can’t get it to build, lack build tools, or like gambling, I’ve included pre-built binaries in the archive and you can just do:

# python setup.py install

Then do a quick check to make sure pygrub throws its help message, rather than any nasty errors indicating something went wrong.

# /usr/bin/pygrub

 

We’re almost ready to try booting again! First create a directory that the new pygrub expects:

# mkdir /var/run/xend/boot/

Then launch the machine creation – this time, it should actually boot and run through the usual systemd startup process. If you installed with /boot set to ext3 via the installer, everything should just work and you’ll be up and running!

If you had to do the xfs to ext3 conversion trick, the bootup process will explode with scary errors like the following:

.......
[ TIME ] Timed out waiting for device dev-disk-by\x2duuid-245...95b2c23.device.
[DEPEND] Dependency failed for /boot.
[DEPEND] Dependency failed for Local File Systems.
[DEPEND] Dependency failed for Relabel all filesystems, if necessary.
[DEPEND] Dependency failed for Mark the need to relabel after reboot.
[  101.134423] systemd-journald[414]: Received request to flush runtime journal from PID 1
[  101.658465] type=1305 audit(1405735466.679:4): audit_pid=476 old=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:auditd_t:s0 res=1
Welcome to emergency mode! After logging in, type "journalctl -xb" to view
system logs, "systemctl reboot" to reboot, "systemctl default" to try again
to boot into default mode.
Give root password for maintenance
(or type Control-D to continue):

The issue is that the conversion of the filesystem changed its UUID, and the filesystem type in /etc/fstab no longer matches either.

We can fix this easily – drop to the recovery shell by entering the root password above and execute the following commands:

guest# sed -i -e '/boot/ s/UUID=[0-9a-f\-]*/\/dev\/xvda1/' /etc/fstab
guest# sed -i -e '/boot/ s/xfs/ext3/' /etc/fstab
guest# cat /etc/fstab | grep '/boot'

Make sure the cat returns a valid /boot line, it should be using /dev/xvda1 as the device and ext3 as the filesystem now.
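Assuming the installer wrote a fairly standard fstab, the resulting /boot entry should look something like this (the trailing fields will be whatever your installer originally generated):

```
/dev/xvda1   /boot   ext3   defaults   0 0
```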

Finally, stop and start the instance (reboots seem to hang for me):

guest# shutdown -h now
xm create -c vmguest1

It should now boot correctly! Go forth and enjoy your new VM!

CentOS Linux 7 (Core)
Kernel 3.10.0-123.el7.x86_64 on an x86_64

This is certainly a hack – doing this backport of pygrub solved my personal issue, but it’s entirely possible it may break other things, so do your own testing and determine whether it’s suitable for you and your environment or not.


Rescuing a corrupt tarfile

Having upgraded my OS recently, I was using a poor quality sneakernet of free USB sticks to transfer some data from my previous installation. This dodgy process strangely enough managed to result in some data corruption of my .tar.bz2 file, leaving me in the position of having to go to other backups to recover my data. :-(

$ tar -xkjvf corrupt_archive.tar.bz2
....
jcarr/Pictures/fluffy_cats.jpg
jcarr/Documents/favourite_java_exceptions.txt

bzip2: Data integrity error when decompressing.
    Input file = (stdin), output file = (stdout)

It is possible that the compressed file(s) have become corrupted.
You can use the -tvv option to test integrity of such files.

You can use the `bzip2recover' program to attempt to recover
data from undamaged sections of corrupted files.

tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now

This is the first time I’ve ever experienced a corruption like this with .tar.bz2. The file was the expected size, so it wasn’t a case of a truncated file – the data was there, but something part way through the file was corrupted and causing bzip2 to fail with decompression.

Bzip2 comes with a recovery utility, which works by rescuing each block into an individual file. We then run -t over them to identify any blocks which are clearly corrupt, and delete them accordingly.

$ bzip2recover corrupt_archive.tar.bz2
$ bzip2 -t rec*.tar.bz2
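If bzip2recover leaves you with a lot of blocks, the -t check and the deletion can be scripted – a minimal sketch, assuming bzip2recover’s default recNNNNN output naming from the example above:

```shell
#!/bin/sh
# Test each recovered block with bzip2 -t and delete any that fail
# the integrity check, leaving only the intact blocks behind.
for block in rec*.tar.bz2; do
    if ! bzip2 -t "$block" 2>/dev/null; then
        echo "Removing corrupt block: $block"
        rm -f "$block"
    fi
done
```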

Then we can put the blocks back together in an uncompressed form of the original file (in this case tar);

$ bzip2 -dc rec*.tar.bz2 > recovered_data.tar

Finally we want to extract the actual tar file itself to get the data. However, tar might not be too happy about having lost some blocks inside it, or having other forms of corruption.

# tar -xvf recovered_data.tar
...
jcarr/Pictures/fluffy_cats.jpg
jcarr/Documents/favourite_java_exceptions.txt
tar: Skipping to next header
tar: Archive contains ‘\223%\322TGG!XہI.’ where numeric off_t value expected
tar: Exiting with failure status due to previous errors

I couldn’t figure out a way to get tar to skip over the corruption or repair the file, however I did find a few posts online suggesting the use of the much older cpio utility that still exists on most unixes today.

$ cpio -ivd -H tar < recovered_data.tar

This worked perfectly! cpio complained about some files it couldn’t recover, but it restored the vast majority of the damaged contents. Of course I can’t completely trust any files that I’ve restored – it’s always possible there is some small corruption after such a restore – however if you lack backups, or your backups themselves are corrupted, this could be the way to go to get back some of your precious data.

In this case I was lucky that the header of the file was still intact – if bzip2 or tar can’t read the file header to identify it as a tar.bz2 to begin with, other measures may need to be taken. There’s heaps of suggestions online; just make a copy of the corrupted file first, then try the different suggested methods till you find an approach that (hopefully) works for you.


2degrees or not 2degrees?

Coming back to New Zealand from Australia, I was faced with needing to pick a telco to use. I’ve used all three New Zealand networks in the past few years (all pre-4G/LTE) and don’t have any particular reason/loyalty to use any specific network.

I decided to stay on the 2degrees network that I had parked my number on before going to Sydney, so I figured I’d put together a brief review of how I’ve found them and what I think about it so far.

Generally there were three main incentives for me to stay on 2degrees:

  1. AU/NZ mobile or landline minutes are all treated equally. As I call and SMS my friends and colleagues in AU all the time, this works very nicely. And if I need to visit AU, their roaming rates aren’t unaffordable.
  2. All plans come with free data sharing between devices – I can share my data with up to 5 devices at no extra cost. Laptop with 3G, tablet, spare phone? No worries, get a SIM card and share away.
  3. Rollover minutes & data – what you don’t use in one month accrues for up to a year.

And of course their pricing is sharp – coming into the New Zealand market as the underdog, 2degrees started going after the lower end prepay market, before moving up to offer a more sophisticated data network.

For $29, I’m now getting 1GB of data, 300 minutes AU/NZ and unlimited SMS AU/NZ. I also received an additional once-off bonus of 2GB data for moving to a no-commitment plan, and another 200MB per month as a bonus for my data-shared device; it’s insanely good value really.

 

Of course good pricing and features aren’t any good if the quality of the service is poor or the data-rate substandard. 2degrees still lack 4G/LTE in Wellington (it has just been introduced in Auckland), which is going to set them back a bit, however they do still deliver quite a decent result.

Performance of my 1 year old Samsung Galaxy Note 2 (LTE/4G model operating on a 3G-only network) was good, with a 22.16 Mb/s download and 2.56 Mb/s upload from my CBD apartment. It’s actually faster than the apartment WiFi ISP provider currently. (Unsure why the measured ping was so bad – it’s certainly not that bad when testing… possibly some issue with the app on my device.)

It does pay to have a good device – older devices aren’t necessarily capable of the same speeds. The performance with my 4 year old Lenovo X201i with Qualcomm Gobi 2000 built-in 3G hardware is quite passable, but it’s not quite the speed demon of my cellphone at only 6.16 Mb/s down and 0.36 Mb/s up. Still faster than many ADSL connections however – I was only getting about 4 Mb/s down in my Sydney CBD apartment recently!

Whilst I haven’t got any metrics to show, the performance outside of the cities in regional and rural areas is still reasonable. 2degrees roams onto Vodafone for parts of their coverage outside the main areas, which means that you need to make sure your phone/device is configured to allow national data roaming (or you’ll get *no* data coverage), and it also means you’re susceptible to Vodafone’s network performance, which is one of the worst I’ve used (yes AU readers, even worse than Vodafone AU).

Generally the performance is perfectly fine for my requirements – I don’t download heaps of data, but I do use a lot of applications that are latency and packet loss sensitive. I look forward to seeing what it’s like once 2degrees get their LTE network in Wellington and I can get the full capability out of my phone hardware.

2degrees is also trialling a service offering free WiFi access – I’m in the trial and have been testing it. Generally the WiFi is very speedy and I was getting speeds of around 21 Mb/s down and 9 Mb/s up whilst walking around, but it’s let down by the poor transition that Android (and presumably other vendors) make when moving between WiFi and 3G networks. Because the WiFi signal hangs on longer than it can actually sustain traffic, it leads to small service dropouts in data when moving between the two networks – this isn’t 2degrees’ fault, rather a limitation of WiFi and the way Android handles it, but it reflects badly on telco hybrid WiFi/GSM network approaches.

 

It’s not all been perfect – I’ve had some issues with 2degrees, mostly when using them as a prepay provider. The way data is handled for prepay differs to on-plan, and it’s possible to consume all your available data, then eat through your credit without any warning – something that cost me a bit more than I would like a couple of times when on prepay.

This is fixed with on-plan, which gives you tight spend control (define how much you want to cap your bill at) and also has a mode that allows you to block non-plan based data spend, to avoid some unexpected usage generating you an expensive bill. I’d recommend going with one of their plans rather than their prepay because of this functionality alone, not to mention that the plans tend to offer a bit better value.

On the plus side, their twitter support was fantastic and sorted me out with extra data credit in compensation. Their in-store support has also been great – when I went to buy an extra SIM ($5) to data share to my laptop, the guy at the counter told me about a promotion, gave me a free SIM and chucked 200MB/month on it, all of which I wasn’t expecting.

It’s a nice change – generally telco customer service is some of the worst around, so it’s nice to have a positive interaction, although 2degrees do need to make an effort to stop limiting certain spend protections to their plan customers and not prepay. A good customer service interaction is nice, but not having to talk to them in the first place is even better.

 

So how do I find 2degrees compared to the other networks? I’ve found NZ networks generally a mixed bag in the past – Telecom XT has been the best performing one, but I’ve always found their pricing a bit high, and Vodafone is just all round poor in both customer service and data performance. With the current introduction of 4G/LTE by all the networks, it’s a whole other generation of technology, and what’s been a good or bad network in the past may no longer apply – but we need to wait another year or so for the coverage and uptake to increase to see how it performs under load.

For now the low cost and free data sharing of up to 5 devices will keep me on 2degrees for quite some time. If someone else was paying, maybe I’d consider Telecom XT for the bit better performance, but the value of 2degrees is too good to ignore.

Like anything, your particular use case and requirements may vary – shop around and see what makes sense for your requirements.


Funny tasting Squid Resolver

Squid is a very popular (and time tested) proxy server. It’s generally the go-to solution for a proxy server in a *nix environment and is capable of providing general caching proxy services (including transparent) as well as more sophisticated reverse proxy solutions.

I recently ran into an issue where Squid was refusing to resolve some DNS addresses on our network – not an uncommon problem if using a public DNS server instead of an internal-only DNS server by mistake.

The first step was to check the nameservers listed in /etc/resolv.conf and make sure they were correct and returning valid results. In this case they were, all the name servers correctly resolved the address without any issue.

Next step was to check for specific configuration in Squid – some applications like Squid and Nginx allow you to specifically set their nameservers to something other than the contents of /etc/resolv.conf. In this case, there was no such configuration, in fact there was no configuration relating to DNS at all, meaning it would have to fall back to the operating system resolver.

Or does it? Generally Linux applications use the OS resolver which follows a set order to discover hosts defined explicitly in /etc/hosts, or tries the nameservers in /etc/resolv.conf. When either file is changed, the changes are reflected immediately on the next query for those addresses.
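That lookup order is set by the hosts entry in /etc/nsswitch.conf, which on most Linux systems looks like:

```
# /etc/nsswitch.conf – check /etc/hosts first, then fall back to DNS
hosts: files dns
```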

However Squid has its own approach. Unless it’s using DNS name servers specifically defined in its configuration file, instead of using the OS resolver it reads the configuration in /etc/resolv.conf as a once-off startup action, then continues to use the name servers that were defined for the lifetime of the process.

You can see this in the logs – at startup time Squid logs the servers it’s using in cache.log:

# grep nameserver /var/log/squid/cache.log
2014/07/02 11:57:37| Adding nameserver 192.168.1.10 from /etc/resolv.conf

From this, the sequence of events is simple to figure out:

  1. A server was brought online, using a public DNS server that lacked some of our internal records.
  2. Squid was started up, reading in that DNS server from /etc/resolv.conf.
  3. The DNS server addresses were corrected, which immediately fixed resolution for every other application – but Squid stuck with the old address and continued to refuse the queries.

Resolving the immediate issue is as simple as restarting the Squid process to force it to pickup the new resolver settings. But what if your DNS server values could change at any future stage without warning?

If you’re using Puppet, you could use a custom fact (like this one) that exposes the current name servers on the system, then write them into the Squid configuration file using the dns_nameservers configuration parameter and notify the Squid service to reload on any change of the configuration file.

Or if your squid server is always going to be using a particular DNS server, regardless of what the host is using, you can simply set the dns_nameservers parameter in Squid to point to the desired servers.
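For example, in squid.conf (the addresses here are purely illustrative):

```
# Pin Squid to specific resolvers instead of whatever /etc/resolv.conf
# contained at startup time.
dns_nameservers 192.168.1.10 192.168.1.11
```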
