Tag Archives: opinions

I’m a highly opinionated person and always up for a good debate over something. These are my personal opinions and don’t necessarily reflect those of my company, my clients or other business involvements.

DNC NZ submission

The DNC has proposed a new policy for .nz WHOIS data which unfortunately does not in my view address the current issues with lack of privacy of the .nz namespace. The following is my submission on the matter.

Dear DNC,

I have strong concerns with the proposed policy changes to .nz WHOIS information and am writing to request you reconsider your stance on publication of WHOIS information.

#1: Refuting the requirement for public information for IT and business-related contact

My background is in IT and I manage around 600 domains for a large NZ organisation. This would imply that WHOIS data would be useful to me, as per your public good statement; however, I don’t find this to be the case.

My use cases tend to be one of the following:

1. A requirement to get a malicious (phishing, malware, etc) site taken down.

2. Contacting a domain owner to request a purchase of their domain.

3. A legal issue (eg copyright infringement, trademarks, defamation).

4. Determining if my employer actually owns the domain marketing is trying to use today. :-)

Of the above:

1. In this case, I would generally contact the hosting service provider anyway, since the owners of such domains tend to be unreliable or unsure how to fix the issue, whereas service providers tend to be better equipped to pull such content quickly. The service provider can be determined via an IP-address lookup, rather than relying on the technical contact information, which is often identical to the registrant and doesn’t reflect the actual company hosting the site. None of the registrant’s details are required for this, although an email address is always good for a courtesy heads-up.

2. Email is satisfactory for this. Address & phone are not required.

3. Given any legal issue is handled by a solicitor, a legal request could be filed with the DNC to release the private ownership information in the event that the email address of the domain owner was non-responsive.

4. Accurate owner name is more than enough.

#2: Internet Abuse

I publish a non-interesting and non-controversial personal blog. I don’t belong to any minority ethnic groups. I was born in NZ. I’m well off. I’m male. The point being that I don’t generally attract the kind of abuse or harassment that is sadly delivered to some members of the online community.

However even I end up receiving abuse relating to my online presence on occasion, in the form of anonymous abusive emails. This doesn’t faze me personally, but if I was in one of the many online minorities that can (and still do) suffer real-world physical abuse, I might not be so blasé knowing that it doesn’t take much for someone to turn up at my home and deliver that abuse in person.

It’s also extremely easy for an online debate to result in a real world incident. It isn’t hard to trace a person’s social media comments to their blog/website and, from there, their real world address. Nobody likes an angry moron turning up outside their house at 2am with a tire iron over a Twitter post.

#3: Cold-blooded targeting

I’ve discussed my needs for WHOIS data as an IT professional and the issue of internet abuse. Finally, I wish to point out the danger of exposing one’s address publicly when we consider what a smart, malicious player can do with the information.

* With a target’s date of birth (thanks Facebook!) and their address (thanks DNC policy!) you’re in a position to fake someone’s identity with a number of NZ organisations, including insurance and medical providers, which use these two (weak) forms of validation.

* Tweet a picture of your coffee at Mojo this morning? Excellent, your house is probably unoccupied for 8 hours, I need a new TV.

* Posting blogs about your amazing international trip? Should be a couple of good weeks to take advantage of this – need a couch to go with that TV.

* Mentioned you have a young daughter? Time to wait near your address after school events and intercept her there. It’s not hard to be “Uncle Bob from the UK here to take you for candy” when you have addresses, names and habits thanks to the combined forces of real world location and social media disclosure.

Not exposing information that doesn’t need to be public is a textbook infosec best practice to prevent social engineering attacks. We (try to be) cautious around what we tell outsiders because lots of small bits of information become very powerful very quickly. Yet we’re happy for people to slap their real world home address on the internet for anyone to take advantage of, because no harm could possibly come of this?

To sum up, I request the DNC please reconsider this proposed policy and:

1. Restrict the publication of physical addresses and phone numbers for all private .nz domains. This information has little real use and offers avenues for very disturbing and intrusive abuse and targeting. At least email abuse can be deleted from the comfort of your couch.

2. Retain the requirement for a name and contact email address to be public. However, permit the publicly displayed name to be a pseudonym to preserve privacy for users who consider themselves at risk, with the owner’s real/legal name to be held by the DNC for legal contact situations.

I have no concerns if the DNC was to keep business-owned domain information public. Directors’ contact details for limited companies are already publicly available via the Companies Register, and most business-owned domains simply list their place of business and their reception phone number, which doesn’t expose any particular person. My concern is the lack of privacy for New Zealanders rather than businesses.

Thank you for reading. I am happy for this submission to be public.

Regards,

Jethro

Your cloud pricing isn’t webscale

Thankfully in 2015 most (but not all) proprietary software providers have moved away from the archaic ideology of software being licensed by the CPU core – a concept that reflected the value and importance of systems back when you were buying physical hardware, but which has been rendered completely meaningless by cloud and virtualisation providers.

Taking its place came the subscription model, popularised by Software-as-a-Service (or “cloud”) products. The benefits are attractive – regular income via customer renewal payments, flexibility for customers wanting to change the level of product or number of systems covered, and no CAPEX headaches in acquiring new products to use.

Clients win, vendors win, everyone is happy!

Or maybe not.

 

Whilst the horrible price-by-CPU model has died off, a new model has emerged – price by server. This model assumes that the more servers a customer has, the bigger they are and the more we should charge them.

The model makes some sense in a traditional virtualised environment (think VMWare) where boxes are sliced up and a client runs only as many as they need. You might only have a total of two servers for your enterprise application – primary and DR – each spec’ed appropriately to handle the max volume of user requests.

But the model fails horribly when clients start proper cloud adoption. Suddenly that one big server gets sliced up into 10 small servers which come and go by the hour as they’re needed to meet demand.

DevOps techniques such as configuration management turn the effort of running dozens of servers into the same as running a single machine, so there’s no longer any reason to constrain yourself to a single box.

It gets worse if the client decides to adopt microservices, where each application gets split off into its own server (or container, aka Docker/CoreOS). And it’s going to get very weird when we start using compute-less computing more with services like Lambda and Hoist, because who knows how many server licenses you need to run an application that doesn’t even run on a server you control?

 

Really the per-server pricing model is as bad as the per-core model, because it no longer reflects the size of an organisation, the amount they’re using a product and, most importantly, the value they’re obtaining from it.

So what’s the alternative? SaaS products tend to charge per-user, but the model doesn’t always work well for infrastructure tools. You could be running monitoring for a large company with 1,000 servers but only have 3 user accounts for a small sysadmin team, which doesn’t really work for the vendor.

Some products can charge based on volume or API calls, but even this is risky. A heavily microservice-based architecture results in a large number of HTTP calls between applications, so you can hardly say an app with 10,000 req/min is getting 4x the value of a client with a 2,500 req/min application – it could be all internal API calls.

 

To give an example of how painful the current world of subscription licensing is with modern computing, let’s conduct a thought exercise and have a look at the current pricing model of some popular platforms.

Let’s go with creating a startup. I’m going to run a small SaaS app in my spare time, so I need a bit of compute, but also need business-level tools for monitoring and debugging so I can ensure quality as my startup grows and get notified if something breaks.

First up I need compute. Modern cloud compute providers *understand* subscription pricing. Their models are brilliantly engineered to offer a price point for everyone. Whether you want a random jump box for $2/month or a $2000/month massive high compute monster to crunch your big-data-peak-hipster-NoSQL dataset, they can deliver the product at the price point you want.

Let’s grab a basic Digital Ocean box. Well actually let’s grab 2, since we’re trying to make a redundant SaaS product. But we’re a cheap-as-chips startup, so let’s grab 2x $5/mo boxes.


Ok, so far we’ve spent $10/month for our two servers. And whilst Digital Ocean is pretty awesome, our code is going to be pretty crap since we used a bunch of high/drunk (maybe both?) interns to write our PHP. So we should get a real-time application monitoring product, like Newrelic APM.


Woot! Newrelic have a free tier – that’s great news for our SaaS application – but actually it’s not really that useful: it can’t do much tracing and only keeps 24 hours of history. Certainly not enough to debug anything more serious than my WordPress blog.

I’ll need the pro account to get anything useful, so let’s add a whopping $149/mo – but actually make that $298/mo since we have two servers. Great value really. :-/

 

Next we probably need some kind of paging for oncall when our app blows up horribly at 4am like it will undoubtedly do. PagerDuty is one of the popular market leaders currently with a good reputation, so let’s roll with them.


Hmm, I guess that $9/mo isn’t too bad, although it’s essentially what I’m paying ($10/mo) for the compute itself. Except that it’s kinda useless, since it covers the USA and their friendly neighbour only and excludes us down under. So let’s go with the $29/mo plan to get something that actually works. $29/mo is a bit much for a $10/mo compute box really, but hey, it looks great next to NewRelic’s pricing…

 

Remembering that my SaaS app is going to be buggier than Windows Vista, I should probably get some error handling set up. That $298/mo Newrelic APM doesn’t include any kind of good error handler, so we should also go get another market leader, Raygun, for our error reporting and tracking.


For a small company this isn’t bad value really, given you get 5 different apps and any number of muppets working with you can get on board. But it’s still looking ridiculous compared to my $10/mo compute cost.

So what’s the total damage:

Compute: $10/month
Monitoring: $371/month

Ouch! Now maybe as a startup I’ll stump up that extra money as an investment in getting a good quality product, but it’s a far cry from the days when someone could launch a new product on a shoestring budget in their spare time from their uni laptop.

 

Let’s look at the same thing from the perspective of a large enterprise. I’ve got a mission critical business application and it requires a 20 core machine with 64GB of RAM. And of course I need two of them for when Java inevitably runs out of heap because the business let muppets feed garbage from their IDE directly into the JVM and expected some kind of software to actually appear as a result.

That compute is going to cost me $640/mo per machine – so $1280/mo total. And all the other bits, Newrelic, Raygun, PagerDuty? Still that same $371/mo!

Compute: $1280/month
Monitoring: $371/month

It’s not hard to imagine that the large enterprise is getting much more value out of those services than the small startup and can clearly afford to pay for that in relation to the compute they’re consuming. But the pricing model doesn’t make that distinction.

 

So given that we now know that per-core pricing is terrible, per-server pricing is terrible and (at least for infrastructure tools) per-user pricing is terrible, what’s the solution?

“Cloud Spend Licensing” [1]

[1] A term I’ve just made up, but sounds like something Gartner spits out.

With Cloud Spend Licensing, the amount charged reflects the amount you spend on compute – this is a much more accurate indicator of the size of an organisation and value being derived from a product than cores or servers or users.

But how does a vendor know what this spend is? This problem is solving itself thanks to compute consumers clustering around a few major public cloud players, the top three being Amazon (AWS), Microsoft (Azure) and Google (Compute Engine).

It would not be technically complicated to implement support for these major providers (and maybe a smattering of smaller ones like Heroku, Digital Ocean and Linode), using their APIs to pull down service consumption/usage data and figure out a client’s compute spend for the past month.
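As a rough sketch (and nothing more than a sketch – the 5% licence rate, the EC2-only filter and the whole billing arrangement are made up for illustration), this is roughly what the vendor-side lookup could look like against AWS using the Cost Explorer API via boto3:

from datetime import date, timedelta

import boto3

LICENCE_RATE = 0.05  # hypothetical: vendor charges 5% of monthly compute spend

def last_month():
    """Return (first day of last month, first day of this month) as ISO dates."""
    first_of_this_month = date.today().replace(day=1)
    first_of_last_month = (first_of_this_month - timedelta(days=1)).replace(day=1)
    return first_of_last_month.isoformat(), first_of_this_month.isoformat()

def monthly_compute_spend():
    """Pull last month's EC2 compute spend from the AWS Cost Explorer API."""
    start, end = last_month()
    ce = boto3.client("ce")
    result = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        Filter={"Dimensions": {"Key": "SERVICE",
                               "Values": ["Amazon Elastic Compute Cloud - Compute"]}},
    )
    return float(result["ResultsByTime"][0]["Total"]["UnblendedCost"]["Amount"])

spend = monthly_compute_spend()
print(f"Compute spend last month: ${spend:.2f}")
print(f"Licence fee at {LICENCE_RATE:.0%} of spend: ${spend * LICENCE_RATE:.2f}")

The other major providers expose equivalent billing data in some form, so the data gathering is the easy part – the hard part is vendors agreeing to price this way.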

For customers who can’t (still on VMWare?) or don’t want to provide this data, there can always be a fallback to a more traditional pricing model, whether it be cores, servers or some other negotiation (“enterprise deal”).

 

 

How would this look?

In our example above, against the enterprise compute bill ($1280/mo) the equivalent amount spent on the monitoring products was roughly 23% for Newrelic, 3% for Raygun and 2.2% for PagerDuty (28.2% in total). For the sake of demonstration, let’s assume this pricing is reasonable for the value gained from the products (glares at Newrelic).

When applied to our $10/month SaaS startup, the bill for these products would be an additional $2.82/month. This may seem so cheap that there will be an incentive to set a minimum price, but it’s vital to avoid doing so:

  • $2.82/mo means anyone starting up a new service uses your product. Because why not, it’s pocket change. That uni student working on the next big thing will use you. The receptionist writing her next mobile app success in her spare time will use you. An engineer in a massive enterprise will use you to quickly POC a future product on their personal credit card.
  • $2.82/mo might only just cover the cost of the service, but you’re not making any profit if they couldn’t afford to use it in the first place. The next best thing to profit is market share – provided that market share has a conversion path to profit in future (something some startups seem to forget, eh Twitter?).
  • $2.82/mo means IT pros use your product on their home servers for fun and then take their learning back to the enterprise. Every one of the providers above should have a ~ $10/year offering for IT pros to use and get hooked on their product, but they don’t. Newrelic is the closest with their free tier. No prizes if you guess which product I use on my personal servers. Definitely no prizes if you guess which product I can sell the benefits of the most to management at work.

 

But what about real earnings?

As our startup grows, it doesn’t matter whether we add more servers or upsize the existing ones – the amount we pay for the related support applications is always proportionate.

It also caters for the emerging trend of running systems for limited hours or using spot prices – clients and vendors don’t have to worry about figuring out how it fits into the pricing model; instead, the scale of your compute consumption sets the price of the supporting services.

Suddenly that $2.82/mo becomes $56.40/mo when the startup starts getting successful and runs a few computers with actual specs. One day it becomes $371/mo when they’re running $1280/mo of compute like the big enterprise. And it goes up from there.
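To make the proportionality concrete, here’s the hypothetical 28.2% bundle from above applied at a few different compute spends:

BUNDLE_RATE = 0.282  # hypothetical: the combined monitoring bundle priced at 28.2% of compute spend

for compute_spend in (10, 200, 1280):  # $/month spent on compute
    fee = compute_spend * BUNDLE_RATE
    print(f"${compute_spend}/mo compute -> ${fee:.2f}/mo for the supporting tools")

# $10/mo   -> $2.82/mo
# $200/mo  -> $56.40/mo
# $1280/mo -> ~$361/mo (back in the ballpark of the enterprise's $371 bill; the 28.2% is itself rounded)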

 

I’m not a business analyst and “Cloud Spend Licensing” may not be the best solution, but goddamn there has to be a more sensible approach than believing someone will spend $371/mo on top of their $10/mo compute setup. And I’d like to get to that future sooner rather than later please, because there’s a lot of cool stuff out there that I’d like to experiment with more in my own time – and that’s good for both myself and the vendors.

 

Other thoughts:

  • “I don’t want vendors to see all my compute spend details” – This would be easily solved by the cloud provider exposing the right kind of APIs for this purpose, eg “grant vendor XYZ the ability to see my total compute cost per month, but no details on what it is”.
  • “I’ll split my compute into lots of accounts and only pay for services where I need it to keep my costs low” – Nothing different to the current situation, where users selectively install agents on specific systems.
  • “This one client with an ultra efficient, high profit, low compute app will take advantage of us.” – Nothing different to the per-server/per-core model then, other than the min spend. Your client probably deserves the cheaper price as a reward for not writing the usual terrible inefficient code people churn out.
  • “This doesn’t work for my app” – This model is very specific to applications that support infrastructure, I don’t expect to see it suddenly being used for end user products/services.

2degrees or not 2degrees?

Coming back to New Zealand from Australia, I was faced with needing to pick a telco to use. I’ve used all three New Zealand networks in the past few years (all pre-4G/LTE) and don’t have any particular reason/loyalty to use any specific network.

I decided to stay on the 2degrees network that I had parked my number on before going to Sydney, so I figured I’d put together a brief review of how I’ve found them and what I think about it so far.

Generally there were three main incentives for me to stay on 2degrees:

  1. AU/NZ mobile or landline minutes are all treated equally. As I call and SMS my friends and colleagues in AU all the time, this works very nicely. And if I need to visit AU, their roaming rates aren’t unaffordable.
  2. All plans come with free data sharing between devices – I can share my data with up to 5 devices at no extra cost. Laptop with 3G, tablet, spare phone? No worries, get a SIM card and share away.
  3. Rollover minutes & data – what you don’t use in one month accrues for up to a year.

And of course their pricing is sharp – coming into the New Zealand market as the underdog, 2degrees started going after the lower end prepay market, before moving up to offer a more sophisticated data network.

For $29, I’m now getting 1GB of data, 300 minutes AU/NZ and unlimited SMS AU/NZ. I also received a one-off bonus of 2GB of data for moving to a no-commitment plan and another 200MB per month as a bonus for my data-shared device; it’s insanely good value really.

 

Of course good pricing and features aren’t any good if the quality of the service is poor or the data rate substandard. 2degrees still lack 4G/LTE in Wellington (it has just been introduced in Auckland), which is going to set them back a bit, however they do still deliver quite a decent result.

Performance on my 1-year-old Samsung Galaxy Note 2 (LTE/4G model operating on the 3G-only network) was good, with 22.16 Mb/s download and 2.56 Mb/s upload from my CBD apartment. It’s actually faster than the apartment’s WiFi ISP currently. (Unsure why the ping below is so bad, it’s certainly not that bad when testing… possibly some issue with the app on my device.)

It does pay to have a good device – older devices aren’t necessarily capable of the same speeds. The performance of my 4-year-old Lenovo X201i with its built-in Qualcomm Gobi 2000 3G hardware is quite passable, but it’s not the speed demon my cellphone is, at only 6.16 Mb/s down and 0.36 Mb/s up. Still faster than many ADSL connections however – I was only getting about 4 Mb/s down in my Sydney CBD apartment recently!

Whilst I haven’t got any metrics to show, the performance outside of the cities in regional and rural areas is still reasonable. 2degrees roams onto Vodafone for parts of their coverage outside the main areas, which means that you need to make sure your phone/device is configured to allow national data roaming (or you’ll get *no* data coverage), and it also means you’re susceptible to Vodafone’s network performance, which is one of the worst I’ve used (yes AU readers, even worse than Vodafone AU).

Generally the performance is perfectly fine for my requirements – I don’t download heaps of data, but I do use a lot of applications that are latency and packet loss sensitive. I look forward to seeing what it’s like once 2degrees get their LTE network in Wellington and I can get the full capability out of my phone hardware.

2degrees is also trialling a free WiFi access service – I’m in the trial and have been testing it. Generally the WiFi is very speedy – I was getting speeds of around 21 Mb/s down and 9 Mb/s up whilst walking around – but it’s let down by the poor transition that Android (and presumably other vendors) makes when moving between WiFi and 3G networks. Because the WiFi signal hangs on longer than it can actually sustain traffic, there are small service dropouts in data when moving between the two networks – this isn’t 2degrees’ fault, rather a limitation of WiFi and the way Android handles it, but it reflects badly on telco hybrid WiFi/GSM network approaches.

 

It’s not all been perfect – I’ve had some issues with 2degrees, mostly when using them as a prepay provider. The way data is handled on prepay differs from on-plan, and it’s possible to consume all your available data and then eat through your credit without any warning, something that cost me a bit more than I would have liked a couple of times when on prepay.

This is fixed with on-plan, which gives you tight spend control (define how much you want to cap your bill at) and also has a mode that allows you to block non-plan data spend, to avoid unexpected usage generating an expensive bill. I’d recommend going with one of their plans rather than prepay for this functionality alone, not to mention that the plans tend to offer a bit better value.
On the plus side, their Twitter support was fantastic and sorted me out with extra data credit in compensation. Their in-store support has also been great: when I went to buy an extra SIM ($5) to data-share to my laptop, the guy at the counter told me about a promotion, gave me a free SIM and chucked 200MB/month on it, none of which I was expecting.

It’s a nice change – telco customer service is generally some of the worst around, so it’s nice to have a positive interaction. That said, 2degrees do need to make an effort to stop limiting certain spend protections to their plan customers and not prepay – a good customer service interaction is nice, but not having to talk to them in the first place is even better.

 

So how do I find 2degrees compared to the other networks? I’ve found NZ networks generally a mixed bag in the past – Telecom XT has been the best performing one, but I’ve always found their pricing a bit high, and Vodafone is just all-round poor in both customer service and data performance. With the current introduction of 4G/LTE by all the networks, it’s a whole other generation of technology, and what’s been a good or bad network in the past may no longer apply – but we need to wait another year or so for coverage and uptake to increase to see how it performs under load.

For now the low cost and free data sharing across up to 5 devices will keep me on 2degrees for quite some time. If someone else was paying, maybe I’d consider Telecom XT for the slightly better performance, but the value of 2degrees is too good to ignore.

Like anything, your particular use case and requirements may vary – shop around and see what makes sense for your requirements.

Adjusting from Sydney to Wellington

It’s been a good few months back home in Wellington, getting settled back into the city and organising catch-ups with old friends. It’s also been a very busy couple of months, with me getting straight back into work and projects, as well as looking for a house to buy with Lisa!


Obligatory couplesy photo. I should really take better ones of these…

I’m happy to be home here in New Zealand, certainly loving the climate and the lifestyle a lot more than Sydney, although there are certainly a number of things I miss from/about Sydney.

 

The most noticeable change is that I’m feeling healthier and fitter than ever before, probably on account of doing a lot more physical activity, wandering around the city and suburbs on foot and climbing up hills all the time. The lower pollution probably doesn’t hurt either – by international standards Sydney is a “clean” city, but compared to a small New Zealand city it was very noticeably polluted, and I can smell the difference in air quality.

Being only a short distance from the outdoors at all times is a pretty awesome perk of being home. Once I get a car and a mountain bike, a lot more will open up to me; for now I’ve just invested in some good walking boots and have been doing wanders close to the city, like Mt Kaukau, up over Roseneath and around the Miramar Peninsula.


Wind turbines, rolling hills, sunlight… wait, this isn’t a data centre! What’s wrong with me?? Why am I here?

 

The other very noticeable difference for me has been my work lifestyle. Moving from working in the middle of the main office of a large company to working semi-remotely from a branch office is a huge change when you consider the loss of daily informal conversations with my colleagues in the office, as well as the ease of being involved in incidents and meetings when you’re there in person.

Saving journalism in the 21st century.

Work battle station. Loving the dual vertical 24″ ATM, but I lose them in a week when we move to the new office. :'(

The Wellington staff I work with are awesome, but I do miss the time I spent with the operational engineers in Sydney. Working with lots of young engineers who lived for crazy shit like 10 hour work days then spending all evening at the pub arguing about GNU/Linux, Ruby code, AWS, Settlers of Catan and other important topics was a really awesome experience.

Wellington also has far fewer of my industry peers than Sydney, simply due to its scale. It was a pretty awesome experience bumping into other Linux engineers late at night on Sydney streets, recognised as one of the clan by the nerdy t-shirt jokes shared between strangers. And of course Sydney generally has far more (and larger) meetups and what I’d describe as a general feel of wealth and success in my field – people are in demand, getting rewarded for it, and are generally excited about all the developments in the tech space.

Not that you can’t get this in Wellington – but the scale is smaller. Pay is generally a lot lower, company sizes are smaller, and customer bases are smaller… there aren’t many places in New Zealand where I could work and look after over a thousand Linux servers serving millions of unique visitors a day, for example.

I personally don’t see myself working for any New Zealand companies for a while, at this current point in time, I think the smart money for young kiwis working in technology is to spend some time in Australia, get a reputation and line up some work you can bring home and do remotely. New Zealand has a lot of startups, as well as the traditional telcos and global enterprise integrators, but the work I’ve seen in the AU space is just another step up in both challenge and remuneration. Plus they’re crying out for staff and companies are more willing to consider more flexible relationships and still pay top dollar.

It’s not all negative of course –  Wellington still has a good number of IT jobs, and in proportion to other lines of work, they pay very well still – you’re never going to do badly working domestically. Plus there’s the fact that Wellington is home to a hotbed of startup companies including the very successful Xero which has gone global… Longer term I hope a lot of these hopeful companies succeed and really help grow NZ as a place for developing technology and exporting it globally whilst still retaining NZ-based head offices, giving kiwis a chance to work on world-class challenges.

 

Moving home means I’ve also been enjoying Wellington’s great food and craft beer quite a bit, and I’m probably spending more here than in Sydney on brunch, dinners, coffee and of course delicious beer. Hopefully all the walking around the hills of Wellington compensates for it!

Sydney is known for being an expensive place to live, but I’m finding Wellington is much more expensive for coffee and food. The upside is that the general quality and standard is high, whereas I’d find Sydney quite hit and miss, particularly with coffee.

I suspect the difference is due to economies of scale – a hole-in-the-wall coffee shop in Sydney will probably serve 100x as many people as it would in Wellington, so even after paying higher rents, it works out in your favour. Additionally, essential foods are GST-free in Australia, which makes them instantly 15% cheaper than in New Zealand.


Doesn’t get more kiwi than complimentary chocolate fish with your coffee.

The craft beer scene here is also fantastic, I’m loving all the new beers that have appeared whilst I’ve been away, as well as the convenience of being able to pickup single bottles of quality craft beer at the local supermarket. I’ve been enjoying Tuatara, Epic and Stoke heavily lately, however they’re just a fraction of the huge market in NZ that’s full of small breweries as well as brew-pubs offering their own unique local fare.


Delicious pale ale with NZ hops from Tuatara, a very successful craft brewery in the Wellington region.

I’m still amazed at how poor the beer selection was in Sydney’s city bars and bottle stores. It’s bad enough that you can’t buy alcohol at the supermarket, but the bottlestores placed near them have very little quality craft beer available for selection.

I remember the bottlestore in Pyrmont (Sydney’s densest residential suburb) had a single fridge for “craft” beer, which was made up of James Squire – actually a Lion brand masquerading as a craft beer – and Little Creatures, which although quite good, happens to be owned by Lion as well.

Drinking out at the pubs had the same issue, with many pubs offering only brews from C.U.B and Lion and often no craft beers on tap. Sure, there were specific pubs one could go to for a good drink, but they were certainly in the minority in the city, whereas Wellington makes it hard not to find good beer.

Just before I left Sydney, The Quarryman opened up in Pyrmont which brought an excellent range of AU beers to a great location near my home and work, however it’s a shame that this sort of pub was generally an infrequent find.

There’s a good write up on the SMH about the relationship between the big two breweries and the pubs, which mentions that the Australian Competition and Consumer Commission (ACCC) is looking into the situation – would be nice if some action gets taken to help the craft beers make their way into the pubs a bit more.


I might be enjoying my craft beer a bit *too* much! ;-)

 

The public transport is also so different back here in Wellington. Being without a car in both cities, I’ve been making heavy use of buses and trains to get around – particularly since I’ve been house hunting and going between numerous suburbs over the course of a single day.

Sydney Rail far beats anything Wellington – or New Zealand for that matter – has to offer. Going from the massive 8-carriage double-decker Sydney trains that come every 3-15 mins to Wellington’s single-decker 2-carriage trains that come every 30-60 mins makes it feel like a hobby railway line. And having an actual conductor come and clip your paper-based ticket? Hilarious! At least Wellington has been upgrading most of its trains – the older WW2-era relics really did make it feel like a hobby/historic railway…


No magnetic swipe on this train ticket!

But not everything is better in Sydney on this front – Wellington buses have been bliss to travel on compared to Sydney, on account of actually having an integrated electronic smartcard system on the majority of buses.

I found myself avoiding buses in Sydney because of their complicated fare structure, and as such I tended to rarely go to places that weren’t on the rail network due to the hassle it entailed. Whereas in Wellington, I can jump on and off anything and not have to worry about calculating the number of sections and having the right type of ticket.

The fact that Sydney is *still* working on pushing out smartcards in 2014 is just crazy when you think about the size of the city and its position on the world stage. Here’s hoping the Opal rollout goes smoothly and my future trips around Sydney are much easier.

 

Finally, the other most noticeable change? It’s so lovely and cold! I seriously prefer the colder climate – lots of people think I’m nuts for giving up the hot and sunny days of Sydney, but it just feels so much more comfortable to me. I guess I tend to just “run hot” and am always pumping out heat… works well for a cold climate. :-)


In event of a Wellington winter, your Thinkpad can double as a heating device.

Jethro does Mac: Terminals

With a change in job, I recently shifted from my primary work computer being a Lenovo X1 Carbon running GNU/Linux to using an Apple Macbook Pro Retina 15″ running MacOS.

It’s not the first time that I’ve used MacOS as my primary workstation, but I’ve spent the vast majority of my IT life working in a purely GNU/Linux environment so it was interesting having to try and setup my usual working habits and flow with this new platform.

I’m going to do a few blog posts addressing my thoughts and issues with this platform and how I’ve found it compared to my GNU/Linux laptops. I’m going to look at both hardware and software and note down a few fixes and tricks that I’ve learnt along the way.

 

Part 4: Terminals and adventures with keybindings

I’ve already written about the physical issues with the Macbook keyboard, but there’s another issue with this input device – keybindings.

As mentioned previously, the Macbook lacks various useful keys, such as home and end; instead, you need to use a key combination such as Apple + Left/Right to achieve the same result. However for some inexplicable reason, Apple decided that the Terminal should have its own special behaviour, so it does not obey the same keybindings. In any other MacOS program, using these key combinations will achieve the desired results. But in Terminal, it results in random junk appearing in the terminal – or nothing at all.

For an engineer like myself this is the single most frustrating issue I’ve had with MacOS to date – having the Terminal essentially broken out of the box on their own hardware is quite frankly unacceptable, and I suspect a reflection of how much more Apple cares about consumer users than power users.

Whilst Apple’s Terminal offers the ability to configure keybindings, it has two major problems that make it unusable:

  1. Whilst there are entries for keys called “Home” and “End”, these seem to map to the actual physical Home/End keys, not the Apple + Home/End key combinations, which are passed through as-is to the OS. So any configuration done for Home/End won’t help.
  2. Instead we need to configure a key combination with the Apple key as a modifier. But MacOS Terminal doesn’t allow the Apple key to be used as a modifier.

MacOS stock terminal keybinding configuration – no Apple key option here!

The result is that there’s no way to properly fix the MacOS Terminal, and in my view it’s essentially useless. If I were using an external keyboard with physical home/end keys it wouldn’t be too much of a problem, since I could set a keybinding, but there are times I actually want to be able to use the laptop keyboard effectively!

I ended up fixing it by installing the popular third party terminal application iTerm2 – in many ways it’s similar to the stock terminal, but it offers various additional configuration options.


iTerm2 and MacOS Terminal alongside each other, configured to use same colour scheme and fonts.

For me the only thing that I really care about is the fact that it adds the ability to set up keybindings with the Apple + Home/End key options.


The killer feature – the ability to set key combinations with the Apple key!

Setting the above and then creating a ~/.inputrc file (as per these instructions) resolved the keybinding issues for me, and made iTerm2 consistent with all the other MacOS applications.

"\e[1~": beginning-of-line
"\e[4~": end-of-line
"\e[5~": history-search-backward
"\e[6~": history-search-forward
"\e[3~": delete-char
"\e[2~": quoted-insert
"\e[5C": forward-word
"\e[5D": backward-word
"\e\e[C": forward-word
"\e\e[D": backward-word
set completion-ignore-case On
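(For reference: the \e[n~ entries above are the standard VT-style escape sequences for Home, End, Insert, Delete and Page Up/Down, while the remaining entries map Ctrl/Alt-arrow style sequences to word movement – the assumption being that iTerm2 is set to send these sequences for the remapped key combinations.)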

The key combinations work correctly in the local shell, in Vim and also via SSH connections to other systems. Perfect! I just wish I didn’t have to do this in the first place…

 

See other posts in this series via the jethro does mac tag as I explore using MacOS after years of GNU/Linux only.

The awesome tablet money can’t buy

Most people working in technology have heard of and formulated some opinion about the Microsoft Surface tablet, now in its second generation of hardware. For some it’s a poor attempt to compete with the iPad, for others it’s the greatest laptop replacement they’ve ever seen, destined to bring the brilliance of Windows 8.1 to the masses.


Whilst I think it’s good for Apple and Android to have some competition in the tablet market, the Windows platform itself is of no interest to a GNU/Linux using, free-software loving individual like myself. What I do find interesting about the Microsoft Surface is not the software, but rather the excellent high-specification hardware they’ve managed to cram into 980g of handheld excellence.

I’ve been using my Lenovo X201i Thinkpad for about 4 years now and it’s due for an upgrade – whilst still very functional, the lack of AES-NI, the low resolution display and the poor GPU are starting to get quite frustrating, not to mention the weight!

The fully specced Microsoft Surface Pro 2 features a Core i5 CPU, 8GB of RAM and a 512GB SSD, plus the ability to drive up to two external displays – qualities that would make it suitable as my primary workstation, whether on the go or docked into larger displays at home – essentially a full laptop replacement.

Since the Microsoft Surface Pro 2 is x86-based and supports disabling secure-boot, it is possible to run GNU/Linux natively on the device, suddenly making it very attractive for my requirements.

It’s not a perfect device of course – the unit is heavy by tablet standards, and the lack of a 3G or LTE modem is a frustrating limitation. Battery life of the x86 Pro series is nowhere near as good as a low-power ARM chip (although the Haswell Core i5 has certainly improved things over the generation 1 device).

There’s also the question of cost – the fully specced unit is around $2,600 NZD, which puts it in the same bracket as high-end expensive laptops.

Remember ads? Before adblocker?

Personally I feel these ads need more design effort than just the product name and the convincing slogan of “Get it”!

Microsoft has certainly spared no expense advertising the Surface. With billboards, placement marketing in TV series and internet advertising, it’s hard not to notice them. Which is why it’s even more surprising that Microsoft made the monumental mistake of not stocking enough units to buy.

I’m no sales or marketing expert, but generally my understanding is that if you want people to buy something, you should have stock to sell to them. If I walk into the Apple store down the road, I can buy an iPad in about 5 minutes. But if I try to buy the competing Microsoft Surface, I get the depressing statement that the unit is “Out of Stock”:


All models of Surface Pro 2, out of stock on AU online store.

In fact, even its less loved brother, the ARM-based Windows RT version, which can’t run anything other than Microsoft Store applications, is out of stock as well.


Even the feature limited RT-family of devices is sold out.

The tablets seem to have been out of stock since around December 2013, which suggests that the Christmas sales exhausted all the stock and Microsoft has been unable to resupply its distributors.

Possibly Microsoft limited the manufacturing volume for fear of ending up with unsold units (like the difficult-to-shift gen-1 Surface RT series that got written down) – a gamble that has shown itself to be a mistake. I wonder how many sales have been missed where people gave up waiting and either went for a third party Windows tablet, or just purchased an iPad?

Microsoft hasn’t even provided an ETA for more stock, or an email notification option to be advised and get first dibs when new stock eventually arrives.

Of interest, when comparing NZ and AU stock availability and pricing, the price disparity isn’t too bad. The top model Surface Pro 2 costs AUD $1854 excluding GST, whereas the New Zealand model sells for NZD $2260 excluding GST, which is currently around AUD $2137.

This is a smallish difference of around AUD $283, probably due to Microsoft pricing the tablet when the exchange rate was around $0.80 AUD to $1 NZD.
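To put rough numbers on that: at $0.80, NZD $2260 converts to about AUD $1808, broadly in line with the AUD $1854 price tag, whereas at today’s ~$0.95 the same NZD price works out to roughly the AUD $2137 mentioned above – hence the ~$283 gap.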

What I would expect is that when they (eventually!) import additional stock to replenish supplies, the pricing will be re-adjusted to suit the current exchange rate – which is more around $0.95 AUD to $1 NZD.


Kiwi pricing is a bit more eh bru? 15% GST vs 10% GST in Australia is the biggest reason for the disparity.

Whether they do this or not remains to be seen – but considering how expensive it is, if they can drop the price without impacting the profit margin, it could only help make it more attractive.

For now, I’m just keeping an eye on the stock – in many ways not being able to buy one certainly helps the house fund, but the fact is that I need to upgrade my Lenovo laptop at some point in the next year at the latest. If Microsoft can sort out their stock issues, the Surface could well be that replacement.

Jethro does Mac: The Apple Input Devices

With a change in job, I recently shifted from my primary work computer being a Lenovo X1 Carbon running GNU/Linux to using an Apple Macbook Pro Retina 15″ running MacOS.

It’s not the first time that I’ve used MacOS as my primary workstation, but I’ve spent the vast majority of my IT life working in a purely GNU/Linux environment so it was interesting having to try and setup my usual working habits and flow with this new platform.

I’m going to do a few blog posts addressing my thoughts and issues with this platform and how I’ve found it compared to my GNU/Linux laptops. I’m going to look at both hardware and software and note down a few fixes and tricks that I’ve learnt along the way.

 

Part 3: The Apple Input Devices

I’ll freely admit that I’m a complete and total keyboard snob and I’ve been spoilt with quality desktop keyboards (Das Keyboard, IBM Model M) and the best possible laptop keyboard on the market – the classic IBM/Lenovo Thinkpad keyboard (pre-chiclet style).

Keeping this snobbery and bias in mind, I’m bitterly disappointed by the quality of the Apple keyboard. It’s a surprising slip-up by a company that prides itself on perfection and brilliant hardware design; I fear that the keyboard is just an unfortunate casualty of that design-focused mentality.

I have two main issues with this keyboard. Firstly, the shallowness and feeling of the keys, and secondly the layout and key selection.

The shallowness is the biggest issue with the keyboard. Laptops certainly aren’t known for their key travel distance, but there have been a few exceptions to the rule – IBM/Lenovo’s Thinkpad series is widely recognised as featuring one of the better keyboards in the market with decent sized keys, good layout and enough key depth to get a reasonable amount of travel when typing. (Side note: talking about the classic Thinkpad keyboards here… I’m undecided about their new chiclet style keyboards on recent models…)

On the Macbook, the key depth and travel distance is very short, there’s almost no movement when pressing keys and I lose the ability to effectively bounce between the different keys. I personally find that repeatedly doing the small motions needed to type on this keyboard for 4 hours or more causes me physical discomfort in my hands and I fear that if I were to use the Macbook keyboard as my primary computer, I would be looking at long term RSI problems.

Having said that, to be fair to Apple I probably couldn’t handle more than 6-8 hours a day on my Thinkpad keyboard – whilst it’s better than the Apple one, it’s still fundamentally a laptop keyboard with all the limitations it suffers. Realistically, for the amount of time I spend on a computer (12+ hours a day), I require an external keyboard whether it’s plugged into a Macbook, a Thinkpad or some other abomination of typing quality.

The other issue I have with Apple’s keyboard is that despite the large size of the laptop (mine is a 15.6″ unit), they’ve compromised the keyboard layout and removed various useful keys – the ones that I miss the most are home, end, insert, delete, page up and page down, all of which require the use of key combinations to be achieved on the Macbook.

The Thinkpad line has always done pretty well at including these keys, even if they have a VERY annoying habit of moving around with different hardware generations, and their presence on the keyboard is very much appreciated when doing terminal work – which for me is a vast part of my day. It’s a shame that Apple couldn’t have used a bit more of the surface space on the laptop to add some of these keys in.

Overall, if I examine the Macbook keyboard as something I’m only going to use when out of the home/work office, then it’s an acceptable keyboard. I’ve certainly used far worse laptop keyboards and it sure beats tapping away on a tablet touchscreen or using something like the Microsoft Surface foldout keyboards.

And of course both the Thinkpad and the Macbook pale in comparison to a proper external keyboard – as long as I have a decent home/work office external keyboard it’s not too much of a deal breaker, but I’d certainly weigh in the keyboard as a negative if I was considering a machine for a role as a travelling consultant, where I could be spending weeks at a client site with unknown facilities and maybe needing to rely on the laptop itself.

 

Despite the insistence of some people that the keyboard is the only thing a computer needs, you’ll probably also want to use some kind of cursor moving thing if you want to effectively make use of the MacOS GUI.

The Macbook ships with a large touchpad centered in the middle of the laptop beneath the keyboard. This is a pretty conventional design, although Apple has certainly been pushing the limits on getting the largest possible sized touchpad on a laptop – a trend that other vendors appear to have been following in recent years.

Personally I hold a controversial opinion where I vastly prefer Trackpoint-style pointers on laptops over touchpads. I’m sure that a case could be made to accuse me of Thinkpad fanboyism, but I’ve used and enjoyed Trackpoints on Toshiba, HP and Lenovo computers in the past with great success.

The fundamental reason I prefer the Trackpoint, is that whilst it takes longer to get used to and feels weird at first, once it’s mastered, it’s possible to rapidly jump between typing and cursor moving with minimal effort.

Generally my fingers are resting on keys right next to the Trackpoint, or sometimes even I rest my finger on the Trackpoint itself whilst waiting, so it’s easy to jump between typing and cursoring. Plus on the Thinkpad design, my thumb rests just above the 3-button mouse, which is fantastically convenient.


http://xkcd.com/243/

Whilst the Macbook’s large touchpad is by far the best touchpad I’ve ever used, it still has the fundamental flaw of the layout forcing me to make a large movement to go between the keyboard and the touchpad each time.


This is technically a Macbook Air in the picture, but the keyboard and touchpad are the same across the entire product line… this laptop was just closer to my camera. :-)

It also has the issue of then sitting right in the way of my palm, so it took me a while to get used to not hitting the touchpad with my palm whilst typing. I’ve gotten better at this, although it still happens from time to time and does prevent me from resting my palm in my preferred natural position.

Admittedly I am nitpicking. To their credit, Apple has done a far better job of touchpads than most other vendors I’ve ever used. Generally laptop touchpads are too small (you can see how tiny the one on my Thinkpad is – I just disabled it entirely in favour of the Trackpoint) and even vendors who are busy cloning Apple’s design haven’t always gotten the same feel of sturdiness that Apple’s touchpad offers.

The gesture integration with MacOS is also excellent – I’ve found that I’m often using the three-finger swipe to switch between workspaces and the two-finger scrolling is very easy to use when doing web browsing, nicer and more natural feeling than using the cursor keys or a scroll wheel even.

 

Overall it’s a decent enough machine and beats most other laptop vendors in the market. I’d personally still go for the Thinkpad if all things other than the keyboard were identical, simply due to how much I type and code, but the Macbook keyboard and touchpad are an acceptable second place for me and a good option for most general users.

See other posts in this series via the jethro does mac tag as I explore using MacOS after years of GNU/Linux only.

Jethro does Mac: GPU Woes

With a change in job, I recently shifted from my primary work computer being a Lenovo X1 Carbon running GNU/Linux to using an Apple Macbook Pro Retina 15″ running MacOS.

It’s not the first time that I’ve used MacOS as my primary workstation, but I’ve spent the vast majority of my IT life working in a purely GNU/Linux environment so it was interesting having to try and setup my usual working habits and flow with this new platform.

I’m going to do a few blog posts addressing my thoughts and issues with this platform and how I’ve found it compared to my GNU/Linux laptops. I’m going to look at both hardware and software and note down a few fixes and tricks that I’ve learnt along the way.

 

Part 2: Dual GPU Headaches

Whilst the Macbook line generally features Intel GPUs only, the flagship Macbook Pro 15″ model like mine features dual GPUs – the low power Intel GPU, as well as a high(er) performance Nvidia GPU for when graphical performance is required for certain business applications (*cough* Minecraft *cough*).

MacOS dynamically switches between the different GPUs as it deems necessary, which is a smart idea – except that MacOS seems to get led astray by malware such as Flash Player, which launches in the background of some webpage somewhere and proceeds to force the system onto the Nvidia GPU, chewing up battery yet not even rendering anything.

To be fair to Apple, this is a fault with the crapiness of Flash Player and not MacOS. It certainly gives ammunition to Apple’s 2010 decision to ditch having Flash Player pre-installed on MacOS systems to conserve battery life – the Nvidia GPU shortens my laptop’s battery life by about 30 minutes even when just sitting idle.

Annoyingly, the only way I found out that my Mac wasn’t using the Intel GPU most of the time was by installing a third party tool, gfxCardStatus, which shows the apps blocking low-power GPU selection and also allows forcing a particular GPU manually.


Not content with hogging CPU, Flash Player found itself wanting to hog GPU as well.

The other issue with the dual GPU design, is that it makes running GNU/Linux on these models of Macbook complex – it can be done, but you have to use MacOS to select one GPU or another before then booting into GNU/Linux and sticking with that selected GPU.

This may get better over time, but it’s worth keeping in mind for anyone who’s considering ditching MacOS.

 

See other posts in this series via the jethro does mac tag as I explore using MacOS after years of GNU/Linux only.

Jethro does Mac: Retina Display

With a change in job, I recently shifted from my primary work computer being a Lenovo X1 Carbon running GNU/Linux to using an Apple Macbook Pro Retina 15″ running MacOS.

It’s not the first time that I’ve used MacOS as my primary workstation, but I’ve spent the vast majority of my IT life working in a purely GNU/Linux environment so it was interesting having to try and setup my usual working habits and flow with this new platform.

I’m going to do a few blog posts addressing my thoughts and issues with this platform and how I’ve found it compared to my GNU/Linux laptops. I’m going to look at both hardware and software and note down a few fixes and tricks that I’ve learnt along the way.

 

Part 1: The Retina Display

Apple is known for their hardware quality – the Macbook Pro Retina 15″ I am using is a top-of-the-line machine with a whopping Core i7, 16GB of RAM, a 512GB SSD, an Nvidia GPU and the massive 2880×1800 pixel Retina LCD display. Whilst the hardware is nice, it’s something that can be found from other vendors – what really makes it interesting is the massive high resolution display.


Shiner than a Thinkpad. But is it just a showoff?

Unfortunately, for all the wonderfulness that Retina advertises, it’s given me more grief than happiness so far. My main issue is how Apple handles this massive high resolution display.

Out of the box you get a scaled resolution that looks like any standard MacOS laptop, rather than the full native resolution of the display. Apple then does some weird black magic in their UI layer, where the screen is rendered to a massive 3360 x 2100 virtual display and then scaled down to the actual display size of 2880 x 1800 pixels.

The actual resolutions available to the end user aren’t real resolutions, but rather different modes that essentially look/feel like 1920×1200, 1680×1050, 1440×900 (the default for Retina), 1280×800 and 1024×640, but in the background MacOS is just scaling application windows to these sizes.

There are some more details about the way the Retina display and MacOS work in the AnandTech review here.
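As a rough sketch of that scaling arithmetic (assuming, as that review describes, that each scaled mode is rendered at twice its nominal size and then resampled to the panel’s native 2880×1800 – the mode list below is just the set of options mentioned above):

# Sketch: how the "looks like" modes relate to the physical 2880x1800 panel.
# Assumes each scaled mode renders to a 2x backing store, which is then resampled
# to the panel (down for the bigger modes, up for the smaller ones, 1:1 for 1440x900).
PANEL = (2880, 1800)
MODES = [(1920, 1200), (1680, 1050), (1440, 900), (1280, 800), (1024, 640)]

for width, height in MODES:
    backing = (width * 2, height * 2)   # off-screen render resolution
    scale = PANEL[0] / backing[0]       # factor applied to reach the panel
    print(f"looks like {width}x{height} -> rendered at {backing[0]}x{backing[1]}, "
          f"scaled by {scale:.2f} to fit {PANEL[0]}x{PANEL[1]}")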

If you come from a Windows or GNU/Linux world where the screen resolution is what it says on the box, it’s a really weird mind shift. You’ll quickly find this approach is common to the Apple ecosystem – so much of what I understand about computers is difficult to figure out with MacOS, because instead of using the technical terminology, Apple hides everything behind their own terminology designed to make it “easier” for normal users. And maybe it does… but for me, it’s more of a hindrance than a help.

Apple’s Settings window isn’t that helpful at explaining the real resolutions underneath; in my case I had to get “screenresolution” from Brew in order to figure out what resolution this machine was actually displaying.

So which size and mode do I use? The stock screen resolution is OK for a laptop, and it may suit you perfectly if you’re using Retina optimised applications (eg Aperture) where a lower effective resolution but high DPI content is useful.

Default scaled mode – effectively 1440×900

However for me, where most of my use case is email, terminal and a browser, I wanted the ability to fit the most possible information onto the screen, so I ended up using the “More Space” resolution, which drives the display at a 1920×1200-like scaled resolution.

The "More Space" mode is handy for fitting decent amounts of console output.

The “More Space” mode is handy for fitting decent amounts of console output.

Whilst the Retina display is an excellent equal to a 24″ monitor (which typically has a resolution around 1920×1080, almost the same as the “More Space” mode), it doesn’t quite live up to my hope that it would equal a 27″ monitor.

27″ monitors are the holy grail for me, since they have a resolution of 2560 x 1440, which is big enough to fit two large A4 sized windows on the screen at the same time.

Good, but not as good as a nice 27″ panel – it’s functional, but not as natural-feeling as doing the same on a 27″ monitor, and it still feels like trying to squeeze everything in.

It is possible to bypass Apple’s limitations and drive the display at higher resolutions using third party tools, but I can only just read the 1920×1200 mode comfortably as it is. I tried DisplayMenu (as suggested by Kai in the comments), but whilst the resulting resolution is amazing, I find the text just a bit too small to read for prolonged periods.
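
If you’d rather avoid a menu bar app entirely, the screenresolution tool from earlier can also switch modes from the terminal – a rough sketch, assuming its set subcommand still takes the widthxheightxdepth format and that the native mode actually shows up in its list output:

    # Check which modes are actually on offer first
    screenresolution list

    # Force the full native panel resolution at 32-bit colour depth
    screenresolution set 2880x1800x32

Keep a terminal handy to switch back, because everything gets very small very quickly.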

The full 2880×1800 is lovely, but I might need glasses to read it…

The other issue with the Retina display is that, due to the way Apple does the scaling, some applications just end up looking bad and fuzzy thanks to bitmap stretching and other nastiness – this impacted me with KeepassX, as well as some company-internal web applications.

But when you do get a properly Retina compatible application, things do look beautiful – Google Maps, in both vector map and satellite view, looks incredibly sharp and clear.

Vectorised graphics were made for Retina

If I was choosing between a laptop with a high resolution display like this and one without, I’d choose the former, all other factors being equal. But I’m not convinced that it’s worth splashing lots of cash on for my particular requirements of terminals and browsing – the Retina screen probably wouldn’t add much for me over a laptop with a native 1920×1200 resolution instead of downscaling.

 

See other posts in this series via the jethro does mac tag as I explore using MacOS after years of GNU/Linux only.

linux.conf.au 2014

I’ve just returned from my annual pilgrimage to linux.conf.au, which was held in Perth this year. It’s the first time I’ve been over to Western Australia – it’s a whole 5 hour flight from Sydney, longer than it takes to fly to New Zealand.

Perth’s climate is a very dry heat compared to Sydney, so although it was actually hotter than Sydney for most of the week, it didn’t feel quite as unpleasant – other than the final day which hit 45 degrees and was baking hot…

It’s also a very clean and tidy city – the well maintained nature was very noticeable, with the city and gardens being immaculately kept. Not sure if it’s always been like this, or if it’s a side effect of the mining wealth in the economy allowing the local government to afford the upkeep.

The towering metropolis of mining wealth.

As usual, the conference ran for 5 full days and featured 4-5 concurrent streams of talks during the week. The quality was generally high as always, although I feel the content selection has shifted away from deep dive technical talks towards more high level talks, and that OpenStack (whilst awesome) is taking up far too much of the conference and really deserves its own dedicated conference now.

I’ve prepared my personal shortlist of the talks I enjoyed most of all for anyone who wants to spend a bit of time watching some of the recorded sessions.

 

Interesting New(ish) Software

  1. RatticDB – A web-based password storage system written in Python by friends in Melbourne. I’ve been trialling it, and since then it’s been growing in popularity and awareness, as well as getting security audits (and fixes). [video] [project homepage].
  2. MARS Light – This is an insanely awesome replacement for DRBD, designed to address DRBD’s issues when replicating over slower, long-distance WAN links. Like DRBD, MARS Light does block-level replication, so it’s ideal for entire datacenter and VM replication. [video] [project homepage].
  3. Pettycoin – Proposal/design for an adjacent network to Bitcoin designed for microtransactions. It’s currently under development, but is an interesting idea. [video] [project homepage].
  4. Lua code in Mediawiki – The Mediawiki developers have added the ability for Wikipedia editors to write Lua code that is executed server side, which is pretty insanely awesome when you consider that normally nobody wants to let the untrusted public remotely execute code on their systems. The developers have taken Lua and created a “safe” version that runs inside PHP with restrictions to make this possible. [video] [project homepage].
  5. OpenShift – RedHat did a demonstration of their hosted (and open source) PaaS platform, OpenShift. It’s a solution I’ve looked at before; if you’re a developer who doesn’t care about infrastructure management, it looks very attractive. [video] [project homepage].

 

Evolution of Linux

  1. D-Bus in the Kernel – Lennart Poettering (of Pulseaudio and SystemD fame) presented the efforts he’s been involved in to fix D-Bus’s shortcomings and move it into the kernel itself, making D-Bus a proper high speed IPC solution for Linux. [video]
  2. The Six Stages of SystemD – Presentation by an engineer who has been moving systems to SystemD, covering the process he went through and his thoughts and experiences along the way. Really showcases the value that moving to SystemD will bring to GNU/Linux distributions. [video]
  3. Development Tools & The UNIX Philosophy – Excellent talk by a Python developer on how we should stop accepting command-line only tools as being the “right” or “proper” UNIX-style tools. Some tools (eg debuggers) are just better suited to graphical interfaces, and a GUI can still meet the UNIX philosophy of having one tool do one thing well. I really like the argument he makes and have to agree – in some cases GUIs are just more suitable for the task. [video]

 

Walkthroughs and Warstories

  1. TCP Tuning for the Web – presented by one of the co-founders of Fastly showing the various techniques they use to improve the performance of TCP connections and handle issues such as DDOS attacks. Excellent talk by a very smart networking engineer. [video]
  2. Massive Scaling of Graphite – Very interesting talk on the massive scaling issues involved in collecting statistics with Graphite, plus some impressive and scary stats on the lifespan and abuse that SSDs will tolerate (which is nowhere near as much as they should!). [video]
  3. Maintaining Internal Forks – One of the FreeBSD developers spoke on how his company maintains an internal fork of FreeBSD (with various modifications for their storage product) and the challenges of keeping it synced with the current releases. Lots of common problems, such as the pain of handling new upstream releases and re-merging changes. [video]
  4. Reverse engineering firmware – Matthew Garrett dug deep into vendor firmware configuration tools and explained how to reverse engineer their calls using tools such as strace and various IO and memory mapping utilities. Well worth a watch, purely for the fact that Matthew Garrett is an amazing speaker. [video]
  5. Android, The positronic brain – Interesting session on how to build native applications for Android devices, such as cross compiling daemons and how the internal structure of Android is laid out. [video]
  6. Rapid OpenStack Deployment – Double-length Tutorial/presentation on how to build OpenStack clusters. Very useful if you’re looking at building one. [video]
  7. Debian on AWS – Interesting talk on how the Debian project is using Amazon AWS for various serving projects and how they’re handling AMI builds. [video]
  8. A Web Page in Seven Syscalls – Excellent walk through on Varnish by one of the developers. Nothing too new for anyone who’s been using it, but a good explanation of how it works and what it’s used for. [video]

 

Other Cool Stuff

  1. Deploying software updates to ArduSat in orbit by Jonathan Oxer – Launching Arduino powered satellites into orbit and updating them remotely so they can be used for educational and research purposes. What could possibly be more awesome than this? [video].
  2. HTTP/2.0 and you – Discussion of the emerging HTTP/2.0 standard. Interesting and important stuff for anyone working in the online space. [video]
  3. OpenStreetMap – Very interesting talk from the director of the OpenStreetMap Team about how OpenStreetMap is used around disaster prone areas, getting the local community to assist with generating maps which are then used by humanitarian teams to help with disaster relief efforts. [video]
  4. Linux File Systems, Where did they come from? – A great look at the history and development cycles of the different filesystems in the Linux kernel – comparing ext1/2/3/4, XFS, ReiserFS, Btrfs and others. [video]
  5. A pseudo-random talk on entropy – Good explanation of the importance of entropy on Linux systems, going much more low level and covering the tools available for helping with it (see the quick check sketched below). Some cross-over with my own previous writings on this topic. [video]
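
For anyone curious about the state of their own machines, a minimal sketch of checking the kernel’s entropy pool on a typical GNU/Linux system (the daemon suggestions are just the usual general-purpose options, not anything specific to the talk):

    # Current size of the kernel entropy pool; persistently low values
    # (a few hundred or less) mean reads from /dev/random may block
    cat /proc/sys/kernel/random/entropy_avail

    # Daemons such as haveged or rng-tools (rngd) can help keep the pool
    # topped up on headless servers and VMs with little natural entropy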

Naturally there were many other excellent talks – the above is just a selection of the ones I got the most out of during the conference. Take a look at the full schedule to find other talks that might interest you; almost all sessions were recorded during the conference.