Tag Archives: geek

Anything IT related (which is most things I say) :-)

The value of a tweet?

As we’ve previously established, I use Twitter. A lot. Too much, some might say. ;-)

I’ve been using Twitter since 2009, and during that time I’ve made a huge number of friends all over the world, many of whom I keep in touch with on an almost daily basis.

I’ve even met someone special on Twitter, stayed with friends in AU whom I met purely on Twitter, and gotten myself into plenty of trouble and debates :-)

But Twitter has its downsides – time, dramas, network lock-in and an ever-expanding social scene which is difficult to manage.

  • Time: Twitter consumes massive amounts of time and I consider it highly addictive for an infoholic like myself – I need to keep refreshing, getting the next message, reloading.
  • Chains: Whenever I consider leaving Twitter, there is always the realisation that a vast number of the awesome people I know could not have been met via any other means, and that I wouldn’t keep in touch with them otherwise. My social circle has been massively expanded by Twitter – I can go to most major cities in AU and NZ now and have friends there I can meet up with, which is pretty amazing for someone who spends most of his time on his computer writing code.
  • Proprietary Network: Most people don’t care so much, but I really dislike how Twitter is a single proprietary entity controlling all this communication – ideally I’d like to be using an open platform, for example StatusNet/Identica – but that then brings one back to the issue of being locked into Twitter due to the user base all being there….
  • Business: Your Twitter profile is your personal conversational space, but it’s searchable for all employers, investors and journalists to access in future – there are already tonnes of cases of information being used against people in ways they never expected. Of course one could make their profile private… but that then discourages new people from following and making new friends.

And then there are the dramas… Twitter is basically a giant high school clique at times, and that can be pretty stressful.

  • Follows, Unfollows: People follow and unfollow all the time – since I’ve started using twitter, the selection of people I follow has changed heaps – maybe the person’s interests changed, maybe I kept getting into arguments with them, maybe they keep reminding me of that hawt evening long ago or maybe they just turn out to be douchebags. However unfollows tend to get taken personally and maybe Twitter users in general have this problem where we place too much value on a follow-based relationship.
  • Twittercide: Sometimes people get tired of Twitter – maybe for one of the reasons above – and end up quitting, often just deleting their account without saying goodbye – this actually really upsets me, would you just suddenly stop talking with any IRL friends without saying anything?
  • Past Hurts: In close communities – particularly combinations like a small city such as Wellington plus Twitter – it’s easy to keep bumping into past regrets, exes or people you generally dislike. Even worse is seeing people praise someone who you know is a complete jerk.

So as you can probably tell, there’s a lot about Twitter that I’m unhappy about – so I’m looking at making some improvements to the way I follow people and use it:

  • Enforce Limits: 150 friends, 100 others max. Science suggests that we can only maintain social relationships with 100-230 people at a time, and I’m not going to argue with science. That’s for conservatives to do.
  • Value Communication: Sure, it’s amusing to follow certain people, but I don’t really need to read 100 messages a day about their partner dramas or the type of cushions their cat sits on. The key question I’ve been asking myself is: does this tweet really add value to my life? I learn some excellent things at times from industry peers, valued friends and such, so if I’m going to limit the numbers, I should try to follow those who provide quality content – after all, my time is short and valuable, so I should make sure I’m using it productively.
  • Realise that I don’t need to follow everyone. For a long time I’ve only really followed people who start engaging with me, but maybe I don’t need to follow people even if they do – if they want to have good conversations with me, that’s great – I’ll happily engage, but maybe they post too much crap at other times to be worth a follow.
  • Distinguish Friends from Fans: Fans will follow, then get tired and unfollow you the next day without saying a word; friends are people you regularly engage with and wouldn’t want to miss. Not everyone who follows is really a friend, and maybe I should try to cull who I follow back to those I actually care about.
  • I can block retweets. Some otherwise great people retweet some complete crap. But it’s possible to block just their retweets and I’m going to use this with more frequency to improve the messaging quality.
  • Less Time: I can get *amazing* amounts of geekery done when I’m not distracted by Twitter every 5 minutes – I’m noticing too many evenings where I do little more than Twitter, and I’d rather spend a week of evenings doing geekery and then catch up with friends at a specific time.

 

As much as it pains me to say it, I’m pretty tired of Twitter. But I can’t quit – there’s too much value in the relationships there – so I’m trying to find a middle ground between not having it at all and being totally addicted.

Maybe long term I’ll move off Twitter – perhaps more use of my blog and IM will eliminate some of the uses I have for it – but whilst those allow me to maintain current relationships, I’m not sure they really enable me to grow and find new ones.

Other ideas include more use of email lists and chatrooms around specific topics to hang out with like-minded individuals. Or maybe write some bots to automate social interaction for me and send me summarised updates :-)

 

Standards people, use them!

I’ve been driving around in my mighty 1997 Toyota Starlet for about 18 months+ now and have finally gotten tired of having the radio as my only source of audio.

I can get away with using radio when in the CBD with good alternative stations like Active that don’t have too many ads, but when doing roadtrips there are often large sections with no coverage or only poor quality commercial stations.

So I decided to buy a new stereo and settled on a Sony CDXGT500U – primarily due to it meeting my two requirements in the cheapest form factor: both an AUX 3.5mm input jack AND a USB socket for taking MP3s (sadly no Ogg or FLAC tho).

Being a sucker for DIY I decided to have a go at installing it myself – I didn’t need anything too flash like new speakers or cable runs, just wanted the inputs really. Fortunately the installation of the stereo was pretty easy, but I ran into the good old problem of proprietary connectors/standards used by the different vendors.

 

  1. There’s no single standard for the mounting of devices in the car dash – in the case of this stereo, the mounting brackets supplied aren’t required and instead it bolts directly into the Japanese-style mounts.
  2. Sony doesn’t use a standard for their stereos.
  3. Neither does Toyota use a standard for their cars.

To make it work (without going to the pain of soldering/custom wire wrapping) I had to buy *two* different adapters – one for Sony->ISO and another for Toyota->ISO, which cost a good $15 each from retailers.

We all love lots of daisy chained adapters!

 

On the plus side, I now have a new stereo installed, dragging my car out of the 80s and into 2011. It’s also the most expensive thing in the car now, although being a Starlet, it’s hardly a theft magnet.

This Starlet be totally pimped yo!

 

#geekflat cleanout

With the departure of John and the continual annoyance of all my junk piled around the flat, I’m going through a bit of a clean up and listing everything on trademe.

There’s a mix of general flat stuff as well as far too much computing equipment, including rackmount equipment such as cable management bars, servers, UPSes and more.

Lots of ex-Amberdms kit, ideal for Linux deployments and personal development labs, plus a nice collection of networking gear.

There’s still more to list over the next couple of weeks, so add my *two* trademe accounts to your favorites to get notifications. (There are separate accounts for Amberdms vs Personal listings).

 

Sony & Identity Theft

By now most people have heard of the Sony PlayStation Network getting hacked and around 75 million accounts’ worth of information being obtained.

Ignoring the whole fact that someone owned Sony so badly and that they’re not even sure if credit card details got exploited, I want to examine the information that is being stored with Sony.

There are three key bits of information obtained from the breach:

  1. Login credentials of PSN users.
  2. User identity information, consisting of phone number, email address and age.
  3. Possibly credit card information.

The last item is the most important – obviously any credit card breach is bad (also PCI-DSS compliance, WTF Sony?), but Sony isn’t sure if the card DB has been exposed or not at this stage and is making a general just-in-case recommendation.

Login credentials may be an issue depending on how smart you are – if you’re one of those people who uses the same login on every site, this is a clear example of why you shouldn’t, and you can now enjoy changing the login details on every single site you use… (how many more provider compromises will it take until you learn this is bad??)

So assuming you didn’t use a credit card and used unique credentials, this limits the exposure to user identity information – this is causing huge outcry in the media, with some great quotes from various countries’ police stating how this is going to lead to widespread identity theft.

Which raises the following points:

  • Why are banks and other key systems requiring identification so poorly set up that a name, age and address are all you need to obtain access?
  • All these details are already available online for anyone with a bit of sense, it’s hard to keep all this stuff private in the days of social networking.
  • What are the penalties for companies not conducting the proper validation and security checks on people signing up to things like loans?

Sure it’s bad that the information got compromised, but let’s consider that most of the identity information is already public.

Birthdates are easy to get thanks to the widespread popularity of social networking; the same goes for addresses, which can be found from domain records, social networking, websites and more, along with contact details.

If this information is enough to then take out a loan or a bank account, then I think those providers have some pretty heavy explaining to do – far too many have sloppy validation checks which don’t reflect the realities of the 21st century.

Just last week, I had to “validate” my home address to obtain a driver’s license. All that’s required to prove my identity is some photo ID and a service bill with an address on it.

Faking a bill is hardly complex – most laser printers will produce something good enough to pass any regular inspection – so it’s a step that is only going to catch out the most clueless of exploiters.

Wake up companies, seriously….

I know that some providers do take precautions, even when this may lead to some customer inconvenience/annoyance.

  • National Bank (NZ) would refuse to tell me anything about my account, unless I rang them from a number that matched their records for my account.
  • Visiting banks in person often requires photo ID, which can be faked, but takes a bit more effort.
  • My approach in business has always been to ensure a customer was emailing/calling from a known account, otherwise we would call back to confirm requests on their recorded number.

Although some of these approaches are becoming less trustworthy…

  • Email accounts are commonly broken into – because of this, if we get unusual requests or password reset requests, we often call back the client to confirm.
  • With the adoption of VoIP technologies, it’s becoming easier to assume someone’s phone number and send/receive phone calls on their behalf.

Sadly there isn’t really a truly valid fix – there’s no form of identification that can truly validate people’s identity, and secret words or passwords are usually weakened by the fact that humans suck at choosing them and reuse them far too often.

I think the best fix is simply making sure service providers validate information – such as requiring customers to quote their last invoice and account number before making changes – and that financial institutions and credit agencies follow strict security procedures such as requiring photo identification.

Android VPN Rage

Having obtained a shiny new Nexus S to replace my aging HTC Magic, I’ve been spending the last few days setting it up as I want it – favorite apps, settings, email, etc.

The setup is a little more complex for me, since I run most of my services behind a secure internal VPN – this includes email, SIP and other services.

 

On my HTC Magic, I ran OpenVPN, which was included in CyanogenMod – this is ideal, since I run OpenVPN elsewhere on all my laptops and servers and it’s a very reliable, robust VPN solution.

With the Nexus S, I want to stick to stock firmware, but this means I only have the options of a PPTP or IPsec/L2TP VPN solution, both of which I consider to be very unpleasant solutions.

I ended up setting up IPsec (Openswan) + L2TP (xl2tpd + pppd) and got this to work with my Android phone to provide VPN connectivity. For simplicity, I configured the tunnel to act as the default route for all traffic.
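For anyone wanting to do the same, the server-side config ends up looking roughly like the sketch below. This is only an outline of the sort of thing involved – the subnet, IP range and paths are placeholders rather than my actual settings, and the pre-shared key itself lives in /etc/ipsec.secrets:

    # /etc/ipsec.conf (Openswan) - transport-mode IPsec carrying L2TP, PSK auth
    conn l2tp-psk
        authby=secret
        type=transport
        left=%defaultroute
        leftprotoport=17/1701
        right=%any
        rightprotoport=17/%any
        pfs=no
        auto=add

    ; /etc/xl2tpd/xl2tpd.conf - hands out addresses and starts pppd per client
    [lns default]
    ip range = 192.168.100.10-192.168.100.50
    local ip = 192.168.100.1
    require authentication = yes
    pppoptfile = /etc/ppp/options.xl2tpd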

 

Some instant deal breakers I’ve discovered:

  1. Android won’t remember the VPN user password – I can fix this for myself by potentially moving to certificates, but this is a deal breaker for my work VPN with its lovely 32-char password as mandated by the infrastructure team.
  2. Android disconnects from the VPN when changing networks – eg from 3G to wifi….. and won’t automatically reconnect.
  3. I’m unable to get the VPN to stand up on my internal RFC 1918 wifi range, for some reason the VPN establishes and then drops, yet works fine over 3G to the same server.

 

I love Android and I suspect many other platforms won’t be much better, but this really is a bit shit – I can only see a few options:

  1. Get OpenVPN modules onto my phone and set up OpenVPN tunnels on the stock firmware – for this, I will need to root the device, compile the Nexus kernel with tun module support, copy it onto the phone and then install one of the UIs for managing the VPN (rough sketch after this list).
  2. Switch to CyanogenMod to gain these features, at the cost of giving up the stability of the stock releases from Google/Samsung.
  3. Re-compile the source released by Samsung and apply the patches I want for OpenVPN support in the GUI from CyanogenMod.
  4. Re-compile the source released by Samsung and apply patches to the VPN controls in Android to fix VPN handling properly. Although this still doesn’t fix the fact that IPsec is a bit shit in general.

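For the record, option 1 looks something like the following – treat it strictly as a sketch from memory, since the repository, defconfig name and paths are my assumptions for the Nexus S rather than tested instructions, and the module has to match the running kernel build to load at all:

    # grab the Samsung/Nexus S kernel source (assumed repo and defconfig)
    git clone https://android.googlesource.com/kernel/samsung.git
    cd samsung
    make ARCH=arm herring_defconfig
    # enable CONFIG_TUN=m, then build just the modules
    make ARCH=arm CROSS_COMPILE=arm-eabi- menuconfig
    make ARCH=arm CROSS_COMPILE=arm-eabi- modules
    # push the tun module to the (rooted) phone and load it
    adb push drivers/net/tun.ko /sdcard/tun.ko
    adb shell su -c 'insmod /sdcard/tun.ko'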
 

All of these are somewhat time intensive activities as well as being way beyond the level of a normal user, or even most technical users for that matter.

I’m wondering if option 3 is going to be the best from a learning curve and control perspective, but I might end up doing 1 or 2 just to get the thing up and running so I can start using it properly.

It’s very frustrating, since there’s some cool stuff I can now do on Android 2.3, like native SIP support that I just need to get the VPN online for first. :-(

I hate Tuesday

Today has been a trial of frustrations and annoyances…. I love IT completely, but sometimes even I have a bad day.

In summary, my day:

  • Personal server has crashed 2x today with no error messages or displayed panics. This is a pretty big deal, since it’s a modern box, runs about 25 of my development virtual machines and is encrypted, making it a PITA to boot back up – not to mention I use it daily for development and informational purposes.
  • That server also runs the #geekflat network which meant calls from flatmates begging for precious internets.
  • After waiting weeks (months?) for a NIC to be added to a customer server, I discovered that the engineer was trying to install a PCIe card into a totally incompatible PCI slot.
  • A complex script for processing files at a customer site has been broken after the file format changed unexpectedly and needs to be fixed.
  • I found a bug in my perfect code that gave me some very frustrating headaches and which I’ll have to fix.
  • A customer I did support work for a year ago has a number of general desktop issues, claims these are my fault and is demanding I fix them, seeing as I was the one who installed the anti-virus. O_o
  • I got called several times by people with questions they could have answered themselves.

All in all, a very frustrating and annoying day. I just hope that tomorrow is better. :-/

The biggest headache is really the server instability – I rely on that server for a lot of services, and having a fault that reports no specific error or message is immensely frustrating.

At this stage, I’m beginning to suspect hardware – it could be a PSU/CPU/RAM/MB fault causing the inconsistent stability, but these sorts of issues are extremely difficult to trace down, and there is nothing in the logs so far to indicate a cause.

I might consider switching UPS and maybe PSUs if I need to and see if that resolves the issue – although it’s very difficult to tell since the last time this server had any stability problems was in Jan…

Arhghgh!

Day 23 – Post a review of an application that you use

This late post is part of my 30 days of geek challenge.

I figured it would be a bit too narcissistic to review my own software and a bit boring to review some of my everyday applications, so instead I’m going to do a post about a rather geeky application – KVM virtualisation.

 

About Virtualisation

For those unfamiliar with virtualisation (hi Lisa <3), it’s a technology that allows one physical computer to run multiple virtual computers – with computers getting more and more powerful compared to relatively stable workloads, virtualisation allows us to make much better use of system resources.

I’ve been using virtualisation on Linux since RHEL 5 first shipped with Xen support – this allowed me to transform a single server into multiple speedy machines and I haven’t looked back since – being able to condense 84U of rackmount servers down into a big black tower in my bedroom is a pretty awesome ability. :-)

 

Background – Xen, KVM

I’ve been using Xen in production for a couple of years now; whilst it’s been pretty good, there have also been a large number of quite serious bugs at times – combined with the lack of upstream kernel support, it’s given Xen a bit of a bad taste.

Recently I built a new KVM server at home running RHEL 6 to replace my data center, which was costing me too much in power and space. I chose to dump Xen and switch to KVM, which is included in the upstream Linux kernel and is a much smaller simpler code base, since KVM relies on the hardware virtualisation capabilities of the CPU rather than software emulation or paravirtualisation.

In short, KVM is pretty speedy since it’s not emulating much, instead handing the hard work to the CPU. You can then add paravirtualisation for things like network and storage to boost performance even further.
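In libvirt terms, the paravirtualised devices are just a matter of selecting virtio in the guest definition. A quick sketch of the relevant XML – the volume path and bridge name are examples only:

    <!-- paravirtualised (virtio) disk backed by an LVM logical volume -->
    <disk type='block' device='disk'>
      <driver name='qemu' cache='none'/>
      <source dev='/dev/vg_kvm/guest01_root'/>
      <target dev='vda' bus='virtio'/>
    </disk>

    <!-- paravirtualised (virtio) network interface attached to a host bridge -->
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>

Swapping bus='virtio' for bus='ide' (and dropping the virtio model on the NIC) gives you the fully emulated devices that guests without virtio drivers need.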

 

My Platform

I ended up building my KVM server on RHEL 6 Beta 2 (before it was released) and am currently running around 25 virtual machines on it with a stable experience.

Neither the server nor the guests have needed restarts after running for a couple of months without interruption, and on the whole, KVM seems a lot more stable and bug-free than Xen on RHEL 5 ever was for me. **

(** I say Xen on RHEL 5, since I believe that Xen has advanced a lot since XenSource was snapshotted for RHEL 5, so it may be unfair to compare RHEL 5 Xen against KVM; a more accurate test would be current Xen releases against KVM.)

 

VM Suspend to Disk

VM suspend to disk is somewhat impressive – I had to take the host down to install a secondary NIC (curse you, lack of PCI hotswap!) and KVM suspended all the virtual machines to disk and resumed them on reboot.

This saves you from needing to reboot all your virtual systems, although there are some limitations:

  • If your I/O system isn’t great, it may actually take longer to write the RAM of each VM to disk than it would take to simply reboot the VMs. Make sure you’re using the fastest disks possible for this.
  • If you have a lot of RAM (eg 16GB like me) and forget to make your filesystem on the host OS big enough to cope…..
  • You can’t apply kernel updates to all your VMs in one go by simply rebooting the host OS, you need to restart each VM that requires the update.

In my tests it performed nicely – out of 25 running VMs, only one experienced an issue: a crashed NTP process, quickly identified by Nagios and restarted manually.
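On RHEL 6 this suspend/resume behaviour is driven by the libvirt-guests init script. Roughly speaking the relevant settings live in a sysconfig file along these lines – I’m quoting from memory, so treat the exact values as examples:

    # /etc/sysconfig/libvirt-guests
    ON_BOOT=start          # start/resume saved guests when the host boots
    ON_SHUTDOWN=suspend    # save each running guest's RAM to disk at shutdown
    SHUTDOWN_TIMEOUT=300   # seconds to wait for guests before giving up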

 

I/O Performance

I/O performance is always interesting with virtualised systems. Some products, typically desktop end user focused virtualisation solutions, will just store the virtual servers as files on the local filesystem.

This isn’t quite so ideal for a server where performance and low overhead are key – by storing a filesystem on top of another filesystem, you are adding much more overhead at the block layer, which translates into decreased performance – not so much around raw read/write, but around seek performance (in my tests anyway).

Secondly, if you are running a fully emulated guest, KVM has to emulate virtual IDE disks, which really impacts performance, since doing I/O consumes much more CPU. If your guest OS supports it, paravirtualised drivers will make a huge improvement to performance.

I’m running my KVM guests inside Linux logical volumes on top of an encrypted block device (which does impact performance a lot); however, I did manage to obtain some interesting statistics showing the performance of paravirtualisation vs IDE emulation.
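For reference, that storage stack is just LVM sitting on a dm-crypt/LUKS device, with each guest getting its own raw logical volume handed to KVM as a block device – something along these lines, where the device and volume names are examples:

    # unlock the encrypted array and put LVM on top of it
    cryptsetup luksOpen /dev/md0 crypt_storage
    pvcreate /dev/mapper/crypt_storage
    vgcreate vg_kvm /dev/mapper/crypt_storage

    # one raw logical volume per guest
    lvcreate -L 30G -n guest01_root vg_kvm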

View KVM IDE Emulation vs Paravirtualisation Results

They show a noticeable improvement with the paravirtualised disk, especially around seek times. Of interest: at the time of the tests, the other server workloads were idle, so the CPU was mostly free for I/O.

I suspect if I were to run the tests again on a CPU occupied server, paravirtualisation’s advantages would become even more apparent, since IDE emulation will be very susceptible to CPU load.

 

The above tests were run on a host server running RHEL 6 kernel 2.6.32-71.14.1.el6.x86_64 on top of an encrypted RAID 6 LVM volume, with 16GB RAM, a Phenom II quad core CPU and SATA disks.

In both tests, the guest was a KVM virtual machine running CentOS 5.5 with kernel 2.6.18-194.32.1.el5.x86_64 and 256MB RAM – so not much memory for disk caching – to a 30GB ext3 partition that was cleanly formatted between tests.

Bonnie++ 1.03e was used with CLI options of -n 512 and -s 1024.
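For anyone wanting to reproduce the test, the invocation was along these lines – the target directory and user here are placeholders:

    # 1GB sequential I/O tests plus 512*1024 small-file create/stat/delete operations
    bonnie++ -d /mnt/benchmark -s 1024 -n 512 -u nobody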

Note that I don’t have perfect guest-to-host I/O comparison results, but similar tests run against a RAID 5 array on the same server suggest there may be around a 10% performance impact with KVM paravirtualisation, which is pretty hard to notice.


Problems

I’ve had some issues with stability which I believe I traced to one of the earlier RHEL 6 beta kernels; since upgrading to 2.6.32-71.14.1.el6.x86_64 the server has been solid, even with large virtual network transfers.

In the past, when I/O was struggling (mostly before I had upgraded to paravirtualised disk), I experienced some strange networking issues, as per the post here, and identified some KVM limitations around I/O resource allocation.

Other than the above, I haven’t experienced many other issues with the host, and further testing and configuration is ongoing – I should be blogging a lot of Xen to KVM migration notes in the near future and will be testing CentOS 6 more thoroughly once released, maybe some other distributions as well.

Day 22 – Release some software under an open source license that you haven’t released before.

This late post is part of my 30 days of geek challenge.

I’ve released a bit of software before under open source licenses – originally mostly scripts and various utilities, before moving on to starting my own open source company (Amberdms Ltd) which resulted in various large applications, such as the Amberdms Billing System and centralised authentication components like LDAPAuthManager.

The other day I released my o4send application, which is a utility for sending Bluetooth messages to any phones supporting OPP (Object Push Profile), and today I pushed a new release of LDAPAuthManager (version 1.2.0) out to the project tracker.

 

I haven’t talked about LDAPAuthManager much before – it’s a useful web-based application that I developed for several customers that makes LDAP user and group management easy for anyone to use without needing to understand the pain that is LDAP.

It’s been extended to provide optional RADIUS attribute support, for setting additional values on a per-user or per-group basis, making LDAPAuthManager part of a wider centralised authentication solution.

 

For other open source goodness, all my current open source components developed by Amberdms can be found on our Indefero project tracker at www.amberdms.com/projects/.

There’s a lot that I have yet to release – releasing means I need to validate the documentation, package, test and then upload so I can be sure that everyone gets the desired experience with the source, so it can be tricky to find the time sometimes :-/

Introducing o4send

A while ago, Amberdms was contracted to develop an application for sending messages to Bluetooth-enabled mobile phones for the NZ world expo.

Essentially the idea was that people would visit the expo and receive a file on their mobiles containing some awesome content about New Zealand. The cool thing about this was that you didn’t need to be paired – any phone with Bluetooth active would get the message.

Apparently this worked quite nicely, although I’m not convinced that OPP will be of much use in the future, with the two major smartphone platforms (Android and iPhone/iOS) not providing support for it – we found that it worked best with Nokia Symbian phones.

To make this work, I wrote a Perl script and coupled it with a CSV or MySQL database backend to track the connections and file distributions – I bundled this into a little application called “o4send”, the source of which I’ve now released publicly.
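The core logic is nothing fancy: scan for discoverable devices, then attempt an OBEX push to each one and record the result. As a very rough illustration of the idea (this is not the actual o4send code, the OPP channel and file name are just examples, and in practice you’d look up the channel per device with sdptool), the same thing can be done with the standard bluez/obexftp tools:

    # scan for discoverable devices and push a file to each via OBEX Object Push
    for addr in $(hcitool scan | grep -oE '([0-9A-F]{2}:){5}[0-9A-F]{2}'); do
        obexftp --nopath --noconn --uuid none \
                --bluetooth "$addr" --channel 9 --put nz-expo.jpg
    done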

You can check out the source and download the application at the Amberdms project tracker at: https://www.amberdms.com/projects/p/oss-o4send/

Take care with this application – it can talk to a lot of mobile phones and I’m not sure of the legality of sending unsolicited messages to Bluetooth devices – but I figured this source might be useful to somebody one day for a project, or at the very least provide a “hey, that’s cool” moment.

30 days of geek takes off?

Readers who have been around for a little while may recall my 30 days of geek blogging challenge, for which I sadly ran out of time to complete the last few questions.

Recently @CyrisXD has taken up the idea and has been promoting it to get a whole bunch of other geeks blogging and talking about it, which is pretty awesome. He has a list of people doing the challenge, starting up on the 1st of April on his website at eguru.co.nz and there seems to be a lot of buzz around it.

It’s pretty awesome to see it take off, and it would be a shame if I didn’t complete it myself, so I’m going to start making a post a day to finish the 30 days of geek challenge. :-)

As a side note, I’m also making some effort to go back and tag all the articles on this blog better – I have a few categories, but there’s lots more content that tends to get hidden and hopefully tagging it will make it more accessible to casual readers, so I’ll be doing this over the next week or so.