Day 23 – Post a review of an application that you use

This late post is part of my 30 days of geek challenge.

I figured it would be a bit too narcissistic to review my own software and a bit boring to review one of my everyday applications, so instead I’m going to do a post about a rather geeky application – KVM virtualisation.

 

About Virtualisation

For those unfamiliar with virtualisation (hi Lisa <3), it’s a technology that allows one physical computer to run multiple virtual computers. With computers getting more and more powerful whilst workloads stay relatively stable, virtualisation allows us to make much better use of system resources.

I’ve been using virtualisation on Linux since RHEL 5 first shipped with Xen support – it allowed me to transform a single server into multiple speedy machines and I haven’t looked back since. Being able to condense 84U of rackmount servers down into a big black tower in my bedroom is a pretty awesome ability. :-)

 

Background – Xen, KVM

I’ve been using Xen in production for a couple of years now. Whilst it’s been pretty good, there have also been a number of quite serious bugs at times – combined with the lack of upstream kernel support, it’s left Xen with a bit of a bad taste.

Recently I built a new KVM server at home running RHEL 6 to replace my data center, which was costing me too much in power and space. I chose to dump Xen and switch to KVM: it’s included in the upstream Linux kernel and has a much smaller, simpler code base, since KVM relies on the hardware virtualisation capabilities of the CPU rather than software emulation or paravirtualisation.
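
Before committing to KVM, it’s worth checking that the CPU actually exposes the hardware virtualisation extensions it depends on (Intel VT-x or AMD-V) and that the kernel modules are loaded – a quick sanity check, nothing here is specific to my setup:

    # Count CPU flags advertising hardware virtualisation support
    # (vmx = Intel VT-x, svm = AMD-V) - anything above zero means
    # the host can run KVM full virtualisation.
    egrep -c '(vmx|svm)' /proc/cpuinfo

    # Confirm the KVM kernel modules are loaded.
    lsmod | grep kvm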

In short, KVM is pretty speedy since it’s not emulating much, instead handing the hard work to the CPU. You can then combine it with paravirtualised drivers for things like network and storage to boost performance even further.
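
To illustrate, you can ask for the paravirtualised virtio bus on both disk and network when the guest is created. A hedged sketch using virt-install – the guest name, volume path, bridge and install location are made-up examples rather than my actual configuration:

    # Hypothetical guest - names, sizes and paths are examples only.
    # bus=virtio and model=virtio select the paravirtualised disk
    # and network drivers instead of the emulated IDE disk and NIC.
    virt-install \
        --name guest01 \
        --ram 256 \
        --disk path=/dev/vg_kvm/guest01,bus=virtio \
        --network bridge=br0,model=virtio \
        --location http://mirror.centos.org/centos/5/os/x86_64/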

 

My Platform

I ended up building my KVM server on RHEL 6 Beta 2 (before it was released) and am currently running around 25 virtual machines on it, with a stable experience so far.

Neither the server nor the guests have needed restarting after running for a couple of months without interruption, and on the whole, KVM seems a lot more stable and bug free than Xen on RHEL 5 ever was for me. **

(** I say Xen on RHEL 5, since I believe that Xen has advanced a lot since XenSource was snapshotted for RHEL 5, so it may be unfair to compare RHEL 5 Xen against KVM – a more accurate test would be current Xen releases against KVM.)

 

VM Suspend to Disk

VM suspend to disk is quite impressive – I had to take the host down to install a secondary NIC (curse you, lack of PCI hotswap!) and KVM suspended all the virtual machines to disk and resumed them on reboot.

This saves you from needing to reboot all your virtual systems (see the sketch after this list for how it hangs together), although there are some limitations:

  • If your I/O system isn’t great, it may actually take longer to write the RAM of each VM to disk than it would take to simply reboot the VMs. Make sure you’re using the fastest disks possible for this.
  • If you have a lot of RAM (eg 16GB like me) and forget to make your filesystem on the host OS big enough to cope…
  • You can’t apply kernel updates to all your VMs in one go by simply rebooting the host OS – you need to restart each VM that requires the update.
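
On RHEL 6 the suspend-on-shutdown behaviour comes via the libvirt-guests service, which saves running guests to disk when the host goes down and restores them on boot. A minimal sketch of how I understand it – check your distribution’s defaults before relying on this:

    # /etc/sysconfig/libvirt-guests - suspend running guests on
    # host shutdown rather than shutting them down.
    ON_SHUTDOWN=suspend

    # The same can be done manually per guest with virsh
    # ('guest01' is a made-up domain name).
    virsh managedsave guest01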

In my tests it performed nicely – out of 25 running VMs, only one experienced an issue, which was a crashed NTP process, quickly identified by Nagios and restarted manually.

 

I/O Performance

I/O performance is always interesting with virtualised systems. Some products, typically desktop end user focused virtualisation solutions, will just store the virtual servers as files on the local filesystem.

This isn’t so ideal for a server, where performance and low overhead are key – by storing one filesystem on top of another filesystem, you add much more overhead at the block layer, which translates into decreased performance; not so much in raw read/write, but in seek performance (in my tests anyway).

Secondly, if you are running a fully emulated guest, KVM has to emulate virtual IDE disks, which really impacts performance, since doing I/O consumes much more CPU. If your guest OS supports it, paravirtualised drivers will make a huge improvement to performance.

I’m running my KVM guests inside Linux logical volumes, on top of an encrypted block device underneath (which does impact performance a lot); however, I did manage to obtain some interesting statistics showing the performance of paravirtualisation vs IDE emulation.
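
For the curious, the rough shape of that storage stack is below – a sketch only, the device and volume group names are invented and the details of my real setup differ:

    # Encrypted block device sitting on top of the RAID array
    # (device names are examples).
    cryptsetup luksFormat /dev/md0
    cryptsetup luksOpen /dev/md0 crypt_vms

    # LVM on top of the decrypted device - each guest gets its
    # own logical volume rather than a disk-image file.
    pvcreate /dev/mapper/crypt_vms
    vgcreate vg_kvm /dev/mapper/crypt_vms
    lvcreate -L 30G -n guest01 vg_kvm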

View KVM IDE Emulation vs Paravirtualisation Results

They show a noticeable improvement with the paravirtualised disk, especially around seek times. Of note, at the time of the tests the other server workloads were idle, so the CPU was mostly free for I/O.

I suspect that if I were to run the tests again on a busy server, paravirtualisation’s advantages would become even more apparent, since IDE emulation is very susceptible to CPU load.

 

The above tests were run on a host server running RHEL 6 kernel 2.6.32-71.14.1.el6.x86_64 on top of an encrypted RAID 6 LVM volume, with 16GB RAM, a Phenom II quad core CPU and SATA disks.

In both tests, the guest was a KVM virtual machine running CentOS 5.5 with kernel 2.6.18-194.32.1.el5.x86_64 and 256MB RAM – so not much memory for disk caching – writing to a 30GB ext3 partition that was cleanly formatted between tests.

Bonnie++ 1.03e was used with CLI options of -n 512 and -s 1024.
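
For anyone wanting to reproduce a similar run, the invocation would have looked roughly like this – the test directory and user are assumptions on my part:

    # -n 512: create 512*1024 small files for the file creation
    #         tests; -s 1024: 1024MB of data for the throughput
    #         tests. bonnie++ refuses to run as root, hence -u.
    bonnie++ -d /mnt/test -n 512 -s 1024 -u bonnie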

Note that I don’t have perfect guest-to-host I/O comparison test results, but similar tests run against a RAID 5 array on the same server suggest there may be around a 10% performance impact with KVM paravirtualisation, which is pretty hard to notice.


Problems

I’ve had some issues with stability, which I believe I traced to one of the earlier beta kernels of RHEL 6 – since upgrading to 2.6.32-71.14.1.el6.x86_64 the server has been solid, even with large virtual network transfers.

In the past, when I/O was struggling (mostly before I had upgraded to paravirtualised disk), I experienced some strange networking issues, as per the post here, and identified some KVM limitations around I/O resource allocation.

Other than the above, I haven’t experienced many other issues with the host, and further testing and configuration is ongoing – I should be blogging a lot of Xen to KVM migration notes in the near future and will be testing CentOS 6 more thoroughly once released, maybe some other distributions as well.
