Monthly Archives: March 2012

Travel Plans: Wellington Easter

I’ll be coming down to Wellington this Easter from Thursday 4th until Monday 9th of April to catch up with family and friends.

I’m planning one main “catch up” event to meet up with lots of people. This naturally calls for a decent venue with a large selection of delicious beverages, thus I propose:

Saturday 7th April at 16:00
Fork & Brewer
14 Bond St

It’s a great venue with good atmosphere and 40 or so beers on tap, so come along for a catch up and a drink. I expect I’ll be out in town till some time that evening, so send me a message if you want to catch up later on. And there could be post-beer curry.

If you can make it, let me know via comments or drop me an email/IM. :-) If you can’t make it, but still would like to catch up, get in touch as I’m pretty flexible on this trip and a few coffee sessions are always welcome.

Lisa won’t be with me this time, she’s heading to the Hawke’s Bay to see her family, so there will be 100% less soppy couple cuteness, but probably 200% more Linux geeking. Take your pick at which is worse. ;-)

Porting to 2degrees

Having long suffered poor performance and expensive pricing on Vodafone’s data network in NZ, I’ve now shifted to NZ’s third and youngest mobile provider, 2degrees.

Upgrade from 32k to 128k of SIM memory, woot! ;-)

There were two major incentives – firstly, unhappiness with Vodafone’s 3G data performance, and secondly, unhappiness that my personal telecommunications expenses were around $350 per month (welcome to NZ, land of expensive comms), which I was seeking to reduce somewhat.

I was originally paying $59 a month for my Vodafone service – 120mins, 250 SMS and 300MB data (although boosted to 3GB due to a grandfathered plan promotion). It was a pretty good deal when it came out – I signed onto the plan when the first Android phone in NZ launched (the HTC Magic), and good mobile data plans that didn’t cost a fortune were kind of a new thing.

With 2degrees, I’ve now dropped my bill down to $39 a month, which provides 220mins, 2500 SMS, 100MB data, plus an additional 1GB data bonus for the next 12 months.

There’s a bit of a loss on datacap size, down from Vodafone’s 3GB, but my smartphone and laptop use no more than 1GB all up when combined in regular use, so it’s not really going to impact me.

I also went and dropped the Telecom XT data SIM in my laptop – whilst convenient and bloody fast data, it wasn’t worth the cost for how often I need it, and I can’t really justify it when my phone can pair and share the 1.3GB of monthly data it has.

Number porting went very smoothly – after requesting the port online with 2degrees, I got a txt about 3 hrs later confirming it was complete. 2degrees even went to the effort of informing Vodafone and having them close my account which was handy.

It’s been going great since, so far I haven’t encountered any cell towers dropping ~90% of packet data without anybody at Vodafone noticing yet and performance seems speedy and reliable.

In fact the performance of the 2degrees network around Auckland actually beats my DSL at times, especially for upload, which is pretty tragic. :-/


I haven’t gone on a rural trip since moving to 2degrees, but it should be just as good as I used to get with Vodafone, as 2degrees uses Vodafone for roaming when outside of their own network zones.

Their plans certainly seem popular – I’ve had at least 2 other friends move to 2degrees. Even if you want expensive smartphones, it’s often cheaper to buy the phone outright and use 2degrees’ no-term monthly plans than to sign with Telecom or Vodafone, due to the savings in plan costs over 24 months – not to mention the freedom and flexibility to change plans.

Introducing Smokegios

With a reasonably large personal server environment of at least 10 key production VMs, along with many other non-critical but still important machines, a good monitoring system is key.

I currently use a trio of popular open source applications: Nagios (for service & host alerting), Munin (for resource graphing) and Smokeping (for latency response graphs).

Smokeping and Nagios are particularly popular – it’s rare to find a network or *NIX orientated organization that doesn’t have one or both of these utilities installed.

There are other programs around that offer more “combined” UI experiences, such as Zabbix, OpenNMS and others, but I personally find that having three applications which each do their specific task really well is better than one application that does everything not-so-well. But then again, I’m a great believer in the UNIX philosophy. :-)

The downside of having these independent applications is that there’s not a lot of integration between them. Whilst it’s possible to link programs such as Munin & Nagios or Nagios & Smokeping to share some data from the probes & tests they make, there’s no integration of configuration between the components.

This means in order to add a new host to the monitoring, I need to add it to Nagios, then to Munin and then to Smokeping – and to remember to sync any changes across all 3 applications.

So this weekend I decided to write a new program called Smokegios.

TL;DR summary of Smokegios

This little utility checks the Nagios configuration for any changes on a regular cron-controlled basis. If any of the configuration has changed, it will parse the configuration and generate a suitable Smokeping configuration from it using the hostgroup structures and then reload Smokeping.
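For example, the whole process can be driven from a single cron entry – the fragment below is purely illustrative, and the installed path and schedule are assumptions (check the project page for the real locations):

```
# /etc/cron.d/smokegios (illustrative - path and schedule are assumptions)
# Every 15 minutes: check whether the Nagios configuration has changed and,
# if so, regenerate the Smokeping configuration and reload Smokeping.
*/15 * * * * root /usr/bin/smokegios
```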

This allows fully autonomous management of the Smokeping configuration and no more issues about the Smokeping configuration getting neglected when administrators make changes to Nagios. :-D

Currently it’s quite a simplistic application in that it only handles ICMP ping tests for hosts, however I intend to expand it in future with support for reading service & servicegroup information for services such as DNS, HTTP, SMTP, LDAP and more, to generate service latency graphs.

This is a brand new application. I’ve run a number of tests against my Nagios & Smokeping packages, but it’s always possible your environment will have some way to break it – if you find any issues, please let me know, as I’m keen to make this a useful tool for others.

To get started with Smokegios, visit the project page for all the details including installation instructions and links to the RPM repos.

If you’re using RHEL 5/6/derivatives, I have RPM packages for Smokegios as well as the Smokeping 2.4 and 2.6 series in the amberdms-custom and amberdms-os repositories.

It’s written in Perl 5 – not my favorite language, but it’s certainly well suited to this kind of configuration file manipulation task, and there was a handy Nagios-Object module courtesy of Duncan Ferguson that saved me writing a Nagios parser.

Let me know if you find it useful! :-)

Google Search & Control

I’ve been using Google for search for years, but this is the first time I’ve ever come across a DMCA takedown notice included in the results.

Possibly not helped by the fact that Google is so good at finding what I want that I don’t tend to scroll down past the first few entries 99% of the time, so it’s easy to miss things at the bottom of the page.

Lawyers, fuck yeah!

Turns out that Google has been doing this since around 2002 and there’s a form process you can follow with Google to file a notice to request a search result removal.

Sadly I suspect that we are going to see more and more situations like this as governments introduce tighter internet censorship laws and key internet gatekeepers like Google are going to follow along with whatever they get told to do.

Whilst people working at Google may truly subscribe to the “Don’t be evil” slogan, the fundamental fact is that Google is a US-based company that is legally required to do what’s best for its shareholders – and the best thing for the shareholders is not to fight the government over legislation, but to implement it as required and keep selling advertising.

In response to concerns about Google and privacy, I’ve seen a number of people shift to new options, such as the increasingly popular and open-source-friendly Duck Duck Go search engine, or even Microsoft’s Bing, which isn’t too bad at getting decent results and has a UI looking much more like early Google.

However these alternatives all suffer from the same fundamental problem – they’re centralized gatekeepers who can be censored or controlled – and then there’s the fact that a centralised entity can track so much about your online browsing. Replacing Google with another company will just leave us in the same position in 10 years time.

Lately I’ve been seeking to remove all the centralised providers from my online life, moving to self-run and federated services – basic stuff like running my own email and instant messaging (XMPP), but also more complex “cloud” services being delivered by federated or self-run servers for tasks such as browser syncing, avatar handling, contacts sync, avoiding URL shorteners and quitting or replacing social networks.

The next big one on the list is finding an open source and federated search solution. I’m currently running tests with a search engine called YaCy – a peer-to-peer decentralised search engine made up of thousands of independent servers sharing information between themselves.

To use YaCy, you download and run your own server, set its search indexing behavior and let it run and share results with other servers (it’s also possible to run it in a disconnected mode for indexing your internal private networks).

The YaCy homepage has an excellent write up of their philosophy and design fundamentals for the application.

It’s still a bit rough – I think the search results could be better, but that’s something more nodes will certainly help with, and the idea is promising. I’m planning to set up a public instance on my server in the near future, adding all my sites to the index and providing a good test of its feasibility.

Takapuna Beach Low Tide

As part of my regular exercise routine I wander along Takapuna beach – the size of the beach varies quite dramatically depending on whether the tide is in or out.

This is the first time I’ve lived right next to a beach, and it makes you realize how easily people can get into trouble walking along beaches and become trapped when the tide rises.

Low tide showing off the gradual slope of the entire beach

Normally the waves are lapping up against the rocks by the cliff. Will have to time a trip to walk down past the rocks and onto the other beach one day.

Quite weird to be walking along areas that at times I’ve been swimming in… From what I can tell the beach continues at this gradual decline for a long way – there were a few swimmers out even further during low tide.

Mozilla Firefox “Pin as App”

In a moment of madness, I decided to RTFM the latest Mozilla Firefox Feature List and came across this nifty ability called “Pin as App”.

nawww baby tabs!

It’s pretty handy – I’m using it to maintain tabs of commonly accessed websites and web applications that I need many times a day. They’re easy to find since they’re always on the left in the defined order, and much smaller than the full tab size.

The only issue is that you need the remote site/app to have a decent favicon – if it doesn’t, you’ll just end up with a dashed square placeholder, and there’s no way in Firefox to set a custom icon for the pin that I can see.

Incur the Wrath of Linux

Linux is a pretty hardy operating system that will take a lot of abuse, but there are ways to make even a Linux system unhappy and vengeful by messing with available resources.

I’ve managed to trigger all of these at least once, sometimes I even do it a few times before I finally learn, so I’ve decided to sit down and make a list for anyone interested.

 

Disk Space

Issue:

Running out of disk. This is a wonderful way to cause weird faults with services like databases, since processes will block (pause) until there is sufficient disk space available again to allow writes to complete.

This leads to some delightful errors, such as websites failing to load since the dynamic pages are waiting on the database, which in turn is waiting on disk. Or maybe Apache can’t write any more PHP session files to disk, so no PHP-based pages load.

And mail servers love not having disk – thankfully in all the cases I’ve seen, Sendmail & Dovecot just halt and retain messages in memory without causing a loss of data (although a reboot whilst this is occurring could be interesting).

Resolution:

For production systems I always carefully consider the partition table structure, so that an issue such as out-of-control logging processes or tmp directories can’t impact key services such as databases, by creating separate partitions for their data.

This issue is pretty easy to avoid with good monitoring – packages such as Nagios include disk usage checks in the stock versions that can alert at configurable thresholds (eg 80% of disk used).
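As a rough sketch of what such a check does under the hood (this is an illustration only, not the actual Nagios check_disk plugin), a few lines of shell are enough to flag filesystems over a usage threshold:

```shell
#!/bin/sh
# Sketch of a disk usage alert, illustrating roughly what Nagios' check_disk
# plugin does for real. Reads `df -P` style output on stdin and prints any
# mount point at or above the given percentage threshold.
check_usage() {
    awk -v limit="$1" 'NR > 1 {
        used = $5
        sub(/%/, "", used)          # strip the % sign from the Capacity column
        if (used + 0 >= limit) print $6
    }'
}

# Example: report anything 80% or more full.
df -P | check_usage 80
```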

 

Disk Access

Issue:

Don’t unplug a disk whilst Linux is trying to use it. Just don’t. Really. Things get really unhappy and you get to look at nice output from ps aux showing processes blocked for disk.

The typical mistake here is unplugging devices like USB hard drives in the middle of a backup process, causing the backup to halt and typically the kernel to spew warnings into the system logs about how naughty you’ve been.

Fortunately this is almost always recoverable – the process will eventually timeout/terminate and the storage device will work fine on the next connection, although possibly with some filesystem errors or a corrupt file if it was halfway through a write.

Resolution:

Don’t be a muppet. Or at least educate users that they probably shouldn’t unplug the backup drive if it’s flashing away busy still.

 

Networked Storage

Issue:

When using networked storage the kernel still considers the block storage to be just as critical as local storage, so if there’s a disruption accessing data on a network file system, processes will again halt until the storage returns.

This can be a mixed blessing – in a server environment where the storage should always be accessible, halting can be the best behaviour, since your programs will wait for the storage to return and hopefully there will be no data loss.

However in a mobile environment this can cause programs to hang indefinitely, waiting for storage that might never be reconnected.

Resolution:

In this case, the soft option can be used when mounting network shares, which causes the kernel to return an error to the process using the storage if it becomes unavailable, so that the application (hopefully) warns the user and terminates gracefully.
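For NFS this comes down to a single mount option – a sample fstab line (the hostname and paths here are made up for illustration):

```
# /etc/fstab - soft-mounted NFS share: if the server stops responding, the
# kernel returns I/O errors to applications once the retries are exhausted,
# instead of blocking them forever. Host and paths are examples only.
fileserver:/export/data  /mnt/data  nfs  soft,timeo=30,retrans=3  0 0
```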

Using a daemon such as autofs to automatically mount and unmount network shares on demand can help reduce this sort of headache.

 

Low Memory

Issue:

Running out of memory – and I don’t just mean RAM, but swap space too (pagefile for you Windows users). When you run out of memory on almost any OS it won’t be that happy – Linux handles this situation by killing off processes with the OOM killer in order to free up memory again.

This makes sense in theory (out of memory, so let’s kill things that are using it), but the problem is that it doesn’t always kill the ones you want, leading to anything from amusement to unmanageable boxes.

I’ve had some run-ins with the OOM before, killing my ssh daemon on overloaded boxes preventing me from logging into them. :-/

On the other hand, just giving your system many GB of swap space so that it doesn’t run out of memory isn’t a good fix either – swap is terribly slow and your machine will quickly grind to a near-halt.

The performance of using swap is so bad it’s sometimes difficult to even log in to a heavily swapping system.

 

Resolution:

Buy more RAM. Ideally you shouldn’t be trying to run more than a box can handle – it’s possible to get by with swap space, but only to a small degree due to the performance pains.

In a virtual environment, I’m leaning towards running without swap and letting OOM just kill processes on guests if they run out of memory, usually it’s better to take the hit of a process being killed than the more painful slowdown from swap.

And with VMs, if the worst case happens, you can easily reboot and console into the systems, compared to physical hosts, where losing manageability can be far more costly.
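On hosts where manageability matters most, it’s also possible to tell the OOM killer to leave specific daemons alone via the proc interface – a sketch only (newer kernels use oom_score_adj; the older RHEL 5/6 era kernels use /proc/PID/oom_adj with a value of -17 instead):

```
# Exempt sshd from the OOM killer so the box stays reachable when memory
# runs out. -1000 disables OOM-killing of the process entirely on kernels
# with oom_score_adj; on older kernels, write -17 to /proc/PID/oom_adj.
for pid in $(pidof sshd); do
    echo -1000 > /proc/$pid/oom_score_adj
done
```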

Of course this really depends on your workload and what you’re doing, best solution is monitoring so that you don’t end up in this situation in the first place.

Sometimes it just happens due to a one-off process, and it’s difficult to always foresee memory issues.

 

Incorrect Time

Issue:

Having the incorrect time on your server may appear to be only a nuisance, but it can lead to many other more devious faults.

Any applications which are time-sensitive can experience weird issues – I’ve seen problems such as samba clients being unable to see files newer than the system time, and bind breaking for all lookups. Clock issues are WEIRD.

Resolution:

We have NTP, it works well. Turn it on and make sure the NTP process is included in your process monitoring list.
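On RHEL/CentOS of this era, that’s only a few commands (package and service names as shipped by the distribution):

```
# Install the NTP daemon, start it and enable it at boot (RHEL/CentOS 5/6).
yum install -y ntp
/etc/init.d/ntpd start
chkconfig ntpd on

# Verify it's syncing - the peer marked with * is the currently selected source.
ntpq -p
```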

 

Authentication Source Outages

Issue:

In larger deployments it’s often common to have a central source of authentication such as LDAP, Kerberos, Radius or even Active Directory.

Linux actually does a remarkable number of lookups against the configured authentication sources in regular operation. Aside from looking up whenever a user wishes to log in, Linux queries the user database every time the attributes of a file are viewed (user/group information), which is pretty often.

There’s some level of inbuilt caching, but unless you’re running a proper authentication caching daemon allowing an off-line mode, a prolonged outage of the authentication server will not only make it impossible for users to log in, but will also break simple commands such as ls, as the process will be trying to look up user/group information.

Resolution:

There’s a reason why we always have two or more sources for key network services such as DNS and LDAP, take advantage of the redundancy built into the design.

However this doesn’t help if the network is down entirely, in which case the best solution is having the system configured to quickly failover to local authentication or to use the local cache.

Even if failover to a secondary system is working, a lot of the timeout defaults are too high (eg 300 seconds before trying the secondary). Whilst the lookups will still complete eventually, these delays will noticeably impact services, so it’s recommended to look up the authentication methods being used and adjust the timeouts down to a couple of seconds at most.
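As an example, on systems using nss_ldap/pam_ldap these knobs live in /etc/ldap.conf – the values below are illustrative starting points, not vendor recommendations:

```
# /etc/ldap.conf - fail over to the secondary LDAP server quickly rather
# than hanging lookups for minutes. Values are illustrative.
uri ldap://ldap1.example.com ldap://ldap2.example.com
bind_timelimit 2     # seconds to wait for a connect/bind before moving on
timelimit 2          # seconds to wait for a search to complete
bind_policy soft     # don't endlessly retry an unreachable server
```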

 

These are just a few simple yet nasty ways to break Linux systems – ways that cause weird application behaviour, but not necessarily in a form that’s easy to debug.

In most cases, decent monitoring will help you avoid and handle many of these issues better by alerting to low resource situations – if you have nothing currently, Nagios is a good start.

Mozilla Collusion

This week Mozilla released an add-on called Collusion, an experimental extension which shows and graphs how you are being tracked online.

It’s pretty common knowledge how much you get tracked online these days – if you just watch your status bar when loading many popular sites you’ll always see a few brief hits to services such as Google Analytics, but there’s also a lot of tracking done by social networking services and advertisers.

The results are pretty amazing – I took these after turning it on for about 1 day of browsing, and every day I check, the graph is even bigger and more amazing.

The web actually starting to look like a web....

As expected, Google is one of the largest trackers around, this will be thanks to the popularity of their Google Analytics service, not to mention all the advertising technology they’ve acquired and built over the years including their acquisition of DoubleClick.

I for one, welcome our new Google overlords and would like to remind them that as a trusted internet celebrity I can be useful for rounding up other sites to work in their code mines.

But even more interesting is the results for social networks. I ran this test whilst logged out of my Twitter account, logged out of LinkedIn and I don’t even have Facebook:

Mark Zuckerberg knows what porn you look at.

Combine 69+ tweets a day with this information and I think Twitter would have a massive trove of data about me on their servers.

LinkedIn isn't quite as linked as Facebook or Twitter, but probably has a similar ratio if you consider the userbase size differences.

When you look at this information, you can see why Google+ makes sense for the company to invest in. Google has all the data about your browsing history, but the social networks are one up – they have all your browsing information with the addition of all your daily posts, musings, etc.

With this data advertising can get very, very targeted and it makes sense for Google to want to get in on this to maintain the edge in their business.

It’s yet another reason I’m happy to be off Twitter now, so much less information that can be used by advertisers for me. It’s not that I’m necessarily against targeted advertising, I’d rather see ads for computer parts than for baby clothes, but I’m not that much of a fan of my privacy being so exposed and organisations like Google having a full list of everything I do and visit and being able to profile me so easily.

What will be interesting will be testing how well the tracking holds up once IPv6 becomes popular. On one hand IPv6 can expose users more, if they’re connecting with MAC-derived addresses, but on the other hand, IPv6 privacy extensions that randomise the addresses assigned to systems could improve privacy.

Mozilla Sync Server RPMs

A few weeks ago I wrote about the awesomeness that is Mozilla’s Firefox Sync, a built-in feature of Firefox versions 4 & later which allows for synchronization of bookmarks, history, tabs and password information between multiple systems. (historically known as Weave)

I’ve been running this for a few weeks now on my servers using fully packaged components and it’s been working well, excluding a few minor hiccups.

It’s taken a bit longer than I would have liked, but I now have stable RPM packages for RHEL/CentOS 5 and 6 for both i386 and x86_64 available publicly.

I always package all software I use on my servers (and even my desktops most of the time) as it makes re-building, upgrading and supporting systems far easier in the long run. By having everything in packages and repos, I can rebuild a server entirely simply by knowing what list of packages were originally installed and their configuration files.

Packaging one’s software is also great when upgrading distributions, as you can get a list of all non-vendor programs and libraries installed and then use the .src.rpm files to build new packages for the new OS release.
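For RPM-based systems, capturing and replaying the installed package set is a one-liner each way (standard rpm/yum usage; the filename is arbitrary):

```
# On the old server: dump the list of installed package names.
rpm -qa --qf '%{NAME}\n' | sort -u > packages.txt

# On the freshly installed replacement: reinstall the lot from the repos.
yum install -y $(cat packages.txt)
```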

 

Packaging Headaches

Mozilla Sync Server was much more difficult to package than I would have liked, mostly due to unclear documentation and the number of dependencies.

The primary source of pain was that I run CentOS 5 for a lot of my production systems, which ships with Python 2.4, whereas to run Mozilla Sync Server you will need Python 2.6 or later.

This meant that I had to build RPMs for a large number (upwards of 20 IIRC) of Python packages to provide python26 versions of existing system packages. Whilst EPEL had a few of the core ones (such as python26 itself), many of the modules I needed either weren’t packaged, or only had EPEL packages built for Python 2.4.

The other major headache was due to unclear information and in some cases, incorrect documentation from Mozilla.

Mozilla uses the project source name of server-full in the setup documentation, however this isn’t actually the entire “full” application – rather it provides the WSGI executable and some libraries, however you also need server-core, server-reg and server-storage plus a number of python modules to build a complete solution.

Sadly this isn’t entirely clear to anyone reading the setup instructions – the only setup information relates to checking out server-full and running a build script, which will go through and download all the dependencies (in theory – it often broke for me) and build a working system, complete with a paster web server.

Whilst this would be a handy resource for anyone doing development, it’s pretty useless for someone wanting to package a proper system for deployment since you need to break all the dependencies into separate packages.

(Note that whilst Mozilla refer to having RPM packages for the software components, these have been written for their own inhouse deployment and are not totally suitable for stock systems, not to mention even when you have SPEC files for some of the Mozilla components, you still lack the SPEC files for dependencies.)

To top it off, some information is just flat out wrong and can only be found out by first subscribing to the developer mailing list – in order to gain a login to browse the list archives – so that you can find such gems as “LDAP doesn’t work and don’t try as it’s being re-written”.

Toss in finding a few bugs that got fixed right around the time I was working on packaging these apps and you can understand if I’m not filled with love for the developers right this moment.

Of course, this is a particularly common open source problem – the team clearly released in a way that made sense to them, and of course everyone would know the difference between server-core/full/reg/storage, etc right?? ;-) I know I’m sometimes guilty of the same thing.

Having said that, the documentation does appear to be getting better and the community is starting to contribute more good documentation resources. I also found a number of people on the mailing list quite helpful and the Mozilla Sync team were really fast and responsive when I opened a bug report, even when it’s a “stupid jethro didn’t hg pull the latest release before testing” issue.

 

Getting My Packages

All the new packages can be found in the Amberdms public package repositories, the instructions on setting up the CentOS 5 or CentOS 6 repos can be found here.

 

RHEL/CentOS 5 Repo Instructions

If you are running RHEL/CentOS 5, you only need to enable amberdms-os, since all the packages will install in parallel to the distribution packages. Nothing in this repo should ever clash with packages released by RedHat, but may clash/be newer than dag or EPEL packages.

 

RHEL/CentOS 6 Repo Instructions

If you are running RHEL/CentOS 6, you will need to enable both amberdms-os and amberdms-updates, as some of the required Python packages are shipped by RHEL, but are too outdated to be used for Mozilla Sync Server.

Note that amberdms-updates may contain newer versions of other packages, so take care when enabling it, as I will have other unrelated RPMs in there. If you only want my newer Python packages for Mozilla Sync, set includepkgs=python-* for amberdms-updates.

Also whilst I have tested these packages for Mozilla Sync Server’s requirements, I can’t be sure of their suitability with existing Python applications on your server, so take care when installing these as there’s always a chance they could break something.

 

RHEL/CentOS 5 & 6 Installation Instructions

Prerequisites:

  1. Configured Amberdms Repositories as per above instructions.
  2. Working & configured Apache/httpd server. The packaged programs will work with other web servers, but you’ll have to write your own configuration files for them.

Installation Steps:

  1. Install packages with:
    yum install mozilla-sync-server
  2. Adjust Apache configuration to allow access from desired networks (standard apache IP rules).
    /etc/httpd/conf.d/mozilla-sync-server.conf
  3. Adjust the Mozilla Sync Server configuration. If you want to run with the standard SQLite DB (good for initial testing), all you must adjust is line 44, to set the fallback_node value to the correct reachable URL for Firefox clients.
    vi /etc/mozilla-sync-server/mozilla-sync-server.conf
  4. Restart Apache – due to the way mozilla-sync-server uses WSGI, if you make a change to the configuration, there might still be a running process using the existing config. Doing a restart of Apache will always fix this.
    /etc/init.d/httpd restart
  5. Test that you can reach the sync server location and see if anything breaks. These tests will fail if something is wrong such as missing modules or inability to access the database.
    http://host.example.com/mozilla-sync/
    ^ should return 404 if working - anything else indicates an error
    
    http://host.example.com/mozilla-sync/user/1.0/a/
    ^ should return 200 with the page output of only 0
  6. There is also a heartbeat page that can be useful when doing automated checks of the service health, although I found it possible to sometimes break the server in ways that would stop sync for Firefox, but still show OK for heartbeat.
    http://host.example.com/mozilla-sync/__heartbeat__
  7. If you experience any issues with the test URLs, check /var/log/httpd/*error_log*. You may also experience problems if you’re using https:// with self-signed certificates that the browser doesn’t trust, so make sure your certs are properly imported as trusted.
  8. Mozilla Sync Server is now ready for you to start using with Firefox clients. My recommendation is to use a clean profile you can delete and re-create for testing purposes and only add sync with your actual profile once you’ve confirmed the server is working.
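The URL tests in steps 5 & 6 are easy to fold into a small automated check – the following is a sketch only (the hostname and helper name are made up), with the expected status code taken from the steps above:

```shell
#!/bin/sh
# Sketch of an automated sync server health check. The expected code comes
# from the manual tests above: the root URL should return 404 when the
# server is wired up correctly. Hostname and function name are illustrative.
interpret_status() {
    # $1 = label, $2 = expected HTTP code, $3 = actual HTTP code
    if [ "$2" = "$3" ]; then
        echo "OK: $1 returned $3"
    else
        echo "FAIL: $1 returned $3 (expected $2)"
    fi
}

BASE="http://host.example.com/mozilla-sync"
code=$(curl -s -o /dev/null -w '%{http_code}' "$BASE/" 2>/dev/null)
interpret_status "root" 404 "$code"
```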

 

Using MySQL instead of SQLite:

I tend to standardise on using MySQL where possible for all my web service applications since I have better and more robust monitoring and backup tools for MySQL databases.

If you want to setup Mozilla Sync Server to use MySQL, it’s best to get it working with SQLite first and then try with MySQL to ensure you don’t have any issues with the basic setup before doing more complex bits.

  1. Obviously the first step is to set up the MySQL server. If you haven’t done this yet, the following commands will install it and take you through a secure setup process to password-protect the root DB account:
    yum install -y mysql-server
    /etc/init.d/mysqld start
    chkconfig --level 345 mysqld on
    /usr/bin/mysql_secure_installation
  2. Once the MySQL server is running, you’ll need to create a database and user for Mozilla Sync Server to use – this can be done with:
    mysql -u root -p
    # or without -p if there's no MySQL root password
    CREATE DATABASE mozilla_sync;
    GRANT ALL PRIVILEGES ON mozilla_sync.* TO mozilla_sync@localhost IDENTIFIED BY 'examplepassword';
    flush privileges;
    \q
  3. Copy the [storage] and [auth] sections from /etc/mozilla-sync-server/sample-configs/mysql.conf to replace the same sections in /etc/mozilla-sync-server/mozilla-sync-server.conf. The syntax for the sqluri line is:
    sqluri = mysql://mozilla_sync:examplepassword@localhost:3306/mozilla_sync
  4. Restart Apache (very important – failing to do so will not apply the config changes):
    /etc/init.d/httpd restart
  5. Complete! Test from a Firefox client, and check the table structure has been created with the SHOW TABLES; MySQL query to confirm successful configuration.

 

Other Databases

I haven’t done any packaging or testing for it, but Mozilla Sync Server also supports memcached as a storage backend – there is a sample configuration file supplied with the RPMs I’ve built, but you may also need to build some python26 modules to support it.

 

Other Platforms?

If you want to package for another platform, the best and most accurate resource on configuring the sync server currently is a guide by Fabian Wenk about running it on FreeBSD.

I haven’t seen any guides to packaging the application, the TL;DR version is that you’ll essentially need server-full, server-core, server-reg and server-storage, plus all the other python-module dependencies – take a look at the RPM specfiles to get a good idea.
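If you just want to see the resulting dependency chain without reading the specfiles, rpm can list the requirements declared by any built package – the filename here is hypothetical:

```shell
# List the declared dependencies of a built RPM – handy for working out
# which python26 modules would need porting to another platform.
rpm -qpR mozilla-sync-server-storage-1.0-1.noarch.rpm
```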

I’ll hopefully do some Debian packages in the near future too, will have to work on improving my deb packaging foo.

 

Warnings, issues, small print, etc.

These packages are still quite beta – they’ve only been tested by me so far, and there are possibly some things in them that are wrong.

I want to go through and clean up some of the Python module RPMs at some stage, as I don’t think the SPEC files I have are as portable as they should be – commits back are always welcome. ;-)

If you find these packages useful, please let me know in the comments or by email – it’s always good to get an idea of how people find this stuff and whether it’s worth the late-nighters. ;-)

And if you have any problems, feel free to email me or comment on this page and I’ll help out the best I can – I suspect I’ll have to write a Mozilla Sync Server troubleshooting guide sooner or later.

IBM x3500 M3 Server

I recently got to play with a nice shiny new IBM x3500 M3 server ordered for a customer to replace a previous IBM x3400 M2 that had become a bit too acquainted with a sprinkler system….

These machines offer a good mix of features that makes them suitable for small and medium businesses, with the option of both SAS and SATA drives, dual CPU sockets and up to 192GB RAM in a (large) tower format.

Whilst not for everyone, I love the IBM xseries industrial design.

The only issue is that they sometimes miss certain handy features that competitors like Dell are shipping in their machines – one such feature being eSATA, which I find really handy for small business customers doing backups onto external hard disks.

With the x3500 M3, the server ships with UEFI instead of a legacy BIOS. Sadly it doesn’t seem to speed up the server boot time, but hopefully as they build a better design around UEFI this will improve in future releases.

I still have high hopes for what they could accomplish with UEFI, but so far it seems to be mostly a system for booting a BIOS-like mode so I’m not sure what has actually been accomplished other than to add more layers worthy of Inception.

As standard these machines ship with a single power supply; for redundancy you will probably want to order the Redundant Cooling & Power kit to get a second supply, along with several more fans you don’t really want or need.

(Tip: On older models, if you dislodged any fans by accident, the server would think there had been a fan failure and run all the other fans at maximum speed, which is incredibly loud. In normal operation it should be reasonably quiet, with the fan speeds dynamically slowing.)

Enough fans for a small hurricane.

IBM is moving towards 2.5″ drives as the size of choice, so take care when ordering disks to suit. The model we purchased shipped with 8x 2.5″ SATA/SAS bays, as well as a big general bay area with mounts for older existing 3.5″ disks.

I presume this large bay is where additional 2.5″ bays could also be installed if you have particularly large storage requirements.

I do love the tiny new 2.5″ drives, pity they can’t reduce the size of the rest of the server to suit….

Most likely you’ll be ordering the machine with additional memory to install – take note that these servers (like many of IBM’s) are particularly explicit about which slots the memory modules must be installed into.

And if you’re ordering a lot of RAM, take a careful read of the product manual – the memory installation instructions hint that certain DIMM slots are only usable with a second CPU installed.

Memory installation instructions are on the side panel/lid.

The best part of the x3500 M3 is that it ships with an IBM Integrated Management Module as a standard feature. This allows full management of the server, including viewing the screen all the way from power-on, through UEFI/BIOS and into the OS, remotely via a web browser – eliminating any need for a network-connected KVM.

This is particularly great for us, since a customer who is ordering a tower server typically only has a couple of machines at most and isn’t going to want to invest extra money in remote access – having it as a standard feature makes our lives a bit easier without costing extra.

Kernel panicked your box? No worries, a reboot is just a click away!

I was also happy to find that instead of some nasty Flash plugin or Windows-only application, the IMM browser interface works fine on my Linux machine, and even the Java-based KVM functionality works fine under Linux and OpenJDK.

Don't mess with those BIOS settings in that tiny server room, do it from the pub! (or maybe don't, alcohol and BIOS settings sound like a recipe for disaster....)

The one problem I did have with the IMM is that they made the first login a bit harder than needed, with some obscure default admin user/password details, while still allowing the user to continue using those insecure credentials for ongoing maintenance of the server.

Naturally you’ll want to change the passwords on the IMM, because having randoms log in and reboot your server isn’t exactly desirable… You should also set up and force HTTPS, to ensure no insecure connections are established sending keystrokes without encryption.

 

I think the IBM x3500 M3 series servers certainly have room to improve – they’re physically overly large, UEFI still boots slowly, the H/W RAID configuration interface leaves a lot to be desired, and the lack of a built-in eSATA port is very annoying.

But when it comes to the manageability and expandability of the platform, they hold their own and for businesses with a single primary server I think they’re a great option without needing massive investment in management infrastructure.