Rescuing a customer from a failed Ubuntu upgrade

November 18th, 2010

One of our customers mailed us this evening to report that he'd upgraded Ubuntu on his colocated server and it had gone wrong. The machine refused to boot, and he'd managed to wipe out the serial console settings in his GRUB configuration, so he couldn't edit the boot line to add rootdelay=30. Could we help?

With a bit of fiddling, we could. On boot the machine dropped into the BusyBox shell in the initrd.

ls /dev/mapper/system-system

revealed that the device node for the LVM root volume was missing.

lvm vgchange -a y system

activated the root logical volume so that it appeared in /dev/mapper.

fstype /dev/mapper/system-system

revealed the filesystem to be XFS.

modprobe xfs
mkdir /mnt
mkdir /mnt/root
cd /mnt
mount /dev/mapper/system-system root

mounted the root filesystem from within BusyBox.

modprobe ext3
mount /dev/sda1 root/boot

mounted the /boot partition.

cd root
./usr/bin/vim.tiny boot/grub/menu.lst

brought up a minimal vim to edit the GRUB configuration. I could then add the serial console lines,

serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
terminal --timeout=15 serial console

to the GRUB config and rootdelay=30 to the kernel line. After a reboot the machine came up.
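For anyone who hasn't edited a GRUB legacy config by hand, the finished menu.lst ends up looking roughly like this; the kernel version and paths below are placeholders rather than the customer's actual configuration. The two serial lines go near the top of the file and rootdelay=30 is appended to the kernel line:

serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
terminal --timeout=15 serial console

title  Ubuntu server
root   (hd0,0)
kernel /vmlinuz-2.6.32-25-server root=/dev/mapper/system-system ro rootdelay=30
initrd /initrd.img-2.6.32-25-server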

If this is the sort of thing you could have figured out yourself, we're always happy to accept CVs at our jobs page. If it scares you, you might be more interested in our managed hosting, where we take care of these bits for you.

DNS API

November 15th, 2010

At a customer request we’ve added a programmatic API for updating DNS records stored with our primary DNS servers. This is immediately available for all customers with a domain purchased from us at no extra charge. You can see the instructions on our support pages under Primary DNS API.

Peering with Google

October 4th, 2010

We’re now peering with Google over LONAP.

Debian Barbeque

August 31st, 2010

On Sunday I dropped in at the Debian Barbeque and provided a firkin of Pegasus from the Milton Brewery. Thanks to all the Debian developers for their hard work and have a pint on us.

10 GigE networking

August 5th, 2010

In May we upgraded our Telecity Meridian Gate site to have 10 Gigabit at the core. Early this week we upgraded the core network in Telecity Sovereign House to run at 10 Gigabit. We are planning to upgrade Telecity Harbour Exchange in the near future and to continue the rollout of 10GigE from the core switches. This means we’ve plenty of spare capacity for very high bandwidth customers in our docklands data centres.

Power failure in Telehouse North

July 22nd, 2010

Yesterday there was, we believe, a power failure in Telehouse North. Mythic Beasts don’t have any equipment located in Telehouse but the effects were quite noticeable.

Two internet exchanges, LONAP and LINX, were affected. The LONAP looking glass and traffic graph tell us that LONAP saw all of the peers located in Telehouse North disconnect.

Lonap Traffic Graph



We don’t believe that LINX was directly affected by the power failure, but all sessions on the Brocade LAN were reset and brought up slowly over the course of about an hour, as you can see from the looking glass.

LINX Looking glass for the Brocade LAN

whereas the Extreme LAN wasn’t affected at all.

LINX Looking glass for the Extreme LAN

LINX Traffic Graph



Mythic Beasts saw no overall change in our traffic levels; we escaped unscathed.

Mythic Beasts Total Traffic



We did, however, see a brief drop on LONAP as various high-bandwidth peers disconnected in Telehouse North.

Mythic Beasts LONAP Traffic



We didn’t see any measurable effect over Edge-IX (this traffic pattern is normal for the time of day).

Mythic Beasts Edge-IX Traffic



Mythic Beasts doesn’t currently peer directly on LINX, but we have two partial transit suppliers that do. Partial transit suppliers provide us with routes only from their peering partners so when they lose contact with a peer, we stop being able to route traffic to that network through them.

Our first partial transit supplier has 10G into the LINX Brocade LAN, 1G into the LINX Extreme LAN and 2G into LONAP, plus private peers.

Mythic Beasts Transit 1



Our second partial transit supplier has 10G into LINX Brocade, 10G into LINX Extreme, 10G into AMS-IX, 2.5G into DE-CIX, 2G into LONAP and 1G into Edge-IX, plus private peers.

Mythic Beasts Transit 2



We take partial transit from two suppliers, one in Telecity HEX 6/7 and one in Telecity MER. Whilst this is more expensive than a single supplier or joining LINX ourselves, we’ve always felt that the additional redundancy was worth paying extra for. We discovered today that one partial transit supplier has almost no redundancy in the event of a failure of the LINX Brocade LAN. We’ve brought this up with the supplier in question and will be pressuring them to add resilience to their partial transit service. We do intend to join LINX, but when we do so we’ll join both of the peering LANs from different data centres to maximise our resilience.

IPv6

July 17th, 2010

We’re pleased to announce that as a result of tonight’s connectivity changes to our core network, all four of our data centres now have IPv6 connectivity available. In the coming weeks we’ll be contacting all our peering partners to enable direct IPv6 peerings where possible, to improve our IPv6 connectivity.

If any customers would like to enable IPv6 on their colocated, dedicated or virtual servers please contact us and we’ll allocate you an address range and provide you with connectivity details.
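As a rough sketch only: on a Debian or Ubuntu server a statically assigned IPv6 address usually goes into /etc/network/interfaces along the lines below. The address and gateway are documentation-prefix placeholders; the real values would come from the range we allocate you.

# illustrative IPv6 stanza, added alongside the existing IPv4 configuration
iface eth0 inet6 static
        address 2001:db8:100::2
        netmask 64
        gateway 2001:db8:100::1

Bringing the interface up again (or rebooting) then makes the server reachable over IPv6.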

Until the end of August 2010, all IPv6 bandwidth will be free.

Bandwidth upgrade for Cambridge hosted servers

July 16th, 2010

Our Cambridge hosting centre has two physically redundant private circuits linking it back to our network core in Telecity Sovereign House and Telecity Harbour Exchange. We’re pleased to report that we’ve now completed upgrades on both links increasing the available bandwidth to Cambridge customers by a factor of 2.5.

As a result we have increased all standard bandwidth customers to 250GB bandwidth quotas, and now offer higher bandwidth packages for dedicated and colocated customers in Cambridge.

New Mac mini – any good as a server?

June 29th, 2010

A couple of weeks ago, Apple unveiled the latest incarnation of the Mac mini.  Naturally, we dashed out to buy a few to see if they’re going to be any good as servers.  Externally, this is the biggest revision of the Mac mini yet, with a thinner all-aluminium case.  We always get a bit nervous when Apple unveil a new Mac mini as there’s a chance that they’ll ruin the formula that makes it such a great server in the name of creating a fun toy for your living room.

The most noticeable change, aside from the new case, is the removal of the power brick.  The old minis relied on an external power supply that really was the size and shape of a brick.  Getting rid of these will make racking them a lot simpler, as well as saving space.  We reckon we should be able to get 24 machines and 3 APC Masterswitches into 6U of rackspace.  The C7 power connectors should be more secure too.

The next thing that we approve of is the easily accessible RAM slots.  Traditionally, Apple have charged silly money for ordering machines with extra RAM, so we’ve always done the upgrades ourselves.  Upgrading the old minis was a real pain, requiring the cover to be prised off with putty knives, so this is a welcome change.

The most important factor for us, and the thing that led us to use the original PPC minis, is power consumption, as power is the primary cost when hosting a machine.  There’s good news here too: according to our power meter, power consumption is down from 26W to 12W (idle) and 44W to 40W (max).
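As a rough, illustrative calculation (a hosted mini spends most of its time near idle; the 10p/kWh figure is an assumption and ignores cooling overhead):

26W - 12W = 14W saved at idle
14W x 24 hours x 365 days ≈ 123kWh per year
123kWh x £0.10 per kWh ≈ £12 per machine per year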

Unfortunately, the reduction in hosting cost is offset somewhat by the usual price hike for the newer hardware, but we’re pleased to see that Apple have retained the “server” version of the mini with two hard drives, allowing us to continue to offer machines running software RAID.

Overall, the new Macs seem like a decent improvement on the old ones for our purposes.  Apple make a fuss over the great variety of ports now available on the back of the new mini, but inexplicably they haven’t provided the one thing we’d want: a good old-fashioned serial port.

We still need to make the minis work with our custom net-booting bootloader, but once that is done, we’ll be offering them as dedicated servers.

New Xen-based virtual servers

March 16th, 2010

It’s been nearly six years since we first launched our Virtual Dedicated Servers.  At the time, the choice for virtualisation technology was easy: User Mode Linux.  Initially, UML was a well supported option, with UML patches being incorporated into the Linux kernel.  Over time, we’ve been following the development of other technologies such as Xen and KVM and at the end of last year we concluded that we should make the switch to Xen.

Getting Xen working reliably with our server management code has taken somewhat longer than expected, but we’re pleased to announce that the service is now live.

We’ve also taken the opportunity to roll out new hardware, allowing us to offer substantially higher-specced VDSs for the same prices, with our base machine now coming in at 256MB of RAM, 20GB of disk, and an increased bandwidth allowance of 100GB/month for £15/month including VAT, or less if you pay annually.

Although we’ve changed the virtualisation platform, we’ve retained the other key features of our virtual servers including:

Host servers with hot-swap hardware RAID.  Although these are significantly more expensive, we figure that reliability is something that can be shared particularly effectively through virtualisation: over the years, our VDS host servers have seen a fair few disk failures and replacements, but typically our customers don’t even know that they’ve happened.

Nightly backup to other host servers, allowing us to resume service quickly in the event of a serious hardware failure.

As part of the upgrade, we’ve also deployed a new approach to providing disk images which offers significantly better IO performance than the standard approach of storing the VDS filesystem as a file on the host filesystem.
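The post doesn’t spell out what the new approach is, but for illustration, a common way to avoid file-backed disk images under Xen is to hand each guest a raw block device such as an LVM logical volume. In a classic xm-style domU config the difference is a one-line change; the volume group, image path and guest name below are made up:

# file-backed disk image, served via a loopback device on the host
disk = ['file:/var/lib/xen/images/vds-example.img,xvda,w']

# raw block device, e.g. an LVM logical volume on the host (typically much faster I/O)
disk = ['phy:/dev/vg0/vds-example,xvda,w']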

The new VDSs are available now, and we’ll be contacting all existing customers in the near future to arrange migration to the new platform.