Running to beat depression

April 18th, 2011

On the left is a picture of the remarkably glamorous Steph, shortly after she finished running the London Marathon in aid of the Samaritans and in memory of her brother Chris Lightfoot, one of the founders of Mythic Beasts and a dearly missed close friend of ours.

We went to congratulate Steph in person, drop off our donation to the Samaritans and toast her finishing in under four hours. As someone who’s run the London Marathon before, I can say I wish I’d looked that good at the start, let alone the finish.

Hosting the complete IPv6 reverse zone file

April 1st, 2011

We’ve been running IPv6 for a while, and one of the unresolved issues is how to handle reverse DNS. For IPv4 we have a control panel which allows customers to set their reverse DNS records. For IPv6 we’ve been putting in individual records or delegating the address space to the end customer’s DNS server. We don’t think that making all of our customers run a DNS server just to do reverse DNS is particularly desirable, but there are issues in hosting several billion reverse records per customer if they happen to come up with an application that uses their entire address space.
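
For scale, a single reverse record in the ip6.arpa tree looks like this (shown here with the 2001:db8:: documentation prefix and a placeholder hostname):

; PTR record for 2001:db8::1, one nibble per label, 32 labels deep
1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa. IN PTR host.example.com.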

This got me wondering: how hard would it be to host the complete IPv6 reverse zone file? It’s roughly 3.4 x 10^38 addresses. Storing this in memory for speedy lookup would be desirable. Flash is made out of silicon, which is made out of sand. wiki.answers.com, under ‘How many grains of sand are there in the world’ and ‘How many atoms are there in a grain of sand’, gives the answers 7.5 x 10^18 grains of sand and 2 x 10^19 atoms per grain. Multiplying these together we get roughly 1.5 x 10^38 atoms of sand in total for the whole world.

So if we take all the sand in the world and manufacture it into DRAM, we need to store roughly two reverse lookups per atom to hold the whole zone file. Answers on a postcard.
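
As a quick back-of-the-envelope check of those numbers, bc does the 128-bit arithmetic exactly:

echo '2^128' | bc                              # 340282366920938463463374607431768211456, about 3.4 x 10^38 addresses
echo '75 * 10^17 * 2 * 10^19' | bc             # about 1.5 x 10^38 atoms of sand in the world
echo '2^128 / (75 * 10^17 * 2 * 10^19)' | bc   # about 2 addresses per atom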

Monitoring API

February 9th, 2011

We’ve added a new API for managing our monitoring system. Now you can script shutting down the monitoring of your application while it’s offline for an update, and script the post-deployment switch-on too.

http://www.mythic-beasts.com/monitoring.html
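
As a rough sketch of how this could slot into a deployment script (the endpoint and parameters below are purely illustrative placeholders; the real interface is documented at the link above):

# hypothetical endpoint and parameters, for illustration only
curl 'https://www.mythic-beasts.com/monitoring/api?host=www.example.com&action=disable'
./deploy-new-release.sh
curl 'https://www.mythic-beasts.com/monitoring/api?host=www.example.com&action=enable'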

IPv6 Update

February 3rd, 2011

As you may be aware, there’s been a flurry of IPv6-related excitement in the past few days. IANA has allocated the last of the IPv4 address space to the regional registries. This means that obtaining IPv4 addresses is going to become steadily more difficult from here, and migrating the whole Internet to IPv6 is looking like more of an immediate priority.

We’ve been running an IPv6 network for over six months. Yesterday we enabled IPv6 on two of our customer-facing hosting servers, yali.mythic-beasts.com and onza.mythic-beasts.com, and made their control panels and all services hosted on them available over IPv6.

pete@ubuntu:~$ dig AAAA yali.mythic-beasts.com +short
2a00:1098:0:86:1000::10
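
If you already have IPv6 connectivity you can check this end to end; a quick sketch, assuming your machine has a working IPv6 route and that the host answers HTTP:

ping6 -c 3 yali.mythic-beasts.com
curl -6 -I http://yali.mythic-beasts.com/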

An automated script temporarily cleaned up the A records for these hosts, disabling access over IPv4 and so cutting off every service for all of our customers who don’t have IPv6 connectivity. As our support mail queue testified, the majority of our customers do not have working IPv6 connectivity yet.

Unrelated to this activity, we also discovered that by default Linux limits the number of IPv6 routes to 4096. You can update this by doing,

echo 32768 >/proc/sys/net/ipv6/route/max_size 

This is a good idea on any Linux machine that sees the full routing table; the full IPv6 routing table is now about 4,300 routes.
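
To see how close you are to the limit, and to make the change persist across reboots (32768 is just the value above; pick whatever headroom you prefer):

# how many IPv6 routes the machine currently holds
ip -6 route | wc -l
# raise the limit now
sysctl -w net.ipv6.route.max_size=32768
# and keep it after a reboot
echo 'net.ipv6.route.max_size = 32768' >> /etc/sysctl.conf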

Exim 4 remote root vulnerability

December 13th, 2010

If you are running Exim 4 you should be aware that a remote root vulnerability was discovered on Friday 10th December. This means that someone sending a specially crafted email to your server can completely take control of it.

If you are a managed server customer, you do not need to worry. All managed server customers were fully updated by the end of Saturday 11th December, including, where necessary, building non-standard Exim packages from source.

If you are not a managed customer then upgrading Exim is your responsibility. We have notified all customers who look like they may be running a vulnerable version of Exim.
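
To check which version of Exim you’re currently running (the exact patched version number varies by distribution, so compare it against your vendor’s security advisory):

# Debian / Ubuntu
exim4 -bV | head -1
dpkg -l 'exim4*'
# CentOS
exim -bV | head -1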

If you’re running Debian Lenny

Make sure /etc/apt/sources.list contains the line

deb http://security.debian.org/ lenny/updates main

then run

apt-get update
apt-get upgrade

This will install a patched Exim for you.

If you’re running Centos

yum update

will install a patched Exim for you.

If you’re running Debian Etch

There is no security update provided by Debian. You will have to roll your own Debian package with the fix, or upgrade your server or Exim package to Debian Lenny.
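
One possible route, sketched here on the assumption that you have deb-src lines for Etch and are comfortable patching and rebuilding the package yourself:

# rough sketch only: fetch the Etch source package, apply the upstream
# fix for the string_vformat overflow (CVE-2010-4344), then rebuild
apt-get build-dep exim4
apt-get source exim4
cd exim4-*/
# ... apply the upstream patch here ...
dpkg-buildpackage -rfakeroot -us -uc
dpkg -i ../exim4-daemon-light_*.deb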

If you’re running an LTS edition of Ubuntu

You should make sure you have the appropriate security lines in your apt configuration and follow the instructions for Debian Lenny above.
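
For example, on Ubuntu 10.04 LTS (lucid) the security line in /etc/apt/sources.list looks like this; substitute the codename of your own LTS release:

deb http://security.ubuntu.com/ubuntu lucid-security main universe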

If you don’t know what to do

You should be purchasing a managed service from us and we will manage it for you; contact us at support@mythic-beasts.com.

If you think that building a CentOS 5.5 backport of Exim for a customer who’s compelled to run an early version of Fedora is both possible and fun, contact us via our jobs page and we’ll let you know when we’re hiring.

Rescuing a customer after a failed Ubuntu upgrade

November 18th, 2010

One of our customers mailed us this evening to report that he’d upgraded Ubuntu on his colocated server and it had gone wrong. The machine refused to boot, and he’d managed to wipe out the serial settings in his grub configuration, so he couldn’t alter the boot line to add rootdelay=30. Could we help?

With a bit of fiddling, we could. On bootup the machine dropped out into busybox in the initrd.

ls /dev/mapper/system-system

revealed that the device for the LVM root volume was missing.

lvm vgchange -a y system

activated the root volume inside LVM so we could see it in /dev/mapper.

fstype /dev/mapper/system-system

revealed the filesystem to be XFS.

modprobe xfs
mkdir /mnt
mkdir /mnt/root
cd /mnt
mount /dev/mapper/system-system root

mounted the root filesystem inside of busybox.

modprobe ext3
mount /dev/sda1 root/boot

mounted the /boot partition.

cd root
./usr/bin/vim.tiny boot/grub/menu.lst

brought up a minimal vim editing the grub configuration. I could then add the serial console lines,

serial --unit=0 --speed=115200 --word=8 --parity=no --stop=1
terminal --timeout=15 serial console

to the grub config and rootdelay=30 to the kernel line, then reboot, and the machine came up.
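
For reference, the resulting menu.lst stanza looked something like the following; the title, kernel version and paths here are illustrative rather than the customer’s actual entry:

# illustrative stanza only: kernel version and paths will differ
title   Ubuntu, with Linux 2.6.32-25-server
root    (hd0,0)
kernel  /vmlinuz-2.6.32-25-server root=/dev/mapper/system-system ro rootdelay=30
initrd  /initrd.img-2.6.32-25-server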

If this is the sort of thing you could have figured out yourself, we’re always happy to accept CVs at our jobs page. If this scares you, we’d suggest our managed hosting, where we do these bits for you.

DNS API

November 15th, 2010

At a customer’s request we’ve added a programmatic API for updating DNS records stored with our primary DNS servers. This is immediately available at no extra charge for all customers with a domain purchased from us. You can see the instructions on our support pages under Primary DNS API.
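
By way of illustration, a scripted update might look something like this; the endpoint, credentials and record below are placeholders, and the real details are in the Primary DNS API instructions:

# hypothetical example only; see the Primary DNS API instructions for the real interface
curl --data 'domain=example.com&password=XXXX&command=REPLACE www 3600 A 192.0.2.10' \
     https://dnsapi.mythic-beasts.com/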

Peering with Google

October 4th, 2010

We’re now peering with Google over LONAP.

Debian Barbeque

August 31st, 2010

On Sunday I dropped in at the Debian Barbeque and provided a firkin of Pegasus from the Milton Brewery. Thanks to all the Debian developers for their hard work and have a pint on us.

Power failure in Telehouse North

July 22nd, 2010

Yesterday there was, we believe, a power failure in Telehouse North. Mythic Beasts don’t have any equipment located in Telehouse, but the effects were quite noticeable.

Two internet exchanges, LONAP and LINX, were affected. The LONAP looking glass and traffic graph tell us that LONAP saw all of the peers located in Telehouse North disconnect.

LONAP Traffic Graph



We don’t believe that LINX was directly affected by the power failure, but all sessions on the Brocade LAN were reset and brought up slowly over the course of about an hour, as you can see from the looking glass.

LINX Looking glass for the Brocade LAN

The Extreme LAN, by contrast, wasn’t affected at all.

LINX Looking glass for the Extreme LAN

LINX Traffic Graph



Mythic Beasts saw no overall change in our traffic levels; we escaped unscathed.

Mythic Beasts Total Traffic



But we did see a brief drop on LONAP as various high-bandwidth peers in Telehouse North disconnected.

Mythic Beasts LONAP Traffic



We didn’t see any measurable effect over Edge-IX (this traffic pattern is normal for this time of day).

Mythic Beasts Edge-IX Traffic



Mythic Beasts doesn’t currently peer directly on LINX, but we have two partial transit suppliers that do. Partial transit suppliers provide us with routes only from their peering partners, so when they lose contact with a peer we stop being able to route traffic to that network through them.

The first partial transit supplier has 10G into the LINX Brocade LAN, 1G into the LINX Extreme LAN and 2G into LONAP, plus private peers.

Mythic Beasts Transit 1



The second partial transit supplier has 10G into LINX Brocade, 10G into LINX Extreme, 10G into AMS-IX, 2.5G into DE-CIX, 2G into LONAP and 1G into Edge-IX, plus private peers.

Mythic Beasts Transit 2



We take partial transit from two suppliers, one in Telecity HEX 6/7 and one in Telecity MER. Whilst this is more expensive than a single supplier or joining LINX ourselves, we’ve always felt that the additional redundancy was worth paying extra for. We discovered today that one partial transit supplier has almost no redundancy in the event of a failure of the LINX Brocade LAN. We’ve brought this up with the supplier in question and will be pressuring them to add resiliency to their partial transit service. We do intend to join LINX, but when we do so we’ll join both peering LANs from different data centres to maximise our resiliency.