Debugging IPv6 support

March 27th, 2013

One of our customers runs a monitoring network, First2Know, and has one of its monitoring nodes hosted with us at one of our London sites. He chose us because we support both IPv4 and IPv6, and the monitoring network does full IPv4/IPv6 monitoring from every location. He kept seeing connectivity issues between his Raleigh node, hosted by RootBSD, and his London node, hosted by us.

Initial investigation indicated that only some IPv6 hosts on our network were affected; in particular, he could reliably ping only one of two machines with IPv6 addresses in the same netblock, hosted on the same switch within our network. We escalated the issue between ourselves and RootBSD, and they helpfully gave me a VM on their network so I could do some end-to-end testing.

Analysis at both ends with tcpdump indicated that packets were only being lost on the return path from RootBSD to Mythic Beasts; on the outbound path they always travelled fine. More specific testing showed that the connectivity issue was reproducible based on source/destination address and port numbers.

This connect command never succeeds:

# nc -p 41452 -6 2607:fc50:1:4600::2 22

This one reliably works:

# nc -p 41451 -6 2607:fc50:1:4600::2 22
SSH-2.0-OpenSSH_5.3

What’s probably happening is that somewhere along the line the packets are being shared across multiple links using a layer 3 hash. This means the link is chosen by an implementation something like

md5($source_ip . $source_port . $destination_ip . $destination_port) % (number of links)

This means that the packets for any given connection always travel down the same physical link, minimising the risk of performance loss due to out-of-order packet arrival, but each connection is effectively put down a different link at random.
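To illustrate the idea, here's a minimal Python sketch of that kind of per-flow hashing. The choose_link function, the field concatenation and the use of MD5 are assumptions for the example (real routers use their own hash implementations), and the source address is a documentation placeholder:

import hashlib

def choose_link(src_ip, src_port, dst_ip, dst_port, num_links):
    # Hash the flow's addressing so every packet in a connection takes the
    # same physical link, while different connections are spread across
    # the links effectively at random.
    key = f"{src_ip}{src_port}{dst_ip}{dst_port}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % num_links

# Two flows differing only in source port can hash onto different links,
# which is why one nc invocation works and the other never connects when
# one link on the path is discarding packets.
print(choose_link("2001:db8::1", 41451, "2607:fc50:1:4600::2", 22, 3))
print(choose_link("2001:db8::1", 41452, "2607:fc50:1:4600::2", 22, 3))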

Statistically, we think that either 1 in 2 or 1 in 3 of the links at the affected point were throwing our packets away on this particular route. In general nobody notices this, because dual stack implementations fall back to IPv4 if the IPv6 connection doesn’t connect. We only found it because this application is IPv6 only; our IPv6 monitoring is single stack, IPv6 only.

Conversation with RootBSD confirmed that the issue is almost certainly within one of the Tier 1 providers on the link between our networks; neither of us has any layer 3 hashing options enabled on any equipment on the path taken by the packets.

In this case we also discovered that we had some suboptimal IPv6 routing. Once we’d fixed the faulty announcement, our inbound routes changed and became shorter via a different provider, all the problems went away, and we were unable to reproduce the issues again.

However, as a result of this we’ve become a customer of First2Know, and we’re using their worldwide network to monitor our global IPv4 and IPv6 connectivity so we can be alerted to and fix issues like these well before our customers find them.

If this sounds like the sort of problem you’d like to work on, we’re always happy to accept applications at our jobs page.

IPv6 by default

June 21st, 2012

We’ve now enabled IPv6 by default for all customers hosted on either onza or yali. The control panels for these machines have been running IPv6 for over a year; we’ve now enabled it for all customer websites too.

Raspberry Pi mirror

February 24th, 2012

We’ve long been drinking partners of Eben & Liz, who are building the Raspberry Pi, the ultra-cheap Linux machine which will hopefully breed the next generation of employees for Mythic Beasts. We’ve got secret plans for what we’re going to do with the one that’s on its way to us – to be revealed in a future blog post. Today, in addition to donating all the bandwidth for the main Raspberry Pi website, we’ve added a mirror for Raspberry Pi image downloads using one of our virtual servers. It’s got a full gigabit of IPv4 & IPv6, so download away.

IPv6 glue for .com/.net/.org

January 5th, 2012

We’ve now implemented IPv6 glue records through our control panel for .com / .net / .org domains, as well as for other country-code domains that support glue records.

Hosting the complete IPv6 reverse zone file

April 1st, 2011

We’ve been running IPv6 for a while, and one of the unresolved issues we’re having is how to handle reverse DNS. For IPv4 we have a control panel which allows customers to set their reverse DNS records. For IPv6 we’ve been putting individual records in or delegating the address space to the end customer’s DNS server. We don’t think that making all of our customers run a DNS server just to do reverse DNS is particularly desirable, but there are issues in hosting several billion reverse records per customer if they happen to come up with an application that uses their entire address space.
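For reference, each IPv6 reverse record lives under ip6.arpa, with the address expanded into 32 reversed nibbles. A quick Python sketch shows the format (the address here is just a documentation placeholder):

import ipaddress

# reverse_pointer gives the nibble-format name used for the PTR record.
addr = ipaddress.ip_address("2001:db8::1")
print(addr.reverse_pointer)
# prints the 32-nibble name ending in 8.b.d.0.1.0.0.2.ip6.arpa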

This got me wondering: how hard would it be to host the complete IPv6 reverse zone file? It’s roughly 3.4 x 10^38 addresses. Storing this in memory for speedy lookup would be desirable. Flash is made out of silicon, which is made out of sand. wiki.answers.com, under ‘How many grains of sand are there in the world’ and ‘How many atoms are there in a grain of sand’, gives the answers 7.5 x 10^18 grains of sand and 2 x 10^19 atoms per grain. Multiplying these together we get roughly 1.5 x 10^38 atoms of sand in total for the whole world.

So if we take all the sand in the world and manufacture it into DRAM, we need to store roughly two reverse lookups per atom to hold the whole zone file. Answers on a postcard.
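For anyone who wants to check the back-of-envelope arithmetic, a quick Python sketch using the figures quoted above:

addresses = 2 ** 128                            # complete IPv6 address space, ~3.4 x 10^38
grains_of_sand = 7.5e18                         # figure quoted from wiki.answers.com
atoms_per_grain = 2e19
sand_atoms = grains_of_sand * atoms_per_grain   # ~1.5 x 10^38 atoms of sand

print(f"{addresses:.2e} addresses, {sand_atoms:.2e} atoms of sand")
print(f"{addresses / sand_atoms:.1f} reverse lookups per atom")    # roughly 2.3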