IPv6: End users starting to care

September 17th, 2013

We’ve had an IPv6-aware network for quite some time, and we’ve been gradually rolling it out to our services with the aim of eventually having every service we offer fully available over both IPv6 and IPv4. We host the Raspberry Pi website, which has an IPv6-only internal network, IPv6-only virtual machines, and IPv4 on the front end to help out those of you with the ‘legacy’ internet.

A quick skim over the logfiles suggests that about 96% of you still access the site over the legacy IPv4 network, while about 4% of hosts now connect over IPv6, which is starting to become a non-trivial fraction of the traffic. Of course this is much higher than for typical sites; Raspberry Pi users are much more technically aware than the general population.
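
As an aside, a quick way to make the same measurement on your own logs is to split clients by address family. A rough sketch, assuming a combined-format access log named access.log with the client address in the first field; anything containing a colon is counted as IPv6:

# awk '$1 ~ /:/ { v6++ } $1 !~ /:/ { v4++ } END { printf "IPv4: %d, IPv6: %d (%.1f%% over IPv6)\n", v4, v6, 100 * v6 / (v4 + v6) }' access.log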

Yesterday we had our first real connectivity problem to investigate: an end user within Ja.net (the UK academic network) was unable to fetch files from the Raspberry Pi download server about half of the time. Further investigation showed that they could reach the load balancers in our Sovereign House site, which has connectivity via the London Internet Exchange Juniper LAN, but not the load balancers in our Harbour Exchange site, which has connectivity via the London Internet Exchange Extreme LAN.

When we started investigating we confirmed that it did seem to be a problem with the Extreme LAN: if we forced the connectivity via the Juniper LAN it worked from both sites, and if we forced it via the Extreme LAN it failed from both sites. Odder still, a packet dump on our LINX interface didn’t show us passing the packets on at all.
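
For reference, the packet dump was just tcpdump on the exchange-facing port, filtering for IPv6 traffic towards the Ja.net prefix; something like the line below, with eth9 standing in for whichever interface faces LINX:

# tcpdump -n -i eth9 ip6 and net 2001:630::/32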

Our IPv4 peering worked fine; this was IPv6-specific.

We then started looking at the routing table on the router. Over IPv4 the relevant route looks like

131.111.0.0/16 via 195.66.236.15 dev eth0

and over IPv6 it looks like

2001:630::/32 via fe80::5e5e:abff:fe23:2fc2 dev eth0

That gives us the netblock and the next hop to send the packet to.
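
Incidentally, on a Linux router you can also ask the kernel directly which route and interface it would pick for a given destination, which is a quick sanity check for exactly this sort of problem. The two hosts here are just arbitrary addresses inside each netblock:

# ip route get 131.111.0.1
# ip -6 route get 2001:630::1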

So the next step is to check that you can reach the gateway happily enough.

# ping 195.66.236.15
PING 195.66.236.15 (195.66.236.15) 56(84) bytes of data.
64 bytes from 195.66.236.15: icmp_seq=1 ttl=64 time=0.220 ms

and

# ping6 fe80::5e5e:abff:fe23:2fc2
connect: Invalid argument

Odd. Then I realised that fe80:: in IPv6 means a link-local address: the address is only meaningful on a particular link, so to ping it you have to specify both the destination address and the network interface to use.

# ping6 fe80::5e5e:abff:fe23:2fc2 -I eth9
PING fe80::5e5e:abff:fe23:2fc2(fe80::5e5e:abff:fe23:2fc2) from fe80::21b:21ff:fe65:a4c5 eth9: 56 data bytes
64 bytes from fe80::5e5e:abff:fe23:2fc2: icmp_seq=1 ttl=64 time=0.451 ms
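
Equivalently, you can append the interface to the address as a zone ID, which saves remembering the -I flag:

# ping6 fe80::5e5e:abff:fe23:2fc2%eth9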

Then the penny dropped. The routing table has eth0 in it, but we’re actually connected via eth9. Under IPv4 this is fine: the next-hop address is globally unique and only reachable over eth9, so we send the packets out of eth9 and they go to the correct destination. Under IPv6 the next hop is a link-local address, and therefore valid on any interface, so we obey the routing table and throw the packets out of eth0, whereupon they fall onto the floor because there’s no fibre connected.

Fixing the config to name the right interface made it all work, and our end user is now happily able to access all the load balancers on all the v6 addresses in all of the buildings.
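
For anyone wanting the mechanical version of that fix on a plain Linux router, it’s a one-line route change; a sketch using iproute2, and assuming the route isn’t owned by a routing daemon that would simply put the broken one back:

# ip -6 route replace 2001:630::/32 via fe80::5e5e:abff:fe23:2fc2 dev eth9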

Obviously if you’re a Mythic Beasts customer and you don’t already have an IPv6 allocation for your real or virtual server, drop us an email and we’ll hand you your own address space to play with.