Stormy weather, the clouds are growing.

August 26th, 2015

[Photo: the new servers being cabled up]

A customer of ours has been extending their private cloud. This adds another 160 cores, 160Gbps of network capacity, 2TB of RAM and over half a petabyte of storage. On the left you can see the black mains cable, then the serial cabling for out-of-band configuration, then red cabling carrying 1Gbps per server to our main network, then 20Gbps per server to the very secure private LAN over SFP+ direct attach.

The out-of-place yellow cable is network for the serial server above, and the out-of-place black one, which isn't quite long enough to route neatly, is the serial cable to the 720Gbps switch.

There’s a few more bits and pieces to add, but soon it will join their OpenStack cloud and substantially increase the rate at which their data gets crunched.

 

Bandwidth Upgrades for Cambridge servers

February 16th, 2015

Taking a break from our usual articles about upgrades for VPS customers and mocking the hopelessly incompetent, we'd like to announce an upgrade for dedicated and colo customers in our Cambridge data centre. We've finally completed the upgrade of both of our links into Cambridge, so we've increased bandwidth quotas and reduced excess rates to just 7p/GB.

Details of the new specs can be found on our Dedicated Server, Colocation and Mac Mini Colo pages.

Depending on nowhere by peering with everyone everywhere

August 29th, 2014

We’ve been adding some more peering sessions to improve our network redundancy. We already had direct peering with every significant UK ISP, we’ve now enhanced this so that one peering session terminates at one of the Telehouse sites, and the second terminates at one of the Telecity or Equinix sites. Each peering session is on a different London Internet Exchange (LINX) network which are physically separate from each other, and where possible we’ve preferred peering sessions that remain within a single building.

We have equal capacity on both LINX networks, so unlike many ISPs with a single peering port or unequal capacity, in the event of a severe failure (e.g. of a whole network or data centre) we just automatically migrate our traffic to our other peering link, rather than falling back to burst bandwidth with our transit providers. We feel that's a risky strategy because failures are likely to be correlated: lots of ISPs will fall back to transit at the same time in a badly planned and uncoordinated fashion, which could cause a huge traffic spike upstream.

We light our own fibre ring around our core Docklands data centres, and have full transit and peering at both of our core POPs, with dual routers in each, and can offer full or partial transit at any of our data centres.

512k routes

August 13th, 2014

Some ISPs have started to report that their IPv4 routing tables now exceed 512k individual routes. At present we're only seeing 502k routes, but we're nearly there.

Now for us this isn't going to make any difference: our routers can all easily handle a routing table of this size, plus the full IPv6 routing table of 30k routes, at the same time. However, it may start to affect things within other ISPs that we connect to. The likely things we'll see happening are:

  • Some ISPs will just drop some IPv4 routes, or cease processing updates, which means odd bits of the internet will gradually cease to work.
  • Some ISPs may fall back to a software routing mode, resulting in reduced performance.
  • Some ISPs will rely on filtering to reduce the size of the routing table by aggregating routes together.
  • Some ISPs will alter the configuration of their Cisco CAT6500 routers and disable IPv6 in order to free up memory in their routers for IPv4 routes.

So watch out for oddities, and expect them to occur more and more frequently as the growth in the routing table gradually reveals which ISPs have run into trouble by not planning ahead.
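If you run a Linux-based router yourself it's easy to keep an eye on this; a quick sketch, assuming the full table is installed in the kernel routing table rather than held only inside a routing daemon (counting routes is safe even over a slow link, unlike printing them all):

# ip -4 route | wc -l
# ip -6 route | wc -l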

Enabling Anycast DNS with Esgob

May 15th, 2014

Nat Morris, a director of the UK Network Operators Forum, recently gave a presentation to the DNS Operations, Analysis and Research Center (DNS-OARC) which included this remarkably nice slide:

[Slide from Nat Morris's DNS-OARC presentation]

What is Anycast?

Normally a server has a globally unique IP address, and the Internet knows how to send traffic from any other machine in the world to that IP address. With Anycast we share a single address across multiple machines, and your traffic is sent to the nearest machine with that address. This means that UK customers can be answered by a server in the UK and Australian customers by a server in Australia, giving very fast responses to things like DNS queries: you're always served by a machine that's close by, rather than your query having to travel halfway around the world.
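Most of the root name servers are themselves Anycast, and many instances will tell you which site answered via a special CHAOS-class query, which makes for a quick demonstration. A sketch (the instance name you get back depends on where you ask from, and not every operator answers this query):

# dig @k.root-servers.net hostname.bind chaos txt +short

Run it from machines in different countries and compare the answers.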

To set up an Anycast network, you need your own address space, your own network number (an ASN), multiple BGP-aware routers that can announce your address space, and multiple servers that can answer the queries. Typically this would require a pretty hefty budget, but if you're Nat Morris, know what you're doing with software routing on Linux and know all the right providers, then you can bring up a global Anycast network with 10+ servers and sites on an annual budget of well under $1,000.

The key to doing this is finding ISPs, ideally well-connected ISPs in key internet hubs, who will provide a BGP feed to your hosted server. That's where a clueful UK hosting company comes into the picture, having excellent connectivity, inexpensive virtual machines (VMs) and a willingness to support customers with more unusual configurations.

Quick introduction to BGP and routing

Normally when you have a VM you get a default route, which looks like this:

# ip route 
...
default via 93.93.128.1 dev eth1 

which says that to get to anywhere on the internet, send packets to our router at 93.93.128.1.

Over BGP, instead we send you the whole routing table:

# ip route 
...
1.0.7.0/24 via 5.57.80.128 dev eth3.4  proto zebra  metric 1 
1.0.20.0/23 via 93.93.133.46 dev eth6.220  proto zebra  metric 142080 
... 
500,000 more lines like this

For every block on the whole internet you have a different gateway depending on what you’ve decided is the preferred route. At today’s count this is about 490,000 entries in the routing table. Don’t type ‘route’ if you’re logged in over 3G!

So for this VM, instead of having a default route, Nat has four full BGP sessions, two to each of our two routers at the site. On each router, one session provides 490,000 IPv4 routes and the other provides 18,000 IPv6 routes, and the VM gets to decide which router to send data to.

The other side of the BGP relationship, and the important bit for Anycasting, is that we receive an advert from Nat’s VM for his /24 of IPv4 space and /48 of IPv6 space, which we then advertise out to the world. The 10+ other providers in this Anycast setup will do the same, and hosts will direct traffic to whichever is nearest.
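To make that concrete, here is roughly what the VM side of one of those sessions could look like in BIRD, one popular routing daemon on Linux. This is a sketch only: we don't know exactly which software Nat runs, and every address and AS number below is an illustrative documentation value rather than a real one.

# bird.conf -- sketch only, all numbers are documentation placeholders
router id 198.51.100.10;

# Originate the anycast /24. The service address itself lives on the
# loopback interface; this static route exists purely so that BGP has
# something to announce.
protocol static anycast_prefix {
    route 192.0.2.0/24 reject;
}

# One of the four sessions: full IPv4 table in, the anycast /24 out
protocol bgp upstream_router1_v4 {
    local as 64511;                         # the VM's own ASN
    neighbor 198.51.100.1 as 64496;         # the hosting provider's router
    import all;                             # accept the full routing table
    export where proto = "anycast_prefix";  # announce only the anycast block
}

The IPv6 sessions look just the same, with the /48 in place of the /24.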

Filtering

As Paul Vixie pointed out in the first question to Nat, the main customers for VMs with BGP are spammers who hijack address space for nefarious purposes. At Mythic Beasts we filter our announcements and our customer routes, so if Nat messes up his configuration and accidentally announces that his VM is responsible for the whole of YouTube, we'll drop the announcement rather than expect one very small VM to handle a fifth of the internet.
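On our side, that filtering boils down to an import filter on the customer session which only accepts the prefixes the customer is entitled to announce. A minimal sketch in BIRD syntax, again with documentation values rather than real names, prefixes or AS numbers:

# Accept exactly the customer's own /24; anything else -- a leaked full
# table, a fat-fingered hijack -- gets rejected.
filter customer_in {
    if net ~ [ 192.0.2.0/24 ] then accept;
    reject;
}

protocol bgp customer_anycast {
    local as 64496;                     # our ASN (illustrative)
    neighbor 198.51.100.10 as 64511;    # the customer's VM and ASN
    import filter customer_in;
    export all;                         # we still send them the full table
}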

BGP on a virtual or dedicated server

If you’re a DNS provider or a content delivery network, you’ll probably want to have an Anycast setup at some point. At Mythic Beasts we remember what it was like to be the little guy which is why we offer full BGP routing (including IPv6 BGP) as an option to any virtual server, dedicated server, colocated server or router. Providing you own your own ASN and IP space we can transit it for you and we can keep the start-up costs very low and scale with you. You can locate your VM or server directly with us in Telecity, mere tens of metres from LINX and LoNAP for minimal latency and maximal available bandwidth.

If you’ve no idea what an ASN, BGP, LIR, RIPE are, we can help arrange your ASN, IP space and BGP config.

Router fails, no packets dropped.

January 29th, 2014

This morning one of our routers in our Cambridge data centre stopped reporting bandwidth data to our billing system. We investigated and, whilst it was still routing packets without issue, it appeared to be suffering a hardware failure.

We’ve powered the router down, pending full investigation on our data centre visit this afternoon. Currently all traffic from our Cambridge site is being handled by our other router. This seamlessly failed over with no customer impact.

Depending on your choice of terminology, 'redundancy has been reduced to N' or 'the network is at risk'. At Mythic Beasts we like to speak English, so this translates to: if something else fails before the router is restored to service, there is a risk of a network outage at our Cambridge data centre.

Update: on Friday the 31st we fully restored our network to its usual redundant configuration by replacing the failed router with a similarly over-specified replacement. Customers may have received free bandwidth for some of this period.

More bits

January 10th, 2014

At the end of last year we took the decision to significantly upgrade our two connections to LINX – our busiest connections to the outside world.

This turned out to be a good plan, as Mythic Beasts got a Christmas present in the form of a new company bandwidth record, thanks to two customers, Blinkbox Music and Raspberry Pi, seeing a substantial spike in hits as people unwrapped their Christmas presents.

And it seems that the excitement of all the presents hasn't worn off, as the Christmas Day record has just been toppled by a new all-time high yesterday. With the Blinkbox apps sitting very high in the free music app charts, we're not expecting it to stand for long.

IPv6: End users starting to care

September 17th, 2013

We’ve had an IPv6 aware network for quite some time, and we’ve been gradually rolling it out to our services with the aim of eventually having every service we offer fully available over IPv6 and IPv4. We host the Raspberry Pi website which has an IPv6 only internal network, IPv6 only virtual machines and IPv4 on the front end to help out those of you with the ‘legacy’ internet.

A quick skim over the logfiles suggests that about 96% of you still access the site through the legacy IPv4 network – about 4% of hosts are now connecting over IPv6, which is starting to become a non-trivial fraction of the traffic. Of course this is much higher than for typical sites; Raspberry Pi users are much more technically aware than the general population.
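For anyone wanting to do a similar skim of their own logs, the counting is straightforward. A sketch, assuming an Apache-style access log with the client address in the first field (the filename is illustrative):

# awk '{print $1}' access.log | sort -u | grep -c ':'
# awk '{print $1}' access.log | sort -u | grep -vc ':'

The first command counts unique IPv6 client addresses (they contain colons), the second the unique IPv4 ones.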

Yesterday we had our first real connectivity problem to investigate: an end user within Ja.net (the UK academic network) was unable to access files from the Raspberry Pi download server roughly half of the time. Further investigation showed that they could access the load balancers in our Sovereign House site, which have connectivity via the London Internet Exchange Juniper LAN, but not the load balancers in our Harbour Exchange site, which have connectivity over the London Internet Exchange Extreme LAN.

When we started investigating we confirmed that it seemed to be a problem with the Extreme LAN: if we forced the connectivity via the Juniper LAN it worked from both sites; if we forced it via the Extreme LAN it failed from both sites. Odder still, a packet dump on our LINX interface didn't show us passing the packets on.

Our IPv4 peering worked fine; this was IPv6-specific.
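The packet dump in question was along these lines (a sketch; the interface name is illustrative, and the prefix is Ja.net's, which appears in the routing table below):

# tcpdump -ni eth9 'ip6 and dst net 2001:630::/32'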

We then started looking at the routing table on the router. Over IPv4 it looks like

131.111.0.0/16 via 195.66.236.15 dev eth0

and over IPv6 it looks like

2001:630::/32 via fe80::5e5e:abff:fe23:2fc2 dev eth0

That gives us the netblock, and the next hop to send the packet to.

So the next step is to check you can reach the gateway happily enough.

# ping 195.66.236.15
PING 195.66.236.15 (195.66.236.15) 56(84) bytes of data.
64 bytes from 195.66.236.15: icmp_seq=1 ttl=64 time=0.220 ms

and

# ping6 fe80::5e5e:abff:fe23:2fc2
connect: Invalid argument

Odd. Then I realised that fe80:: in IPv6 means a link-local address – the address is specific to the network interface, so to ping it you have to specify both the destination address and the network interface.

# ping6 fe80::5e5e:abff:fe23:2fc2 -I eth9
PING fe80::5e5e:abff:fe23:2fc2(fe80::5e5e:abff:fe23:2fc2) from fe80::21b:21ff:fe65:a4c5 eth9: 56 data bytes
64 bytes from fe80::5e5e:abff:fe23:2fc2: icmp_seq=1 ttl=64 time=0.451 ms

Then the penny dropped. The routing table has eth0 in it, but we're actually connected via eth9. Under IPv4 this is fine because the next-hop address is globally unique and only reachable over eth9, so we send the packets out of eth9 and they go to the correct destination. Under IPv6 the next hop is a link-local address and therefore valid over any interface, so we obey the routing table and throw the packets out of eth0, whereupon they fall onto the floor because there's no fibre connected.

Fixing the config to put the right interface description in made it all work, and our end user is happily able to access all the load balancers on all the v6 addresses in all of the buildings.
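The actual fix was in the routing daemon's configuration, but the kernel-level equivalent shows the principle: a link-local next hop only means anything in combination with an interface, so the route has to name the right one (addresses as in the routing table above).

# ip -6 route replace 2001:630::/32 via fe80::5e5e:abff:fe23:2fc2 dev eth9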

Obviously if you’re a Mythic Beasts customer and you don’t already have an IPv6 allocation for your real or virtual server, drop us an email and we’ll hand you your own address space to play with.

New DNS resolvers

August 28th, 2013

We’ve upgraded our DNS resolvers in our SOV and HEX data centres. New features include DNSSEC validation and IPv6.

The addresses are,

SOV : 2a00:1098:0:80:1000::12 / 93.93.128.2
HEX : 2a00:1098:0:82:1000::10 / 93.93.130.2
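To use them, a server in SOV might have an /etc/resolv.conf along these lines (swap in the HEX addresses if that's where your server lives):

nameserver 2a00:1098:0:80:1000::12
nameserver 93.93.128.2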

They’re now DNSSEC aware and validating resolvers. That means if a site has correctly configured DNSSEC and we receive an answer that fails the security check we will return no answer rather than an incorrect/forged one.

To demonstrate the difference,

A non-DNSSEC-validating resolver:
# dig +short sigfail.verteiltesysteme.net
134.91.78.139

A Mythic Beasts server using our resolvers:
# dig +short sigfail.verteiltesysteme.net
<no answer>
#

and on the DNS server it logs an error,

debug.log:28-Aug-2013 15:44:57.565 dnssec: info: validating @0x7fba880b69e0: sigfail.verteiltesysteme.net A: no valid signature found

and correctly drops the reply.

Google's DNS servers on 8.8.8.8 work the same way as ours, so we're fairly confident that there will be no compatibility issues.
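If you want to convince yourself that the resolver you're using really is validating, the companion test name with a valid signature should still resolve while the deliberately broken one returns nothing:

# dig +short sigok.verteiltesysteme.net @93.93.128.2
# dig +short sigfail.verteiltesysteme.net @93.93.128.2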

Downstream ASN

August 12th, 2013

We have set up one of our customers with their own full BGP network, split across two of our London sites. Along the way we have:

  • Helped them join RIPE as an LIR
  • Helped them apply for an IPv6 /32 and an ASN
  • Set up a full BGP, IPv6-only network
  • Helped them apply for a final /22 of IPv4 space
  • Configured this in the global routing table

They now have the option to run cable or fibre directly to peering exchanges and other ISPs should they wish to do so, from individual machines hosted within our rackspace. In the meantime they're taking advantage of our colocation, out-of-band serial access to their routers, and our IPv4 and IPv6 transit.
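For the curious, the shape of such a downstream session on our side looks roughly like this in BIRD syntax; a sketch with documentation values, not the customer's real ASN or prefixes:

# bird6.conf sketch -- a downstream customer announcing their /32 to us,
# receiving a full IPv6 table in return; all numbers are placeholders
protocol bgp downstream_customer {
    local as 64496;                        # our ASN (illustrative)
    neighbor 2001:db8:0:1::2 as 64511;     # the customer's router and ASN
    import where net ~ [ 2001:db8::/32 ];  # exactly their /32, nothing else
    export all;                            # full table towards them
}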