Peering with Google
We’re now peering with Google over LONAP.
On Sunday I dropped in at the Debian Barbeque and provided a firkin of Pegasus from the Milton Brewery. Thanks to all the Debian developers for their hard work and have a pint on us.
In May we upgraded our Telecity Meridian Gate site to have 10 Gigabit at the core. Early this week we upgraded the core network in Telecity Sovereign House to run at 10 Gigabit. We are planning to upgrade Telecity Harbour Exchange in the near future and to continue the rollout of 10GigE from the core switches. This means we’ve plenty of spare capacity for very high bandwidth customers in our docklands data centres.
Yesterday there was, we believe, a power failure in Telehouse North. Mythic Beasts don’t have any equipment located in Telehouse, but the effects were quite noticeable.
Two internet exchanges, LONAP and LINX, were affected. The LONAP looking glass and traffic graph show that all of the LONAP peers located in Telehouse North disconnected.
LONAP Traffic Graph
We don’t believe that LINX was directly affected by the power failure, but all sessions on the Brocade LAN were reset and brought up slowly over the course of about an hour, as you can see from the looking glass.
LINX Looking glass for the Brocade LAN
The Extreme LAN, by contrast, wasn’t affected at all.
LINX Looking glass for the Extreme LAN
LINX Traffic Graph
Mythic Beasts saw no overall change in our traffic levels; we escaped unscathed.
Mythic Beasts Total Traffic
However, we did see a brief drop in traffic over LONAP as various high-bandwidth peers in Telehouse North disconnected.
Mythic Beasts LONAP Traffic
We didn’t see any measurable effect over Edge-IX (this traffic pattern is normal for this time of day).
Mythic Beasts Edge-IX Traffic
Mythic Beasts doesn’t currently peer directly on LINX, but we have two partial transit suppliers that do. Partial transit suppliers provide us with routes only from their peering partners, so when they lose contact with a peer, we stop being able to route traffic to that network through them.
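As a toy illustration of why this matters (all the supplier names and prefixes below are made up), you can think of each partial transit supplier as offering only the routes its own peering sessions currently provide:

```python
# Toy model of partial transit. Each supplier announces to us only the
# prefixes it has learned from its own peers, so a failed peering session
# silently removes a destination from that supplier.
suppliers = {
    "transit-1": {"192.0.2.0/24", "198.51.100.0/24"},
    "transit-2": {"192.0.2.0/24", "203.0.113.0/24"},
}

def reachable_via(prefix):
    """Suppliers that can currently carry traffic to `prefix`."""
    return [name for name, routes in suppliers.items() if prefix in routes]

print(reachable_via("192.0.2.0/24"))   # ['transit-1', 'transit-2']

# transit-1 loses its peering session with the network behind 192.0.2.0/24
# (say, because an exchange LAN failed):
suppliers["transit-1"].discard("192.0.2.0/24")
print(reachable_via("192.0.2.0/24"))   # ['transit-2'] -- the second supplier saves us
```

With only one supplier, that last lookup would come back empty and the destination would be unreachable, which is exactly the failure mode we pay a second supplier to avoid.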
Our first partial transit supplier has 10G into the LINX Brocade LAN, 1G into the LINX Extreme LAN and 2G into LONAP, plus private peers.
Mythic Beasts Transit 1
Our second partial transit supplier has 10G into the LINX Brocade LAN, 10G into the LINX Extreme LAN, 10G into AMS-IX, 2.5G into DECIX, 2G into LONAP and 1G into Edge-IX, plus private peers.
Mythic Beasts Transit 2
We take partial transit from two suppliers, one in Telecity HEX 6/7, one in Telecity MER. Whilst this is more expensive than a single supplier or joining LINX ourselves, we’ve always felt that the additional redundancy was worth paying extra for. We discovered today that one partial transit supplier has almost no redundancy in the event of a failure of the LINX Brocade LAN. We’ve brought this up with the transit in question and will be pressuring them to add resiliency to their partial transit service. We do intend to join LINX, but when we do so we’ll join both the peering LANs from different data centres to maximise our resiliency.
We’re pleased to announce that, as a result of tonight’s connectivity changes to our core network, all four data centres now have IPv6 connectivity available. In the coming weeks we’ll be contacting all our peering partners to enable direct IPv6 peerings where possible to improve our IPv6 connectivity.
If any customers would like to enable IPv6 on their colocated, dedicated or virtual servers please contact us and we’ll allocate you an address range and provide you with connectivity details.
Until the end of August 2010, all IPv6 bandwidth will be free.
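Once your range is configured, a quick way to sanity-check that IPv6 is actually working is to connect to a host that is only reachable over IPv6. A minimal sketch (the hostname here is just one well-known IPv6-only example; substitute any host you like):

```python
import socket

def has_working_ipv6(host="ipv6.google.com", port=80, timeout=5):
    """Try to open a TCP connection to an IPv6-only host."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            return sock.family == socket.AF_INET6
    except OSError:
        return False

print("IPv6 connectivity:", "OK" if has_working_ipv6() else "not working")
```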
Our Cambridge hosting centre has two physically redundant private circuits linking it back to our network core in Telecity Sovereign House and Telecity Harbour Exchange. We’re pleased to report that we’ve now completed upgrades on both links increasing the available bandwidth to Cambridge customers by a factor of 2.5.
As a result we have increased all standard bandwidth customers to 250GB bandwidth quotas, and now offer higher bandwidth packages for dedicated and colocated customers in Cambridge.
A couple of weeks ago, Apple unveiled the latest incarnation of the Mac mini. Naturally, we dashed out to buy a few to see if they’re going to be any good as servers. Externally, this is the biggest revision of the Mac mini yet, with a thinner all-aluminium case. We always get a bit nervous when Apple unveil a new Mac mini as there’s a chance that they’ll ruin the formula that makes it such a great server in the name of creating a fun toy for your living room.
The most noticeable change, aside from the new case, is the removal of the power brick. The old minis relied on an external power supply that really was the size and shape of a brick. Getting rid of these will make racking them a lot simpler, as well as saving space. We reckon we should be able to get 24 machines and 3 APC Masterswitches into 6U of rackspace. The C7 power connectors should be more secure too.
The next thing that we approve of is the easily accessible RAM slots. Traditionally, Apple have charged silly money for ordering machines with extra RAM, so we’ve always done the upgrades ourselves. Upgrading the old minis was a real pain, requiring the cover to be prised off with putty knives, so this is a welcome change.
The most important factor for us, and the thing that led us to use the original PPC minis, is power consumption, as power is the primary cost when hosting a machine. There’s good news here too: according to our power meter, power consumption is down from 26W to 12W (idle) and from 44W to 40W (max).
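For a back-of-the-envelope sense of what the idle saving means per machine over a year (the electricity price below is an assumed illustrative figure, not our actual cost, and it ignores cooling overhead):

```python
# Annual energy saving from the idle figures above.
hours_per_year = 24 * 365            # 8760
old_idle_w, new_idle_w = 26, 12      # watts, measured at idle
saving_kwh = (old_idle_w - new_idle_w) * hours_per_year / 1000
price_per_kwh = 0.10                 # assumed GBP/kWh, for illustration only
print(f"~{saving_kwh:.0f} kWh/year, about £{saving_kwh * price_per_kwh:.2f}/year per mini")
# ~123 kWh/year, about £12.26/year per mini
```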
Unfortunately, the reduction in hosting cost is offset somewhat by the usual price hike for the newer hardware, but we’re pleased to see that Apple have retained the “server” version of the mini with two hard drives, allowing us to continue to offer machines running software RAID.
Overall, the new Macs seem like a decent improvement on the old ones for our purposes. Apple make a fuss over the great variety of ports now available on the back of the new mini, but inexplicably they haven’t provided the one thing we’d want: a good old-fashioned serial port.
We still need to make the minis work with our custom net-booting bootloader, but once that is done, we’ll be offering them as dedicated servers.
It’s been nearly six years since we first launched our Virtual Dedicated Servers. At the time, the choice for virtualisation technology was easy: User Mode Linux. Initially, UML was a well supported option, with UML patches being incorporated into the Linux kernel. Over time, we’ve been following the development of other technologies such as Xen and KVM and at the end of last year we concluded that we should make the switch to Xen.
Getting Xen working reliably with our server management code has taken somewhat longer than expected, but we’re pleased to announce that the service is now live.
We’ve also taken the opportunity to roll out new hardware, allowing us to offer substantially higher-specced VDSs for the same prices, with our base machine now coming in at 256MB of RAM, a 20GB disk, and an increased bandwidth allowance of 100GB/month for £15/month including VAT – less if you pay annually.
Although we’ve changed the virtualisation platform, we’ve retained the other key features of our virtual servers including:
Host servers with hot-swap hardware RAID. Although these are significantly more expensive, we figure that reliability is something that can be shared particularly effectively through virtualisation: over the years, our VDS host servers have seen a fair few disk failures and replacements, but typically our customers don’t even know that they’ve happened.
Nightly backup to other host servers, allowing us to resume service quickly in the event of a serious hardware failure.
As part of the upgrade, we’ve also deployed a new approach to providing disk images which offers significantly better IO performance than the standard approach of storing the VDS filesystem as a file on the host filesystem.
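For the curious, a common way to achieve this kind of improvement (we’re describing the general technique here, not necessarily the exact details of our deployment) is to back each guest with a raw block device such as an LVM logical volume rather than a file, which skips a whole filesystem layer on the host. In a Xen domain configuration, which uses Python syntax, the difference is a single line; the device paths below are illustrative only:

```python
# Two alternative Xen domain config fragments (xm config files are Python).
# Device paths here are illustrative examples.

# File-backed: the guest's disk is a file on the host filesystem, so every
# guest IO passes through two filesystem layers.
disk = ['file:/var/xen/images/vds-example.img,xvda,w']

# Block-device-backed: the guest writes (almost) directly to an LVM logical
# volume, bypassing the host filesystem.
disk = ['phy:/dev/vg0/vds-example,xvda,w']
```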
The new VDSs are available now, and we’ll be contacting all existing customers in the near future to arrange migration to the new platform.
Over the last few years, we’ve been investing heavily in increasing both our network’s capacity and its redundancy. We now have multiple gigabit upstream providers spread across our three London sites, allowing us to host very high bandwidth sites with a high level of redundancy.
Part of this work is to increase the number of peering agreements that we have in place. Peering is an arrangement where two networks agree to exchange traffic directly with each other for their mutual benefit, rather than paying to send it via a third party (a transit provider). Peering has two benefits: first, it reduces our cost of carrying traffic; second, it typically shortens the network path, improving latency.
Usually the first one is the important one: our marginal cost of bandwidth goes down, and we can reflect these savings in our own prices.
At the end of last year we joined Edge-IX, a distributed internet exchange that gives us the ability to peer with some networks that we don’t see at other internet exchanges. Most notably, we now have peering in place with two big end-user networks: JANET (the UK’s education and research network) and Virgin Media (formerly NTL).
Sometimes the second point can be important. For example, users of Nominet’s Domain Availability Checker (DAC) are often extremely sensitive to latency, with a few milliseconds making a lot of difference. We received an enquiry from a prospective customer interested in using Nominet’s DAC service. This prompted us to set up a peering arrangement with Nominet, and by providing the customer with a Mac Mini dedicated server in one of our London data centres we were able to offer just about the fastest route physically possible to Nominet’s network.
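For anyone who wants to measure this sort of thing themselves: a TCP handshake costs roughly one network round trip, so timing connection set-up gives a crude latency estimate. The host and port below are placeholders, not Nominet’s actual DAC endpoint:

```python
import socket
import time

def best_connect_ms(host, port, samples=5, timeout=2):
    """Crude round-trip estimate: best TCP connect time out of `samples`."""
    best = float("inf")
    for _ in range(samples):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=timeout):
            pass
        best = min(best, (time.monotonic() - start) * 1000)
    return best

print(f"best of 5 connects: {best_connect_ms('example.net', 80):.1f} ms")
```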
A company blog has been something that we’ve talked about at Mythic Beasts for some time now, but we’ve never quite got round to it… until now.
One of the frustrating things about being an ISP is that all too often, the only time that your customers notice that you’re actually doing something is when it all goes wrong. For example, our network is now completely unrecognisable from where it was three years ago, but for the most part the vast amount of work that has gone into this transformation has been completely invisible to our users.
The company has also changed significantly, having acquired the shared hosting business of Black Cat Networks, and more recently, the hosting, virtual server and co-location business of Blue Linux. Integrating these services has allowed us to improve our own services, but in a lot of cases, this has happened in ways that are not directly visible to our users.
This blog is an attempt to give our customers (and anyone else who cares) some insight into what we’re up to and what’s in the future, and, when things do go wrong, to provide a forum for discussing what happened and how we can improve.