IPv4/IPv6 transit in HE Fremont 2

September 18th, 2020

Back in 2018, we acquired BHost, a virtual hosting provider with a presence in the UK, the Netherlands and the US. Since the acquisition, we’ve been working steadily to upgrade the US site from a single transit provider with incomplete IPv6 networking and a mixture of container-based and full virtualisation to what we have now:

  • Dual redundant routers
  • Two upstream network providers (HE.net, CenturyLink)
  • A presence on two internet exchanges (FCIX/SFMIX)
  • Full IPv6 routing
  • All customers on our own KVM-based virtualisation platform

With these improvements to our network, we’re now able to offer IPv4 and IPv6 transit connectivity to other customers in Hurricane Electric’s Fremont 2 data centre. We believe that standard services should have a standard price list, so here’s ours:

Transit Price List

Prices start at £60/month on a one month rolling contract, with discounts for longer commits. You can order online by hitting the big green button, we’ll send you a cross-connect location within one working day, and we’ll have your session up within one working day of the cross connect being completed. If we don’t hit this timescale, your first month is free.

We believe that ordering something as simple as IP transit should be this straightforward, but it seems that it’s not the norm. Here’s what it took for us to get our second 10G transit link in place:

  • 24th April – Contact sales representative recommended by another ISP.
  • 1st May – Contact different sales representative recommended by UKNOF as one of their sponsors.
  • 7th May – 1 hour video conference to discuss our requirements (a 10Gbps link).
  • 4th June – Chase for a formal quote.
  • 10th June – Provide additional details required for a formal quote.
  • 10th June – Receive quote.
  • 1st July – Clarify further details on quote, including commit.
  • 2nd July – Approve quote, place order by email.
  • 6th July – Answer clarifications, push for contract.
  • 7th July – Quote cancelled. Provider realises that Fremont is in the US and they have sent EU pricing. Receive and accept higher revised quote.
  • 10th July – Receive contract.
  • 14th July – Return signed contract. Ask for cross connect location.
  • 15th July – Reconfirm the delivery details from the signed contract.
  • 16th July – Send network plan details for setting up the network.
  • 27th July – Send IP space justification form. They remind us to provision a cross connect, we ask for details again.
  • 6th August – Chase for cross connect location.
  • 7th August – Delivery manager allocated who will process our order.
  • 11th August – Ask for a cross connect location.
  • 20th August – Ask for a cross connect location.
  • 21st August – Circuit is declared complete within the 35 working day setup period. Billing for the circuit starts.
  • 26th August – Receive a Letter Of Authorisation allowing us to arrange the cross connect. We immediately place order for cross connect.
  • 26th August – Data centre is unable to fulfil cross connect order because the cross connect location is already in use.
  • 28th August – Provide contact at data centre for our new provider to work out why this port is already in use.
  • 1st September – Receive holding mail confirming they’re working on sorting our cross connect issue.
  • 2nd September – Receive invoice for August + September. Refuse to pay it.
  • 3rd September – Cross connect location resolved, circuit plugged in, service starts functioning.

Shortly after this we put our order form live and improved our implementation. We received our first order on 9th September and provisioned it a few days later. Our third transit customer is now up and live: order form to fully working took just under twelve hours, comfortably within our promise of two working days.

VMHaus services now available in Amsterdam

July 3rd, 2019

Integration can be hard work

Last year we had a busy time acquiring Retrosnub, BHost and VMHaus. We’ve been steadily making progress in the background integrating the services these companies provide, to reduce costs and management complexity. We can now announce our first significant feature upgrade for VMHaus: we’ve deployed a new virtual server cluster to our Amsterdam location, and VMHaus services are now available in Amsterdam. VMHaus uses Mythic Beasts for colocation and network, and in Amsterdam its customers will gain access to our extensive set of peers at AMS-IX, LINX and LoNAP. Virtual servers billed per hour are available from VMHaus, with payment through PayPal.

As you’d expect, every VM comes with a /64 of IPv6 space.
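For a sense of how much space a /64 is, here's a quick sketch using Python's stdlib `ipaddress` module. The prefix below is from the 2001:db8::/32 documentation range, not a real VMHaus allocation:

```python
import ipaddress

# Illustrative prefix only: 2001:db8::/32 is reserved for documentation,
# so this is not a real customer allocation.
vm_prefix = ipaddress.ip_network("2001:db8:1234:5678::/64")

# Every VM's /64 contains 2^64 addresses.
print(vm_prefix.num_addresses)   # 18446744073709551616
```

That's more addresses per VM than there are IPv4 addresses on the entire internet, squared over.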

In the background we’ve also been migrating former-BHost KVM-based services to Mythic Beasts VM services in Amsterdam. Shortly we’ll be starting to migrate former-BHost and VMHaus KVM-based services in London to new VM clusters in the Meridian Gate data centre.

Mythic Beasts gaan naar Nederland (Mythic Beasts are going to the Netherlands)

February 20th, 2019

The art warehouses in Amsterdam look much prettier than the data warehouses.

Back in July 2018, Mythic Beasts acquired Bhost, giving us additional virtual machine (VM) clusters in London, Amsterdam and California.

Today we’re pleased to announce that we’ve deployed a substantial new VM cloud to Amsterdam, running our own VM platform. Virtual machines in Amsterdam are available to purchase immediately through our website in sizes from 1GB/1vCPU to 160GB/12vCPUs, and with both SSD and spinning rust disk options. Server management and backup options are also available.

Thanks to Brexit-related regulatory uncertainty, some of our existing clients informed us that they must be hosted outside of the UK before 29th March. Deploying capacity on our own platform in Amsterdam means that we can migrate virtual servers directly to the new location.

Once we’ve dealt with the immediate Brexit-driven server moves, we’ll be looking at migrating former-Bhost VMs into this new cloud, giving a significant performance boost in the process.

Deploying the Amsterdam VM cloud is a significant milestone in the integration of the Bhost infrastructure into our own. The integration provides improved performance and redundancy for both Mythic Beasts and Bhost customers whilst simultaneously cutting our operating costs. In preparation for this, we completed upgrades to our core network last October. The existing fibre ring around our three main London sites, which is currently lit at 50Gbps, is now complemented by a 10Gbps ring around London (HEX) ⟺ Cambridge ⟺ Amsterdam ⟺ London (MER). This replaces the old 2x1Gbps connectivity from Cambridge to London with diverse 10Gbps feeds to London and Amsterdam. Our network has gained an additional 10Gbps transit in Amsterdam (NTT) and we are also now connected on the Amsterdam Internet Exchange (AMS-IX).

On a trip to deploy new routers, Pete even managed a tour of the city on foot in just over three hours.



Primary reasons for choosing Amsterdam include being a flat country that’s easy to cycle around, a remarkably nice overnight ferry journey and superb boy bands asking us to stay. Secondary reasons are all boring such as a well developed market for data centres and internet transit, a world class internet exchange and remarkably few insane British politicians. We’re looking forward to the first Anglo-Dutch cricket match.

Mythic Beasts acquires BHost

July 1st, 2018

Having a hungry Wyvern in our logo makes eating other companies much easier to draw.

Hot on the heels of acquiring Retrosnub, we’ve also bought the customers and assets of BHost. BHost are a virtual server provider offering OpenVZ- and KVM-based services in London, Amsterdam and California.

We’re excited about this acquisition as it provides us with a great opportunity to expand our network using BHost’s Amsterdam infrastructure. At the same time, we’re confident that we can provide some immediate and longer term improvements to the BHost service, not least through our larger support team being able to offer more timely and helpful responses to customer queries.

Although handover officially happened today, BHost customers have had access to our control panel for several weeks, mostly so that we could start tackling EU VAT bureaucracy. BHost are a US-registered business. We’re a VAT-registered business in the UK. Thanks to VAT MESS, it’s actually much harder for us to sell to EU-based consumers than it was for BHost, as we’re required to collect an unreasonable amount of evidence of customer location.

The good news for BHost customers is that we’re matching BHost’s current pricing with our UK VAT-inclusive price. This means that EU VAT-registered businesses, and all non-EU customers will see a significant reduction in the price that they pay.

If you’re a BHost customer and you’ve not already done so, please log in to our customer control panel using your BHost username (email address) and password and confirm your contact details.

Network Expansion

BHost run a network from London/Amsterdam with multiple 10Gbps uplinks and some peering in each site. We will be moving the BHost London network behind our own so that BHost customers can take advantage of our larger capacity uplinks and significantly better peering arrangements, which includes transit-free connections to every major UK ISP.

We’re also taking the opportunity to significantly improve the connectivity to our Cambridge data centre. We currently have two uplinks via different London data centres. We will replace one of these links with a direct connection to Amsterdam, and bring both up to 10Gbps. Combined with BHost’s existing London/Amsterdam connection, this will create a 10Gbps ring around London, Cambridge and Amsterdam, complementing our 50Gbps ring around our three London sites. This will provide increased bandwidth and improved resiliency for our Cambridge customers, whilst also providing a second London/Amsterdam link to improve resilience within the BHost network.

BHost Amsterdam customers will gain direct UK connectivity through our extensive London peering. We will gain the Amsterdam Internet Exchange connection (AMSIX) from BHost, bringing improved European connectivity to all London customers. We expect to be able to substantially increase the number of AMSIX peers, improving EU connectivity for all customers.

Cloud expansion

BHost’s London presence is in the Meridian Gate (MER) data centre. We already have a significant footprint in MER, although it’s not currently available as a zone in our public cloud. We’re investing in new hardware to deploy in Meridian Gate which is both substantially faster and more power efficient than the current hosts. We’ll be deploying this into our existing suite in MER, and then migrating BHost servers into it. BHost customers will see a small window of scheduled downtime as we migrate each server, but should then see significantly improved performance on the new hardware.

Our Amsterdam and US presences will give additional options to customers that need to be physically hosted within the (post-Brexit) EU or US. We expect this to become more relevant after Brexit when the UK and EU may have diverging regulatory requirements.

Additional services

All BHost customers can now take advantage of additional Mythic Beasts services including management services for virtual servers, domain registration and DNSSEC-enabled, API-backed DNS hosting.

Support

Mythic Beasts have a larger support team and we’re very well placed to provide significantly improved customer service to all of our new customers. Of course, we do expect the period immediately after the transition to be very busy as customers become familiar with the new billing arrangements, and we get to grips with supporting BHost’s services. We will have additional staff during this period, but please be patient if support responses are a little slower than usual.

Flatpak: pre-assembled furniture applications for Linux

February 23rd, 2018

Flatpack is furniture you build yourself. Flatpak is preassembled applications for Linux. This is apparently not at all confusing. (image thanks to https://www.flickr.com/photos/51pct/)

Flatpak provides Linux desktop applications in a secure sandbox which can be installed and run independently of the underlying Linux distribution. Application developers can produce a single Flatpak, selecting the versions of the libraries that their application is built, tested and run with, so users on any Linux OS get exactly what the application developer intended.

Flathub is a distribution service to make sure that Flatpaks are available for popular Linux desktop applications, and at its heart is a private cloud running Buildbot which builds popular Linux and free/open source desktop apps in Flatpak format. This lives in Mythic Beasts’ Cambridge data centre.

At Mythic Beasts we like this idea so much we offered them lots of free bandwidth (100TB) to help get them started. We’ve now upgraded this with a pair of virtual machines in our core Docklands sites to provide redundancy and more grunt for traffic serving.


Some of their users noticed and were appreciative immediately:

2017-02-23 16:30:00 <irc> wow! Flathub is *so* much faster, i’m getting like 10 MB/s compared to less than 1 this morning … and the search is now instant
2017-02-26 11:35 <Persi> Flathub is _really_ fast now, great job to whoever is responsible 🙂

Capacity upgrades, cheaper bandwidth and new fibre

December 8th, 2017

We don’t need these Giant Scary Laser stickers yet.

We’ve recently upgraded both our LONAP connections to 10Gbps at our two London POPs, bringing our total external capacity to 62Gbps.

We’ve been a member of LONAP, the London Network Access Point, since we first started running our own network. LONAP is an internet exchange, mutually owned by several hundred members. Each member connects to LONAP’s switches and can arrange to exchange traffic directly with other members without passing through another internet provider. This makes our internet traffic more stable because we have more available routes, faster because our connections go direct between source and recipient with fewer hops and usually cheaper too.

Since we joined, both we and LONAP have grown. Initially we had two 1Gbps connections, one in each of our two core sites. If one failed the other could take over the traffic. Recently we’ve been running both connections near capacity much of the time and in the event of failure of either link we’d have to fall back to a less direct, slower and more expensive route. Time to upgrade.

The upgrade involved moving from a copper CAT5e connection to optic fibre. As a company run by physics graduates this is an excellent excuse to add yet more LASERs to our collection. Sadly the LASERs aren’t very exciting, being 1310nm they’re invisible to the naked eye and for safety reasons they’re very low powered (~1mW). Not only will they not set things on fire (bad) they also won’t blind you if you accidentally look down the fibre (good). This is not universally true for all optic fibre though; DWDM systems can have nearly 100 invisible laser beams in the same fibre at 100x the power output each. Do not look down optic fibre!

The first upgrade at Sovereign House went smoothly, bringing online the first 10Gbps LONAP link. Harbour Exchange proved a little more problematic. We initially had a problem with an incompatible optical transceiver. Once that was replaced, we saw a further issue with the link being unstable, which was resolved by changing the switch port and optical transceiver at LONAP’s end. We then had further low-level bit errors resulting in packet loss for large packets. This was eventually traced to a marginal optical patch lead. Many thanks to Rob Lister of LONAP support for quickly resolving this for us.

With the upgrade completed, we now have two 10Gbps connections to LONAP, in addition to our two 10Gbps connections into the London Internet Exchange and multiple 10Gbps transit uplinks, as well as some 1Gbps private connections to some especially important peers.

To celebrate this we’re dropping our bandwidth excess pricing to 1p/GB for all London based services.  The upgrades leave us even better placed to offer very competitive quotes on high bandwidth servers, as well as IPv6 and IPv4 transit in Harbour Exchange, Meridian Gate and Sovereign House.  Please contact us at sales@mythic-beasts.com for more information.

FRμIT: Federated RaspberryPi MicroInfrastructure Testbed

July 3rd, 2017

The participants of the FRμIT project, distributed Raspberry Pi cloud.

FRμIT is an academic project looking at building and connecting micro-data-centres together, and at what can be achieved with this kind of architecture. Currently they have hundreds of Raspberry Pis and they’re aiming for 10,000 by the project’s end. They invited us to join them: we’ve already solved the problem of building a centralised Raspberry Pi data centre, and they wanted to know if we could advise and assist their project. We recently joined them in the Cambridge University Computer Lab for their first project meeting.

Currently we centralise computing in data centres as it’s cheaper to pick up the computers and move them to the heart of the internet than it is to bring extremely fast (10Gbps+) internet everywhere. This model works brilliantly for many applications because a central computing resource can support large numbers of users, each connecting with their own smaller connections. It works less well when the source data is large and somewhere with poor connectivity, for example a video stream from a nature reserve. There are also other types of application, such as SETI@home, which have huge computational requirements on small datasets, where distributing work over slow links works effectively.

Gbps per GHz

At a recent UK Network Operator Forum meeting, Google gave a presentation about their data centre networking where they built precisely the opposite architecture to the one proposed here. They have a flat LAN with the same bandwidth between any two points so that all CPUs are equivalent. This involves around 1Gbps of bandwidth per 1GHz of CPU. This simplifies your software stack as applications don’t have to try and place CPU close to the data but it involves an extremely expensive data centre build.

This isn’t an architecture you can build with the Raspberry Pi. Our Raspberry Pi cloud is about as close as you can manage, with 100Mbps per 4×1.2GHz of cores. This is about 1/40th of the network capacity required to run Google-architecture applications. But that’s okay: other applications are available. As FRμIT scales geographically, the bandwidth will become much more constrained – it’s easy to imagine a cluster of 100 Raspberry Pis sharing a single low bandwidth uplink back to the core.
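As a sanity check on that ratio, here's the back-of-envelope arithmetic in Python, using only the figures quoted above. The exact answer depends on how you count cores and clock speeds, hence "about 1/40th":

```python
# Figures quoted in the text above.
google_mbps_per_ghz = 1000.0      # Google: ~1Gbps of bandwidth per 1GHz of CPU
pi_nic_mbps = 100.0               # Raspberry Pi: 100Mbps network interface
pi_cpu_ghz = 4 * 1.2              # four cores at 1.2GHz

pi_mbps_per_ghz = pi_nic_mbps / pi_cpu_ghz
ratio = google_mbps_per_ghz / pi_mbps_per_ghz
print(round(ratio))               # 48, i.e. roughly 1/40th to 1/50th of Google's ratio
```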

This immediately leads to all sorts of interesting and hard questions about how to write a scheduler, as you need to know in advance the likely CPU/bandwidth mix of your distributed application in order to work out where it can run. Local data distribution becomes important – 100+ Pis downloading updates and applications may saturate the small backbone links. They also have a variety of hardware types, from the original Pi Model B to the newer and faster Pi 3, possibly even some Pi Zero Ws.

Our contribution

We took the members of the project through how our Raspberry Pi Cloud is built, including how a Pi is provisioned, how the network and operating system are provisioned, and the back-end for the entire process from clicking “order” to a booted Pi awaiting customer login.

In discussions of how to manage a large number of Federated Raspberry Pis we were pleased to find considerable agreement with our method of managing lots of servers: use OpenVPN to build a private network and route a /48 of IPv6 space to it. This enables standard server management tools to work, even where the Raspberry Pis are geographically distributed behind NAT firewalls and other creative network configurations.
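To illustrate why a /48 is a comfortable amount of space for a distributed cluster, here's a sketch using Python's stdlib `ipaddress` module; 2001:db8::/48 is documentation space standing in for the project's real prefix:

```python
import ipaddress

# Documentation prefix, standing in for the real /48 routed over the VPN.
mgmt = ipaddress.ip_network("2001:db8::/48")

# A /48 subdivides into 65,536 /64 subnets: one per site or cluster,
# with each /64 holding 2^64 addresses for the Pis behind it.
print(2 ** (64 - 48))                          # 65536
first_site = next(mgmt.subnets(new_prefix=64))
print(first_site)                              # 2001:db8::/64
```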

Donate your old Pi

If you have an old Raspberry Pi, perhaps because you’ve upgraded to a new Pi 3, you can donate it directly to the project through PiCycle. They’ll then recycle your old Raspberry Pi into the distributed compute cluster.

We’re looking forward to their discoveries and enjoyed working with the researchers. When we build solutions for customers we’re aiming to minimise the number of unknowns to de-risk the solution. By contrast tackling difficult unsolved problems is the whole point of research. If they knew how to build the system already they wouldn’t bother trying.

IPv6 Update

November 1st, 2016

Sky completed their IPv6 rollout – any device that comes with IPv6 support will use it by default.

Yesterday we attended the annual IPv6 Council to exchange knowledge and ideas with the rest of the UK networking industry about bringing forward the IPv6 rollout.

For the uninitiated, everything connected to the internet needs an address. With IPv4 there are only 4 billion addresses available which isn’t enough for one per person – let alone one each for my phone, my tablet, my laptop and my new internet connected toaster. So IPv6 is the new network standard that has an effectively unlimited number of addresses and will support an unlimited number of devices. The hard part is persuading everyone to move onto the new network.
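The difference in scale is hard to overstate; a couple of lines of Python make the point:

```python
ipv4_total = 2 ** 32     # every possible IPv4 address: ~4.3 billion
ipv6_total = 2 ** 128    # every possible IPv6 address

print(ipv4_total)        # 4294967296

# IPv6 has 2^96 times as many addresses as IPv4 — enough for every
# phone, tablet, laptop and internet-connected toaster, many times over.
print(ipv6_total // ipv4_total == 2 ** 96)    # True
```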

Two years ago when the IPv6 Council first met, roughly 1 in 400 internet connections in the UK had IPv6 support. Since then Sky have rolled out IPv6 everywhere and by default all their customers have IPv6 connectivity. BT have rolled IPv6 out to all their SmartHub customers and will be enabling IPv6 for their Homehub 5 and Homehub 4 customers in the near future. Today 1 in 6 UK devices has IPv6 connectivity and when BT complete it’ll be closer to 1 in 3. Imperial College also spoke about their network which has IPv6 enabled everywhere.

Major content sources (Google, Facebook, LinkedIn) and CDNs (Akamai, Cloudflare) are all already IPv6-enabled. This means that as soon as you turn on IPv6 on an access network, over half your traffic flows over IPv6 connections. With Amazon and Microsoft enabling IPv6 in stages on their public clouds by default, traffic will continue to grow. Already, for some ISPs, IPv6 is the dominant protocol. The Internet Society are already predicting that IPv6 traffic will exceed IPv4 traffic around two to three years from now.

LinkedIn and Microsoft both spoke about deploying IPv6 in their corporate and data centre environments. Both companies are suffering exhaustion of private RFC1918 address space – there just aren’t enough 10.a.b.c addresses to cope with organisations of their scale so they’re moving now to IPv6-only networks.
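That exhaustion is easy to quantify: even the largest RFC1918 block holds fewer than 17 million addresses, which a hyperscale estate of servers, containers and load balancers can genuinely burn through. A quick check with Python's `ipaddress` module:

```python
import ipaddress

# The largest RFC1918 private range: 10.0.0.0/8.
ten_net = ipaddress.ip_network("10.0.0.0/8")

print(ten_net.num_addresses)   # 16777216 — small at LinkedIn/Microsoft scale
```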

Back in 2012 we designed and deployed an IPv6-only architecture for Raspberry Pi, and have since designed other IPv6-only infrastructures including a substantial Linux container deployment. Educating the next generation of developers about how networks will work when they join the workforce is critically important.

More bandwidth

October 19th, 2016

We’ve added 476892 kitten pictures per second of capacity.

We’ve brought up some new connectivity today; we’ve added a new 10Gbps transit link out of our Sovereign House data centre. This gives not only more capacity but also some improved DDoS protection options with distance-based blackholing.

We also added a 1Gbps private peering connection to IDNet. We’ve used IDNet for ADSL connections for a long time, not least for their native IPv6 support. A quick inspection shows that 17% of traffic over this private link is native IPv6.

DNSSEC now in use by Raspberry Pi

May 12th, 2016

Over the past twelve months we’ve implemented Domain Name System Security Extensions (DNSSEC), initially by allowing the necessary records to be set with the domain registries, and then in the form of a managed service which sets the records, signs the zone files, and takes care of regular key rotation.

Our beta program has been very successful: lots of domains now have DNSSEC and we’ve seen very few issues. We thought that we should do some wider testing with a larger number of users than our own website, so we asked some friends of ours with a busy website if they felt brave enough to give it a go.

Eben Upton> I think this would be worth doing.
Ben Nuttall> I'll go ahead and click the green button for each domain.
-- time passes --
Ben Nuttall> Done - for all that use HTTPS.

So now we have this lovely graph that indicates we’ve secured DNS all the way down the chain for every request. Mail servers know for certain that they have the correct address to deliver mail to, and web browsers know they’re talking to the correct web server.

The only remaining task is to remove the beta label in our control panel.

Raspberry Pi DNSSEC visualisation, click for interactive version