Mythic Beasts acquires BHost

July 1st, 2018

Having a hungry Wyvern in our logo makes eating other companies much easier to draw.

Hot on the heels of acquiring Retrosnub, we’ve also bought the customers and assets of BHost. BHost are a virtual server provider with services in London, Amsterdam and California based on OpenVZ and KVM.

We’re excited about this acquisition as it provides us with a great opportunity to expand our network using BHost’s Amsterdam infrastructure. At the same time, we’re confident that we can provide some immediate and longer term improvements to the BHost service, not least through our larger support team being able to offer more timely and helpful responses to customer queries.

Although handover officially happened today, BHost customers have had access to our control panel for several weeks, mostly so that we could start tackling EU VAT bureaucracy. BHost are a US-registered business. We’re a VAT-registered business in the UK. Thanks to VAT MESS, it’s actually much harder for us to sell to EU-based consumers than it was for BHost, as we’re required to collect an unreasonable amount of evidence of customer location.

The good news for BHost customers is that we’re matching BHost’s current pricing with our UK VAT-inclusive price. This means that EU VAT-registered businesses and all non-EU customers will see a significant reduction in the price that they pay.

If you’re a BHost customer and you’ve not already done so, please log in to our customer control panel using your BHost username (email address) and password and confirm your contact details.

Network Expansion

BHost run a network from London/Amsterdam with multiple 10Gbps uplinks and some peering in each site. We will be moving the BHost London network behind our own so that BHost customers can take advantage of our larger capacity uplinks and significantly better peering arrangements, which includes transit-free connections to every major UK ISP.

We’re also taking the opportunity to significantly improve the connectivity to our Cambridge data centre. We currently have two uplinks via different London data centres. We will replace one of these links with a direct connection to Amsterdam, and bring both up to 10Gbps. Combined with BHost’s existing London/Amsterdam connection, this will create a 10Gbps ring around London, Cambridge and Amsterdam, complementing our 50Gbps ring around our three London sites. This will provide increased bandwidth and improved resiliency for our Cambridge customers, whilst also providing a second London/Amsterdam link to improve resilience within the BHost network.

BHost Amsterdam customers will gain direct UK connectivity through our extensive London peering. We will gain the Amsterdam Internet Exchange connection (AMSIX) from BHost, bringing improved European connectivity to all London customers. We expect to be able to substantially increase the number of AMSIX peers, improving EU connectivity for all customers.

Cloud expansion

BHost’s London presence is in the Meridian Gate (MER) data centre. We already have a significant footprint in MER, although it’s not currently available as a zone in our public cloud. We’re investing in new hardware to deploy in Meridian Gate which is both substantially faster and more power-efficient than the current hosts. We’ll be deploying this into our existing suite in MER, and then migrating BHost servers into it. BHost customers will see a small window of scheduled downtime as we migrate each server, but should then see significantly improved performance on the new hardware.

Our Amsterdam and US presences will give additional options to customers that need to be physically hosted within the EU or US. We expect this to become more relevant after Brexit, when the UK and EU may have diverging regulatory requirements.

Additional services

All BHost customers can now take advantage of additional Mythic Beasts services including management services for virtual servers, domain registration and DNSSEC-enabled, API-backed DNS hosting.

Support

Mythic Beasts have a larger support team and we’re very well placed to provide significantly improved customer service to all of our new customers. Of course, we do expect the period immediately after the transition to be very busy as customers become familiar with the new billing arrangements, and we get to grips with supporting BHost’s services. We will have additional staff during this period, but please be patient if support responses are a little slower than usual.

Flatpak: pre-assembled furniture applications for Linux

February 23rd, 2018

Flatpack is furniture you build yourself. Flatpak is preassembled applications for Linux. This is apparently not at all confusing. (image thanks to https://www.flickr.com/photos/51pct/)

Flatpak provides Linux desktop applications in a secure sandbox which can be installed and run independently of the underlying Linux distribution. Application developers can produce a single Flatpak, selecting the versions of the libraries that their application is built, tested and run against, so it’s easy for users on any Linux OS to get exactly what the application developer intended.
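For anyone who hasn’t tried it, the user side is a single command-line tool. As a quick sketch (org.gnome.gedit is just an example application ID), adding Flathub and installing an app looks like this:

  # Add the Flathub repository, then install and run an application from it
  flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
  flatpak install flathub org.gnome.gedit
  flatpak run org.gnome.gedit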

Flathub is a distribution service that makes sure Flatpaks are available for popular Linux desktop applications. At its heart is a private cloud running Buildbot, which builds popular free/open-source Linux desktop apps in Flatpak format. This lives in Mythic Beasts’ Cambridge data centre.

At Mythic Beasts we like this idea so much we offered them lots of free bandwidth (100TB) to help get them started. We’ve now upgraded this with a pair of virtual machines in our core Docklands sites to provide redundancy and more grunt for traffic serving.


Some of their users noticed and were appreciative immediately:

2017-02-23 16:30 <irc> wow! Flathub is *so* much faster i’m getting like 10 MB/s compared to less than 1 this morning … and the search is now instant
2017-02-26 11:35 <Persi> Flathub is _really_ fast now, great job to whoever is responsible 🙂

Capacity upgrades, cheaper bandwidth and new fibre

December 8th, 2017

We don’t need these Giant Scary Laser stickers yet.

We’ve recently upgraded both our LONAP connections to 10Gbps at our two London POPs, bringing our total external capacity to 62Gbps.

We’ve been a member of LONAP, the London Network Access Point, since we first started running our own network. LONAP is an internet exchange, mutually owned by several hundred members. Each member connects to LONAP’s switches and can arrange to exchange traffic directly with other members without passing through another internet provider. This makes our internet traffic more stable because we have more available routes, faster because our connections go direct between source and recipient with fewer hops and usually cheaper too.

Since we joined, both we and LONAP have grown. Initially we had two 1Gbps connections, one in each of our two core sites. If one failed the other could take over the traffic. Recently we’ve been running both connections near capacity much of the time and in the event of failure of either link we’d have to fall back to a less direct, slower and more expensive route. Time to upgrade.

The upgrade involved moving from a copper CAT5e connection to optic fibre. As a company run by physics graduates, this is an excellent excuse to add yet more LASERs to our collection. Sadly the LASERs aren’t very exciting: at 1310nm they’re invisible to the naked eye, and for safety reasons they’re very low powered (~1mW). Not only will they not set things on fire (bad), they also won’t blind you if you accidentally look down the fibre (good). This is not universally true of all optic fibre though; DWDM systems can have nearly 100 invisible laser beams in the same fibre, each at 100x the power output. Do not look down optic fibre!

The first upgrade, at Sovereign House, went smoothly, bringing online the first 10Gbps LONAP link. Harbour Exchange proved a little more problematic. We initially had a problem with an incompatible optical transceiver. Once that was replaced, we saw a further issue with the link being unstable, which was resolved by changing the switch port and optical transceiver at LONAP’s end. We then had further low-level bit errors resulting in packet loss for large packets. This was eventually traced to a marginal optical patch lead. Many thanks to Rob Lister of LONAP support for quickly resolving this for us.

With the upgrade completed, we now have two 10Gbps connections to LONAP, in addition to our two 10Gbps connections into the London Internet Exchange and multiple 10Gbps transit uplinks, as well as some 1Gbps private connections to some especially important peers.

To celebrate this we’re dropping our bandwidth excess pricing to 1p/GB for all London-based services. The upgrades leave us even better placed to offer very competitive quotes on high-bandwidth servers, as well as IPv6 and IPv4 transit in Harbour Exchange, Meridian Gate and Sovereign House. Please contact us at sales@mythic-beasts.com for more information.

FRμIT: Federated RaspberryPi MicroInfrastructure Testbed

July 3rd, 2017

The participants of the FRμIT project, distributed Raspberry Pi cloud.

FRμIT is an academic project that looks at building and connecting micro-data-centres together, and at what can be achieved with this kind of architecture. Currently they have hundreds of Raspberry Pis, and they’re aiming for 10,000 by the end of the project. They invited us to join them: we’ve already solved the problem of building a centralised Raspberry Pi data centre, and they wanted to know if we could advise and assist their project. We recently joined them at the Cambridge University Computer Lab for their first project meeting.

Currently we centralise computing in data centres, as it’s cheaper to pick up the computers and move them to the heart of the internet than it is to bring extremely fast (10Gbps+) internet everywhere. This model works brilliantly for many applications, because a central computing resource can support large numbers of users, each connecting with their own smaller connections. It works less well when the source data is large and in somewhere with poor connectivity, for example a video stream from a nature reserve. There are also other types of application, such as Seti@Home, which have huge computational requirements on small datasets, and for which distributing work over slow links is effective.

Gbps per GHz

At a recent UK Network Operator Forum meeting, Google gave a presentation about their data centre networking, where they built precisely the opposite architecture to the one proposed here. They have a flat LAN with the same bandwidth between any two points, so that all CPUs are equivalent. This involves around 1Gbps of bandwidth per 1GHz of CPU. It simplifies your software stack, as applications don’t have to try to place CPU close to the data, but it involves an extremely expensive data centre build.

This isn’t an architecture you can build with the Raspberry Pi. Our Raspberry Pi cloud is about as close as you can manage, with 100Mbps per 4×1.2GHz of cores. This is about 1/40th of the network capacity required to run Google-architecture applications. But that’s okay: other applications are available. As FRμIT scales geographically, the bandwidth will become much more constrained – it’s easy to imagine a cluster of 100 Raspberry Pis sharing a single low-bandwidth uplink back to the core.
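As a quick back-of-envelope check of that ratio using the numbers above:

  # Gbps of network per GHz of CPU, Pi 3 vs the ~1 Gbps/GHz Google figure
  echo "scale=4; 0.1 / 4.8" | bc        # 100Mbps over 4×1.2GHz ≈ .0208 Gbps/GHz
  echo "scale=4; 1 / (0.1 / 4.8)" | bc  # ≈ 48 – or ~1/40 counting four cores at a nominal 1GHz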

These constraints immediately lead to all sorts of interesting and hard questions about how to write a scheduler, as you need to know in advance the likely CPU/bandwidth mix of your distributed application in order to work out where it can run. Local data distribution becomes important – 100+ Pis downloading updates and applications may saturate the small backbone links. They also have a variety of hardware types, from the original Pi Model B to the newer and faster Pi 3, and possibly even some Pi Zero Ws.

Our contribution

We took the members of the project through how our Raspberry Pi Cloud is built, including how a Pi is provisioned, how the network and operating system are set up, and the back-end for the entire process from clicking “order” to a booted Pi awaiting customer login.

In discussions of how to manage a large number of federated Raspberry Pis, we were pleased to find considerable agreement with our method of managing lots of servers: use OpenVPN to build a private network and route a /48 of IPv6 space to it. This enables standard server management tools to work, even where the Raspberry Pis are geographically distributed behind NAT firewalls and other creative network configurations.
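By way of illustration (the prefix below is from the 2001:db8::/32 documentation range rather than a real allocation), the effect is that a management host can simply route the /48 at the VPN tunnel and reach any Pi directly:

  # On the management host: send the whole /48 down the VPN tunnel
  ip -6 route add 2001:db8:f00d::/48 dev tun0
  # Any Pi that has joined the VPN and taken an address from the /48 is then
  # directly reachable with ordinary tools, NAT firewalls notwithstanding
  ping6 2001:db8:f00d:42::1
  ssh pi@2001:db8:f00d:42::1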

Donate your old Pi

If you have an old Raspberry Pi, perhaps because you’ve upgraded to a new Pi 3, you can donate it directly to the project through PiCycle. They’ll then recycle your old Raspberry Pi into the distributed compute cluster.

We’re looking forward to their discoveries and enjoyed working with the researchers. When we build solutions for customers, we’re aiming to minimise the number of unknowns and de-risk the solution. By contrast, tackling difficult unsolved problems is the whole point of research: if they already knew how to build the system, they wouldn’t bother trying.

IPv6 Update

November 1st, 2016

Sky completed their IPv6 rollout – any device that comes with IPv6 support will use it by default.

Yesterday we attended the annual IPv6 Council to exchange knowledge and ideas with the rest of the UK networking industry about bringing forward the IPv6 rollout.

For the uninitiated, everything connected to the internet needs an address. With IPv4 there are only 4 billion addresses available, which isn’t enough for one per person – let alone one each for my phone, my tablet, my laptop and my new internet-connected toaster. IPv6 is the new network standard with an effectively unlimited number of addresses, able to support an effectively unlimited number of devices. The hard part is persuading everyone to move onto the new network.
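To put numbers on “effectively unlimited” (bc just does the exponentiation):

  echo "2^32" | bc   # IPv4: 4294967296 addresses, about 4.3 billion
  echo "2^128" | bc  # IPv6: 340282366920938463463374607431768211456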

Two years ago when the IPv6 Council first met, roughly 1 in 400 internet connections in the UK had IPv6 support. Since then Sky have rolled out IPv6 everywhere and by default all their customers have IPv6 connectivity. BT have rolled IPv6 out to all their SmartHub customers and will be enabling IPv6 for their Homehub 5 and Homehub 4 customers in the near future. Today 1 in 6 UK devices has IPv6 connectivity and when BT complete it’ll be closer to 1 in 3. Imperial College also spoke about their network which has IPv6 enabled everywhere.

Major content sources (Google, Facebook, LinkedIn) and CDNs (Akamai, Cloudflare) are all already IPv6-enabled. This means that as soon as you turn on IPv6 on an access network, over half your traffic flows over IPv6 connections. With Amazon and Microsoft enabling IPv6 by default in stages on their public clouds, IPv6 traffic will continue to grow. Already, for some ISPs, IPv6 is the dominant protocol. The Internet Society predicts that IPv6 traffic will exceed IPv4 traffic around two to three years from now.

LinkedIn and Microsoft both spoke about deploying IPv6 in their corporate and data centre environments. Both companies are suffering exhaustion of private RFC1918 address space – there just aren’t enough 10.a.b.c addresses to cope with organisations of their scale so they’re moving now to IPv6-only networks.

Back in 2012 we designed and deployed an IPv6-only architecture for Raspberry Pi, and have since designed other IPv6-only infrastructures including a substantial Linux container deployment. Educating the next generation of developers about how networks will work when they join the workforce is critically important.

More bandwidth

October 19th, 2016

We’ve added 476892 kitten pictures per second of capacity.

We’ve brought up some new connectivity today; we’ve added a new 10Gbps transit link out of our Sovereign House data centre. This gives not only more capacity but also some improved DDoS protection options with distance-based blackholing.

We also added a 1Gbps private peering connection to IDNet. We’ve used IDNet for ADSL connections for a long time, not least for their native IPv6 support. A quick inspection shows that 17% of traffic over this private link is native IPv6.

DNSSEC now in use by Raspberry Pi

May 12th, 2016

Over the past twelve months we’ve implemented Domain Name System Security Extensions (DNSSEC), initially by allowing the necessary records to be set with the domain registries, and then in the form of a managed service which sets the records, signs the zone files, and takes care of regular key rotation.

Our beta program has been very successful: lots of domains now have DNSSEC and we’ve seen very few issues. We thought that we should do some wider testing with a larger number of users than our own website, so we asked some friends of ours with a busy website if they felt brave enough to give it a go.

Eben Upton> I think this would be worth doing.
Ben Nuttall> I'll go ahead and click the green button for each domain.
-- time passes --
Ben Nuttall> Done - for all that use HTTPS.

So now we have this lovely graph that indicates we’ve secured DNS all the way down the chain for every request. Mail servers know for definite that they have the correct address to deliver mail to, and web requests know they’re at the correct webserver.
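If you want to check a domain yourself, ask a validating resolver; the ‘ad’ (authenticated data) flag in the reply means the answer passed DNSSEC validation, and +dnssec includes the signatures:

  dig +dnssec raspberrypi.org SOA
  # In the reply header, 'ad' means the resolver validated the answer, e.g.
  # ;; flags: qr rd ra ad; ...
  # and the answer section carries RRSIG records alongside the SOA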

The only remaining task is to remove the beta label in our control panel.

Raspberry Pi DNSSEC visualisation, click for interactive version

Additional Managed Rack Capacity

March 14th, 2016

We’ve spent even more time than usual in data centres recently as we’ve been kitting out our new cage in the Meridian Gate data centre.

Much of the new capacity is being deployed as “managed racks”. Racks are generally supplied with the bare essentials of electricity, cooling and locked doors. At Mythic Beasts, we transform them into managed racks, adding all the features you need to effectively administer your equipment remotely, including:

Logging serial consoles

  • Internet connectivity – we’ve got 10Gbps connections onto both LINX networks, connecting at different sites. We’ve also got multiple transit providers, and are present on the LONAP peering exchange. Our network has native IPv6 support, and if you have your own address space, we can provide you with BGP feeds from our routers. We can also offer private LANs, both as VLANs or as physically separate networks.
  • Remote power management – power cycle your server immediately, at any time using our customer control panel.
  • Serial connectivity – a 115.2kbps serial connection may seem a bit old-fashioned in an age when we’re wiring our switches together at 40Gbps, but it remains an extremely effective mechanism for out-of-band control of servers and other equipment, particularly when coupled with our logging serial console software.
  • On-site support – all of our London facilities have 24/7 access to the data centres’ on-site engineers.  We are also able to arrange for our own staff to carry out routine maintenance, such as replacing failed hard drives.

Meridian Gate is the third London data centre in which we have a presence, along with Sovereign House and Harbour Exchange, with the three sites connected by our own dark fibre ring.

IPv4 is so last century

November 11th, 2015

A scary beast that lives in the Fens.

Fenrir is the latest addition to the Mythic Beasts family. It’s a virtual machine in our Cambridge data centre which runs our blog. What’s interesting about it is that it has no IPv4 connectivity.

eth0 Link encap:Ethernet HWaddr 52:54:00:39:67:12
     inet6 addr: 2a00:1098:0:82:1000:0:39:6712/64 Scope:Global
     inet6 addr: fe80::5054:ff:fe39:6712/64 Scope:Link
     UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

It is fronted by our Reverse Proxy service – any connection over IPv4 or IPv6 arrives at one of our proxy servers and is forwarded on over IPv6 to fenrir, which generates and serves the page. If it needs to make an outbound connection to another server (e.g. to embed our Tweets), it uses our NAT64 service, which proxies the traffic for it.
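For the curious, NAT64 works by embedding the target’s IPv4 address in a synthesised IPv6 address. As an illustrative check (this uses the RFC 6052 well-known prefix; an actual deployment may use its own), a host on a NAT64/DNS64 network can discover the translation prefix like this:

  # Look up the special name reserved for NAT64 prefix discovery (RFC 7050)
  dig +short AAAA ipv4only.arpa
  # e.g. 64:ff9b::c000:aa – the well-known prefix with 192.0.0.170 (hex c000:00aa) embedded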

All of our standard management services are running: graphing, SMS monitoring, nightly backups, security patches, and the firewall configuration is simpler because we only need to write a v6 configuration. In addition, we don’t have to devote an expensive IPv4 address to the VM, slightly reducing our marketing budget.

For any of our own services, IPv6-only is the new default. Our staff members have to make a justification if they want to use one of our IPv4 addresses for a service we’re building. We now also need to see how many addresses we can reclaim from existing servers by moving to IPv6 + Proxy.

IPv6 Graphing

October 15th, 2015

it’s a server graph!

One of the outstanding tasks for full IPv6 support within Mythic Beasts was to make our graphing server support IPv6-only hosts. In theory this is trivial; in practice it required a bit more work.

Our graphing service uses munin, and we built it on munin 1.4 nearly five years ago; we scripted all the configuration and it has basically run itself ever since. When we added our first IPv6-only server, it didn’t automatically get configured with graphs. On investigation we discovered that munin 1.4 simply didn’t support IPv6 at all, so the first step was to build a new munin server based on Debian Jessie with munin 2.0.

Our code generates the configuration file by printing a line for each server to be monitored, including its IP address. For IPv4 you print the address as normal, e.g. 127.0.0.1; for IPv6 you have to enclose the address in square brackets, e.g. [2a00:1098:0:82:1000:0:1:1]. A small patch later to spot which type of address is which, and we have a valid configuration file.
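By way of illustration, the generated host entries in munin.conf end up looking something like this (the hostnames are placeholders; the IPv6 address is the style quoted above):

  [web1.example.com]
      address 127.0.0.1
  [fenrir.example.com]
      address [2a00:1098:0:82:1000:0:1:1]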

Lastly we needed to add the IPv6 address of our munin server into the configuration file of all the servers that might be talked to over IPv6. Once this was done, as if by magic, thousands of graphs appeared.