Raspberry Pi on Raspberry Pi

June 22nd, 2019

Question: Is the Raspberry Pi 4 any good?
Answer: It’s good enough to run its own launch website with tens of millions of visitors.

Raspberry Pi 4 with PoE mounting points already attached.

The Raspberry Pi 4 is out. It’s a quad-core ARM Cortex-A72 running at 1.5GHz with 4GB of RAM and native 1Gbps ethernet. According to our benchmarks (PHP 7.3 and WordPress), this makes it about 2.5x the speed of the 3B+, thanks to the much faster core design and the slight clock speed boost. The downside is that it uses more power: idle power consumption is up slightly to about 3W, and peak is now around 7W, up from 5W. It also has improved video features and USB 3.

We obtained an early sample and benchmarked it running the Raspberry Pi website. We used the main WordPress site, which hosts the www.raspberrypi.org blog and has historically been the most CPU-intensive site we provide. We now see complete page generation in about 0.8s, compared to 2.1s for the 3B+. Obviously in normal operation most pages are served from a cache, so the typical end-user experience is much faster.

We were really excited by the Pi 4 and wanted to have them available in our cloud for launch day. Unfortunately, Eben had some bad news for us: netboot on the Pi 4 is only going to be added in a future firmware update. Netboot is critical to the operation of our cloud, as it prevents customers from bricking the servers. Our dreams were shattered.

Our standard Pi Cloud unit consists of 6x9x2 blocks of Pi 3B servers connected to PoE switches with just one wire per server. They all netboot and are controlled through our control panel and API for customer use. Since the lack of netboot meant we couldn’t just drop the Pi 4 in as a faster version at this time, we went back to the lab and built an alpha Pi 4 Cloud on a smaller scale: 18 Pi 4s that Raspberry Pi have very generously given to us, all connected with gigabit ethernet so we can try out the 2.5x faster CPUs, 3x faster network and 4x the RAM capacity. We deployed this to our Sovereign House data centre, where it connects to our core network.

In full production, we’ll have six racks of Pi 4 stacked back to back.

What we needed then was a test application. We suggested running the main Raspberry Pi website, as we once did with the Pi 3. But with over twice the horsepower per machine we thought we’d dream bigger. How about hosting the Raspberry Pi website on the Raspberry Pi 4, on the Raspberry Pi 4 launch day?

We’ve set up 14 Pi 4s for PHP processing for the main website (56 cores, 56GB RAM), two for static file serving (8 cores, 8GB RAM) and two for memcached (8 cores, 8GB RAM). Late on Friday night we started moving production traffic from the existing virtual machines to the Pi 4 cluster, completing the move shortly after midnight. Every page from the blog since Saturday 22nd June has been generated on a Raspberry Pi 4.

Unfortunately, this configuration isn’t yet ready to become the standard production environment for the Raspberry Pi website. As noted above, the Pi 4s don’t yet support netboot, so these servers use local SD card storage rather than network file storage. This means they can’t be remotely re-imaged and have comparatively unreliable storage. The configuration is also deployed in a single data centre with all servers on a single switch, whereas in normal usage the Raspberry Pi website is hosted simultaneously in two different data centres for redundancy.

To make things more nerve-wracking, the Pi 4 requires Debian Buster, which is a pre-release version of the operating system (full release on July 6th). So it’s a cluster of brand new hardware, with a pre-release operating system and a single point of failure. We very strongly advise our customers not to do this for a mission-critical, super-high-profile website undergoing the most significant product launch in their history. That really isn’t a very good idea.

We once advised Eben that Raspberry Pi probably wouldn’t sell very many computers. He didn’t listen to us then either.

We haven’t moved the entire stack to the Pi 4. The front-end load balancers, download and apt servers are still on non-Pi hardware, split across three data centres (two in London, one in Amsterdam). The Pi 4 hardware looks well-suited to taking over these roles too, although we’ve kept the current arrangement for now, as it’s well tested and allows us to switch back to non-Pi 4 back-ends quickly if needed.

We haven’t moved the databases to the Pi 4 yet either. We’re not going to do that until we can have nice reliable mirrored storage on enterprise SSDs with high write reliability and long write lifetimes attached to the Pis.

Where do we go from here?

Once netboot is available on the Pi 4, we’ll be adding 4-core A72 / 4GB servers to our Pi Cloud at a slightly higher price than the existing Pi 3 servers, reflecting the higher hardware and power costs. We are also planning to investigate virtualisation, as 1-core / 1GB Raspberry Pi VMs may be of interest to existing Pi 3 users. 64-bit kernel support, and potentially a 64-bit userland, would also now be worth investigating.

If you like the idea of Pi 4 in the cloud, a Pi 4 VM in the cloud or 64 bit ARM in the cloud, tell us your plans at sales@mythic-beasts.com.

Out standing in a field

May 24th, 2019

Mythic Beasts: out standing in a field

Last year the Cambridge Beer Festival tried accepting payments by contactless card. This didn’t work very well. They built a wireless LAN around the bar so that their card payment machines could process transactions, with an uplink provided by a Raspberry Pi with a 4G dongle attached. This wasn’t really reliable enough for a full payment system, but it worked as a proof of concept.

To improve things for this year we had a conversation with some friends at the recently incorporated Light Blue Fibre Ltd, and between us we were able to arrange for Jesus Green to have a fibre connection and an interlink to Mythic Beasts. As this is a prototype we’re running below optimum speeds, so we’ve delivered a relatively leisurely 1Gbps to the festival. The access points will happily deliver 150Mbps symmetric at any point on the bar if you have a quick enough wifi card in your laptop. We’ve still got the 3G uplink enabled as a backup, just in case someone slices the fibre.

If my phone had an Ethernet socket we’d be ten times as fast.

This year the plan was to restrict things to the tills and the administration network. However, being techies at a beer festival, there is a tiny chance we may have been slightly drunk and enabled public wifi with a 100Mbps rate limit. This works well around the bar, but there are nowhere near enough access points to cover the outdoor areas and the onsite router is limited to 500 devices. It’s not yet production-ready for 5,000 beer-drinking visitors, but we have a beer mat and a pencil and we’re sketching out ideas for next year.

Hosting made Sympl

May 21st, 2019

Sympl is so simple it’s even usable by Cambridge graduates

We’re pleased to announce that we are now supporting the Sympl open source project.  Sympl is a fork of Symbiosis, a platform that makes hosting websites and email on a virtual or dedicated server simple.  Once installed, configuring a new website, or creating a new email address and mailbox, is as simple as creating a new directory.  Web server, mail server and DNS configuration is all taken care of for you.
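
As a rough sketch, assuming Sympl keeps the Symbiosis /srv layout (the domain and mailbox name here are just examples), bringing up a new site and mailbox is little more than:

sudo mkdir -p /srv/example.com/public/htdocs      # document root for the new website
sudo mkdir -p /srv/example.com/mailboxes/info     # mailbox for info@example.com

The corresponding web server, mail server and DNS configuration is then generated for you in the background.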

We’ve already taken the first steps towards integrating Sympl into our infrastructure by implementing support for our DNS API in OctoDNS.  For our next step, we will be adding support for OctoDNS to Sympl.  This means that it becomes possible to use Sympl with our DNS infrastructure, but equally you can use any other provider supported by OctoDNS (we don’t believe in lock-in!).

We’re now very pleased to welcome Paul Cammish, the newest member of the Mythic Beasts team.  Paul has considerable experience, having worked at a number of different ISPs since 2000, most recently at Bytemark.  Paul created the Sympl project earlier this year, in order to provide ongoing support and enhancements for the platform.

We’re very excited by the possibilities that Sympl provides, and have some interesting ideas for future developments once we’ve dealt with the immediate priorities of DNS integration, and support for the upcoming Debian Buster release.

The source code for Sympl is now available in our self-hosted GitLab instance.

Moving to Mythic Beasts just got easier

April 9th, 2019

We’ve just rolled out a major overhaul of our DNS management interface. We hope that you’ll find the new interface faster and easier to use. As well as improvements to the user interface, we’ve also added the ability to import zone files. This means that if you’ve got a domain that is currently hosted with another provider, you can now easily transfer all of your DNS configuration to our servers in bulk (provided that you can get them to give you a copy of your current zone file).
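
For example, if your current provider permits zone transfers (AXFR) from your address, one way to grab a copy of the zone is with dig (the nameserver name here is illustrative):

dig axfr example.com @ns1.current-provider.example > example.com.zone

The resulting file can then be imported in one go.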

Our DNS management interface is included with all domain registrations.  It’s also available for domains registered elsewhere for customers of our other services, including hosting accounts, virtual servers, dedicated servers and Raspberry Pi servers.

The DNS interface includes DNS API access, allowing you to support dynamic DNS and to automate other DNS management tasks.

We believe in retaining customers through good service rather than lock-in, so naturally there’s a corresponding zone file export feature.

Round-robin DNS – another use for ANAMEs

March 22nd, 2019

Sensible people don’t like to hard-code IP addresses in lots of different places in DNS. It’s better to assign the address a name and then reference that name: it makes it clearer what’s what, and if you ever need to change the IP, you’ve only got to do it in one place.

CNAME records can be a good way to do this, by aliasing one DNS name to another. Unfortunately, the DNS specs prevent you from using CNAMEs in various places where you might want to, most commonly at the root of your domain (the dreaded “CNAME and other data” problem).

This is where ANAME pseudo-records come in. They look just like a CNAME record, but rather than being published in the DNS directly, our server resolves the target name and converts them into A and AAAA records. This allows you to get the benefits of a CNAME in places where a CNAME is not legal.

This week a customer suggested another use for ANAME records that we’d not previously thought of: round-robin DNS, that is, a single DNS name that points to multiple servers. As you can’t have multiple CNAME records for the same hostname, implementing round-robin DNS normally means hard-coding A and AAAA records into your zone file, like this:

proxy.mythic-beasts.com. 3600	IN	A	93.93.129.174
proxy.mythic-beasts.com. 3600	IN	A	46.235.225.189
proxy.mythic-beasts.com. 3600	IN	AAAA	2a00:1098:0:80:1000:3b:1:1
proxy.mythic-beasts.com. 3600	IN	AAAA	2a00:1098:0:82:1000:3b:1:1

Which is messy. Wouldn’t it be nicer to use the names of the servers involved? Like this:

proxy.mythic-beasts.com. 3600	IN	CNAME	 rproxy46-sov-a.mythic-beasts.com.
proxy.mythic-beasts.com. 3600	IN	CNAME    rproxy46-hex-a.mythic-beasts.com.

Sadly, the spec says you can’t do that, but thanks to a minor tweak to our DNS control panel code, you can now do it with ANAME records. Simply specify multiple ANAME records for your host name, and we’ll go and find all A and AAAA records for all of the hosts that are referenced.
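
As an illustration of what that expansion does (this is just a sketch using dig, not our actual implementation), two ANAME records pointing at the proxies above would end up publishing the combined A and AAAA records of their targets:

for target in rproxy46-sov-a.mythic-beasts.com rproxy46-hex-a.mythic-beasts.com; do
  dig +short A    "$target" | sed 's/^/proxy.mythic-beasts.com. 3600 IN A    /'
  dig +short AAAA "$target" | sed 's/^/proxy.mythic-beasts.com. 3600 IN AAAA /'
done

which gets you back to the equivalent of the hard-coded records in the first example, without the IP addresses ever appearing in your zone file.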

Thanks to @grayvsearth for the suggestion on this one.

ANAME records are available in our DNS management interface, which is included with all domain registrations, and available for free on other domains for customers of other services. Other features include a DNS API, allowing you to obtain Wildcard Let’s Encrypt certificates.

Mythic Beasts gaan naar Nederland

February 20th, 2019

The art warehouses in Amsterdam look much prettier than the data warehouses.

Back in July 2018, Mythic Beasts acquired Bhost, giving us additional virtual machine (VM) clusters in London, Amsterdam and California.

Today we’re pleased to announce that we’ve deployed a substantial new VM cloud to Amsterdam, running our own VM platform. Virtual machines in Amsterdam are available to purchase immediately through our website in sizes from 1GB/1vCPU to 160GB/12vCPUs, and with both SSD and spinning rust disk options. Server management and backup options are also available.

Thanks to Brexit-related regulatory uncertainty, some of our existing clients informed us that they must be hosted outside of the UK before 29th March. Deploying capacity on our own platform in Amsterdam means that we can migrate virtual servers directly to the new location.

Once we’ve dealt with the immediate Brexit-driven server moves, we’ll be looking at migrating former-Bhost VMs into this new cloud, giving a significant performance boost in the process.

Deploying the Amsterdam VM cloud is a significant milestone in the integration of the Bhost infrastructure into our own. The integration provides improved performance and redundancy for both Mythic Beasts and Bhost customers whilst simultaneously cutting our operating costs. In preparation for this, we completed upgrades to our core network last October. The existing fibre ring around our three main London sites, which is currently lit at 50Gbps, is now complemented by a 10Gbps ring around London (HEX) ⟺ Cambridge ⟺ Amsterdam ⟺ London (MER). This replaces the old 2x1Gbps connectivity from Cambridge to London with diverse 10Gbps feeds to London and Amsterdam. Our network has gained an additional 10Gbps transit in Amsterdam (NTT) and we are also now connected on the Amsterdam Internet Exchange (AMS-IX).

On a trip to deploy new routers, Pete even managed a tour of the city on foot in just over three hours.

Primary reasons for choosing Amsterdam include it being a flat country that’s easy to cycle around, a remarkably nice overnight ferry journey and superb boy bands asking us to stay. Secondary reasons are all boring, such as a well-developed market for data centres and internet transit, a world-class internet exchange and remarkably few insane British politicians. We’re looking forward to the first Anglo-Dutch cricket match.

Let’s Encrypt wildcard certificates

February 15th, 2019

Wildcard… sounds a bit like wildcat… cat pics!
Photo by Peter Trimming, CC BY 2.0

We’ve just made some changes to our plugin for dehydrated in order to better support Let’s Encrypt wildcard certificates.

Unlike normal certificates, which can be obtained using a web-based challenge, Let’s Encrypt’s wildcard certificates require a DNS-based challenge. In other words, you need to prove that you can control the DNS for the domain for which you are requesting a wildcard certificate.
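
Concretely (the domain and value here are illustrative), the challenge boils down to publishing a TXT record named after the domain, containing a value derived from a token supplied by Let’s Encrypt:

_acme-challenge.example.com. 300	IN	TXT	"<value derived from the challenge token>"

Let’s Encrypt looks this record up before issuing the certificate, and it can be removed afterwards.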

Mythic Beasts provides a simple API for controlling DNS, which makes it possible to automate the process of responding to these challenges, and we provide a plugin for the popular dehydrated client that does just this.

We’ve just deployed a minor change which means that it’s now possible to obtain a single certificate for a domain, and a wildcard under that domain.

Access to our DNS API is included with all domain registrations. For more information, please see our instructions on using DNS-based challenges with Let’s Encrypt. Please note that in order to obtain wildcard certificates you need to be using dehydrated version 0.6.0 or later.
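
As a sketch of what this looks like in practice (the domain and hook path are illustrative; the hook is the dehydrated plugin mentioned above), a combined apex-plus-wildcard certificate can be requested with:

dehydrated --cron \
  --domain example.com --domain '*.example.com' \
  --challenge dns-01 \
  --hook /path/to/mythic-dns-hook.sh

The two --domain flags put both names on a single certificate, and the hook script answers the dns-01 challenges via our DNS API.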

Mythic Beasts acquires VMHaus

November 26th, 2018

Our pet wyvern was hungry again.

We’re pleased to announce that Mythic Beasts has acquired VMHaus, a virtual server provider with facilities in London and Los Angeles. We will continue to run VMHaus as a separate brand selling low-cost, prepaid virtual servers, which we believe will complement our own virtual server products well. We’re also pleased to announce that VMHaus co-founder Basil Fillan has joined Mythic Beasts as a full time employee. Basil has been responsible for the development of the VMHaus technical infrastructure, and will be ideally placed to help us provide support to VMHaus customers.

In the short term, VMHaus customers will see no changes to their services. Payments and invoicing will continue to be through VMHaus Ltd, and we will continue to accept new orders for VMHaus products. In the medium term, we’re planning improvements to both the VMHaus platform and our own virtual server infrastructure, based on our combined experiences of developing the two systems.

On the VMHaus side we hope to be able to start selling virtual servers in Amsterdam early in the new year, and also be able to offer IPv6-only virtual servers at a discounted rate. VMHaus customers will also be able to take advantage of our other services such as domain registration and backups.

On the Mythic Beasts side, we expect to be able to offer service upgrades thanks to the economies of scale resulting from the acquisitions of VMHaus and of BHost this summer.

OpenWRT install to RAM – run iftop on a router with very limited flash

November 23rd, 2018

OpenWRT is awesome, as it allows you to run proper Linux tools on your home router. I’m currently using a very old, under-specced TP-Link box with 32MB of RAM but just 4MB of flash storage. This is just enough to get what I need installed, but one thing I’ve always wanted to do is use iftop to quickly see what’s using all the bandwidth. Unfortunately iftop, with its dependencies on libpcap and libncurses, just won’t fit into a 4MB image.

I recently stumbled across opkg’s install-to-RAM option, allowing me to use the 32MB of RAM to install the package, with the minor and obvious downside that it gets uninstalled when the router gets rebooted. For something like iftop, which is used for ad-hoc diagnostics, this isn’t a big issue.

Installing to RAM puts the packages under /tmp, so a little effort is required to make sure that libraries and other resources can be found. I now have the following shell script which installs iftop if it isn’t already, sets some environment variables and invokes iftop:

#!/bin/sh

# Install iftop to RAM (under /tmp) if it isn't there already.
if [ ! -f /tmp/usr/bin/iftop ] ; then
  opkg update
  opkg install -d ram iftop
fi

# Point the dynamic linker and terminfo lookup at the copies installed under /tmp.
export LD_LIBRARY_PATH=/tmp/usr/lib
export TERM=xterm
export TERMINFO=/tmp/usr/share/terminfo/

# Run the RAM-installed iftop, passing through any arguments.
/tmp/usr/bin/iftop "$@"

Fortunately I do have enough free space on flash storage to store the above script.
Obviously a similar approach could be used with other packages that are only needed “on demand”.
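
For example (tcpdump here is just an illustration of another occasionally-needed tool), the same trick works for any package that fits in RAM:

opkg update
opkg install -d ram tcpdump
# tcpdump installs its binary under /tmp/usr/sbin; adjust the path if your package differs
LD_LIBRARY_PATH=/tmp/usr/lib /tmp/usr/sbin/tcpdump -i br-lan -n

As with iftop, it disappears on the next reboot, which is fine for ad-hoc diagnostics.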

libssh emergency update

October 17th, 2018

An attack so simple my cat could get root on your server.

Managed customers of Mythic Beasts with libssh installed will have just received a notification that we updated it without warning or testing.

This is obviously bad practice, so what were we thinking?

A security advisory for libssh (CVE-2018-10933) has just come out, and it’s very bad. To paraphrase:

libssh -> hello new user
user -> can I have a root shell
libssh -> can you authenticate?
user -> yes but I'm not going to
libssh -> okay, have a root shell

This is completely secure, unless the client is prepared to lie in order to exploit your system. In the late 1990s some of our founders might have once exploited an online quiz in exactly the same way to get perfect scores. Don’t trust the client.

In our risk analysis, the risk of breakage to a customer site through a botched patch is vastly lower than the risk of giving an attacker a root shell, which is why we pushed an emergency update within a few hours of updated packages becoming available.

If this is the first you’ve heard about the issue, we suggest you’d benefit from our Managed Services.