VPS-not-so-lite

June 17th, 2016

We’ve upgraded our VPS Lite service to bring it onto the same hardware platform as our standard VPS offerings. Our base-level virtual server is now the VPS 1, with 1GB RAM and 20GB disk. It’s fractionally more expensive than the old VPS Lite, but offers double the disk and RAM. We’ve also introduced an SSD option, in addition to traditional spinning disks, for faster IO performance.

Customers of the existing VPS Lite service will be migrated onto the new hardware platform, and given the option of keeping their current spec on faster hardware, or upgrading to the new VPS 1 spec.

Naturally, the new VPS 1 comes with IPv6 as standard. IPv4 is an option, although our IPv4 to IPv6 reverse proxy and the availability of NAT64 for outbound traffic mean that most websites can be hosted easily on an IPv6-only server.

Full details of the current specifications can be found on the Virtual Servers page.

I know that I know nothing

May 13th, 2016
Over a thousand people put together the 43,000 packages that form the universal operating system.

One of the hazards of going to the pub in Cambridge is that very smart people will occasionally ask you difficult questions. Steve McIntyre, a former Debian Project Leader, asked our advice on how Debian should specify a new central build server. Did we think they’d be best off with lots of RAM or a fast SSD? Was PCI-E attached SSD better than SATA SSD, or would they be better off sticking with cheap, slow but very large spinning hard disks?

On the unusual occasions we build software ourselves it completes very quickly, and for any big, complicated package we’d just install the binary package from Debian. That advice is almost always spot on, unless you are Debian, trying to make the binary packages in the first place.

We thought about this for a short time and proclaimed confidently that we didn’t know the answer.

However, at Mythic Beasts we have a very strong science background. We suggested the right plan was to test it: take a big machine with multiple types of storage (spinning disk, SATA-attached SSD and PCI-E flash) and try it out. Shortly afterwards our brains kicked in and we realised that this looked just like the new VM hosts we were commissioning, and with only a slight rearrangement to our plans we could lend one to Debian. Some weeks of work later, the answer is that an SSD makes a huge difference for the working filesystem; otherwise, it doesn’t matter.

DNSSEC now in use by Raspberry Pi

May 12th, 2016

Over the past twelve months we’ve implemented DNS Security Extensions (DNSSEC), initially by allowing the necessary records to be set with the domain registries, and then in the form of a managed service which sets the records, signs the zone files, and takes care of regular key rotation.

Our beta program has been very successful: lots of domains now have DNSSEC and we’ve seen very few issues. We thought we should do some wider testing with rather more users than our own website, so we asked some friends of ours with a busy website if they felt brave enough to give it a go.

Eben Upton> I think this would be worth doing.
Ben Nuttall> I'll go ahead and click the green button for each domain.
-- time passes --
Ben Nuttall> Done - for all that use HTTPS.

So now we have this lovely graph showing that we’ve secured DNS all the way down the chain for every request: mail servers know for certain that they have the correct address to deliver mail to, and web clients know they’ve reached the correct webserver.
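
If you want to check this yourself, dig can show whether answers are signed; this is just an illustrative query, not part of our setup:

dig +dnssec MX raspberrypi.org

The answer section should include RRSIG records alongside the MX records, and a validating resolver will also set the “ad” (authenticated data) flag in the response.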

The only remaining task is to remove the beta label in our control panel.

Raspberry Pi DNSSEC visualisation, click for interactive version

PROXY protocol + nginx = broken header

May 9th, 2016

We recently announced support for PROXY protocol in our IPv4 to IPv6 reverse proxy, and happily linked to the instructions for making it work with NGINX. One of our customers has pointed out that they didn’t actually work, and we’ve now got to the bottom of why not.

NGINX version

First issue: you need NGINX >= 1.9.10, as there was a bug with using proxy_protocol on IPv6 listeners. If you’re on Debian Jessie, you can get a suitable version from Jessie backports.
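
As a rough sketch, assuming the backports repository isn’t already configured, something like this gets you a new enough NGINX:

echo "deb http://ftp.debian.org/debian jessie-backports main" > /etc/apt/sources.list.d/backports.list
apt-get update
apt-get -t jessie-backports install nginx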

PROXY protocol version

Second issue: NGINX only speaks PROXY protocol v1 and our proxy was attempting to speak v2.

v1 is a human readable plain text protocol, whereas v2 is binary. If you see something like this in the error log:

2016/05/09 11:11:30 [error] 6058#6058: *1 broken header: "

QUIT
!
 ]Y??.????PGET / HTTP/1.1

Then that’s a good sign that you’ve got a v2 reverse proxy talking to you.

We’ve now changed our proxy to only speak PROXY protocol v1 by default. We will look into making this a configurable option in the future. The Apache module seems happy speaking either version.

Whilst we’re here, here are some other failure modes you might see. This, in the access log, is v2 PROXY protocol being spoken to an NGINX that is not configured for PROXY protocol at all:

2a00:1098:0:82:1000:3b:1:1 - - [09/May/2016:11:08:55 +0100] "\x00" 400 172 "-" "-"

And this is v1 PROXY protocol being spoken to NGINX which is not configured for it:

2a00:1098:0:82:1000:3b:1:1 - - [09/May/2016:11:39:30 +0100] "PROXY TCP4 93.89.134.240 46.235.225.189 64221 80" 400 173 "-" "-"

PROXY protocol support for our, err, proxy

April 29th, 2016

We’re increasingly using our IPv4 to IPv6 reverse proxy to host websites on IPv6-only virtual machines. One of the downsides of proxying is that your server doesn’t get to see the client’s real IP address. For non-SSL connections, the proxy can insert an “X-Forwarded-For” header, but SSL is increasingly becoming the norm, and one of the nice things about an SNI-aware reverse proxy is that it doesn’t need to do SSL offload: we don’t need your certificates on our proxy, and your traffic stays encrypted until it hits your server. Of course, this means that we can’t go inserting any headers into your connection either.

Fortunately, there is a solution: PROXY protocol. This is a protocol-agnostic mechanism for passing information from a reverse proxy to a server, including the client IP address.
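
In practice the proxy simply sends one extra line at the very start of the TCP connection, before any HTTP. A v1 header looks like this (addresses and ports are illustrative):

PROXY TCP4 192.0.2.1 203.0.113.5 56324 443

The server reads that line, notes the real client address (the first one), and then carries on with the normal protocol.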

We’ve just added support for PROXY protocol to our reverse proxy:

[Screenshot: the PROXY protocol option in our reverse proxy control panel]

Turning this on allows your server to get the client IP address, but as it’s an additional protocol, not part of HTTP, your server must be expecting it: turning this on and pointing it at a standard HTTP server will result in a broken website.

Most web servers have support for this. NGINX has support built in, and just needs “proxy_protocol” adding after the listen directive:

server {
    listen 80   proxy_protocol;
    listen 443  ssl proxy_protocol;
    ...
}

You will probably also want some additional configuration to actually set the IP address that gets used for logs etc., and also to ensure that you only trust proxy information from the real proxy servers.
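
As a sketch, both of those can be done with the standard ngx_http_realip_module; the address range below is a placeholder, substitute the addresses our proxies actually connect from:

server {
    listen 80   proxy_protocol;
    listen 443  ssl proxy_protocol;

    # take the client address from the PROXY protocol header for logging etc.
    real_ip_header proxy_protocol;
    # ...but only believe it when the connection comes from a trusted proxy (placeholder range)
    set_real_ip_from 2a00:1098:0:82::/64;
    ...
}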

For Apache, support is provided by mod_proxy_protocol, which needs to be installed manually. Once done, configuration is easy:

<VirtualHost *:443>
  ...
  ProxyProtocol On

  CustomLog ${APACHE_LOG_DIR}/access.log "%a %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\""
</VirtualHost>

The CustomLog line instructs Apache to use the real client IP rather than the proxy. You should now see v4 addresses being happily logged on your IPv6 server:

root@vm1:~# tail -n 1 /var/log/apache2/access.log
93.93.130.44 - - [29/Apr/2016:14:05:32 +0100] "GET / HTTP/1.1" 200 321 "-" "curl/7.26.0"

Unfortunately the module doesn’t currently provide a way to restrict enablement to trusted proxies only. As such, you’ll probably want to install a firewall to restrict HTTP/HTTPS traffic to only come from our proxies, as otherwise clients could easily fake their IP address.
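
A minimal sketch with ip6tables; the source range is a placeholder, substitute the addresses our proxies actually connect from:

# accept web traffic from the reverse proxies only
ip6tables -A INPUT -p tcp -m multiport --dports 80,443 -s 2a00:1098:0:82::/64 -j ACCEPT
# and drop web traffic from everywhere else
ip6tables -A INPUT -p tcp -m multiport --dports 80,443 -j DROP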

One thing to watch out for is that although this is applied within a VirtualHost configuration, it’ll actually apply to all virtual hosts on the same IP address and port. This is an unavoidable side effect of the fact that the proxy information is sent before we start talking HTTP. Of course, with IPv6, throwing another IP address at the problem isn’t an issue.

IPv6 only hosting

April 27th, 2016

Last week at the UK Network Operators Forum, Pete gave a talk about our IPv6-only hosting, the progress we’ve made and the barriers we’ve overcome.

It’s now available to view online.

Let’s Encrypt IPv6-only

April 18th, 2016

Let’s Encrypt on a v6-only host

One of the much-requested features for Let’s Encrypt free SSL certificates is support for IPv6-only hosts. Whilst this is promised in the very near future, we’re happy to say that IPv6-only hosts behind our NAT64 and Proxy services work out of the box with Let’s Encrypt.

To test it we took the traditional dogfood approach: this website runs on an IPv6-only VM, and we’ve just enabled Let’s Encrypt SSL support on our own blog. As soon as Let’s Encrypt offers SSL certificates for IPv6-only hosts with no proxy and no NAT64, we’ll give that a try too.

DNS-based domain validation (dns-01)

An alternative approach would be to use dns-01 validation using our DNS API. Our API speaks native v6, so that should work just fine on a truly single-stack IPv6 host.
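
Once the challenge record has been added through the API, you can confirm it is visible before letting the client proceed; the domain here is a placeholder:

dig +short TXT _acme-challenge.example.com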

Without the hot air

April 15th, 2016

It’s with great sadness we learned of the death of Prof Sir David MacKay, FRS. He taught three of the Mythic Beasts founders information theory in 1999–2000, a fascinating and stunningly well-lectured course. The textbook Information Theory, Inference, and Learning Algorithms is freely available to download. Prof MacKay believed it was possible to make the world a better place.

Energy policy done by the numerate.

David’s other textbook, Sustainable Energy, is not only fascinating but vital reading for anyone interested in energy policy. It’s excellent not because it provides answers, but because it teaches the tools to work things out. Can we power the UK only from wind energy? No. Can wind make a substantial contribution? Yes. If we built lots and lots of wind, how much could it contribute? Hopefully around 25%, if you include a lot of offshore.

If more people read and understood the book we’d have fewer articles about amazing solar panels that create power from rain, when the abstract tells us they create microamps and millivolts, throwing away 180W/m2 of solar to create 0.000000001W/m2 from rain.

David’s website withouthotair.com is prepaid far in advance and will remain available for many years to come.

The little computer that did

April 13th, 2016

At the end of March we migrated the Raspberry Pi website from a very big multi-core server to a tiny cluster of eight Raspberry Pi 3s. Here’s a bit more detail about how it worked.

The Pi rack not fooling anyone on April 1st

Booting

For the Raspberry Pi 3 launch we tried out some Pis running in a data centre environment under high load, using the SD card for the root filesystem. They kept crashing: if you exceed the write capability of the card, the delays make the kernel think the storage has failed and the system falls over. We also want to be able to rebuild the filesystem remotely so we can fix a broken Pi without touching it. So we’ve put the root filesystem on a network file server, accessed over NFS.

The Raspberry Pi runs the latest kernel, 4.1.18-v7+, and boots from the SD card with a kernel command line as follows:

dwc_otg.lpm_enable=0 console=ttyAMA0,115200 console=tty1 root=/dev/nfs rootfstype=nfs
  ip=10.46.189.2::10.46.189.1:255.255.255.252::eth0:off 
  nfsroot=10.46.189.1:/export/10.46.189.2 elevator=deadline 
  fsck.repair=yes rootwait

This brings up eth0 with a block of four IP addresses: one for the network, one for broadcast, one for the Pi and one for the network fileserver. It then mounts the NFS filesystem at:

nfsroot=10.46.189.1:/export/10.46.189.2

and uses that as the root filesystem.
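
To spell out that block of four, from the ip= line above:

10.46.189.0/30 (netmask 255.255.255.252)
  10.46.189.0   network address
  10.46.189.1   fileserver (NFS server and gateway)
  10.46.189.2   this Raspberry Pi
  10.46.189.3   broadcast address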

Overly simple introduction to VLANs

On a traditional switch, you plug things in and any ethernet port can talk to any other ethernet port. If you want two different networks you need two different switches, and any computer that needs to be on both networks needs two network ports. In our case we’re trying to give each Raspberry Pi its own private storage network, so each Pi would need its own switch, and the fileserver would need a separate network port for every Raspberry Pi connected to keep them apart. This is going to get expensive very quickly.

Instead we turn on virtual LANs (VLAN). We connect our fileserver to port 24 and create a VLAN for ports 1 & 24, another for 2&24, etc. The switch configuration for the fileserver port specifies these VLANs as “tagged”, meaning our switch adds a header to the front of every packet from a Raspberry Pi port that allows the fileserver to tell which VLAN, and therefore which Raspberry Pi, the packet came from. The fileserver can reply with the same header, and that packet will only be sent to that specific Raspberry Pi. It behaves as if each Raspberry Pi has its own switch.

Network on the fileserver

The fileserver sees each VLAN as a separate network card, named eth0.N where N identifies the VLAN. We can configure them like any other network interface:

auto eth0.10
iface eth0.10 inet static
	address 10.46.189.1
	netmask 255.255.255.252

auto eth0.11
iface eth0.11 inet static
	address 10.46.189.5
	netmask 255.255.255.252

eth0.10 and eth0.11 appear to be network cards, each with a tiny network with one Raspberry Pi on the end, but in reality there’s a single physical ethernet connection underneath all of them.
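
For the curious, the interfaces stanzas above are equivalent to creating the VLAN sub-interfaces by hand with iproute2 (a sketch, assuming the 8021q module is available):

modprobe 8021q
ip link add link eth0 name eth0.10 type vlan id 10
ip addr add 10.46.189.1/30 dev eth0.10
ip link set eth0.10 up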

Network on the Raspberry Pi

On the Raspberry Pi, eth0 is already configured by the boot line above to talk to the fileserver. In our switch configuration, we specify that the private network is “untagged” on the Raspberry Pi’s port, which means packets on it won’t carry a VLAN header and it appears as “eth0” rather than “eth0.N” as it did on the fileserver.

In order to do anything useful, we also need to give the Raspberry Pis access to the public network. On our network, the public network is accessible on VLAN 131. We configure this to be a “tagged” VLAN on the Raspberry Pi port, meaning it becomes accessible on the eth0.131 interface. We can configure this in the normal way, and in keeping with other back-end servers on the Raspberry Pi setup, it only has an IPv6 address:

auto eth0.131
iface eth0.131 inet6 static
	address	2a00:1098:0:84:1000:1::2
	netmask 64
	gateway	2a00:1098:0:84::1

Effectively the Raspberry Pi believes it has two network cards: one on eth0, which is a private network shared with the fileserver, and one on eth0.131, which has an IPv6 address and is connected to the real internet.

Why all that configuration?

In an ideal world we’d give each Pi a single IPv6 address and mount the network filesystem over that. However, with an NFS root filesystem, another user on the LAN who managed to steal your IPv6 address could potentially access your files. There’s a second complication: IPv4 support is built into the standard kernel on the Raspberry Pi, so the differences per Pi are constrained to just the kernel command line, whereas with IPv6 we’d have to build an initrd which would load up the IPv6 modules and set up the NFS mounts.

Planning for the future, we’ve spoken to Gordon about how PXE boot on the Raspberry Pi will work, and it’s extremely likely that it’s going to require IPv4 to pull in the bootloader, kernel and initrd. Whilst there is native IPv6 in the Raspberry Pi office, there isn’t any IPv6 on their test LAN for developing the boot code, and it’s currently not a major priority for the Pi despite around 5% of the UK having native IPv6.

So if we want to make this commercial, each Pi needs its own storage network and it needs IPv4 on the storage network.

Power over Ethernet

We’ve added a Power over Ethernet HAT to our Raspberry Pis. This means that they receive power over the ethernet cable in addition to the two separate networks. As well as reducing the amount of space used by power bricks, it also means you can power cycle a Raspberry Pi just by re-configuring the switch.

Software

Each Raspberry Pi runs Raspbian with Apache2 installed. We’ve pulled in PHP7 from Debian Stretch to improve PHP performance and then copied all the files for the Raspberry Pi website onto the NFS root for each Raspberry Pi (so the fileserver effectively has 8 copies – one for each Pi). We then just added the IPv6 addresses of the Raspberry Pis into the site’s load balancer, deleted the addresses for the main x86 servers and waited for everything to explode.
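
We won’t reproduce our exact packaging setup here, but the general shape is an extra APT source for stretch plus a pin so that only the PHP packages come from it; the mirror and package names below are assumptions rather than our precise configuration:

# /etc/apt/sources.list.d/stretch.list
deb http://mirrordirector.raspbian.org/raspbian/ stretch main

# /etc/apt/preferences.d/stretch -- keep everything on jessie unless asked
Package: *
Pin: release n=stretch
Pin-Priority: 100

apt-get update
apt-get -t stretch install php7.0 libapache2-mod-php7.0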

Did it work?

Slightly to our surprise, yes, and well. We had a couple of issues: the Pi is much slower than the x86 servers, not only in clock speed but also in the speed of the network card used to access the filesystem and the database server. Some rarely used functions, such as registering a new Raspberry Jam, weren’t really quick enough under the new setup and gave people error pages as the connections timed out. Uploading images for new WordPress posts was similarly an issue, as receiving a 3MB file and distributing eight copies over a 100Mbps network isn’t very fast. But mostly it worked.

Did power cycling the Pis via the switch work?

We never tested it in production: every Pi remained up and stable for the whole 3.5 days we had the system in use. In testing it’s been fine.

Can I buy one?

Not yet. At present you can still break a Pi by destroying the flash, and the enclosure doesn’t allow for replacement without taking the whole shelf (which in production would contain 96 Pis) offline. Once we have full netboot for the Pi, it is a service we could offer.

Can I register my interest to buy a Pi in the cloud?

Sure – email us at sales@mythic-beasts.com and we’ll add you to a list to keep you up to date.

Let’s Encrypt SSL Certificates using DNS API – HOWTO

March 16th, 2016

Here at Mythic Beasts, we’ve been busily undermining sales of our SSL certificates by rolling out support for free certificates from Let’s Encrypt, partly because we think that the internet should be secure by default, but mostly because we’re lazy and Let’s Encrypt makes it easy to fully automate certificate issue and deployment.

Domain validated certificates

The majority of SSL certificates in use today are “Domain Validated” certificates. These are issued automatically by a certificate authority once you have completed some action that proves you are in control of the domain for which the certificate is being requested. This can include responding to an email sent to an address at your domain, or posting a file at a specific location on your website.

Let’s Encrypt DNS challenge

One of the options for validation offered by Let’s Encrypt is a DNS challenge (known as “dns-01”), whereby you prove ownership of your domain by adding a specific entry to its DNS zone. This option is quite interesting, as it allows you to avoid meddling in any way with your web server configuration and, if your DNS is hosted with Mythic Beasts, you can automate the addition of the necessary records using our DNS API.
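
Concretely, for dns-01 Let’s Encrypt asks you to publish a TXT record alongside your domain and checks for it before issuing the certificate; the name and token below are illustrative:

_acme-challenge.example.com.  300  IN  TXT  "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"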

Automating via our DNS API

In order to support this, we’ve developed a hook script that works with the letsencrypt.sh client.

We’ve also written a step-by-step guide to configuring dns-01 validation using our DNS API.
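
Putting the two together, a certificate request ends up looking something like this; the hook script name and domain are placeholders, and the flags are those documented by letsencrypt.sh:

./letsencrypt.sh --cron --challenge dns-01 --hook ./mythic-dns-hook.sh --domain example.com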

Please note, if you’re a hosting account customer, you don’t need to worry about any of this. You can get an SSL certificate for your website simply by hitting a button in the control panel.

Thanks go to David Earl for testing this and providing the initial implementation of the hook script.