Capacity upgrades, cheaper bandwidth and new fibre

December 8th, 2017

We don’t need these Giant Scary Laser stickers yet.

We’ve recently upgraded both of our LONAP connections to 10Gbps at our two London POPs, bringing our total external capacity to 62Gbps.

We’ve been a member of LONAP, the London Network Access Point, since we first started running our own network. LONAP is an internet exchange, mutually owned by several hundred members. Each member connects to LONAP’s switches and can arrange to exchange traffic directly with other members without passing through another internet provider. This makes our internet traffic more stable, because we have more available routes; faster, because connections go directly between source and recipient with fewer hops; and usually cheaper too.

Since we joined, both we and LONAP have grown. Initially we had two 1Gbps connections, one in each of our two core sites. If one failed the other could take over the traffic. Recently we’ve been running both connections near capacity much of the time and in the event of failure of either link we’d have to fall back to a less direct, slower and more expensive route. Time to upgrade.

The upgrade involved moving from a copper CAT5e connection to optic fibre. As a company run by physics graduates, this is an excellent excuse to add yet more LASERs to our collection. Sadly the LASERs aren’t very exciting: at 1310nm they’re invisible to the naked eye, and for safety reasons they’re very low powered (~1mW). Not only will they not set things on fire (bad), they also won’t blind you if you accidentally look down the fibre (good). This is not universally true for all optic fibre though; DWDM systems can have nearly 100 invisible laser beams in the same fibre, each at 100x the power output. Do not look down optic fibre!

The first upgrade, at Sovereign House, went smoothly, bringing online the first 10Gbps LONAP link. Harbour Exchange proved a little more problematic. We initially had a problem with an incompatible optical transceiver. Once that was replaced, we saw a further issue with the link being unstable, which was resolved by changing the switch port and optical transceiver at LONAP’s end. We then had further low-level bit errors resulting in packet loss for large packets; this was eventually traced to a marginal optical patch lead. Many thanks to Rob Lister of LONAP support for quickly resolving this for us.

With the upgrade completed, we now have two 10Gbps connections to LONAP, in addition to our two 10Gbps connections into the London Internet Exchange and multiple 10Gbps transit uplinks, as well as some 1Gbps private connections to some especially important peers.

To celebrate this we’re dropping our bandwidth excess pricing to 1p/GB for all London based services.  The upgrades leave us even better placed to offer very competitive quotes on high bandwidth servers, as well as IPv6 and IPv4 transit in Harbour Exchange, Meridian Gate and Sovereign House.  Please contact us at sales@mythic-beasts.com for more information.

Sender Rewriting Scheme

October 30th, 2017

tl;dr: SRS changes the sender address when you forward email so it doesn’t get filed as spam.

We’ve just deployed an update to our hosting accounts that allows you to enable Sender Rewriting Scheme when forwarding mail for your domain.

We’ve previously mentioned how we’re seeing increased adoption of Sender Policy Framework (SPF), a system for ensuring that mail from a domain only comes from authorised servers. Whilst this may or may not reduce spam, it does very reliably break email forwarding.

If someone at sender.com sends an email to you at yourdomain.com and you forward it on to your address at youremailprovider.com, the email that arrives at your final address will come from the mail server hosting yourdomain.com, which almost certainly isn’t listed as a valid sender in the SPF record for sender.com. Your email provider may reject the mail, or flag it as “untrusted”.
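You can see exactly what a domain publishes with a quick DNS lookup. For example (the policy shown is invented for our placeholder domain):

dig +short TXT sender.com
"v=spf1 ip4:192.0.2.0/24 -all"

The -all at the end tells receiving servers to reject mail from any server not listed, which is exactly what breaks naive forwarding.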

To fix this, we need a different TLA: SRS, or Sender Rewriting Scheme. As the name suggests, this rewrites the sender address of a forwarded email, from one in a domain that you don’t control (sender.com) to one that you do (yourdomain.com).

In the example above, the actual rewritten address would be something like:

SRS0-9oge=B5=sender.com=them@yourdomain.com

This includes an encoded version of the original address, and any email sent to it will be routed back to the original sender.  This means that any bounce messages will end up in the right place.
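Breaking the rewritten address into its parts (the exact separators and hash length vary between SRS implementations, so treat this as illustrative):

SRS0-9oge=B5=sender.com=them@yourdomain.com
     │    │  │          │    └─ your (forwarding) domain
     │    │  │          └─ original local part (“them”)
     │    │  └─ original sender domain
     │    └─ timestamp, so rewritten addresses eventually expire
     └─ cryptographic hash, so the address can’t be forged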

The sender and recipient in these examples refer to the “envelope” sender and receiver.  The addresses that are normally visible to users are the “from” and “to” headers, which may be different and are unaffected by sender rewriting.  Applying SRS should be invisible to the end users.

SRS is now available as an option whenever you create or edit a forwarder using the customer control panel for email accounts hosted on our main hosting servers.  If your account is hosted on sphinx, we need to do a little extra magic to enable it, so please email support.

CAA records

September 1st, 2017

A handful of the hundreds of different organisations, all of whom must be trustworthy.

Everybody knows that SSL is a good idea. It secures communications. At the heart of SSL is a list of certificate authorities. These are organisations that confirm the identity of the holder of an SSL certificate. For example, if GeoTrust says that Raspberry Pi is Raspberry Pi, we know that we’re talking to the right site and our communications aren’t being sniffed.

However, the list of certificate authorities is large and growing, and as it stands you’ve got to trust all of them to issue certificates only to the right people. Of course, through incompetence or malice, they can make mistakes.

CAA records are a relatively new mechanism that aims to stop this happening, making it harder to impersonate secure organisations, execute bank robberies and steal people’s identities.



CAA records enable you to list in your domain’s DNS the certificate authorities that are allowed to issue certificates for your domain. So, Google has a record stating that only Google and Symantec are allowed to issue certificates for google.com. If someone manages to persuade Comodo they are Google and should be issued a google.com certificate, Comodo will be obliged to reject the request based on the CAA records.

Of course, in order to be of any use, you need to be able to trust the DNS records. Fortunately, these days we have DNSSEC (DNS Security Extensions).

How does it work?

A typical CAA record looks something like this:

example.com. 3600 IN CAA 0 issue "letsencrypt.org"

This states that only Let’s Encrypt may issue certificates for example.com or its subdomains, such as www.example.com.

Going through each part in turn:

  • example.com – the hostname to which the record applies. In our DNS interface, you can use a hostname of “@” to refer to your domain.
  • 3600 – the “time to live” (TTL): the amount of time, in seconds, for which this record may be cached.
  • IN CAA – the record type.
  • 0 – the CAA flags field (0 means no flags are set)
  • issue – the type of property defined by this record (see below)
  • "letsencrypt.org" – the value of the property

At present, there are three defined property types:

  • issue – specifies which authorities may issue certificates of any type for this hostname
  • issuewild – specifies which authorities may issue wildcard certificates for this hostname
  • iodef – provides a URL for authorities to contact in the event of an attempt to issue an unauthorised certificate
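You can inspect the CAA records that a domain publishes using dig (recent versions understand the CAA record type directly):

dig +short CAA google.com

Each line of output shows the flags, property type and value of one record, in the same format as the example above.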

CAA records can be added using the new section at the bottom of the DNS management page in our control panel:

The @ in the first field denotes a record that applies to the domain itself.

At Mythic Beasts, we’re a bit skeptical about the value of CAA records. In order to protect against the incompetence of CAs, they rely on CAs competently checking the CAA records before issuing certificates. That said, they do provide a straightforward check that CAs can build into their automated processes to detect and reject unauthorised requests, so publishing CAA records will raise the bar somewhat for anyone looking to fraudulently obtain a certificate for your domain.

rm -rf /var

August 10th, 2017

Within Mythic Beasts we have an internal chat room that uses IRC (like Slack, but free, and with all the history stored securely on our servers). Our monitoring system is called Ankou, named after Death’s henchman who watches over the dead, and it has an IRC bot that alerts through our chat room.

This story starts with Ankoubot, who was the first to notice something was wrong with the world.

15:25:31

15:25:31 ankoubot managed vds:abcdefg-ssh [NNNN-ssh]: 46.235.N.N => bad banner from '46.235.N.N': [46.235.N.N - VDShost:vds-hex-f]
15:25:31 ankoubot managed vds:abcdefg-web [NNNN-web]: http://www.abcdefg.co.uk/ => Status 404 (<html> <head><title>404 Not Found</title></head> <body bgcolor="white"> <center><h1>404 Not Found</h1></center> <hr><center>nginx/1.10.3</center> </body> </html…) [46.235.N.N www.abcdefg.co.uk VDShost:vds-hex-f]

15:31:42

15:31:42 pete I can’t get ssh in, I’m on the console.

15:38:16

15:38:16 pete This is an extremely broken install. ssh is blocked, none of the bind mounts work

Debugging is difficult because /var/log is missing: systemd appears completely unable to function and we have no working logging. Unable to get ssh to start, and fighting multiple broken tools due to missing mounts, we restart the server and mail the customer explaining what we’ve discovered so far. This doesn’t help: the server hangs attempting to configure NFS mounts.

15:53:53

15:56:35

Boot to recovery media completes, ready for restore from backup.

15:58:34

16:05:07

16:08:36

16:08:36 ankoubot managed vds:abcdefg-ssh [NNNN-ssh]: back to normal
16:08:36 ankoubot managed vds:abcdefg-web [NNNN-web]: back to normal

16:14:22

16:31:42

Customer confirms everything is restored and functional, and gives permission to anonymously write up the incident for our blog, including the following quote.

Mythic Beasts had come highly recommended to me for the level of support provided, and when it came to crunch time they were reacting to the problem before I’d even raised a support ticket.
This is exactly what we were looking for in a managed hosting provider, and I’m really glad we made the choice. Hopefully however, I won’t be causing quite the same sort of problem for a looooong while.

In total the customer was offline for slightly over 30 minutes, after what can best be described as a catastrophic administrator error.

Raspberry Pi Cloud upgrades

May 12th, 2017

We’ve made some improvements to our Raspberry Pi Cloud.

  • Upgraded kernel to 4.9.24, which should offer improved performance and a fix for a rare crash in the network card.
  • Minor update to temperature logging to ease load on our monitoring server and allow faster CPU speeds.
  • Upgrade to the NFS fileserver to allow significantly improved IO performance.
  • Recent updates applied to both Debian and Ubuntu images.

Thanks to Gordon Hollingworth, Raspberry Pi Director of Engineering, for his assistance.


Upgraded backups

March 31st, 2017

Servers need different backup strategies to Vampire Slayers.

Our backup report caught a warning from the backup on our monitoring server:

WARN - [child] mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table `log` at row: 6259042
....
ERROR - mysqldump --all-databases .... exited with 3

We investigated: this was indeed an error, and we had created a truncated backup. Because we think backups are very important, we investigated immediately rather than adding it to the end of a very long task list to be ignored in favour of more user-visible changes.

An initial guess was that it might be a mismatch in max_allowed_packet between the server and the dump process, a problem that we’ve seen before. We set max_allowed_packet for mysqldump to the maximum allowed value, reran the backup manually and watched it fail again. Hypothesis disproven and still no consistent backup.
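For reference, the packet size can be forced with a single mysqldump option (1G is the protocol’s upper limit); the rerun looked something like:

mysqldump --all-databases --max-allowed-packet=1G | gzip > all-databases.sql.gz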

Checking the system log, it quickly became apparent that we were running out of memory. The out of memory killer had kicked in and decided to kill mysqld (an unfortunate choice, really). This was what had caused the dump to terminate early.

Now that we understood the problem, there were several solutions: configure a MySQL slave and back up from the slave, move to a bigger MySQL server, or exclude the ephemeral data from the backup. We chose to exclude the ephemeral data, and now our backup completes and we’ve tested the restore.

While working on this, our engineer noticed that there was an easy extra check we could make to ensure the integrity of a MySQL dump. When the dump is complete we run the moral equivalent of:

zcat $dump | tail -1 | grep -q '^-- Dump completed'

to check that we have a success message at the end of the dumped file. This is an additional safety check: previously we were relying on mysqldump to tell us if it found an error; now we require mysqldump to report success and the written file to pass automated tests for completeness.
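Putting the pieces together, the whole dump-and-verify step amounts to a few lines of shell. This is a minimal sketch (paths and messages are illustrative, not our production script):

set -o pipefail                 # a mysqldump failure must not be masked by gzip
dump=/var/backups/all-databases.sql.gz
mysqldump --all-databases | gzip > "$dump" || { echo "dump failed" >&2; exit 1; }
zcat "$dump" | tail -1 | grep -q '^-- Dump completed' || { echo "dump truncated" >&2; exit 1; }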

We pushed out our updated backup package with the additional check to all managed customers yesterday. On World Backup Day, we’d like to remind the entire Internet to check that your backups work. If that sounds boring, we’ll check your backups for you.

One click HTTPS + HSTS

March 27th, 2017

Last year we rolled out one-click HTTPS hosting for our hosting accounts using free Let’s Encrypt certificates.  We’ve been making some further improvements to our control panel so that once you have enabled and tested HTTPS hosting, it’s also easy to redirect all HTTP traffic to your HTTPS site.

We’ve also added an option to enable HTTP Strict Transport Security (HSTS).  This allows you to use HTTPS on your website and commit that you’re not going to stop using it any time soon (we use 14 days by default).  Once a user has visited your site, their browser will cache the redirect from HTTP to HTTPS and will automatically redirect any future requests without even visiting the HTTP version of your site.

HSTS makes it harder for an attacker to impersonate your site: even if they can intercept your traffic, they won’t be able to present a non-HTTPS version of your site to any user who has visited your site within the last 14 days.
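Under the hood, HSTS is just an extra response header. Our 14-day default is equivalent to your site sending the following with every HTTPS response (shown in Apache mod_headers syntax as an illustration; 1209600 seconds is 14 days):

Header always set Strict-Transport-Security "max-age=1209600"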

HTTPS and HSTS control panel settings

We believe that the web should be secure by default, and hope that these latest changes will make it that little bit easier to secure your website.  These features are available on all of our web and email hosting accounts.  We’ll also happily enable this as part of the service for customers of our managed server hosting.


PHP7 on Pi 3 in the cloud (take 2)

March 24th, 2017

On Wednesday, we showed you how to get PHP7 up and running on one of our Pi 3 servers. Since then, we’ve implemented something that’s been on our to-do list for a little while: OS selection. You can now have Ubuntu 16.04 at the click of a button, so getting up and running with PHP7 just got easier:

1. Get yourself a Pi 3 in our cloud.

2. Hit the “Reinstall” button:

3. Select Ubuntu 16.04:

4. Upload your SSH key (more details), turn the server on, SSH in and run:

apt-get install apache2 php7.0 php7.0-curl php7.0-gd php7.0-json \
    php7.0-mcrypt php7.0-mysql php7.0-opcache libapache2-mod-php7.0
echo "<?=phpinfo()?>" >/var/www/html/info.php

Browse to http://www.yourservername.hostedpi.com/info.php and you’re running PHP7:

PHP7 on a Raspberry Pi 3 in the cloud

March 22nd, 2017
Raspberry Pi 3

Two Raspberry Pis running PHP7 during the Pi 3 launch.

Last April we moved the main blog for Raspberry Pi to a small cluster of Raspberry Pi 3s. This went so well we made it commercially available and you can now buy your Raspberry Pi 3 in the cloud.

If you’d like to have PHP 7 running on your Raspberry Pi 3 in the cloud, this guide is for you. Click the link, buy a Pi 3, install your ssh key and log in. This should take no more than about a minute.

PHP 7 isn’t yet part of the standard Raspbian OS, so we need to get it from somewhere else.

A brief aside about CPU architectures, Raspbian and Debian

Debian provides three versions for ARM processors:

  • armel – 32-bit ARMv5, software floating point
  • armhf – 32-bit ARMv7 with a hardware floating point unit
  • arm64 – 64-bit ARMv8 with a floating point unit

The Raspberry Pi uses three different architectures:

  • Raspberry Pi A, B, Zero & Zero W – 32 bit ARMv6 with floating point
  • Raspberry Pi 2 – 32 bit ARMv7 with floating point
  • Raspberry Pi 3 – 32/64 bit ARMv8 with floating point unit

Raspbian is an unofficial port for 32-bit ARMv6 with a floating point unit, which matches the hardware of the original Raspberry Pi Model B. Because we’re working here with the Pi 3 – ARMv8 with floating point, and backwards compatible with ARMv7 – we can take official Debian armhf packages and run them directly on our Pi 3.
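You can check this from a shell on the Pi itself; 32-bit Raspbian on a Pi 3 should report the ARMv7-compatible mode:

uname -m     # armv7l on a Pi 3 running 32-bit Raspbian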

Ondřej Surý is the Debian PHP maintainer, who also has a private repository with newer versions of PHP built for Debian and Ubuntu. So we can use 32-bit Debian packages for ARMv7 (armhf) and install them directly on top of Raspbian.

PHP 7 packages aren’t available for armel, so this won’t work on an original Raspberry Pi, or a Pi Zero/Zero W.

Add the PHP 7 repository

deb.sury.org includes newer PHP packages built for armhf, which we can use directly. Following the instructions there, we can set up the repository:

apt-get install apt-transport-https lsb-release ca-certificates
wget -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg
echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" > /etc/apt/sources.list.d/php.list
apt-get update

Now we can install everything we need for PHP 7 and Apache 2.4:

apt-get install apache2 php7.0 php7.0-curl php7.0-gd php7.0-json \
    php7.0-mcrypt php7.0-mysql php7.0-opcache libapache2-mod-php7.0
echo "<?=phpinfo()?>" >/var/www/html/info.php 

Wait a few moments and we have a webserver running PHP7 on our Pi3 in the cloud.

You’ll note we’ve included php7.0-opcache. This should accelerate our PHP performance by a factor of two or so.

Now for an application…

Try WordPress

WordPress needs a MySQL server and the PHP library for accessing MySQL. We need to restart Apache so that PHP 7 picks up the additional library.

apt-get install php7.0-mysql  mysql-server
apache2ctl restart
mysql -u root -p

mysql> create database wordpress;
mysql> grant all privileges on wordpress.* to wordpress identified by 'password';

We strongly recommend you invent a better password.

cd /var/www/html
wget https://wordpress.org/latest.tar.gz
tar -zxvf latest.tar.gz
chown -R www-data:www-data wordpress

Then navigate to http://www.yourpiname.hostedpi.com/wordpress and finish the install through your browser.

Next steps

For information on how to host on your own domain name, and how to enable HTTPS see our previous blog post on hosting a website on a Raspberry Pi.

Hosting a website on an IPv6 Pi part 2: PROXY protocol

March 10th, 2017

Update, 2021: We recommend using the built in mod_remoteip rather than mod_proxy_protocol for recent versions of Apache (2.4.31 and newer) – which includes Debian Buster and the current version of Raspberry Pi OS. Our instructions for this are available on our support site.
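With those newer Apache versions, the whole build-it-yourself procedure below reduces to roughly one directive (a sketch; see the support site link above for the full instructions):

# with mod_remoteip enabled (a2enmod remoteip), in your virtualhost or server config:
RemoteIPProxyProtocol On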


In our previous post, we configured an SSL website on an IPv6-only Raspberry Pi server, using our IPv4 to IPv6 reverse proxy service.

The one problem with this is that our Pi would see HTTP and HTTPS requests coming from the proxy servers, rather than the actual clients requesting them.

Historically, the solution to this problem is to have the proxy add X-Forwarded-For headers to the HTTP request, but this only works if the request is unencrypted HTTP, or an HTTPS connection that is decrypted by the proxy. One of the nice features of our proxy is that it passes encrypted HTTPS straight to your server: we don’t need your private keys on the proxy server, and we can’t see or interfere with your traffic.

Of course, this means that we can’t add X-Forwarded-For headers to pass on the client IP address. Enter PROXY protocol. With this enabled, our proxies add an extra header before the HTTP or HTTPS request, with details of the real client. This is easy to enable in our control panel.
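For the curious, the version 1 PROXY protocol header is a single human-readable line, terminated by a carriage return and line feed, sent before any HTTP data. It looks along these lines (addresses invented, using documentation ranges):

PROXY TCP6 2001:db8::1234 2001:db8::beef 40123 443

It names the transport protocol, then the real client and server addresses and ports; everything after that line is the original connection, untouched.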

You also need to configure Apache to understand and make use of the PROXY protocol header. This is a little more involved, as the necessary module isn’t currently packaged as part of the standard Apache distribution (although this is changing), so we need to download and build it ourselves. First some extra packages are needed:

apt-get install apache2-dev git

This will install a good number of packages, and take a few minutes to complete. Once done, you can download and build mod_proxy_protocol:

git clone https://github.com/roadrunner2/mod-proxy-protocol.git
cd mod-proxy-protocol
make

At this point you should just be able to type make install, but at the time of writing there seems to be some problem with the packaging. So instead do this:

cp .libs/mod_proxy_protocol.so /usr/lib/apache2/modules/

Now you can load the module:

echo "LoadModule proxy_protocol_module /usr/lib/apache2/modules/mod_proxy_protocol.so" > /etc/apache2/mods-available/proxy_protocol.load
a2enmod proxy_protocol

You also need to configure Apache to use it. To do this, edit /etc/apache2/sites-enabled/000-default.conf and replace each line that contains CustomLog with the following two lines:

	ProxyProtocol On
	CustomLog ${APACHE_LOG_DIR}/access.log "%a %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\""

This tells Apache to use PROXY protocol, and to use the supplied IP address in its log files. Now reload Apache:

systemctl reload apache2

Visit your website, and if all is working well, you should start seeing actual client IP addresses in the log file, /var/log/apache2/access.log:

93.93.130.44 - - [24/Feb/2017:20:13:25 +0000] "GET / HTTP/1.1" 200 10701 "-" "curl/7.26.0"

Trusting your log files

With the above configuration, we’ve told Apache to use the client IP address supplied by our proxy servers. What we haven’t done is tell it not to trust any random server that pitches up talking PROXY protocol. This means that it’s trivial to falsify IP addresses in our log files. To prevent this, let’s set up a firewall so that only our proxy servers are allowed to connect on the HTTP and HTTPS ports. We use the iptables-persistent package to ensure that our firewall is configured when the server is rebooted.

apt-get install iptables-persistent

ip6tables -A INPUT -s proxy.mythic-beasts.com -p tcp -m tcp --dport 80 -j ACCEPT
ip6tables -A INPUT -s proxy.mythic-beasts.com -p tcp -m tcp --dport 443 -j ACCEPT
ip6tables -A INPUT -p tcp --dport 80 -j REJECT
ip6tables -A INPUT -p tcp --dport 443 -j REJECT

ip6tables-save > /etc/iptables/rules.v6

And we’re done! Our IPv6-only Raspberry Pi3 is now hosting an HTTPS website, and despite being behind a proxy server, we’re tracking real client IP addresses in our logs.