Raspberry Pi Cloud upgrades

May 12th, 2017

We’ve made some improvements to our Raspberry Pi Cloud.

  • Upgraded kernel to 4.9.24, which should offer improved performance and a fix for a rare crash in the network card.
  • Minor update to temperature logging to ease load on our monitoring server and allow faster CPU speeds.
  • Upgrade to the NFS fileserver to allow significantly improved IO performance.
  • Recent updates applied to both Debian and Ubuntu images.

Thanks to Gordon Hollingworth, Raspberry Pi Director of Engineering, for his assistance.


PHP7 on Pi 3 in the cloud (take 2)

March 24th, 2017

On Wednesday, we showed you how to get PHP7 up and running on one of our Pi 3 servers. Since then, we’ve implemented something that’s been on our to-do list for a little while: OS selection. You can now have Ubuntu 16.04 at the click of a button, so getting up and running with PHP7 just got easier:

1. Get yourself a Pi 3 in our cloud.

2. Hit the “Reinstall” button:

3. Select Ubuntu 16.04:

4. Upload your SSH key (more details), turn the server on, SSH in and run:

apt-get install apache2 php7.0 php7.0-curl php7.0-gd php7.0-json \
    php7.0-mcrypt php7.0-mysql php7.0-opcache libapache2-mod-php7.0
echo "<?=phpinfo()?>" >/var/www/html/info.php

Browse to http://www.yourservername.hostedpi.com/info.php and you’re running PHP7:

PHP7 on a Raspberry Pi 3 in the cloud

March 22nd, 2017
Raspberry Pi 3

Two Raspberry Pis running PHP7 during the Pi 3 launch.

Last April we moved the main blog for Raspberry Pi to a small cluster of Raspberry Pi 3s. This went so well we made it commercially available and you can now buy your Raspberry Pi 3 in the cloud.

If you’d like to have PHP 7 running on your Raspberry Pi 3 in the cloud, this guide is for you. Click the link, buy a Pi 3, install your ssh-key and log in. This should take no more than about a minute.

PHP 7 isn’t yet part of the standard Raspbian OS, so we need to get it from somewhere else.

A brief aside about CPU architectures, Raspbian and Debian

Debian provides three versions for ARM processors:

  • armel – 32 bit, ARMv5, software floating point only
  • armhf – 32 bit, ARMv7 and a hardware floating point unit
  • arm64 – 64 bit, ARMv8 and a floating point unit

The Raspberry Pi uses three different architectures:

  • Raspberry Pi A, B, Zero & Zero W – 32 bit ARMv6 with floating point
  • Raspberry Pi 2 – 32 bit ARMv7 with floating point
  • Raspberry Pi 3 – 32/64 bit ARMv8 with floating point unit

Raspbian is an unofficial port for 32 bit ARMv6 with a floating point unit, which matches the hardware of the original Raspberry Pi Model B. Because we’re working here with the Pi 3 – an ARMv8 CPU with a floating point unit, which can also execute ARMv7 code – we can take official Debian armhf packages and run them directly on our Pi 3.

Ondřej Surý is the Debian PHP maintainer, and he also runs a private repository with newer versions of PHP built for Debian and Ubuntu. So we can use 32 bit Debian packages for ARMv7 (armhf) and install them directly on top of Raspbian.

PHP 7 packages aren’t available for armel, so this won’t work on an original Raspberry Pi, or a Pi Zero/Zero W.

Add the PHP 7 repository

deb.sury.org includes newer PHP packages built for armhf, which we can use directly. Following the instructions here, we can set up the repository:

apt-get install apt-transport-https lsb-release ca-certificates
wget -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg
echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" > /etc/apt/sources.list.d/php.list
apt-get update

Now we can install everything we need for php7 and apache2.4:

apt-get install apache2 php7.0 php7.0-curl php7.0-gd php7.0-json \
    php7.0-mcrypt php7.0-mysql php7.0-opcache libapache2-mod-php7.0
echo "<?=phpinfo()?>" >/var/www/html/info.php 

Wait a few moments and we have a webserver running PHP7 on our Pi3 in the cloud.

You’ll note we’ve included php7-opcache. This should accelerate our PHP performance by a factor of two or so.
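If you want to confirm that the opcache module is actually loaded, a quick check from the shell (just a sketch – the exact output varies between builds) is to list the PHP modules:

php -m | grep -i opcache

If that prints “Zend OPcache”, the extension is active for the command-line interpreter; the phpinfo() page above shows the same for Apache.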

Now for an application…

Try WordPress

WordPress needs a MySQL server & PHP library for accessing MySQL. We need to restart Apache to make PHP 7 pick up the additional library.

apt-get install php7.0-mysql  mysql-server
apache2ctl restart
mysql -u root -p

mysql> create database wordpress;
mysql> grant all privileges on wordpress.* to wordpress identified by 'password';

We strongly recommend you invent a better password.

cd /var/www/html
wget https://wordpress.org/latest.tar.gz
tar -zxvf latest.tar.gz
chown -R www-data:www-data wordpress

Then navigate to http://www.yourpiname.hostedpi.com/wordpress and finish the install through your browser.

Next steps

For information on how to host on your own domain name, and how to enable HTTPS see our previous blog post on hosting a website on a Raspberry Pi.

Hosting a website on an IPv6 Pi part 2: PROXY protocol

March 10th, 2017

Update, 2021: We recommend using the built-in mod_remoteip rather than mod_proxy_protocol for recent versions of Apache (2.4.31 and newer) – which includes Debian Buster and the current version of Raspberry Pi OS. Our instructions for this are available on our support site.

 

In our previous post, we configured an SSL website on an IPv6-only Raspberry Pi server, using our IPv4 to IPv6 reverse proxy service.

The one problem with this is that our Pi would see HTTP and HTTPS requests coming from the proxy servers, rather than the actual clients requesting them.

Historically, the solution to this problem is to have the proxy add X-Forwarded-For headers to the HTTP request, but this only works if the request is unencrypted HTTP, or an HTTPS connection that is decrypted by the proxy. One of the nice features of our proxy is that it passes encrypted HTTPS straight to your server: we don’t need your private keys on the proxy server, and we can’t see or interfere with your traffic.

Of course, this means that we can’t add X-Forwarded-For headers to pass on the client IP address. Enter PROXY protocol. With this enabled, our proxies add an extra header before the HTTP or HTTPS request, with details of the real client. This is easy to enable in our control panel:
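For illustration, a version 1 PROXY protocol header is just a single line of text sent ahead of the real request; the addresses below are made-up documentation addresses rather than our actual proxy IPs:

PROXY TCP6 2001:db8::1234 2001:db8:ffff::50 51000 443

Everything after that line is the untouched HTTP or TLS stream from the client.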

You also need to configure Apache to understand and make use of the PROXY protocol header. This is a little more involved, as the necessary module isn’t currently packaged as part of the standard Apache distribution (although this is changing), so we need to download and build it ourselves. First some extra packages are needed:

apt-get install apache2-dev git

This will install a good number of packages, and take a few minutes to complete. Once done, you can download and build mod_proxy_protocol:

git clone https://github.com/roadrunner2/mod-proxy-protocol.git
cd mod-proxy-protocol
make

At this point you should just be able to type make install, but at the time of writing there seems to be some problem with the packaging. So instead, do this:

cp .libs/mod_proxy_protocol.so /usr/lib/apache2/modules/

Now you can load the module:

echo "LoadModule proxy_protocol_module /usr/lib/apache2/modules/mod_proxy_protocol.so" > /etc/apache2/mods-available/proxy_protocol.load
a2enmod proxy_protocol

You also need to configure Apache to use it. To do this, edit /etc/apache2/sites-enabled/000-default.conf and replace each line that contains CustomLog with the following two lines:

	ProxyProtocol On
	CustomLog ${APACHE_LOG_DIR}/access.log "%a %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\""

This tells Apache to use Proxy Protocol, and to use the supplied IP address in its log files. Now restart Apache:

systemctl reload apache2

Visit your website, and if all is working well, you should start seeing actual client IP addresses in the log file, /var/log/apache2/access.log:

93.93.130.44 - - [24/Feb/2017:20:13:25 +0000] "GET / HTTP/1.1" 200 10701 "-" "curl/7.26.0"

Trusting your log files

With the above configuration, we’ve told Apache to use the client IP address supplied by our proxy servers. What we haven’t done is tell it that it shouldn’t trust just any random server that pitches up talking PROXY protocol. This means that it’s trivial to falsify IP addresses in our log files. To prevent this, let’s set up a firewall, so that only our proxy servers are allowed to connect on the HTTP and HTTPS ports. We use the iptables-persistent package to ensure that our firewall is configured when the server is rebooted.

apt-get install iptables-persistent

ip6tables -A INPUT -s proxy.mythic-beasts.com -p tcp -m tcp --dport 80 -j ACCEPT
ip6tables -A INPUT -s proxy.mythic-beasts.com -p tcp -m tcp --dport 443 -j ACCEPT
ip6tables -A INPUT -p tcp --dport 80 -j REJECT
ip6tables -A INPUT -p tcp --dport 443 -j REJECT

ip6tables-save > /etc/iptables/rules.v6
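The redirect writes the rules to the file that iptables-persistent loads at boot. To confirm the rules are in place (and being matched), you can list the INPUT chain with packet counters; the exact output will depend on your rules:

ip6tables -L INPUT -n -v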

And we’re done! Our IPv6-only Raspberry Pi 3 is now hosting an HTTPS website, and despite being behind a proxy server, we’re tracking real client IP addresses in our logs.

Attending a 5 year old’s birthday party

March 7th, 2017

Over the weekend Pete went to the Raspberry Pi Big Birthday Weekend. Rather larger than in previous years, the event booked out the whole of the Cambridge Junction, allowing far more guests.

A very small Eben reviewing the extraordinary growth of Raspberry Pi over the last 12 months.

Eben Upton started the day with a keynote about how Raspberry Pi is doing. In short, they’ve made a lot of computers. Raspberry Pi Trading (the company that makes and sells computers) has generated several million pounds of profit that’s been returned to the Raspberry Pi Foundation (the charity that educates people). This makes them a very unusual startup compared to the current Silicon Valley competition of trying to lose money as fast as possible. The Pi Zero W, the newest member of the family, has sold over 100,000 units in the four days since launch.

Picture of the Pi logo done on a Sense HAT by a young girl in the Astro-Pi workshop.

Pete joined David Honess for his Astro-Pi workshop, programming the Sense HAT and writing code that could eventually end up running on the International Space Station. One girl ran away with one of the examples, turning the black and white X into a full colour Raspberry Pi logo using the Sense HAT.



We were particularly interested in the presentation by Gwiddle as it’s a project that we sponsor, and the Pi Party was the first opportunity we had to meet the founders in person. Gwiddle provide free web hosting for people in full-time education and are having significant success, supporting nearly 1,000 students. Yasmin and Joshua fielded their questions well. We’re massively proud of what Gwiddle have achieved so far, and their plans for future expansion into a full multi-national with world domination look exciting.

Pete giving a talk on Pi in the Cloud, photo by Joshua Bayfield. View the slides

Pete gave a talk on Raspberry Pi in the data centre and how we’ve built a Raspberry Pi Cloud. We’ve put the slides up at the request of the Japanese Raspberry Pi Users Group, who’d flown over to be at the Party. We’d love it if you translate the slides into Japanese and put them on your website!

Sam Aaron and Sonic Pi

Sam Aaron demonstrated new features in Sonic Pi in a live performance. Pete was particularly impressed by the distortion guitar amp effect from the Pi, and also the details of the new Erlang back-end that reduces timing jitter to around a millisecond. Somebody needs to take the library and implement an easy real-time mode for the Pi Zero W to ease building Internet of Things applications.

A fantastic, but incredibly tiring weekend. We’re already looking forward to the 6th birthday party!

Hosting a website on a Raspberry Pi with IPv6 and SSL (part 1)

March 2nd, 2017

Our hosted Raspberry Pi 3 servers make a great platform for learning how to run a server. They’re particularly interesting as they only have IPv6 connectivity, yet they can still be used very easily to host a website that’s visible to the whole Internet. This guide walks through the process of setting up a website on one of our hosted Pis, including hosting your own domain name, setting up an SSL certificate from Let’s Encrypt, automating certificate renewal, and using our IPv4 to IPv6 HTTP reverse proxy.

Get a Raspberry Pi

First, get yourself a hosted Raspberry Pi server. You can order these from our website, and be up and running in two minutes:

Click on the link to configure your server and you’ll be shown details of your server, and prompted to configure an SSH key:

We use SSH keys rather than passwords. Click on the link, and you’ll be asked to paste in an SSH public key. If you don’t have an SSH public key, you’ll need to generate one. On Unix you can use ssh-keygen and on Windows you can use PuTTYgen. Details of exactly how to do this are beyond the scope of this guide, but Google will throw up plenty of other guides.
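As a minimal example on a Unix-like machine (the comment string is just a label – use whatever identifies you):

ssh-keygen -t rsa -b 4096 -C "you@example.com"
cat ~/.ssh/id_rsa.pub

The second command prints the public key, which is the part you paste into the control panel.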

Connect to your server

Once done, you’re ready to SSH to your server. If you’ve got an IPv6 connection, you can connect directly. The Pi used for this walkthrough is called “mywebsite”, so where you see that in these instructions, use whatever name you chose for your server. To SSH directly, connect directly to mywebsite.hostedpi.com. Sadly, the majority of users currently only have IPv4 connectivity, which means you’ll need to use our gateway box. Your server page will give you details of the port you need to connect to. In my case, it’s 5125:

$ ssh -p 5125 root@ssh.mywebsite.hostedpi.com
The authenticity of host '[ssh.mywebsite.hostedpi.com]:5125 ([93.93.134.53]:5125)' can't be established.
ECDSA key fingerprint is SHA256:Hf/WDZdAn9n1gpdWQBtjRyd8zykceU1EfqaQmvUGiVY.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[ssh.mywebsite.hostedpi.com]:5125,[93.93.134.53]:5125' (ECDSA) to the list of known hosts.

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Fri Nov  4 14:49:20 2016 from 2a02:390:748e:3:82cd:6992:3629:2f50
root@raspberrypi:~#

You’re in!

Install a web server

We’re going to use the Apache web server, which you can install with the following commands:

apt-get update
apt-get install apache2

And upload some content:

scp -P 5125 * root@ssh.mywebsite.hostedpi.com:/var/www/html/

Now, visit http://www.yourserver.hostedpi.com in your browser, and you should see something like this:

Another computer on the web serving cat pictures!
 

Host your own domain name

Magically, this site on your IPv6-only Raspberry Pi 3 is accessible even to IPv4-only users. To understand how that magic works, we’ll now host a different domain name on the Pi. We’re going to use the name mywebsite.uid0.com.

First, we need to set up the DNS for this hostname, but rather than pointing it directly at our server, we’re going to direct it at our IPv4 to IPv6 HTTP proxy, by creating a CNAME to proxy.mythic-beasts.com:

If you’re using a hostname that already has other records, such as a bare domain name that already has MX and NS records, you can use an ANAME pseudo-record.
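In zone-file terms, the record being created looks something like this (using the example hostname from this post):

mywebsite.uid0.com.    3600    IN    CNAME    proxy.mythic-beasts.com.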

Our proxy server listens for HTTP and HTTPS requests on both IPv4 and IPv6 addresses, and then uses information in the request header to determine which server to direct it to. This allows us to share one IPv4 address between many IPv6-only servers (actually, it’s two IPv4 addresses as we’ve got a pair of proxy servers in different data centres).

We need to tell the proxy server where to send requests for our hostname. To do this, visit the IPv4 to IPv6 Proxy page in the control panel.  The endpoint address is the IP address of your server, which you can find on the details page for your server, as shown above.

For the moment, leave PROXY protocol disabled – we’ll explain that shortly.  After adding the proxy configuration, wait a few minutes, and after no more than five, you should be able to access the website using the hostname set above.
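A quick way to test from the command line (hostname as above) is to request the site by name and check that the response is coming from Apache on your Pi:

curl -I http://mywebsite.uid0.com/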

Enable HTTPS

We’re firmly of the view that secure connections should be the norm for websites, and now that Let’s Encrypt provide free SSL certificates, there’s really no excuse not to.

We’re going to use the dehydrated client, as it’s packaged for the Debian operating system that Raspbian is based on. Unfortunately, it’s not yet in the standard Raspbian distribution, so in order to get it, you’ll need to use the “backports” repository.

To do this, first add the backports package repository to your apt configuration:

echo 'deb http://httpredir.debian.org/debian jessie-backports main contrib non-free' > /etc/apt/sources.list.d/jessie-backports.list

Then add the keys that these packages are signed with:

apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 7638D0442B90D010
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 8B48AD6246925553

Now update your local package list, and install dehydrated:

apt-get update
apt-get install dehydrated-apache2

We need to configure dehydrated to tell it which hostnames we want certificates for, which we do by putting the names in /etc/dehydrated/domains.txt:

echo "mywebsite.uid0.com" > /etc/dehydrated/domains.txt

It’s also worth setting the email address in the certificate so that you get an email if the automatic renewal that we’re going to set up fails for any reason and the certificate is close to expiry:

echo "CONTACT_EMAIL=devnull@example.com" > /etc/dehydrated/conf.d/mail.sh

Now we’re ready to issue a certificate, which we do by running dehydrated -c. This will generate the necessary private key for the server, and then ask Let’s Encrypt to issue a certificate. Let’s Encrypt will issue us with a challenge: a file that we have to put on our website that Let’s Encrypt can then check for. dehydrated automates this all for us:

root@raspberrypi:~# dehydrated -c
# INFO: Using main config file /etc/dehydrated/config
# INFO: Using additional config file /etc/dehydrated/conf.d/mail.sh
Processing mywebsite.uid0.com
 + Signing domains...
 + Generating private key...
 + Generating signing request...
 + Requesting challenge for mywebsite.uid0.com...
 + Responding to challenge for mywebsite.uid0.com...
 + Challenge is valid!
 + Requesting certificate...
 + Checking certificate...
 + Done!
 + Creating fullchain.pem...
 + Done!

We now need to configure Apache for HTTPS hosting, and tell it about our certificates. First, enable the SSL module:

a2enmod ssl

Now add a section for an SSL enabled server running on port 443. You’ll need to amend the certificate paths to match your hostname. You can copy and paste the block below straight into your terminal, or you can edit the 000-default.conf file using your preferred text editor.

cat >> /etc/apache2/sites-enabled/000-default.conf <<'EOF'
<VirtualHost *:443>
	ServerAdmin webmaster@mywebsite.hostedpi.com
	DocumentRoot /var/www/html

	ErrorLog ${APACHE_LOG_DIR}/error.log
	CustomLog ${APACHE_LOG_DIR}/access.log combined

	SSLEngine On
	SSLCertificateFile /var/lib/dehydrated/certs/mywebsite.uid0.com/fullchain.pem
	SSLCertificateKeyFile /var/lib/dehydrated/certs/mywebsite.uid0.com/privkey.pem
</VirtualHost>
EOF

Now restart Apache:

systemctl reload apache2

and you should have an HTTPS website running on your Pi:

Automating certificate renewal

Let’s Encrypt certificates are only valid for three months. This isn’t really a problem, because we can easily automate renewal by running dehydrated in a cron job. To do this, we simply create a file in the directory /etc/cron.daily/:

cat > /etc/cron.daily/dehydrated <<EOF
#!/bin/sh

exec /usr/bin/dehydrated -c >/var/log/dehydrated-cron.log 2>&1
EOF
chmod 0755 /etc/cron.daily/dehydrated

dehydrated will check the age of the certificate daily, and if it’s within 30 days of expiry, will request a new one, logging to /var/log/dehydrated-cron.log.

Rotate your log files!

When setting up a log file, it’s always good practice to also set up log rotation, so that it can’t grow indefinitely (failure to do this has cost one of our founders a number of beers due to servers running out of disk space). To do this, we drop a file into /etc/logrotate.d/:

cat > /etc/logrotate.d/dehydrated <<EOF
/var/log/dehydrated-cron.log
{
        rotate 12
        monthly
        missingok
        notifempty
        delaycompress
        compress
}
EOF

Client IP addresses

If you look at your web server log files, you’ll see one disadvantage of using our proxy to expose your site to the IPv4 world: all requests appear to come from our proxy servers, rather than the actual clients. This is obviously a bit annoying for log file analysis, but is a big problem for any kind of IP-based access controls or rate limiting. Fortunately, there’s a solution, which we’ll look at in the next post.

Rearranging Raspberry Pi

February 27th, 2017


Replacing the production DB under a running system.

Two years ago we migrated Raspberry Pi from a single big server to a series of virtual machines (VMs) on an even bigger server. As time has gone on, this architecture has served us well; we’ve managed the Pi Zero launch and the busier Raspberry Pi 3 launch, and even briefly ran the website on Raspberry Pis as a test.

However, we had a number of issues with the setup that we were looking to address:

  1. We were out of space at the back end and needed more capacity;
  2. We wanted more redundancy: the setup was dependent on a single dedicated VM host in a single data centre; and
  3. There’s an apparent hardware fault on the current VM host that causes it to very occasionally spontaneously reboot (or, in one case, switch itself off altogether).

It was time for a bit of an upgrade.

Scalable WordPress

The biggest part of the Raspberry Pi configuration is the main WordPress site that serves the front page for www.raspberrypi.org. This consisted of three VMs: two web servers and one database server. WordPress doesn’t provide any built-in functionality for scaling to multiple servers and although the vast majority of pages are driven entirely by the database, some operations, such as installing plugins or uploading media, result in the creation of local files that need to be available to all web servers.

In order to support WordPress in a multi-server configuration, we arrange the two web servers as a primary and a secondary. The primary delivers half of the public requests and also the administration and content creation side. The relevant parts of the local filesystem are then regularly rsynced to the secondary server, which serves public requests and can support the full public usage of the site on its own if necessary.
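As a rough sketch of the sort of job involved (the paths and hostname here are illustrative, not our actual configuration), syncing uploaded media from the primary to the secondary might look like this, run regularly from cron on the primary:

rsync -a --delete /var/www/html/wp-content/uploads/ web2:/var/www/html/wp-content/uploads/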

The website is fronted by our “CDN”. This is a cluster of Mac Mini servers that we use to offload much of the static content traffic, and to load balance across the two web servers.

Step 1 : Figure out what you’re trying to do and write a plan.

More capacity meant a new VM host, and more redundancy meant that it went in a different data centre (Sovereign House, or “SOV”) to the existing VM host (Harbour Exchange, or “HEX”).

Diagnosing the fault on the current VM host is tricky, firstly because it only occurs once or twice a year, and secondly because it was hosting quite a busy live website. So our plan for this was to migrate all VMs onto entirely new hardware in HEX so that we can prod the old box at our leisure. This also gave us a handy opportunity to do something we’ve been wanting to do for a while, which is an OS upgrade on the VM host itself.

To add redundancy to the main site, we can split the two web server VMs across the two sites, but this doesn’t help unless we also replicate the database at the new site. So overall, we wanted to move one web server VM to SOV, and add a database VM, and in HEX we wanted to move the database VM and remaining web server VM to the new hardware. This all needed to be achieved with minimal downtime; as a highly public site we’d rather not have two hours of downtime if we can avoid it.

Step 2 : Move the database

We brought up a second database VM at the second site (SOV). We set this up as a MySQL replica of the primary database VM, which requires only the briefest interruption to service to configure. We then simulated a failure of the primary database server and moved all database services to the alternative site – so the database is now in SOV. Again, this caused only a very brief interruption to service (<5s).

We then moved the old database VM to the new VM host in the original site (HEX) and reconfigured it as a MySQL replica. We now have a primary/secondary setup for MySQL on the new VM hosts, with the secondary in a different location (HEX) to the primary (SOV), and the HEX server is no longer on the faulty hardware.
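For anyone reproducing the general approach, setting up a replica broadly follows the standard MySQL replication recipe. A much-simplified sketch (the hostname, credentials and log coordinates are placeholders) looks like this on the primary:

mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret';
mysql> SHOW MASTER STATUS;

and then on the new replica:

mysql> CHANGE MASTER TO MASTER_HOST='primary.example.com', MASTER_USER='repl', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;
mysql> START SLAVE;

In practice you also need a consistent copy of the data on the replica before starting replication, for example from a snapshot or mysqldump --master-data.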

It’s worth noting that in normal operation, both web servers use the same database server for all queries. In similar arrangements, it’s quite common to have one or more web servers use the slave database server for read queries, and to only send write queries to the master, thereby reducing the load on the primary database server. Unfortunately, standard WordPress doesn’t support this, but there are plugins that do, which we may look at in the future.

Step 3 : Move the web servers

We shut down the secondary web server (HEX), moved it to the replacement VM host in the same data centre and brought it back up. The CDN automatically redirected all web traffic to the primary web server until the secondary came back up. Once this was complete we took the primary web server offline, disabling all administration functions for the main website. Fortunately we’d told everyone in the Raspberry Pi office to drink coffee while we did this, so nobody complained. Again the CDN moved all production traffic to the secondary web server; we then moved the primary VM to our alternative data centre (SOV) to sit next to the primary database server (SOV).

Step 4 : Tell the CDN

Unlike many providers, we have independent routing at each of our sites. This gives us much greater resilience to network problems, but means that moving a VM between sites necessitates a change in IP address. We informed our cluster of Mac Minis in the CDN that the primary web server had moved, and the administration site sprang back into life, with traffic split evenly across the two sites.

Step 5 : Drink coffee

Over the course of about three hours, we’d migrated a high volume production website from a non-redundant, single site configuration to a geographically redundant configuration, moved the primary database and primary web server to a different location and provided a capacity upgrade. This was all done in the middle of the day with no user-facing downtime, and only a modest maintenance window for the administration portal.

With that done, we can start work on the next stage of the plan: migrating the remainder of the VMs away from the old VM host in HEX.

Managed WordPress

January 23rd, 2017

Analogue photo taken with film and real chemistry. Parallax Photographic Cooperative.

WordPress is an excellent content management system that is behind around 25% of all sites on the internet. Our busiest site is Raspberry Pi, which is now constructed from multiple different WordPress installations and some custom web applications, stitched together into one nearly seamless high-traffic website.

We’ve taken the knowledge we’ve gained supporting this site and rolled it out as a managed service, allowing you to concentrate on your content whilst we take care of keeping the site up and secure. In addition to 24/7 monitoring, plugin security scans, and our custom security hardening, we’re also able to assist with improving site performance.

We’re now hosting a broad range of sites on this service. The simpler cases start with customers such as Ellexus, who make very impressive technology for IO profiling and need a reliable, managed platform that they can easily update.

At the other end of the spectrum we have the likes of Parallax Photographic, a co-operative in Brixton who sell photography supplies for people interested in film photography, using real chemistry to develop the photographs and giving a full analogue feel to the resulting prints. Parallax Photographic use WordPress to host their online shop, embedding WooCommerce into WordPress to create their fully functional e-commerce site.

Parallax were having performance and management issues with their existing self-managed installation of WordPress. We transferred it for them to our managed WordPress service, in the process adding not only faster hardware but also performance improvements to their WordPress stack, custom security hardening, managed backups and 24/7 monitoring. We took one hour for the final switch-over at 9am on a Sunday morning, leaving them with a faster and more manageable site. They now have more time to spend fulfilling orders and taking beautiful photographs.

Purrmetrix monitors temperature accurately and inexpensively and, as you can see above, with excellent embeddable web analytics. In addition to hosting their website and WooCommerce site for people to place orders, we are also customers (directly, through their website!), using their service to monitor our Raspberry Pi hosting platform. The heatmap (above) is a real-time export from their system. At the time of writing, it shows a 5°C temperature difference between the cold and hot aisles across one of our shelves of 108 Pi 3s. The service provides automated alerts; if that graph goes red, indicating an over-temperature situation, alerts start firing. During the prototyping and beta phase for our Raspberry Pi hosting platform, we’ve used their graphing to demonstrate that it takes about six hours from dual fan failure to critical temperature issues. This is long enough to make maintenance straightforward.

Also embedded in our Raspberry Pi hosting platform are multiple Power over Ethernet modules from Pi Supply who make a variety of add-ons for the Raspberry Pi, including some decent high quality audio adapters. With the launch of the Raspberry Pi 3 we had to do some rapid vertical scaling of the Pi Supply managed WooCommerce platform – in thirty seconds we had four times the RAM and double the CPU cores to cope with the additional customer load.

 

We host a wide variety of WordPress sites, including Scottish comedy club Mirth of Forth, personalised embroidery for work and leisure wear, and our own blog that you’re currently reading. So if you’d like to have us run your WordPress site for you, from a simple blog to a fully managed e-commerce solution or one of the busiest sites on the Web, we’d love to hear from you at sales@mythic-beasts.com.

IPv6 Update

November 1st, 2016

Sky completed their IPv6 rollout – any device that comes with IPv6 support will use it by default.

Yesterday we attended the annual IPv6 Council to exchange knowledge and ideas with the rest of the UK networking industry about bringing forward the IPv6 rollout.

For the uninitiated, everything connected to the internet needs an address. With IPv4 there are only 4 billion addresses available which isn’t enough for one per person – let alone one each for my phone, my tablet, my laptop and my new internet connected toaster. So IPv6 is the new network standard that has an effectively unlimited number of addresses and will support an unlimited number of devices. The hard part is persuading everyone to move onto the new network.

Two years ago when the IPv6 Council first met, roughly 1 in 400 internet connections in the UK had IPv6 support. Since then Sky have rolled out IPv6 everywhere and by default all their customers have IPv6 connectivity. BT have rolled IPv6 out to all their SmartHub customers and will be enabling IPv6 for their Homehub 5 and Homehub 4 customers in the near future. Today 1 in 6 UK devices has IPv6 connectivity and when BT complete it’ll be closer to 1 in 3. Imperial College also spoke about their network which has IPv6 enabled everywhere.

Major content sources (Google, Facebook, LinkedIn) and CDNs (Akamai, Cloudflare) are all already enabled with IPv6. This means that as soon as you turn on IPv6 on an access network, over half your traffic flows over IPv6 connections. With Amazon and Microsoft enabling IPv6 by default, in stages, on their public clouds, traffic will continue to grow. Already for some ISPs, IPv6 is the dominant protocol. The Internet Society are already predicting that IPv6 traffic will exceed IPv4 traffic around two to three years from now.

LinkedIn and Microsoft both spoke about deploying IPv6 in their corporate and data centre environments. Both companies are suffering exhaustion of private RFC1918 address space – there just aren’t enough 10.a.b.c addresses to cope with organisations of their scale so they’re moving now to IPv6-only networks.

Back in 2012 we designed and deployed an IPv6-only architecture for Raspberry Pi, and have since designed other IPv6-only infrastructures including a substantial Linux container deployment. Educating the next generation of developers about how networks will work when they join the workforce is critically important.

Sneak preview from Mythic Labs, Raspberry Pi netboot

August 5th, 2016

We don’t like to pre-announce things that aren’t ready for public consumption. It’s no secret that we’d love to offer hosted Raspberry Pis in the data centre, and in our view the blocker for this being possible is the unreliability of SD cards, which require physical attention when they fail. So we’ve provided some assistance to Gordon to help with getting netboot working for the Raspberry Pi. We built a sensible-looking netboot setup and spent a fair amount of time debugging and reading packets to help work out why the netboot was occasionally stopping.

This isn’t yet a production service and you can’t buy a hosted Raspberry Pi server. Yet. But if you’d be interested, we’d love to hear from you at sales@mythic-beasts.com.



This is a standard Raspberry Pi 3 with a Power over Ethernet (PoE) adapter. You have to boot the Pi once from a magic SD card which enables netboot. Then you remove the SD card and plug the Pi into the powered network port. PoE means we can power cycle it using the managed switch. At boot, it talks to a standard tftpd server and isc-dhcp-server, which deliver the kernel; the system then runs from an NFS root. It’s a minimal Raspbian Jessie from debootstrap plus sshd, and occupies a mere 381M versus the 1.3G for a standard Raspbian install. The switch is reporting the Pi 3 consuming 2W.
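As a rough sketch of the fileserver side (not our production configuration – the addresses and paths here are made up), the NFS export and the kernel command line that points the Pi at it look something like this:

/etc/exports on the NFS server:
/srv/netboot/pi-root  192.168.1.0/24(rw,no_root_squash,no_subtree_check)

cmdline.txt served over TFTP alongside the kernel:
root=/dev/nfs nfsroot=192.168.1.1:/srv/netboot/pi-root,vers=3 rw ip=dhcp rootwait

with the DHCP server pointing netbooting Pis at the TFTP server via its next-server setting.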

The Raspberry Pi topple is just for fun.