Four years ago we sat on the phone while Eben Upton pushed the button to launch his educational computer, the Raspberry Pi, and we joined him on a fairly remarkable journey. “How do you sell and ship 10,000 Raspberry Pis?” turned into “How do you sell and ship 5,000,000 Raspberry Pis?” and “How do you contain the excitement of the internet when you put a computer on the front of a magazine?”
Today, we’re nervously watching all the server graphs as the new Raspberry Pi 3 launches and goes on sale. We’ve had one to play with for a while, so we did what we do with any shiny new toy: benchmark it in a real-world application.
Our favourite application is rendering WordPress pages for the Raspberry Pi website, so we set up a testbed: Pi2 and Pi3 versus the virtual machines that run the blog. We picked a typical page and tried them out. Initial results weren’t promising – just one fifth the speed of the production VMs. The VMs have the advantage of being on the same physical server as the VM that hosts the database.
Moving the Pis to the same switch as the database server and upgrading from PHP 5.6 to PHP 7 brought Pi 3 page rendering times to less than twice those of the production servers.
[Chart: seconds per page for the Blog VM (PHP 5.6, 24 × 2.4GHz Ivy Bridge), the Pi 2 (PHP 7, 4 × 0.9GHz Cortex-A7) and the Pi 3 (PHP 7, 4 × 1.2GHz Cortex-A53)]
That’s fast enough to be usable. Parallelising requests across all cores, we can probably sustain about 4 hits/second from the Pi 2, 6 hits/second from the Pi 3 and around 50 hits/second from the main site.
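As a sanity check, those throughput estimates are consistent with per-page render times: with requests parallelised across all cores, aggregate throughput is roughly cores divided by seconds per page. A quick back-of-envelope sketch, using the hits/second figures above:

```python
# Back-of-envelope: aggregate throughput ~= cores / seconds_per_page,
# so the implied single-core render time is cores / (hits per second).
servers = {
    "Blog VM (24 cores)": (24, 50),  # (cores, approx hits/second from the text)
    "Pi 2 (4 cores)":     (4, 4),
    "Pi 3 (4 cores)":     (4, 6),
}

for name, (cores, hits_per_sec) in servers.items():
    seconds_per_page = cores / hits_per_sec
    print(f"{name}: ~{seconds_per_page:.2f} s per uncached page")
```

The implied per-page times (roughly 0.5s for the VM, 1s for the Pi 2, 0.7s for the Pi 3) line up with the Pi 3 rendering in less than twice the time of the production servers.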
These figures are for uncached pages. As we’ve seen in the past, 50 hits/second isn’t even close to enough to cope with launch day traffic. In reality, the vast majority of pages we serve are cached and both Pis can adequately serve 100Mbps of cached pages (versus 4Gbps for the main host).
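To put a rough number on the cached case: assuming an average cached page of around 100KB (an illustrative figure, not one taken from our stats), a 100Mbps link works out as follows:

```python
# Rough capacity estimate for serving cached pages over a 100Mbps link.
# The ~100KB average page size is an assumption for illustration only.
link_mbps = 100
page_kb = 100

pages_per_second = (link_mbps * 1_000_000 / 8) / (page_kb * 1000)
print(f"~{pages_per_second:.0f} cached pages/second")
```

Even at that page size, a single Pi saturating its network link is serving far more traffic than its uncached rendering rate could ever manage.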
So we’ve done what any sensible real-world test would do: we’ve put them into the main hosting mix. If you read the response headers, you’ll see on some requests
    HTTP/1.1 200 OK
    ...
    X-Served-By: Raspberry Pi 3
    ...
indicating your page request came off a Raspberry Pi 3.
We’re aiming to serve about 1 in 12 requests from a Pi 2 or a Pi 3, but may adjust this up or down to keep the Pis in action without melting under the load.
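We haven’t published our load-balancer configuration, but a roughly 1-in-12 split can be expressed with nginx-style weighted upstreams. A sketch, with hypothetical hostnames:

```nginx
# Hypothetical upstream pool: 11 parts VM to 1 part Pi, so the Pi
# serves roughly 1 request in 12. Hostnames are illustrative only.
upstream wordpress_backend {
    server blog-vm.example.net:80    weight=11;
    server hostingpi3.example.net:80 weight=1;
}
```

Adjusting the ratio is then a one-line weight change rather than a topology change.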
How’s it done?
The backend for the Raspberry Pi site is built from virtual machines: one VM runs the database, and a pair of VMs generate pages for the main, WordPress-based website. One of the pair is designated as primary and also runs the admin backend for WordPress, which then synchronises files to the other VM and now, additionally, to both Raspberry Pis. All the backend servers sit on a pure IPv6 network; a cluster of dual-stack front-end servers load balances requests through to the IPv6-only backends.
If you have IPv6, you can see the status of the two Pis here:
If you don’t have IPv6, complain to your ISP, then set up a tunnel at he.net.
The two Pis tweet directly at @hostingpi3 and @hostingpi2. Sadly, Twitter doesn’t support IPv6, so that traffic goes via our NAT64 service, which provides outbound connectivity from our IPv6-only servers to legacy parts of the internet.
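We haven’t described our NAT64 setup in detail, but the usual scheme (RFC 6052) embeds the IPv4 destination inside the well-known 64:ff9b::/96 prefix, and the translator unmaps it on the way out. A small sketch of that mapping, using an address from the IPv4 documentation range:

```python
import ipaddress

# Map an IPv4 address into the RFC 6052 well-known NAT64 prefix
# 64:ff9b::/96 by placing the 32 IPv4 bits in the low-order bits.
def nat64_map(ipv4: str, prefix: str = "64:ff9b::") -> str:
    v4 = int(ipaddress.IPv4Address(ipv4))
    v6 = int(ipaddress.IPv6Address(prefix)) | v4
    return str(ipaddress.IPv6Address(v6))

print(nat64_map("192.0.2.1"))  # -> 64:ff9b::c000:201
```

An IPv6-only server simply connects to the mapped address, and the NAT64 gateway forwards the traffic to the real IPv4 host.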