Raspberry Pi Zero: Not executing a trillion lines of PHP

November 27th, 2015 by

A number of people noticed that Raspberry Pi had launched their $5 Pi Zero yesterday. We had advance warning that something was going to happen, even if we didn’t know exactly what. When the Pi 2 launched we had some difficulty keeping up with comment posting and cache invalidation. We gave a very well-received talk on the history and the launch at the UK Network Operators Forum, which you can see below.

[Embedded video: UKNOF talk]

Since then we’ve worked with Ben Nuttall to rebuild the entire hosting setup into an IPv6-only private cloud, hosted on one of our very large servers. This gives us:

  • Containment: One part of the site can’t significantly impact the performance of another.
  • Scalability: We can pull VMs into our public cloud and duplicate them if required.
  • Flexibility: We no longer have to have a single software stack that supports everything.

For the Pi 2 launch we sustained around 4,500 simultaneous users before we really started struggling with comment posting and cache invalidation. So our new plan was to be able to handle over 5,000 simultaneous site users before we needed to start adding more VMs. This equates to around 1,000 hits per second.

In order to do this, we need to make sure we can serve the 90% of requests that are most common without touching the disks or the database, and without using more than 10ms of CPU time per request. We want to reserve our capacity for the pages that have to be dynamic – comment posting and the forums, for example – and make all the common content as cheap as possible.

So we deployed a custom script, staticify. It automatically takes the most popular and important pages, renders them to static HTML, and rewrites the webserver configuration to serve the static copies instead. It runs frequently, so the cache is never more than 60 seconds old and the pages still appear dynamic. It also means that we serve a file from the filesystem cache (RAM) instead of executing WordPress. During the day we improved this code and deployed it to the MagPi site as well, including some horrid hackery to cache popular GET request combinations. A sketch of the general approach is below.

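The staticify script itself isn’t published, but a minimal sketch of the idea (in Python, with a hypothetical backend address, document root and URL list) looks something like this:

```python
#!/usr/bin/env python3
"""Minimal sketch of a staticify-style job: render the most popular
pages to static HTML so the webserver can serve them straight from
the filesystem cache instead of executing WordPress."""

import pathlib
import subprocess
import urllib.request

BACKEND = "http://127.0.0.1:8080"              # assumed WordPress backend
STATIC_ROOT = pathlib.Path("/var/www/static")  # assumed static document root
POPULAR_URLS = ["/", "/blog/", "/downloads/"]  # assumed hot pages

def render(url_path: str) -> None:
    """Fetch a dynamically rendered page and write it out as a file."""
    out = STATIC_ROOT / (url_path.strip("/") or ".") / "index.html"
    out.parent.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(BACKEND + url_path, timeout=30) as resp:
        out.write_bytes(resp.read())

def main() -> None:
    for url in POPULAR_URLS:
        render(url)
    # Nudge the webserver to pick up any rewritten configuration.
    subprocess.run(["apache2ctl", "graceful"], check=False)

if __name__ == "__main__":
    main()
```

Run from cron once a minute, the static copies are never more than 60 seconds stale, which is what makes them appear dynamic.
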
Some very vague back-of-the-envelope calculations give us:
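
(Reconstructing the gist from the numbers above, since the original figures haven’t survived here: 1,000 hits/second at no more than 10ms of CPU each is 10 CPU-seconds of work per second – roughly ten cores kept busy serving the common content, leaving the rest of the machine for the genuinely dynamic pages.)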

It’s fair to say that we exceeded our target of 5,000 simultaneous users:

[Embedded tweet]

Liz Upton was quite pleased:

[Embedded tweet]

Not to mention a certain amount of respect from our peers:

[Embedded tweet]

If you deployed the blog unoptimised to AWS and just had auto-magic scaling, we’d estimate the bills at many tens of thousands of dollars per month – money that can instead be spent on education. In addition, you’d still need to make sure you could effortlessly scale to thousands of cores without a single bottleneck somewhere in the stack causing them all to lie idle. Under the traffic levels mentioned above, the original version of the site (with its hopeless analytics plugin that processed the complete site logs on every request) would have consumed more computing power than has ever existed. At this scale optimisation is a necessity, and if you’re going to optimise, you might as well optimise well.

That said, we think some of our peers possibly overstated our importance in the grand scheme of things:

[Embedded tweet]

WP Super Cache vs Raspberry Pi 2

March 3rd, 2015 by

On Monday, the Raspberry Pi 2 was announced, and The Register’s predictions of global geekgasm proved to be about right. Slashdot, BBC News, global trending on Twitter and many other outlets covering the story resulted in quite a lot of traffic: we saw 11 million page requests from over 700,000 unique IP addresses in Monday’s logs, around six times the normal load.
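
(For scale: 11 million requests over a day averages out to roughly 130 requests/second, with the launch-morning peak far higher.)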

The Raspberry Pi website is hosted on WordPress using the WP Super Cache plugin. This plugin generally works very well, and results in the vast majority of page requests being served from a static file rather than hitting PHP and MySQL. The other major part of the site is the forums, and the different parts of the site have wildly differing performance characteristics. In addition, the site is fronted by four load balancers, which supply most of the downloads directly and scrub some malicious requests. We can cope with roughly:

  • Cached WordPress page: 160 pages/second
  • Non-cached WordPress page: 10 pages/second
  • Forum page: 10 pages/second
  • Maintenance page: at least 10,000 pages/second

Back in 2012, during the original launch, we had a rather smaller server setup, so we simply put a maintenance page up and directed everyone to buy a Pi directly from Farnell or RS, both of whom had some trouble coping with the demand. We also launched at 6am GMT so that most of our potential customers would still be in bed, spreading the initial surge over several hours.

This time, with a larger organisation coordinating multiple news outlets and press conferences, the launch time was fixed for 9am on Feb 2nd 2015. Everything would happen then, apart from the odd journalist with premature timing problems – you know who you are.

Our initial plan was to leave the site up as normal, but to set the maintenance page to be the launch announcement. That way, if the launch overwhelmed things, everyone would still see the announcement, served directly from the load balancers; otherwise the site would function as normal. Plan B was to disable the forums, giving more resources to the main blog so people could comment there.

The Launch

[Image: an isolated beach in the tropics]

It is a complete coincidence that our director Pete took off to go to this isolated beach in the tropics five minutes after the Raspberry Pi 2 launch.

At 9:00 the announcement went live. Within a few minutes, traffic on the site had increased by more than a factor of five, and the forum users were starting to make comments and chatter to each other. The server load increased from its usual level of 2 to over 400: we now had a massive queue of users waiting for pages, because all of the server CPU time was being spent generating slow forum pages, which starved the main blog of the CPU time needed to deliver its fast cached pages. At this point our load balancers started to kick in and deliver the maintenance page to a large fraction of our site users – the fallback plan. This did annoy the forum and blog users who had posted comments and received the maintenance page back, having just had their submission thrown away – sorry.

During the day we did a little tweaking to improve throughput: removing nf_conntrack from the firewall to free up CPU for page rendering, and changing the Apache settings so that requests overflowed to the maintenance page earlier, meaning people received either their requested page or the maintenance page more quickly.
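
The exact changes weren’t published; purely as an illustration of the kind of tweak involved (every value below is invented), exempting web traffic from connection tracking and keeping Apache’s queue shallow might look like:

```
# Stop tracking connection state on web traffic to free up CPU
iptables -t raw -A PREROUTING -p tcp --dport 80 -j NOTRACK
iptables -t raw -A OUTPUT -p tcp --sport 80 -j NOTRACK

# Apache (2.2 prefork) - keep the backlog shallow so overflow goes
# to the load balancers' maintenance page quickly
MaxClients    150
ListenBacklog 64
```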

Disabling the forums freed up lots of CPU time for the main page and gave us a mostly working site. Sometimes it’d deliver the maintenance page, but mostly people were receiving cached WordPress pages of the announcement and most of the comments were being accepted.

Super Cache not quite so super

Unfortunately, we were still seeing problems. The site would cope with the load happily for a good few minutes, and then suddenly have a load spike to the point where pages were not being generated fast enough. It appears that WP Super Cache wasn’t behaving exactly as intended.

When someone posts a comment, Super Cache invalidates its cache of the corresponding page, and starts to rebuild a new one, but providing you have this option ticked…

[Image: WP Super Cache settings screenshot]

…(we did), the now out-of-date cached page should continue to be served until it is overwritten by the newer version.

After a while, we realised that the symptoms we were seeing were entirely consistent with this not working correctly, and at very high traffic levels this behaviour becomes critical. If cached versions are not served while the page is being rebuilt, then subsequent requests also trigger rebuilds; you spend more and more CPU time generating copies of the missing cached page, which makes each rebuild take longer, which means still more rebuilds start, each of which now takes even longer.

Now we can build a ludicrously overly simple model of this with a short bit of perl and draw a graph of how long it takes to rebuild the main page based on hit rate – and it looks like this.

[Graph: Super Cache page rebuild time vs. hit rate]

This tells us that performance falls off a cliff rather suddenly at around 60-70 hits/second. At 12 hits/sec (typical usage) a rebuild of the page completes in considerably under a second; at 40 hits/sec (very busy) it takes about 4s; at 60 hits/sec, 30s; and at 80 hits/sec, well over five minutes. At that point the load balancers kick in and just display the maintenance page, waiting for the load to die down before serving traffic as normal again.
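
The original perl isn’t included in the post, but the same sort of toy model is easy to re-create (here in Python): while a rebuild is in progress, every new hit starts another rebuild, and all the in-flight rebuilds share the CPU fairly. The CORES_PER_COST figure is an assumption, tuned so the output roughly matches the numbers above.

```python
#!/usr/bin/env python3
"""Toy model of the cache stampede: every request that arrives while
the page is being rebuilt starts another rebuild, and the CPU is
shared fairly between all rebuilds in flight."""

CORES_PER_COST = 8.0  # assumed cores available / CPU-seconds per rebuild

def rebuild_time(hits_per_sec: float, dt: float = 0.001) -> float:
    """Seconds until the first rebuild finishes at a given hit rate."""
    t = 0.0
    done = 0.0                              # progress of the first rebuild
    while done < 1.0:
        in_flight = 1.0 + hits_per_sec * t  # rebuilds competing for CPU
        done += (CORES_PER_COST / in_flight) * dt
        t += dt
    return t

for rate in (12, 40, 60, 80):
    print(f"{rate:2d} hits/sec -> rebuild takes {rebuild_time(rate):7.1f}s")
```

In closed form this model gives t = (exp(r*c/N) - 1)/r, which grows exponentially with the hit rate r – hence the cliff.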

We still don’t know exactly what the cause was: either it’s something else with exactly the same symptoms, or this setting wasn’t working, or it was interacting badly with another plugin. But as soon as we’d figured out the issue, we implemented the sensible workaround: we put in a rewrite hack to serve the front page and announcement page completely statically, then re-created the pages once every five minutes from cron, picking up all the newest comments. As if by magic the load returned to sensible levels, although there was now a small delay before new comments appeared. The general shape was something like the sketch below.
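
The actual hack isn’t shown in the post; a sketch of the shape of it, with made-up paths, might be an Apache rewrite plus a cron job:

```
# Serve the pre-rendered front page to everyone except localhost,
# so the cron job below can still fetch the live page to re-render it
RewriteEngine On
RewriteCond %{REMOTE_ADDR} !^127\.0\.0\.1$
RewriteRule ^/?$ /static/front.html [L]

# Re-create the static copy every five minutes, picking up new comments
*/5 * * * * curl -s http://localhost/ -o /tmp/front.html && mv /tmp/front.html /var/www/static/front.html
```

Writing to a temporary file and then mv-ing it into place keeps the swap atomic, so visitors never see a half-written page.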

Re-enabling the forums

With stable traffic levels, we turned the forums back on. And then immediately off again. They very quickly backed up the database server with connections, causing both the forums to cease working and the main website to run slowly. A little further investigation into the InnoDB parameters revealed some contention on database locks, so we reconfigured – and promptly knocked the database server over.

Our company pedant points out that actually only the database server process fell over, and it needed restarting, not rebooting. Cunningly, we’d managed to find a set of “improved” settings for InnoDB that allowed us to see all the tables in the database but not read any data out of them. A tiny bit of fiddling later and everything was happy.
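
The post doesn’t say which parameters were involved; the usual suspects for this kind of lock contention (all values below invented) live in my.cnf:

```
[mysqld]
innodb_buffer_pool_size   = 8G    # keep the working set in RAM
innodb_thread_concurrency = 16    # cap threads running inside InnoDB
innodb_lock_wait_timeout  = 5     # fail fast rather than piling up
max_connections           = 200   # bound the forum connection flood
```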

The bandwidth graphs

We ended up with a traffic graph that looks like this.

[Graph: launch-week bandwidth]

On the launch day it’s a bit lumpy; this is because when we’re serving the maintenance page nobody can get to the downloads page, and downloads of operating system images and NOOBS normally dominate the traffic graphs. Over the next few days the HTML volume starts dropping and the number of system downloads for newly purchased Raspberry Pis starts increasing rapidly. At this point we were reminded of the work we did last year to build a fast distributed downloads setup, and were rather thankful for it, because we’re considerably beyond the traffic levels you can sanely serve from a single host.

Could do a bit better

The launch of Raspberry Pi 2 was a closely guarded secret, and although we were told in advance, we didn’t have a lot of time to prepare for the increased traffic. There are a few things we’d like to improve, which we’ll be talking about with Raspberry Pi over the coming months. One is to upgrade the hardware, adding some more cores and RAM to the setup. Whilst we’re doing this it would be sensible to look at splitting the parts of the site into different VMs, so that the forums, database and WordPress have some separation from each other and are easier to scale. It would have been really nice to have put our extremely secret test setup with HipHop Virtual Machine into production, but that’s not yet well enough tested for primetime – although a seven-fold performance increase on page rendering certainly would be nice.

Schoolboy error

Talking with Ben Nuttall, we realised that the stripped-down, minimal, super-fast maintenance page didn’t have analytics on it. So the difference between our figure of 11 million page requests and Ben’s of 1.5 million indicates how many people during the launch saw the static maintenance page rather than a WordPress-generated page with comments. In hindsight, putting analytics on the maintenance page would have been a really good idea. Not every HTTP request that received the maintenance page was necessarily a request to see the launch, nor was each definitely a different visitor. Without detailed analytics that we don’t have, we can estimate the number of people who saw the announcement at more than 1.5 million but fewer than 11 million.

Flaming, Bleeding Servers

Liz occasionally has slightly odd ideas about exactly how web servers work:

[Embedded tweet]

Now, much to her disappointment, we don’t have any photographs of servers weeping blood or catching fire. [Liz interjects: it’s called METAPHOR, Pete.] But when we retire servers we like to give them a bit of a special send-off.

Happy Birthday to Raspberry Pi

March 2nd, 2015 by

Mythic Beasts has been supporting Raspberry Pi since we saw a small Atmel-based prototype that Eben Upton was tremendously proud of and that we thought nobody would ever want. However, we’ve always been wary of betting against Eben, and the fact that we’re now providing enough bandwidth to download copies of NOOBS considerably faster than we can make cups of coffee suggests that, much though it pains us to admit it, we might have been wrong and Eben might have been right.

On Saturday Pete went to join Raspberry Pi at their 3rd birthday party. It was a lot of fun. He drank beer brewed by a brewery controlled by Raspberry Pis, and saw the magical RFID announcing machine declare Liz Upton ‘The Tyrannical Goddess of Time and Space’ – it had clearly been set to maximum flattery mode. There was also a neat synthesiser, with keyboards and a drum machine hooked up, doing all the instrument synthesis on an original single-core Model B, which resulted in this sort of Raspberry Jam:

[Embedded video]

ModMyPi also had a stock of quad-core Pis, meaning Pete was able to buy one in person for real money and skip the delay on the ones he’d ordered online.


But mostly it was just great to see how far we’d come. At the original Raspberry Jam soon after launch in 2012, we met a lot of people who were excited and fired up with plans to do awesome things. Now lots and lots of awesome things have been done.


But I think it was Helen Lynn who summed it up best. Surveying the amazing stuff in the room, she quietly said to me, ‘It really is loads better than when I was six’. Eben Upton’s attempt to recreate the computers of his childhood in the 1980s has completely and utterly failed: it’s much cooler this time round.

Coping with Christmas

January 7th, 2014 by

Our latest blog post is on the Raspberry Pi website: Coping with Christmas.

Raspberry Pi crash

February 20th, 2013 by

After seven months and two weeks of uptime, our Raspberry Pi mirror server fell over yesterday and required a power cycle to bring it back up. It lasted longer than its first USB hard disk, which failed after about six months. Examining the logs suggests that the flash card is dying: yesterday it remounted read-only and the network stack fell over. /var/log is on the external USB drive, so we were able to see that the machine was alive and could log ethernet connects/disconnects; it just couldn’t bring the network back up.

During the time it was up it shipped about 1.5TB of downloads, averaging 3Mbps of traffic and quite regularly peaking at over 10Mbps.