A non-party political broadcast from Mythic Beasts

May 6th, 2015 by

Here at Mythic Beasts it’s fair to say that our staff hold a wide spectrum of political beliefs, but I think one thing we can all agree on is that all the major political parties have at least some irredeemably stupid policies (and possibly also that some of the minor parties only have stupid policies).

This makes voting for a political party a pretty depressing prospect. So, what about voting for an elected representative who will look after our interests?

Our founders reside in two constituencies with notable MPs: Witney and Cambridge.

The MP for Witney is notable for being the Prime Minister. The MP for Cambridge, Julian Huppert, is notable for being a Liberal Democrat and yet still being highly regarded by a large number of his constituents.

Now, if you want good data on whether your MP is any good or not, you should head over to the excellent TheyWorkForYou and find out what they’ve been up to in Parliament on your behalf.

But who wants good data when you can have some anecdotes? Let’s look at two issues that have got us wound up recently.

Firstly, the EU VAT MESS, which causes us an administrative burden far in excess of the value of the affected revenue.

Julian Huppert was very active on behalf of the constituents who contacted him on this issue (Mythic only got as far as a tweet…), including submitting written questions in parliament, which received a predictably useless response.

On the other hand, Paul wrote to David Cameron twice (the first letter went AWOL), and received only a hopeless response which completely failed to address any of the issues raised.

Secondly, banning secure encryption. As a hosting company, the ability to undertake transactions securely online is quite important to our everyday business (see previous notes).

The appalling jeering by other MPs, and the pathetic response given by Theresa May, to Julian Huppert’s questions asked in Parliament demonstrated that he was clearly one of the few MPs who actually grasped the implications of the proposal, rather than just resorting to rhetoric that fuels the fear that terrorism relies on.

As for David Cameron, well, it’s his idea.

So what can we conclude from this? Not a lot, except that we’d probably be in a far better place if parliament were full of representatives who listened to and understood their constituents, rather than those who get in on the strength of a party political vote.

Debian 8.0 “Jessie” now available

April 27th, 2015 by

Jessie

The new stable version of Debian, named “Jessie”, was released on Saturday. The new version is now available for use on all of our Virtual Server hosts. Jessie is fully available at the Mythic Beasts mirror, and we’re included in the default mirror menu so you can easily install directly from our mirror.

Mythic Beasts make extensive use of Debian, and we would like to thank all the Debian developers by donating our usual firkin of beer from the ever excellent Milton Brewery to the Summer Debian UK barbeque, so everyone within the Debian community can have a pint on us. Possibly more than one.

Virtual Servers – SSDs and disk upgrades

April 17th, 2015 by

Following on from recent upgrades to RAM and bandwidth for our Virtual Servers, we’re pleased to announce upgrades to Virtual Server storage options.

We’ve launched a new range of SSD Virtual Servers, offering the ultimate in I/O performance. The range starts with our VPS 2 SSD, which replaces the 40GB disk in our standard VPS 2 with a 10GB SSD.

Like our spinning rust-based Virtual Servers, our SSD storage is local to the host machine, and connected as RAID 1 mirrored pairs to a controller with a battery-backup unit.  This allows us to safely enable a large write cache, further boosting write performance.

We’ve also doubled the disk space available with all of our full HDD-based Virtual Servers, so our basic VPS2 now includes 40GB of disk, 2GB RAM and 1TB of monthly bandwidth.

Existing customers can upgrade to the new storage capacity by typing “upgrade” on the admin console, and then adding new partitions or resizing existing partitions to make use of the new capacity.
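For a typical Linux guest, the steps after running “upgrade” might look something like the sketch below. This is an illustrative sequence for growing an ext4 partition, not an official procedure: the device names are examples (check yours with lsblk first), and growpart comes from the cloud-utils package.

```shell
# After running "upgrade" on the admin console, grow the partition
# and filesystem from inside the VPS. Device names are examples --
# confirm yours with lsblk. Growing is safe online; shrinking is not.
lsblk                 # confirm the disk now shows the new capacity
growpart /dev/vda 1   # extend partition 1 to fill the disk (cloud-utils)
resize2fs /dev/vda1   # grow the ext4 filesystem while mounted
df -h /               # verify the new space is available
```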

 

 

IPv6 bites again

April 10th, 2015 by

Every now and again, one of our users will either get their SMTP credentials stolen, or will get a machine on our network compromised. More often than not, the miscreants responsible will then proceed to send a whole bunch of adverts for V1@gr@ or whatever through our mail servers. This typically results in our mail servers getting (not unreasonably) added to various blacklists, which affects all our users, creates work for us and generally makes for sad times.

We’ve got various measures to counter this, one of which relies on the fact that spam lists are typically very dirty and will generate a lot of rejections. We can use this fact to freeze outgoing mail for a particular user or IP address if it is generating an unreasonable number of delivery failures. The approach we use is based on the generally excellent Block Cracking config.
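Roughly, the idea can be sketched as follows. This is a minimal model, not the actual Exim ACL logic from the Block Cracking config; the function names and the threshold are illustrative only.

```python
# Minimal sketch: freeze a sender's outgoing mail once their
# delivery-failure count passes a threshold, on the theory that
# only someone working through a dirty spam list fails this often.
# The real mechanism lives in Exim ACLs; names here are made up.
from collections import defaultdict

FAILURE_LIMIT = 25          # illustrative threshold, not our real one
failures = defaultdict(int)
frozen = set()

def record_delivery(sender, succeeded):
    """Count failures per sender; freeze the sender once they look
    like a compromised account on a spam run."""
    if sender in frozen:
        return "frozen"
    if not succeeded:
        failures[sender] += 1
        if failures[sender] >= FAILURE_LIMIT:
            frozen.add(sender)
            return "frozen"
    return "ok"
```

The real system keys on both authenticated users and source IPs, and raises a support ticket when it freezes someone, as described below.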

Unfortunately, both we and the author of the above overlooked what happens when you start adding IPv6 addresses to a file which uses “:” as its key/value separator, such as that used by Exim’s lsearch lookup. Yesterday evening, a customer’s compromised machine started a spam run to us over IPv6.
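The failure mode is easy to reproduce: with “:” as the separator, everything after an IPv6 address’s first colon is silently treated as part of the value, so a lookup for the full address never matches. A small illustration (this just mimics the colon split; Exim’s actual parsing is more involved, and its iplsearch lookup type exists precisely because IPv6 keys in such files need to be quoted):

```python
def lsearch_lookup(line):
    """Mimic a naive colon-separated key/value split, as in an
    lsearch-style file: everything before the first ':' is the key."""
    key, _, value = line.partition(":")
    return key.strip(), value.strip()

# An IPv4 key behaves as expected...
print(lsearch_lookup("192.0.2.1: blocked"))    # ('192.0.2.1', 'blocked')

# ...but an unquoted IPv6 key is truncated at its first colon, so
# the entry can never match the address that was actually added.
print(lsearch_lookup("2001:db8::1: blocked"))  # ('2001', 'db8::1: blocked')
```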

Our system raises a ticket in our support queue every time it adds a new IP to our block list so that we can get in touch with the customer quickly. Unfortunately, if the lookup doesn’t work because you haven’t correctly escaped an IPv6 address, it’ll happily keep adding the same IP for each spam email seen, and raising a new ticket each time. Cue one very busy support queue.

Needless to say, the fix was simple enough, but the moral, if there is one, is: a) test everything that you do with both IPv4 and IPv6, and b) start preparing for IPv6 now, as it’s going to take you ages to find everything that it breaks.

Code that makes assumptions about what an IP address looks like, and that will be broken by IPv6, is almost certainly more prevalent than two-digit year assumptions were 15 years ago.

WP Super Cache vs Raspberry Pi 2

March 3rd, 2015 by

On Monday, the Raspberry Pi 2 was announced, and The Register’s predictions of global geekgasm proved to be about right. Slashdot, BBC News, global trending on Twitter and many other sources covering the story resulted in quite a lot of traffic. We saw 11 million page requests from over 700,000 unique IP addresses in our logs from Monday, around 6x the normal traffic load.

The Raspberry Pi website is hosted on WordPress using the WP Super Cache plugin. This plugin generally works very well, resulting in the vast majority of page requests being served from a static file, rather than hitting PHP and MySQL. The second major part of the site is the forums and the different parts of the site have wildly differing typical performance characteristics. In addition to this, the site is fronted by four load balancers which supply most of the downloads directly and scrub some malicious requests. We can cope with roughly:

  • Cached WordPress: 160 pages/second
  • Non-cached WordPress: 10 pages/second
  • Forum page: 10 pages/second
  • Maintenance page: at least 10,000 pages/second

Back in 2012, during the original launch, we had a rather smaller server setup. That meant we simply put up a maintenance page and directed everyone to buy a Pi direct from Farnell or RS, both of whom had some trouble coping with the demand. We also launched at 6am GMT, so that most of our potential customers would still be in bed, spreading the initial surge over several hours.

This time, being a larger organisation with coordination across multiple news outlets and press conferences, the launch time was fixed for 9am on Feb 2nd 2015. Everything would happen then, apart from the odd journalist with premature timing problems – you know who you are.

Our initial plan was to leave the site up as normal, but set the maintenance page to be the launch announcement. That way if the launch overwhelmed things, everyone should see the announcement served direct from the load balancers and otherwise the site should function as normal. Plan B was to disable the forums, giving more resources to the main blog so people could comment there.

The Launch

It is a complete coincidence that our director Pete took off to go to this isolated beach in the tropics five minutes after the Raspberry Pi 2 launch.

At 9:00 the announcement went live. Within a few minutes, traffic volumes on the site had increased by more than a factor of five, and the forum users were starting to make comments and chatter to each other. The server load increased from its usual level of 2 to over 400: we now had a massive queue of users waiting for page requests, because all of the server CPU time was being taken generating those slow forum pages, which starved the main blog of server time to deliver the fast cached pages. At this point our load balancers started to kick in and deliver the maintenance page to a large fraction of our site users – the fallback plan. This did annoy the forum and blog users who had posted comments and received the maintenance page back, having just had their submission thrown away – sorry.

During the day we did a little bit of tweaking to the server to improve throughput: removing the nf_conntrack module from the firewall to free up CPU for page rendering, and changing the Apache settings to queue earlier, so that people received either their requested page or the maintenance page more quickly.

Disabling the forums freed up lots of CPU time for the main page and gave us a mostly working site. Sometimes it’d deliver the maintenance page, but mostly people were receiving cached WordPress pages of the announcement and most of the comments were being accepted.

Super Cache not quite so super

Unfortunately, we were still seeing problems. The site would cope with the load happily for a good few minutes, and then suddenly have a load spike to the point where pages were not being generated fast enough. It appears that WP Super Cache wasn’t behaving exactly as intended.

When someone posts a comment, Super Cache invalidates its cache of the corresponding page, and starts to rebuild a new one, but providing you have this option ticked…

supercache-anonymouse

…(we did), the now out-of-date cached page should continue to be served until it is overwritten by the newer version.

After a while, we realised that the symptoms we were seeing were entirely consistent with this not working correctly, and once you hit very high traffic levels this behaviour becomes critical. If cached versions are not served whilst the page is being rebuilt, then subsequent requests also trigger rebuilds, and you spend more and more CPU time generating copies of the missing cached page. That makes the rebuild take even longer, so you have to build more copies, each of which now takes even longer still.

Now we can build a ludicrously over-simplified model of this with a short bit of perl, and draw a graph of how long it takes to rebuild the main page based on hit rate – and it looks like this.

Supercache performance

This tells us that performance quite suddenly falls off a cliff at around 60-70 hits/second. At 12 hits/sec (typical usage) a rebuild of the page completes in considerably under a second; at 40 hits/sec (very busy) it takes about 4s; at 60 hits/sec, 30s; and at 80 hits/sec, well over five minutes. At that point the load balancers kick in and just display the maintenance page, then wait for the load to die down before starting to serve traffic as normal again.
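The shape of that model (originally a short bit of perl) is easy to approximate. Here is a minimal sketch in Python rather than perl; the capacity and render-cost constants are illustrative assumptions chosen only to put the cliff in roughly the right place, not the real site’s numbers.

```python
# Crude model of cache-rebuild time vs hit rate: while a page is
# being rebuilt, every request for it also triggers a rebuild, and
# all those rebuilds share the same CPU. Constants are illustrative.
CORES = 6          # CPU-seconds of rendering capacity per second
RENDER_CPU = 0.1   # CPU-seconds to render one uncached page

def rebuild_time(hits_per_sec):
    """Time for one rebuild to finish when concurrent rebuilds eat
    the spare capacity; diverges as demand approaches capacity."""
    demand = hits_per_sec * RENDER_CPU   # CPU-sec/sec of rebuild work
    if demand >= CORES:
        return float("inf")              # the cliff: it never catches up
    return RENDER_CPU / (1 - demand / CORES)

for rate in (12, 40, 55, 59):
    print(rate, "hits/sec ->", round(rebuild_time(rate), 2), "s")
```

The exact numbers depend on the constants, but the qualitative behaviour matches the graph: flat at low rates, then a sudden wall as rebuild demand approaches total CPU capacity.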

We still don’t know exactly what the cause of this was: either it’s something else with exactly the same symptoms, or this setting wasn’t working, or it was interacting badly with another plugin. But as soon as we’d figured out the issue, we implemented the sensible workaround: we put in a rewrite hack to serve the front page and announcement page completely statically, then recreated the page afresh every five minutes from cron, picking up all the newest comments. As if by magic, the load returned to sensible levels, although there was now a small delay before new comments appeared.
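The workaround amounts to two pieces: a rewrite rule that sends front-page requests to a static snapshot, and a cron job that refreshes that snapshot. A sketch of the idea follows; the paths, port and rules are hypothetical, not the site’s actual configuration.

```
# Cron entry (illustrative): rebuild the static snapshot every five
# minutes, fetching from a hypothetical backend port that bypasses
# the rewrite rule below. Write to a temp file and rename, so readers
# never see a half-written page.
*/5 * * * * curl -s http://localhost:8080/ > /var/www/static/front.html.tmp && mv /var/www/static/front.html.tmp /var/www/static/front.html

# Apache rewrite (illustrative): serve the snapshot for the front page.
RewriteEngine On
RewriteRule ^/?$ /static/front.html [L]
```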

Re-enabling the forums

With stable traffic levels, we turned the forums back on. And then immediately off again. They very quickly backed up the database server with connections, causing both the forums to cease working and the main website to run slowly. A little further investigation into the InnoDB parameters revealed some contention on database locks; we reconfigured, and this happened.

Our company pedant points out that actually only the database server process fell over, and that it needed restarting, not rebooting. Cunningly, we’d managed to find a set of improved settings for InnoDB that allowed us to see all the tables in the database but not read any data out of them. A tiny bit of fiddling later and everything was happy.

The bandwidth graphs

We ended up with a traffic graph that looks like this.

raspi-launch-bwgraph

On the launch day it’s a bit lumpy; this is because when we’re serving the maintenance page, nobody can get to the downloads page, and downloads of operating system images and NOOBS normally dominate the traffic graphs. Over the next few days the HTML volume starts dropping, and the number of system downloads for newly purchased Raspberry Pis starts increasing rapidly. At this point we were reminded of the work we did last year to build a fast distributed downloads setup, and were rather thankful for it, because we’re considerably beyond the traffic levels you can sanely serve from a single host.

Could do a bit better

The launch of the Raspberry Pi 2 was a closely guarded secret, and although we were told in advance, we didn’t have a lot of time to prepare for the increased traffic. There are a few things we’d like to have improved, and we will be talking to Raspberry Pi about them over the coming months. One is to upgrade the hardware, adding some more cores and RAM to the setup. Whilst we’re doing this, it would be sensible to look at splitting the parts of the site into different VMs, so that the forums/database/Wordpress have some separation from each other, making it easier to scale things. It would have been really nice to have put our extremely secret test setup with HipHop Virtual Machine into production, but that’s not yet well enough tested for primetime, although a seven-fold performance increase on page rendering certainly would be nice.

Schoolboy error

Talking with Ben Nuttall, we realised that the stripped-down, minimal, super-fast maintenance page didn’t have analytics on it. So the difference between our figure of 11 million page requests and Ben’s of 1.5 million indicates how many people during the launch saw the static maintenance page rather than a WordPress-generated page with comments. In hindsight, putting analytics on the maintenance page would have been a really good idea. Not every HTTP request that received the maintenance page was necessarily a request to see the launch, nor was each one definitely a different visitor. Without detailed analytics that we don’t have, we can only estimate that the number of people who saw the announcement was more than 1.5 million but less than 11 million.

Flaming, Bleeding Servers

Liz occasionally has slightly odd ideas about exactly how web servers work:

is-this-thing-on

Now, much to her disappointment we don’t have any photographs of servers weeping blood or catching fire. [Liz interjects: it’s called METAPHOR, Pete.] But when we retire servers we like to give them a bit of a special send-off.

Virtual Server performance boost

February 6th, 2015 by

cloud-cpuWe’ve just added an option to allow Virtual Servers to get full access to the CPU extensions available on the host server.

By default, virtual servers see a subset of CPU features that is available consistently across all of our hosts. For most users this has no impact on performance, but for some applications, such as performing certain types of encryption, speed can be substantially improved if certain processor extensions are available.

We’ve noticed significant improvements in OpenVPN throughput and latency after turning on this option on some of our servers.
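AES-NI is a good example of such an extension: it dramatically accelerates AES, which is why OpenVPN benefits. On Linux, you can check whether the guest sees a given extension by inspecting the flags line of /proc/cpuinfo. A small helper, shown here with a made-up sample string rather than a real cpuinfo dump:

```python
def cpu_flags(cpuinfo_text):
    """Parse the 'flags' line from /proc/cpuinfo-style text and
    return the set of CPU feature flags it lists."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

# Illustrative sample; on a real VPS use open("/proc/cpuinfo").read()
sample = "flags\t\t: fpu vme sse sse2 aes avx"
print("aes" in cpu_flags(sample))   # True -> AES-NI visible to the guest
```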

CPU mode on our virtual servers can be configured using the “cpu” command on the admin shell.

Bring Your Own ISO

January 30th, 2015 by

Cloud CDROMOur Virtual Servers come with a virtual CD drive, allowing you to load an ISO image from our library and install an operating system of your choice, configured exactly how you want it.

We’ve just launched our “Bring Your Own ISO” feature, allowing you to upload your own ISO images, giving you complete freedom to install your choice of operating system, or to run a “live CD” distribution.

All users have a free 5GB allocation on our storage cluster for images, and files can be fetched from anywhere on the internet via HTTP, HTTPS, git, FTP or rsync.

Customers can upload a boot image via the “Boot Media” option on our customer control panel.

Virtual Servers: double the RAM, more CPUs

January 12th, 2015 by
800GB of RAM – just some of the new memory added to our hosts over Christmas

As many of our existing VPS customers will be aware, over the holiday period we had a number of late nights in data centres, installing additional RAM into our virtual server hosts.

We’re now pleased to announce new specs for our Virtual Servers with a doubling of RAM at every price point.

Combined with the substantial upgrades to Virtual Server bandwidth allowances announced last month, our basic server now comes with 2GB of RAM and 1TB/month of bandwidth for £12.50+VAT per month (or less if paid yearly).

But that’s not all. Whilst we had the lids open, we also added additional CPUs meaning that for most hosts, CPU contention has been halved, giving a further boost to performance (RAM remains, and always has been, uncontended). Our higher spec servers have also received an increase in the number of virtual CPUs allocated.

Naturally, our servers retain all the great features that our customers are used to, including:

  • Full IPv6 connectivity
  • Virtual VNC and serial consoles
  • Choice of independently-routed data centres
  • DNS services for your domain
  • Installation from your choice of ISO image
  • Optional BGP feeds for AnyCast services
  • Optional Server Management

Most existing customers will have already received the new RAM allowance. If you were on a host that didn’t need a hardware upgrade, your VPS won’t have been rebooted; simply shut down your server, run “upgrade” on the admin console, and reboot.

We’re not done yet. Watch this space for further upgrades and improvements to our Virtual Servers.

Virtual Server bandwidth upgrades

December 19th, 2014 by

We know from experience that some of our customers get very busy at Christmas.

We know what you got for Christmas…

As an early Christmas present to our Virtual Server customers, we’ve just rolled out a substantial bandwidth upgrade across all our VPS range. Our 1GB VPS 1 server now comes with a 1TB/month bandwidth allowance, a tenfold increase on the old quota, with similar upgrades across the range.

You can find full details of the new allowances on our virtual server specs page.

All of our virtual servers come with IPv6 connectivity, VNC and serial consoles, free DNS services for any domains hosted on your server, and freedom to install the OS of your choice.

We’ve got more upgrades planned for our virtual servers in the near future, so watch this space.

The EU VAT MESS (again)

December 19th, 2014 by

Those of you who follow us on Twitter are probably bored of us banging on about this, but the true lunacy of the EU VAT MESS has only just come to light. It turns out that the UK and other states are going to compensate the tax haven at the centre of this to the tune of €1.1bn for loss of tax revenue as a result of the rule change.

Let’s re-cap:

1. Large companies such as Amazon indulge in VAT tourism, by paying very low Luxembourg VAT when supplying to customers in the UK and other countries.

2. The EU declares this to be unfair tax avoidance, and decides to close the “loop hole” by making electronic services subject to VAT in the customer’s country rather than the seller’s.

3. Faced with the prospect of thousands of companies having to register for and operate VAT in 28 member states, HMRC sets up their VAT “one stop shop”, the VAT MOSS. This avoids filing separate returns to different states, but still requires sellers to track the multitude of different VAT rates in operation by different states, including obscure regional variations such as the Portuguese Azores.

4. The legislation does not include any thresholds for inter-state VAT, meaning that if you sell a single item to a consumer in another EU state, you must register for the VAT MOSS and charge EU VAT.

5. You can’t register for the VAT MOSS unless you are registered for UK VAT, meaning that if you make a single sale to another EU state, you’re obliged to start operating UK VAT on all your sales even if you’re well below the UK VAT threshold of £81k.

6. The guidance requires companies to collect an often impossible set of non-conflicting data to prove the consumer’s location, and then retain those records for 10 years.

7. HMRC’s original impact assessment recognised but dismissed this problem by vastly underestimating the number of businesses affected, and claiming that most small companies sell through online market places, giving as much as 70% of their revenue to companies such as… Amazon.

8. HMRC back-tracks on (5) by stating that you can avoid charging VAT on your UK sales by splitting your sales into two separate businesses – a technique known as revenue splitting which is normally considered illegal tax evasion.

9. Many companies (ourselves included) realise that the cost of compliance is greater than the affected revenue, and consider simply not supplying to consumers in other EU states, but are warned that this may be illegal under EU anti-discrimination laws.

10. Companies start to indulge in farcical discussions with HMRC about what constitutes an e-service. In some cases, by making the business less efficient (for example, by manually attaching a PDF to an email rather than sending it automatically), the service will no longer be considered an e-service.

11. Despite acknowledging that the change would impact businesses that are not currently registered for UK VAT, HMRC apparently did nothing to communicate the change to the rules to anyone other than VAT registered companies. Vince Cable then has the gall to tell companies just finding out about the change that he has done a lot to communicate the change.

12. Recognising that it’s about to lose a huge wedge of tax revenue, Luxembourg ups its VAT rate to 17%, a move which would probably have significantly curtailed VAT tourism on its own.

13. The UK and other member states agree to compensate Luxembourg to the tune of €1.1bn for the VAT revenue that they will lose as a result of companies ceasing to use Luxembourg as their tax avoidance state of choice.

You really couldn’t make it up.

As it stands, many micro-businesses are planning to simply shut up shop rather than be killed by VAT bureaucracy.

For the record, we were already VAT registered, and did find out about the changes early this year, but the work of updating our billing system to cope with a plethora of different VAT rates, and the necessary “proof of residence” steps has been a massive and expensive distraction from doing useful things like upgrading our Virtual Servers.