Covid-19 update

March 27th, 2020 by

Microscopy image of the coronavirus.
(Image copyright NIAID; licensed under CC-BY 2.0)

Covid-19 has dramatically changed life in the UK, and the lockdown announced on Monday has led to further changes to how we, and our customers, are operating. This page provides an update to our previously announced Covid-19 plan.

We’re happy to report that our staff member who had some of the symptoms of covid-19 is now fully recovered and has returned to our team after a few days of rest.

Data centre access changes

Our data centre suppliers have altered their operations: 24/7 walk-in access is now prohibited and every visit needs booking and justification. The “remote hands” service remains available, but at reduced capacity, as they’ve moved as many staff as possible to home working to minimise their risks.

Equinix, who supply two of our core facilities, have completely closed their facilities in Italy, Germany, France and Spain to customers. Changes are only possible via their remote hands service in those countries.  We have a significant amount of equipment in two Equinix facilities in London and Amsterdam (LD8 and AM5).

Since this announcement on Sunday, we have been anticipating a similar closure being applied to London. This has now been announced and will be disruptive for us, as our normal operating procedures rely on being able to move spare equipment easily between the London data centres where we have a presence. We have been taking steps to increase the spare hardware that we hold at each of our London sites in order to mitigate the impact when the closure comes into force on Tuesday 31st March.

The data centres have well-considered policies to reduce risk, and to handle a confirmed case within the facility with rapid quarantine and deep clean procedures. No access is allowed to customers showing symptoms, and all customers’ temperatures are measured before entry to the facilities. Their operations are also robust in the event they need to manage the facility remotely for a period of time.

We’ve also altered our own policy for data centre access. Customer access to our data-centre space is suspended until further notice, including for documentation-related audits (e.g. ISO27001 compliance). This should have a minimal impact; we only allowed accompanied access previously and visits have always been exceedingly infrequent.

Items shipped to us will be quarantined for 24 hours before opening, as cardboard is thought to self-decontaminate within 24 hours.

Staff members may not meet in a data centre unless it is specifically for a piece of work that requires two people for safety reasons (typically very heavy server deployment), and only if it is to maintain an essential service. To minimise the risk of transmission between key sites, staff members may not visit more than one key data centre on a single trip, and may not visit at all if they are showing symptoms. Data centre visits are being minimised to reduce infection risk. This may limit the range of dedicated servers we are able to provision, and we have decided to stop offering Mac Minis with OS X due to the difficulty of provisioning them remotely.

Customer support

Unsurprisingly, a wholesale shift of the UK to remote working has a significant impact on all kinds of online systems, which are now critical for day to day operations. We’re supporting existing customers to make this transition, as well as provisioning new orders for services that now need to be in the cloud.

We run a system for POhWER that is used by all their advisors to track their cases. This is a critical system; if it’s offline, hundreds of people are unable to work. We maintain this as an active/standby pair split across two of our data centres.

“You’re supporting us to enable our vital work with the most vulnerable people in society to continue in these very trying times and, through your swift upgrade actions, our new fully remote working model is delivering the information, advice and advocacy our clients depend on.”
Sandra Black, Head of Training, Risk and Quality at POhWER.

The shift to remote working means that usage of their system has approximately doubled in ten days, and performance limitations had started to appear. We identified the bottleneck and proposed a cost-effective upgrade combined with some configuration improvements. We then made staff available to apply the changes in an emergency late-night maintenance window, restoring their site to full performance by the next working day.

Direct efforts

We were approached at the weekend by a small team comprising local IT experts and doctors who are building an information website to efficiently distribute information to NHS staff members about how to use and select the correct protective equipment for the environment they are working in. We’re providing the virtual server, security updates, backups and 24/7 monitoring service for this free of charge, which has allowed the volunteer IT experts to concentrate on building the site. We’re expecting go-live in the next day or so once the content is checked.

We’re keen to hear of any other efforts where we may be able to assist.

Adding capacity

We’ve ordered more servers to expand the three busiest VM clouds to support existing customers scaling up, and new customers with urgent needs. We want to avoid a “cloud full” situation, and thankfully our server supplier is fully able to continue to build and deliver servers whilst maintaining 2m spacing between employees.

Covid-19

March 11th, 2020 by

The European Bioinformatics Institute used X-rays and a lot of hard maths to draw the above picture of the main Covid-19 protein.

We provide critical infrastructure to many other companies who rely on our services. Covid-19 could significantly change day to day life everywhere, but at present we don’t believe it will have a significant effect on our operations.

Remote working

Mythic Beasts has always been a distributed company with no central office; all staff members normally work from home. As a result adopting remote working recommendations or enforcement will have no significant impact on our day to day operations. Normally, we have a weekly optional meeting for staff around Cambridgeshire and a compulsory all company meeting roughly once every six weeks. Migrating these meetings to conference calls will have minimal operational impact.

Reducing travel

Our sales process has always been online. We don’t routinely meet customers and have a very light attendance at conferences and industry events. Our next scheduled events are UKNOF (Manchester, April) and LINX (Manchester), both of which offer remote participation should it become necessary.

Financial stability

Mythic Beasts has been profitable every year since 2001 and carries no debt. We maintain significant cash reserves so we can self-finance routine expansion and other business opportunities, and weather unforeseen circumstances. We understand that many consider this an inefficient use of capital, but we can definitely pay our bills in the event of a global pandemic crippling the economy for a short period.

Sensible staff policies

Our staff members are provided with private healthcare. They also get sick leave, which they are expected to take if ill. We provide 30 days of holiday plus bank holidays to all staff members as standard, and we strongly encourage everyone to take a contiguous two-week holiday so that we know we can operate without them in the event of sickness. We have sufficient staffing levels that, in the event of multiple staff members being ill for an extended period, day to day operations can be maintained and only longer-term projects should be delayed.

Supplier issues

We maintain stock of key components to cope with hardware failure although for some months we’ve been seeing very long lead times for hardware, especially CPUs and SSDs. This may impact our lead times for larger orders of dedicated servers or custom hardware. We think it likely that our data centre and connectivity providers may implement a change freeze as they do annually over the Christmas period and at other key times (e.g. the 2012 Olympics). This is a familiar operating environment and utilising multiple providers in multiple countries will help to mitigate this.

Update, 2020-03-12: We run our own private phone conference and IRC service, so we’re not affected by the reported load issues at the major public providers (Slack, Teams, Zoom etc.).

In summary, we think that day to day operations shouldn’t be affected but if you have any concerns get in touch at support@mythic-beasts.com.

Working with talented people

February 14th, 2020 by

You can buy another copy in a bookshop if your cat refuses to return the one you already own.


We like working with talented people be they staff, customers or suppliers. That’s why we give discounts to people who can navigate our jobs challenge even if they don’t want to work for us.

Occasionally we’ve drafted in Gytha Lodge to help with the copywriting of various articles, turning a jumble of thoughts into a coherent and interesting article.

Formerly an aspiring author, her full title is now Richard and Judy book club pick and Sunday Times bestselling author, Gytha Lodge.

We’re also pleased to report that she took our advice on her first book seriously and the new book starts with a murder being watched over a webcam.

Security in DNS: TLSA records now available in our control panel to support DANE

February 11th, 2020 by

The Internet is better when it’s secure. Finally, thanks to Let’s Encrypt it’s possible to automatically get SSL certificates free of charge and as a result the Internet is dramatically more secure than it used to be. If you’ve used our DNS API you may have discovered that you can verify Let’s Encrypt SSL certificate requests using DNS records, including issuing wildcard certificates.
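For illustration, DNS-based (dns-01) validation works by publishing a one-off challenge token in your zone, which Let’s Encrypt looks up before issuing the certificate. A minimal sketch of the record involved, using a placeholder domain and token value:

; one-off token published for a Let's Encrypt dns-01 challenge
; (example.com and the token value are placeholders)
_acme-challenge.example.com. 300 IN TXT "K3PJf...placeholder-token...9mQ"

A wildcard certificate for *.example.com is validated with a token at the same _acme-challenge.example.com name, which is why DNS-based validation is the only way to obtain one.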

We support secure DNS (DNSSEC) which prevents DNS records from being forged, making the process of authenticating your SSL certificate through DNS records far more secure than the email-based authentication that was typically used for certificates issued by commercial certificate authorities. We have implemented support for CAA records which uses DNS to restrict the certificate authorities that can issue your certificates. This is most useful if the DNS is trustworthy which, again, requires DNSSEC.
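For illustration, a pair of CAA records restricting issuance might look like this (example.com is a placeholder):

; only Let's Encrypt may issue certificates for this name
example.com. 3600 IN CAA 0 issue "letsencrypt.org"
; and no authority may issue wildcard certificates
example.com. 3600 IN CAA 0 issuewild ";"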

However, there seems to be an opportunity here to improve things further. Rather than relying on a 3rd party certificate authority to confirm that you have control of your own DNS, why can’t you just publish your certificate in DNS directly? If you can trust DNS this would seem to be an obvious improvement, and with DNSSEC, DNS becomes trustworthy. We’ve now added support for the additional record type TLSA which allows exactly that, as part of DNS-Based Authentication of Named Entities (DANE).

Adding a TLSA record through our control panel.

DANE is a flexible mechanism that can be used to add an additional layer of security to certificates issued by a 3rd party authority, or to enable the use of self-signed certificates.

Unfortunately, at the moment few clients support TLSA, so for the majority of interactions you’re still going to rely on the certificate authority to verify the certificate. However, implementations exist for both Exim and Postfix. Step by step, email is becoming more secure.
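As a sketch of what a TLSA record looks like in practice for a mail server (the hostname and file path below are placeholders, not our own configuration), a common choice is a “3 1 1” record: match the end-entity certificate (3) by its public key (1) using a SHA-256 digest (1). The digest can be generated with openssl:

# SHA-256 digest of the certificate's public key (selector 1, matching type 1)
openssl x509 -in /etc/ssl/certs/mail.example.com.pem -noout -pubkey \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256

and then published against the port and protocol of the service it protects:

; DANE-EE (3 1 1) record for SMTP on port 25
_25._tcp.mail.example.com. 3600 IN TLSA 3 1 1 <sha256-digest-from-above>

Because the record matches the public key rather than the whole certificate, it remains valid across routine renewals as long as the key is reused; if the key changes, the record needs updating at the same time.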

IPv6 updates

December 16th, 2019 by

Last Thursday we went to the IPv6 Council to speak about IPv6-only hosting and to exchange information with other networks about the state of IPv6 in the UK.

IPv4 address exhaustion is becoming ever more real: the USA and Europe have now run out, and Asia, Africa and Latin America all have less than a year of highly-restricted supply left.

Perhaps unsurprisingly, we’re now seeing real progress in deploying IPv6 across the board.

The major connectivity providers gave an update on their progress. Sky already have an effectively complete deployment across their UK network, so instead they told us about the Sky Italia build-out that launches early next year. It will initially be 100% dual stack, but they’re planning to migrate to single-stack IPv6 with IPv4 access provided by MAP-T as soon as possible. BT/EE have IPv6 virtually everywhere, and take-up is rising as HomeHubs are retired and replaced with SmartHubs. Three are actively enabling IPv6 across their network, as we noticed last month.

Smaller providers are also making progress; Hyperoptic and Community Fibre have both essentially completed their dual stack rollout this year, with both organisations having to consider Network Address Translation due to lack of IPv4 addresses.

We’ve been working hard for many years to make IPv6-only hosting a practical option for our customers, allowing us to considerably expand the lifespan of our IPv4 allocation (which, thanks to a few acquisitions and being a relatively old company, is a reasonable size).

We heard from ungleich, who started more recently and don’t have a large historical allocation of IPv4 addresses. They gave an interesting talk about their IPv6-only hosting and how it’s an urgent requirement for a new entrant, because a RIPE final allocation of 1,024 addresses isn’t enough to start a traditional hosting company. Thanks to RIPE running out last month, any new competitor has it four times harder, with only 256 addresses to get them started.

We also had interesting updates from Microsoft about their continuing journey to IPv6-only internally on their corporate network, and the pain of continuing to support IPv4 private addressing. Companies they acquire almost always have overlapping internal address ranges, and making internal services available to the wider organisation is an ongoing and difficult challenge.

There was also a fascinating talk from SITA about providing network and infrastructure to aviation. There is a huge amount of networking involved, and the RFC1918 private IPv4 address space is no longer large enough to run a large airport. They have a very strong push to use IPv6, even on networks not connected to the public internet.

Updates to Sympl to continue to support Let’s Encrypt

October 25th, 2019 by

Before you 3D print the keys from the photo, you should know they are no longer in use.

We’ve now updated Sympl to support the new ACME v2 protocol for long term Let’s Encrypt support.

Let’s Encrypt is changing the protocol for obtaining and renewing certificates from ACME v1 to ACME v2, and the version 1 protocol is now end-of-life. In the next few days (from 1st November), new accounts will no longer be able to be registered, which will prevent new sites from obtaining SSL certificates. Final end of life occurs in 2021, when certificate renewals will start to generate errors and then fail entirely.
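If you’re unsure whether a server still talks to the old endpoint, one quick (and merely suggestive) check is to search its configuration for the v1 API hostname; the path here is just a starting point:

# list any files under /etc still referring to the end-of-life ACME v1 endpoint
grep -rl 'acme-v01.api.letsencrypt.org' /etc 2>/dev/null

Sympl users simply need the updated packages described above; the check is only useful for spotting stragglers on other systems.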

Symbiosis is now end of life; as Sympl is an actively developed fork, we’d recommend that any Symbiosis users migrate to Sympl. We’d also recommend our managed hosting as a good place to run your Sympl server.

Multiple Mythic Beasts staff members contributed to this update.

Let’s Encrypt support for older Debian

October 9th, 2019 by

This cat is secure, but not dehydrated. (Credit Lizzie Charlton, @LizzieCharlton)

Debian Jessie and Debian Stretch include dehydrated, a useful command line tool for managing Let’s Encrypt certificates. We use it fairly extensively for managing certificates across our servers and with our managed customers. Unfortunately, due to a change in capitalisation at Let’s Encrypt, the standard copy of dehydrated shipped with Debian Jessie and Debian Stretch is no longer compatible. As there’s no package in backports, we’ve spun our own packages of a newer version of dehydrated, which are available on our mirror server.

If you use the older version you’ll see an error like


{
"type": "urn:acme:error:badNonce",
"detail": "JWS has no anti-replay nonce",
"status": 400
}

or


{
"type": "urn:ietf:params:acme:error:malformed",
"detail": "Malformed account ID in KeyID header URL: \"https://acme-v02.api.letsencrypt.org/acme/acct/\"",
"status": 400
}

The fix is very simple: you just need to install our dehydrated packages.

First, add our signing key:


wget -O - -q https://mirror.mythic-beasts.com/mythic/support@mythic-beasts.com.gpg.key | apt-key add -

Then add the correct repository for your version of Debian:

echo deb http://packages.mythic-beasts.com/mythic/ jessie main >/etc/apt/sources.list.d/packages.mythic-beasts.com.list

or

echo deb http://packages.mythic-beasts.com/mythic/ stretch main >/etc/apt/sources.list.d/packages.mythic-beasts.com.list

then

apt-get update
apt-get install --only-upgrade dehydrated
dehydrated -c

and your copy of dehydrated will be updated to version 0.6, allowing your certificates to be created and renewed as normal.
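If you want to confirm that the upgrade has taken effect before the next renewal run, the installed package version can be checked as follows (a 0.6.x version indicates the updated package from our mirror):

# confirm the packaged version of dehydrated now installed
dpkg -s dehydrated | grep '^Version'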

VMHaus services now available in Amsterdam

July 3rd, 2019 by

Integration can be hard work

Last year we had a busy time acquiring Retrosnub, BHost and VMHaus. We’ve been steadily making progress in the background integrating the services these companies provide, to reduce costs and management complexity. We can now also announce our first significant feature upgrade for VMHaus: we’ve deployed a new virtual server cluster to our Amsterdam location, and VMHaus services are now available in Amsterdam. VMHaus uses Mythic Beasts for colocation and network, and in Amsterdam will gain access to our extensive set of peers at AMS-IX, LINX and LoNAP. Virtual servers billed per hour are available from VMHaus, with payment through PayPal.

As you’d expect, every VM comes with a /64 of IPv6 space.

In the background we’ve also been migrating former-BHost KVM-based services to Mythic Beasts VM services in Amsterdam. Shortly we’ll be starting to migrate former-BHost and VMHaus KVM-based services in London to new VM clusters in the Meridian Gate data centre.

Raspberry Pi on Raspberry Pi

June 22nd, 2019 by

Question: Is the Raspberry Pi 4 any good?
Answer: It’s good enough to run its own launch website with tens of millions of visitors.

Raspberry Pi 4 with PoE mounting points already attached.

The Raspberry Pi 4 is out. It’s a quad core ARM A72 running at 1.5GHz with 4GB of RAM and native 1Gbps Ethernet. This means that, according to our benchmarks (PHP 7.3 and WordPress), it’s about 2.5x the speed of the 3B+, thanks to the much faster core design and slight clock speed boost. The downside is that it uses more power: idle power consumption is up slightly to about 3W, and peak is now around 7W, up from 5W. It also has improved video features and USB 3.

We obtained an early sample and benchmarked it running the Raspberry Pi website. We used the main www.raspberrypi.org blog, which has historically been the most CPU-intensive part of the site to serve. We now see complete page generation in about 0.8s, compared to 2.1s for the 3B+. Obviously, in normal operation most pages are served from a cache, so the typical end-user experience is much faster.

We were really excited by the Pi 4 and wanted to have them available in our cloud for launch day. Unfortunately, Eben had some bad news for us: netboot on the Pi 4 is only going to be added in a future firmware update. Netboot is critical to the operation of our cloud, as it prevents customers from bricking the servers. Our dreams were shattered.

Our standard Pi Cloud unit consists of 6x9x2 blocks of Pi 3B servers connected to PoE switches with just one wire per server. They all netboot and are controlled through our control panel and API for customer use. Since the lack of netboot means we couldn’t just drop the Pi 4 in as a faster version at this time, we went back to the lab and built an alpha Pi 4 Cloud on a smaller scale: 18 Pi 4s that Raspberry Pi have very generously given to us, all connected with gigabit Ethernet so we can try out the 2.5x faster CPUs, 3x faster network and 4x RAM capacity. We deployed this to our Sovereign House data centre, where it connects to our core network.

In full production, we’ll have six racks of Pi 4 stacked back to back.

What we needed then was a test application. We suggested running the main Raspberry Pi website, as we once did with the Pi 3. But with over twice the horsepower per machine we thought we’d dream bigger. How about hosting the Raspberry Pi website on the Raspberry Pi 4, on the Raspberry Pi 4 launch day?

We’ve set up 14 Pi 4s for PHP processing for the main website (56 cores, 56GB RAM), two for static file serving (8 cores, 8GB RAM) and two for memcached (8 cores / 8GB RAM). Late on Friday night we started moving production traffic from the existing virtual machines to the Pi 4 cluster, completing the move shortly after midnight. Every page from the blog after Sat 22nd June has been generated on a Raspberry Pi 4.

Unfortunately, this configuration isn’t yet ready to become the standard, production environment for the Raspberry Pi website. As noted above, the Pi 4s don’t yet support netboot, and so these ones have local SD card storage rather than netboot and network file storage. This means they can’t be remotely re-imaged and have comparatively unreliable storage. The configuration is also only deployed in a single data centre with all servers on a single switch, whereas in normal usage the Raspberry Pi website is simultaneously hosted in two different data centres for redundancy.

To make things more nerve-wracking, the Pi 4 requires Debian Buster, which is a pre-release version of the operating system (full release July 6th). So it’s a cluster of brand new hardware, with a pre-release operating system and a single point of failure. We very strongly advise our customers not to use this kind of setup for a mission critical, super high profile website undergoing the most significant production launch in their history. That really isn’t a very good idea.

We once advised Eben that Raspberry Pi probably wouldn’t sell very many computers. He didn’t listen to us then either.

We haven’t moved the entire stack to the Pi 4. The front-end load balancers, download and apt servers are still on non-Pi hardware, split across three data centres (two in London, one in Amsterdam). The Pi 4 hardware looks well-suited to taking over these roles too, although we’ve kept the current arrangement for now, as it’s well tested and allows us to switch back to non-Pi 4 back-ends quickly if needed.

We haven’t moved the databases to the Pi 4 yet either. We’re not going to do that until we can have nice reliable mirrored storage on enterprise SSDs with high write reliability and long write lifetimes attached to the Pis.

Where do we go from here?

Once netboot on the Pi 4 is available, we’ll be adding 4 core A72 / 4GB servers to our Pi Cloud at a slightly higher price than the existing Pi 3 servers, reflecting the higher hardware and power costs. We are also planning to investigate virtualisation, as 1 core / 1GB Raspberry Pi VMs may be of interest to existing Pi 3 users. 64-bit kernel support, and potentially a 64-bit userland, would also now be worth investigating.

If you like the idea of Pi 4 in the cloud, a Pi 4 VM in the cloud or 64 bit ARM in the cloud, tell us your plans at sales@mythic-beasts.com.

Out standing in a field

May 24th, 2019 by

Mythic Beasts: out standing in a field

Last year the Cambridge Beer Festival tried accepting payments by contactless cards. This didn’t work very well. They built a wireless LAN around the bar so that their card payment machines could process transactions. The uplink was a Raspberry Pi with a 4G dongle attached; this wasn’t really reliable enough for a full payment system, but it worked as a proof of concept.

To improve things for this year, we had a conversation with some friends at the recently incorporated Light Blue Fibre Ltd, and between us we were able to arrange for Jesus Green to have a fibre and an interlink to Mythic Beasts. As this is a prototype, we’re running below optimum speeds, so we’ve delivered a relatively leisurely 1Gbps to the festival. The access points will happily deliver 150Mbps symmetric at any point on the bar if you have a quick enough wifi card in your laptop. We’ve still got the 3G uplink enabled as a backup, just in case someone slices the fibre.

If my phone had an Ethernet socket we’d be ten times as fast.

This year the plan was to restrict things to the tills and the administration network. However, being techies in a beer festival there is a tiny chance we may have been slightly drunk and enabled public wifi with a 100Mbps rate limit. This works well around the bar but there’s nowhere near enough access points to cover the outdoors and the onsite router is limited to 500 devices. It’s not yet production ready for 5,000 beer-drinking visitors, but we have a beer mat and a pencil and we’re sketching out ideas for next year.