Why Does Every Supercomputer Run On Linux?

  • Monday, February 04, 2019
  • Linux

Linux is everywhere. The cloud runs almost entirely on Linux. The majority of websites, web applications, and online services are hosted on servers that run Linux. Android is, at its core, based on Linux. If you use the internet or a non-Apple smartphone, you use Linux, which means almost everyone who is online relies on it, directly or indirectly. It’s also used on every one of the 500 fastest computers in the world.

It’s not obvious that the operating system that hosts your WordPress website or Magento eCommerce store should be well-suited to supercomputers too. But Linux is consistently the first choice of OS for the multi-million-dollar machines that fold exquisitely complex proteins, simulate billions of neurons, or model the beginning of the universe. Why?

One of the first experiments with Linux and supercomputers happened back in 1994. NASA’s Thomas Sterling and Donald Becker decided to build a powerful computer made up of many commodity machines: a supercomputer made of PCs. They couldn’t afford the supercomputers of the time and wanted to test clustering as a means of carrying out parallel computations on cheap hardware. Their initial “supercomputer,” called Beowulf, was a cluster of 16 Intel 486 DX4 processors that ran on, you guessed it, Linux. The project produced low-level kernel software for parallel processing and networking, and it inspired many similar projects in academia and industry.

If you follow the tech news, you’ll be aware that IBM recently announced its acquisition of Red Hat, the company behind Red Hat Enterprise Linux and CentOS, the Linux distribution that runs on Future Hosting servers. IBM has a role in the early history of Linux supercomputing too. In the late nineties, IBM built supercomputers that ran its proprietary operating systems, but the company saw the benefit of adopting Linux for enterprise applications. Around the same time, IBM was also working on parallel computing and had acquired advanced technology that allowed programs to run across huge numbers of processors. That technology was integrated with Linux.

Supercomputers are invariably massively parallel. A supercomputer is hundreds or thousands of not-so-super computers working together. Linux had a huge advantage over other operating systems of the time on machines of this type, and so it became the standard for building large multi-processor clusters. Because it was so popular in that field, a lot of work was done by various projects to enhance its abilities as a supercomputer OS.
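To make that parallelism concrete, here’s a minimal sketch of the kind of program such clusters run. It uses MPI, the message-passing API that became the de facto standard for spreading work across the nodes of Linux clusters; the example is a toy illustration, not code from any of the projects mentioned above.

```c
/* Toy MPI program: each process (one per core or node in the cluster)
 * computes a partial sum, and rank 0 gathers the results.
 * Typically compiled with an MPI wrapper, e.g. `mpicc sum.c`, and
 * launched with something like `mpirun -np 16 ./a.out`. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I? */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many processes in total? */

    /* Each process sums a disjoint slice of 1..1000. */
    long local = 0;
    for (long i = rank + 1; i <= 1000; i += size)
        local += i;

    /* Combine the partial sums on rank 0. */
    long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of 1..1000 = %ld (computed by %d processes)\n", total, size);

    MPI_Finalize();
    return 0;
}
```

Every process runs the same binary on a different core or node and owns its own slice of the work; the only coordination is the final reduce. That pattern of mostly independent workers with occasional message passing is what lets a single job scale across thousands of not-so-super computers.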

But what was it about Linux that got the ball rolling in the first place? In short, Linux is free and it’s open source. Imagine you’re building a massively parallel supercomputer and you want to keep costs down. You want to avoid a proprietary operating system that charges license fees for every processor or user. When building a computer with a specific purpose in mind, it’s a good idea to strip out all the code you don’t need: superfluous code can cause performance issues, and the more code there is, the more bugs there are. Linux is modular, so it’s easy to build a slimmed-down kernel containing only the essential code, which you can’t do with a proprietary operating system. Finally, because Linux is open source, supercomputing projects around the world benefit from code contributed by other projects.

Over many years, Linux evolved into the ideal operating system for supercomputers, and that’s why every one of the fastest computers in the world runs on Linux.
