Linux has been quite a phenomenon. From its inception in the early 1990s by then-student Linus Torvalds, who would have guessed it would become a force to be reckoned with among operating systems? In the beginning, it was a rough road trying to install Linux and getting it to work with almost any hardware. Kernels would crash shortly after booting, and finding device drivers that worked was almost impossible.
Linux was touch and go for the first ten years. A small group of developers worked feverishly to make the operating system commercial grade. The distributions available then were Slackware and Debian, two projects with a lot of blood, sweat, and tears behind them.
Slackware, while open source, followed a closed development model with one sole architect. Releases were sparse, and long periods of time would pass between updates.
Debian, on the other hand, has always been the quintessential GNU/Linux distribution, yet it was tough to find large numbers of volunteers who would join the project and stay on it.
But Linux had something going for it that most commercial operating systems didn't: it was open source. A developer could modify the system into a custom operating system, with the only obligation being to contribute changes and improvements back and to share the source code for those changes with anyone who wanted to use Linux. It took some years, but eventually the idea caught on, and Linux took off like wildfire.
In 1999, IBM funded Red Hat, a company introducing not only a commercial-grade Linux but also professional support, and things began to turn around rather quickly. Other companies followed suit in providing full-time Linux developers, and soon Intel, AMD, Samsung, Google, SUSE, and even Microsoft were assigning developers to Linux. The tiny engine that couldn't gain traction was finally moving ahead at full speed.
Why Linux 3.0 Was a Game Changer
Linux 2.6 was released on December 18, 2003, and while it made tremendous inroads in features, it was Linux 3.0, released on July 21, 2011, that was a game changer for the operating systems market. A number of projects conceived and developed during the years of the 2.6 kernel all reached maturity shortly after the release of Linux 3.0.
Some of these features included the following:
- Automatic hardware detection
- Versioned device driver install
- Kernel virtualization (KVM)
- Processor emulation (QEMU)
- Virtual machine management
- Virtual networking
- Compiler improvements
- C and C++ refinements
- Cross-platform development
While Linux 2.6 tried hard to do automatic hardware detection, it was never fully functional. But by the summer of 2011, detecting hardware and installing the proper kernel drivers worked almost flawlessly. Support for OpenGL graphics drivers, improved performance, and added support for virtualization of hardware and networking quickly made Linux a contender for professional web servers.
A native i386 or x64 processor-based machine could be virtualized on the fly, so as web server demand increased, a new instance could be started, then shut down when demand went down. With virtual networking, a web server could be isolated, given its own range of IP addresses, and operated as a standalone computer.
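As a minimal sketch of how a host's readiness for this kind of on-demand virtualization can be checked, the snippet below looks for the CPU flags the kernel reports and for the KVM device node. The flag names (`vmx`, `svm`) are the standard ones; everything else here is an illustrative check, not a definitive procedure.

```shell
# Check whether this host can run hardware-accelerated (KVM) guests.
# vmx = Intel VT-x, svm = AMD-V; without either, QEMU can still run
# guests in pure software emulation, just more slowly.
if grep -q -E 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
    virt_status="hardware virtualization extensions present"
else
    virt_status="no extensions found: QEMU software emulation only"
fi
echo "$virt_status"

# /dev/kvm exists only when the kernel's KVM driver is loaded.
if [ -e /dev/kvm ]; then
    echo "/dev/kvm present: KVM guests can be started"
fi
```

On a host with the extensions but without `/dev/kvm`, loading the `kvm_intel` or `kvm_amd` module is typically what creates the device node.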
By supporting processor emulation alongside virtualization, instances of non-Intel-compatible processors could be started. For example, an IBM s390 mainframe version of Linux could be started in emulation mode on an Intel processor–based machine, and ARM-based machines could be emulated for development and debugging purposes on Intel hardware.
Linux was finally rolling along, and the train was about to accelerate even more. The ability to build cross-platform compilers and do cross-platform development made it easier than ever to write and debug software, even operating systems, using processor emulation together with machine virtualization. Now, code for one or more processor architectures could be developed at the same time.
This meant command-line utilities or GUI-based applications could be written once and deployed on multiple architectures at once, using the same source code cross-compiled for different processors. Now you could write a GUI program for the server and at the same time deploy the application for a client. So, if your server was x64 based and your client was i386 based, "Write once; deploy many" was the new motto.
How Raspberry Pi Brought New Life to Linux
But in 2012, something happened that would launch Linux into a market that had never before existed. On February 29, 2012, the Raspberry Pi was introduced: a credit card–sized computer designed for students. With a price of $35.00 it was affordable, and adding a keyboard, mouse, and wireless network adapter kept the price barely under $100. Even with all the extras, for a total of around $200 you had a full-blown computer that fit in a small box. It was designed to run Linux, a modified Debian version dubbed Raspbian Linux.
The first Raspberry Pi was a single-core ARM-based computer, and educational tools aimed at young people learning to program for the first time quickly became available. High-level languages like Scratch, which allowed a program to be written using visual blocks, were easy to learn, and a program could be put together in minutes.
The Raspberry Pi quickly took off, and on October 8, 2013, the one millionth Raspberry Pi was shipped. Over time the Raspberry Pi matured, newer multicore processors were introduced, and the Raspbian Linux distribution was refined with new features. The Broadcom processor relied on many proprietary, closed-source drivers at first, but over the years these have been replaced with open-source versions.
Now Linux could run on an IBM mainframe, an Intel small-business server or client, and an embedded credit card–sized computer named the Raspberry Pi. By 2012, Linux had already gone where no other operating system had gone before. Everywhere a computer could go, from the smallest PC in the palm of your hand to the largest IBM mainframe, Linux was there.