Over the past several months we have covered a number of topics on Linux and IoT:
- From Embedded Linux to IoT
- 3 Benefits of Embedded Linux in IoT Development
- Linux, Cybersecurity and Protecting Industrial Control Systems
This series concludes with a look at the background of Linux scalability and what it means for IoT.
What Makes Linux So Scalable?
There are many reasons for the scalability of Linux:
- Building on UNIX: UNIX was designed for ease of portability across processor architectures.
- Writing the Linux system in C: The C language was designed to be portable in support of UNIX design standards.
- Being an open source product: Linux has the backing of thousands of open source developers.
While these are all contributing factors, insight, successful design and the need for a portable operating system are the most likely reasons for the scalability of Linux.
How Scalable Is Linux?
It would be easier to count the processor architectures that Linux doesn’t run on than to list all of the architectures it does. As of this publication, 31 different processor architectures are supported by Linux. These range from postage stamp-sized computers to minicomputers, all the way up to the IBM Z-Series mainframes. Linux systems scale from computers that fit in your pocket to computers that occupy a full floor of a large building, and everything in between.
UNIX and Linux: A Heritage of Design Scalability
UNIX was designed to be scalable from the start. In the early days, UNIX ran on the PDP-11 minicomputer from Digital Equipment Corporation (DEC). Later, DEC branded its own version of the UNIX OS as Ultrix. Ultrix ran the Atex application, newsroom editing software used by almost every major newspaper in the world. Atex was the publishing system to beat all publishing systems.
But it wasn’t until Linux came along that this heritage scaled down to truly tiny systems. The smallest computer I have ever developed on with Linux was the Intel Edison board, a postage stamp-sized multi-core Atom processor with several co-processors and plenty of input/output (I/O) pins.
Another small computer is the BeagleBone PocketBeagle, a credit card-sized board with low power consumption, plenty of I/O and high-resolution graphics, all powered over a USB port.
Going a little larger, we have the Raspberry Pi and Tinker Board computers, slightly larger than a credit card.
On the larger side of embedded Linux systems would be the NVIDIA Jetson NANO boards. These boards are touted for artificial intelligence (AI) applications and have multiple parallel processors for high-speed processing.
But today, new Linux systems are emerging in the form of ARM processor-based servers. Rack-mounted or stand-alone, these systems are high powered and used for enterprise applications.
On the largest end of the scale is the IBM Z-Series mainframe series. Linux runs on the Z-series using the s390x architecture.
Scalability with Ease
As scalability has matured and evolved over the years, developing for multiple board types and processor architectures from a single source code base has become very easy. Today, adding a new processor architecture is a simple two-step process.
- The first step is to add the processor architecture in dpkg as shown below:
$ sudo dpkg --add-architecture arm64 && sudo apt update
- Once the processor architecture is added and the package database updated, we only need to install the compiler and binary utilities for that architecture. Adding the ARM64 toolchain is a one-line command as follows:
$ sudo apt-get install binutils-aarch64-linux-gnu gcc-aarch64-linux-gnu
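As a quick sanity check, the newly installed toolchain can be exercised on a trivial program. This is a minimal sketch; the file names are examples, and the block skips the compile step gracefully if the cross-compiler is not present:

```shell
# Write a trivial C program to cross-compile (file names are examples).
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { printf("hello from ARM64\n"); return 0; }
EOF

if command -v aarch64-linux-gnu-gcc >/dev/null 2>&1; then
  # Build a static ARM64 binary; 'file hello-arm64' should then report
  # an ELF 64-bit ARM aarch64 executable.
  aarch64-linux-gnu-gcc -static -o hello-arm64 hello.c
  echo "built hello-arm64 for aarch64"
else
  echo "aarch64-linux-gnu-gcc not installed yet"
fi
```

The resulting binary will not run natively on an x86 host, but it can be copied to an ARM64 target or run under emulation.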
If you are using Eclipse or NetBeans integrated development environment (IDE), you can add these to your list of targets and compile your projects for multiple architectures at the same time.
Building Linux Kernels
Once the proper compilers and utilities are installed, building a Linux kernel, kernel drivers and device drivers can be done fairly easily. Linux systems can be built from scratch using Linux from Scratch, Buildroot or Yocto. These products are open source and have default build and make files for building cross-platform targets on a number of host systems.
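For the kernel itself, a cross-build typically comes down to pointing make at the target architecture and toolchain prefix. The sketch below assumes you are inside a kernel source tree; the defconfig and targets are examples, and the block skips itself otherwise:

```shell
# Hypothetical ARM64 kernel cross-build (run from a kernel source tree).
if [ -f Makefile ] && [ -d arch/arm64 ]; then
  make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
  make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- -j"$(nproc)" Image modules dtbs
  KBUILD_NOTE="built"
else
  KBUILD_NOTE="not inside a kernel source tree; commands shown for reference"
  echo "$KBUILD_NOTE"
fi
```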
Linux from Scratch (LFS)
LFS uses scripts to build Linux completely from source code. You have to install the compilers manually using your package manager or other methods, such as Git or other scripts. LFS gives the highest degree of experience but has a steep learning curve if you are not familiar with building Linux from source code.
Buildroot
Buildroot automates the Linux-from-scratch process. With Buildroot, the developer uses scripts to set up the target architecture. Once set up, a series of automated menus walks the developer through the build options. When the options are all set, a makefile is created, and the standard make program is invoked with the makefile just created. The build process can take from half an hour to several hours, depending on build options. A complete Linux image file is placed in a folder, ready to copy to an SD card or hard disk for testing.
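A typical Buildroot session looks something like the following. This is a sketch only; the Raspberry Pi defconfig name is one of Buildroot's shipped board defaults, and the block only runs if a buildroot checkout is present in the current directory:

```shell
# Hypothetical Buildroot walkthrough; runs only if a checkout is present.
if [ -d buildroot ]; then
  make -C buildroot raspberrypi4_64_defconfig  # load the board's defaults
  make -C buildroot menuconfig                 # walk through the option menus
  make -C buildroot                            # half an hour to several hours
  ls buildroot/output/images/                  # images ready for an SD card
  BR_NOTE="built"
else
  BR_NOTE="no buildroot tree present; commands shown for reference"
  echo "$BR_NOTE"
fi
```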
Yocto
Yocto is supported by a number of hardware manufacturers for developing custom Linux systems for custom hardware. Yocto uses a tool called BitBake and specially formatted recipe scripts to add or remove features from a complete Linux system. One major difference between Yocto and Buildroot is the tools and scripting syntax: Buildroot uses standard Linux C and makefile syntax, while Yocto uses its own tools and syntax. Yocto has a steep learning curve, even if you are already familiar with standard shell script and C language conventions. Yocto has thousands of pages of documentation, so if you are going to use Yocto, be ready to hit the books.
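At its simplest, a Yocto build is driven from the Poky reference distribution: source the environment script, then ask BitBake for an image. The image name below is a standard minimal target; the block only runs if a poky checkout is present in the current directory:

```shell
# Hypothetical Yocto (Poky) session; runs only if a checkout is present.
if [ -d poky ]; then
  cd poky
  . ./oe-init-build-env              # creates and enters the build/ directory
  bitbake core-image-minimal         # BitBake builds a minimal Linux image
  YOCTO_NOTE="built"
else
  YOCTO_NOTE="no poky tree present; commands shown for reference"
  echo "$YOCTO_NOTE"
fi
```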
If you wish to debug a user application, you can do target emulation with the Quick Emulator (QEMU). QEMU is a machine emulator and virtualizer that also serves as the userspace component of the kernel-based virtual machine (KVM). It lets you run another Linux OS for debugging purposes and can also be used for source-level debugging from the host, using emulated hardware instead of the actual target.
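QEMU's user-mode emulation can even run a single foreign-architecture binary directly on the host. The sketch below assumes a cross-compiled ARM64 binary named hello-arm64 (a hypothetical name) and skips itself if the emulator or binary is absent:

```shell
# Hypothetical QEMU user-mode run of an ARM64 binary on an x86 host.
if command -v qemu-aarch64 >/dev/null 2>&1 && [ -f ./hello-arm64 ]; then
  qemu-aarch64 ./hello-arm64         # executes the foreign binary via emulation
  QEMU_NOTE="ran under qemu-aarch64"
else
  QEMU_NOTE="qemu-aarch64 or test binary not available; command shown for reference"
  echo "$QEMU_NOTE"
fi
```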
Linux scalability is inherent to the platform and comes with every Debian GNU/Linux system by default. Being able to add an architecture through the default package manager takes the guesswork, and much of the setup time, out of installing cross-compilers and utilities. By using an IDE, cross-targeting executables, debugging and deployment become almost a single step, making it quick and easy to add another processor architecture to application development.
This is the way development should be: intuitive and easy. Just a couple of years ago, several days could be spent adding each additional processor architecture. Now it can be as simple as installing a new cross-compiler.
Finally, the steps for adding new architectures are identical except for the platform name. This allows Linux developers to focus on code quality and program design without the hassle of learning custom tool installations, making development more effective and less costly.
Validate your Linux skills with CompTIA Linux+. Download the exam objectives to see what’s covered.