
Robust Definition



The word robust, when used with regard to computer software, refers to an operating system or other program that performs well not only under ordinary conditions but also under unusual conditions that stress its designers' assumptions.

Software is typically buggy (i.e., contains errors) and fragile, and thus not robust. This is in large part because programs are usually too big and too complicated for a single human mind to comprehend in their entirety, which makes it difficult for their developers to discover and eliminate all the errors, or even to be certain of how many errors exist. This is especially true of subtle errors that make their presence known only in unusual circumstances.

A major feature of Unix-like operating systems is their robustness. That is, they can operate for prolonged periods (sometimes years) without crashing (i.e., ceasing to operate) or requiring rebooting (i.e., restarting). And although individual application programs sometimes crash, they almost always do so without affecting other programs or the operating system itself.

Robustness is something that should be designed into software from the ground up; it is not something that can be successfully tacked on at a later date. The lack of advance planning for robustness is a major factor in the numerous security and stability problems that plague some non-Unix-like operating systems.

The Rule of Robustness in the Unix philosophy states that robustness results from transparency and simplicity. Software is transparent when a skilled programmer can examine its source code (i.e., the original version written by a human in a programming language) and soon comprehend how it works. It is simple when its operation is sufficiently uncomplicated that a programmer can visualize with little effort all of the potential situations that it might encounter. The more that programs have both of these qualities, the more robust they will be.
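
As a minimal illustration of these two qualities, consider the small C routine below (a hypothetical function written solely for this article, not taken from any actual program). A programmer reading it can see at a glance every situation it can encounter:

    #include <stdio.h>

    /* Transparent and simple: the three possible cases -- below,
     * inside or above the range -- are all visible at a glance, so
     * a reader can easily convince himself that it is correct. */
    static int clamp(int value, int lo, int hi)
    {
        if (value < lo) return lo;
        if (value > hi) return hi;
        return value;
    }

    int main(void)
    {
        printf("%d %d %d\n", clamp(-5, 0, 10), clamp(7, 0, 10),
               clamp(99, 0, 10));  /* prints: 0 7 10 */
        return 0;
    }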

Another important tactic for creating robust software is to write general code that can accommodate a wide range of situations, thereby avoiding the need to insert extra code just to handle special cases. This is because code added just to accommodate special cases is often buggier than other code, and stability problems can become particularly frequent and/or severe as a result of the interactions among several such sections of code.
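
A simple sketch of this principle (again a hypothetical C function, written only to illustrate the point): a single general loop handles the empty array, the one-element array and the large array alike, so no special-case code needs to be written, and none exists to harbor bugs.

    #include <stddef.h>
    #include <stdio.h>

    /* One general loop accommodates every array size; an empty
     * array naturally yields a sum of zero with no extra code. */
    static long sum(const int *a, size_t n)
    {
        long total = 0;
        for (size_t i = 0; i < n; i++)
            total += a[i];
        return total;
    }

    int main(void)
    {
        int values[] = { 3, 1, 4, 1, 5 };
        printf("%ld\n", sum(values, 0));  /* empty case: 0 */
        printf("%ld\n", sum(values, 5));  /* general case: 14 */
        return 0;
    }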

Linux and other open source software, i.e., software for which the source code is freely available, have a major advantage over closed source software, i.e., software for which the source code is kept secret (as is the case for most commercial, or proprietary, software), as far as robustness is concerned. It is that the source code can be reviewed at leisure by a large number of highly diverse programmers via the Internet. This makes it easier to find and correct errors than when the code is reviewed only by in-house programmers, who are often under pressure from deadlines and who might not have strong incentives to find and eliminate every bug.

Applying the Unix philosophy's Rule of Composition (i.e., designing programs so that they can be connected to other programs) also helps produce robust software. This is because it leads to small, modular programs that are easier to comprehend and to correct than larger ones that attempt to do many things. It is also because input received from other programs (as contrasted with input from humans) can be particularly effective for stress-testing software, and thus for helping to provide tolerance for unusual and large inputs.
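
The classic form of such a composable program is the Unix filter, which reads from standard input and writes to standard output so that it can be connected to other programs with pipes. Below is a minimal example in C (hypothetical, written for this article): a filter that converts its input to upper case.

    #include <ctype.h>
    #include <stdio.h>

    /* A minimal Unix-style filter: reading from standard input and
     * writing to standard output lets it be composed with other
     * programs via pipes, e.g.:  cat somefile | ./upcase | sort  */
    int main(void)
    {
        int c;
        while ((c = getchar()) != EOF)
            putchar(toupper(c));
        return 0;
    }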

The robustness of Unix-like operating systems is also the result of several additional deliberate design concepts. One, which has been increasingly adopted by other operating systems as well, is providing each application program with its own area of memory and preventing it from interfering with the memory areas of other applications or of the kernel (i.e., the core of the operating system).
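
This isolation can be observed directly. In the hypothetical C program below, a child process deliberately writes through an invalid pointer; on typical Linux systems the kernel kills only the child (with a segmentation fault), while the parent process, which has its own protected address space, continues running unaffected.

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {
            volatile int *p = NULL;
            *p = 42;   /* invalid write: the child crashes here */
            _exit(0);  /* never reached */
        }
        int status;
        if (waitpid(pid, &status, 0) < 0) {
            perror("waitpid");
            return 1;
        }
        if (WIFSIGNALED(status))
            printf("child killed by signal %d; parent unaffected\n",
                   WTERMSIG(status));
        return 0;
    }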

Another is the system of permissions that are required for accessing all objects (i.e., files and directories) on the system. This system, when administered correctly, can make it difficult for sloppy or malicious code (e.g., viruses, worms and trojans) to affect key parts of the system.
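
For instance, any attempt by a program to open a file is checked by the kernel against that file's permission bits and the identity of the user running the program. The hypothetical C sketch below tries to read /etc/shadow (used here as an example of a file that ordinary users cannot read on most Linux systems) and simply receives an error rather than access to the object.

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* The kernel enforces the file's permission bits; for an
         * ordinary user this open() fails with EACCES. */
        int fd = open("/etc/shadow", O_RDONLY);
        if (fd < 0) {
            printf("open failed: %s\n", strerror(errno));
            return 1;
        }
        printf("opened (running with sufficient privileges)\n");
        close(fd);
        return 0;
    }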






Created June 20, 2005.
Copyright © 2005 The Linux Information Project. All Rights Reserved.