
Bit Definition

A bit is a binary digit (i.e., a digit in a binary numbering system) and is the most basic unit of information in digital computing and communications systems.

Binary refers to any system that uses two alternative states, components, conditions or conclusions. The binary, or base 2, numbering system uses combinations of just two unique digits, i.e., zero and one, to represent all values, in contrast with the decimal system (base 10), which uses combinations of ten unique digits, i.e., zero through nine.
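For example, the decimal number 13 is written 1101 in binary, because 1 x 8 + 1 x 4 + 0 x 2 + 1 x 1 = 13. The following small C program, offered as a minimal illustrative sketch, displays the binary digits of a number by testing each of its bits in turn:

    #include <stdio.h>

    int main(void)
    {
        unsigned int n = 13;    /* the decimal value to display in binary */

        /* Print the eight low-order bits of n, most significant first. */
        for (int i = 7; i >= 0; i--)
            putchar(((n >> i) & 1) ? '1' : '0');
        putchar('\n');          /* prints 00001101 for n = 13 */

        return 0;
    }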

Virtually all electronic computers are designed to operate internally with all information encoded in binary numbers. This is because it is relatively simple to construct electronic circuits that distinguish between two voltage levels (i.e., low and high) to represent zero and one. The underlying reason is that transistors, the fundamental components of processors (the logic units of computers), can be switched between two distinct states (off and on), and capacitors, the fundamental components of many types of memory, can be either charged or discharged.

The values of bits are stored in various ways, depending on the medium. For example, in a dynamic RAM (random access memory) chip, the value of each bit is stored as an electrical charge in a single capacitor. On a platter in a hard disk drive (HDD) or on a floppy disk, it is stored as the magnetization of a microscopic area of magnetic material. On an optical disk, it is stored along the spiral track as a transition between a pit and the surrounding surface (representing a one) or the absence of such a transition (representing a zero).

Computers are almost always designed to store data and execute instructions in larger and more meaningful units called bytes, although they usually also provide ways to test and manipulate single bits. Bytes are abbreviated with an upper case B, and bits are abbreviated with a lower case b. The number of bits in a byte varied according to the manufacturer and model of computer in the early days of computing, but today virtually all computers use bytes that consist of eight bits.
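Most programming languages expose such single-bit operations directly through bitwise operators. As a minimal sketch, the following C program sets, tests and then clears one bit within an eight-bit byte:

    #include <stdio.h>

    int main(void)
    {
        unsigned char byte = 0;        /* an eight-bit byte, all bits zero  */

        byte |= (1u << 3);             /* set bit 3 (turn it on)            */
        int bit = (byte >> 3) & 1u;    /* test bit 3 (yields 1)             */
        byte &= ~(1u << 3);            /* clear bit 3 (turn it off again)   */

        printf("bit 3 was %d; the byte is now 0x%02x\n", bit, byte);
        return 0;
    }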

Whereas a bit can have only one of two values, an eight-bit byte can have any of 256 possible values, because there are 256 possible sequences of zeros and ones for eight consecutive bits (i.e., 2^8 = 256). Thus, an eight-bit byte can represent any unsigned integer from zero through 255 or any signed integer from -128 through 127. It can also represent any character (i.e., letter, number, punctuation mark or symbol) in a seven-bit or eight-bit character encoding system (such as ASCII, the default character encoding used on most computers).
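These ranges can be confirmed programmatically. As a brief sketch, assuming a system on which a byte is eight bits (as is the case on virtually all modern hardware), the constants defined in the standard C header limits.h report them directly:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        printf("bits per byte:   %d\n", CHAR_BIT);         /* 8 on modern systems */
        printf("possible values: %d\n", 1 << CHAR_BIT);    /* 2^8 = 256           */
        printf("unsigned range:  0 to %d\n", UCHAR_MAX);   /* 0 through 255       */
        printf("signed range:    %d to %d\n",
               SCHAR_MIN, SCHAR_MAX);                      /* -128 through 127    */
        return 0;
    }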

The number of bits is often used to classify generations of computers and their components, particularly CPUs (central processing units) and buses, and to provide an indication of their capabilities. However, such terminology can be confusing or misleading when used in an imprecise manner, which it frequently is.

For example, classifying a computer as a 32-bit machine might mean that its data registers are 32 bits wide, that it uses 32 bits to identify each address in memory or that its address buses or data buses are of that size. A register is a very small amount of very fast memory that is built into the CPU in order to speed up its operations by providing quick access to commonly used values. Whereas wider registers allow a computer to process more data in each operation, wider addresses enable it to access more memory and thus support larger programs.
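For example, a CPU that uses 32-bit memory addresses can directly address 2^32 bytes, or four gigabytes, of memory, whereas one that uses 64-bit addresses can in principle address 2^64 bytes, which is more than 17 billion gigabytes.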

A bus is a set of wires that connects components within a computer, such as the CPU and the memory. A 32-bit bus transmits 32 bits in parallel (i.e., simultaneously rather than sequentially).

Although CPUs that treat data in 32-bit chunks (i.e., processors with 32-bit registers and 32-bit memory addresses) still constitute the personal computer mainstream, 64-bit processors are common in high-performance servers and are now being used in an increasing number of personal computers as well.

The rate of data transfer in computer networks and telecommunications systems is referred to as the bit rate or bandwidth, and it is usually measured in some multiple of bits per second, abbreviated bps, such as kilobits (thousands of bits), megabits (millions of bits) or gigabits (billions of bits) per second.
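For example, a one-megabyte file contains roughly eight million bits, so transferring it over a connection with a sustained throughput of eight megabits per second takes approximately one second.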

A bitmap is a method of storing graphics (i.e., images) in which each pixel (i.e., each dot that is used to form an image on a display screen) is stored as one or more bits. Graphics are also often described in terms of bit depth, which is the number of bits used to represent each pixel. A single-bit pixel is monochrome (i.e., either black or white), a two-bit pixel can represent any of four colors (or black and white plus two shades of gray), an eight-bit pixel can represent 256 colors, and 24-bit and 32-bit pixels support highly realistic color, which is referred to as true color.
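In general, a pixel with a bit depth of n can take on any of 2^n distinct values; a 24-bit pixel, for instance, can represent 2^24, or approximately 16.7 million, colors.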

The word bit was invented in the latter half of the 1940s by John W. Tukey (1915-2000), an eminent statistician, while working at Bell Labs (the research arm of AT&T, the former U.S. telecommunications monopoly). He coined it as a contraction of the term binary digit and as a handier alternative to bigit or binit. Tukey also coined the word software.

The term bit was first used in an influential publication by Claude E. Shannon (1916-2001), also while at Bell Labs, in his seminal 1948 paper A Mathematical Theory of Communication. Shannon, widely regarded as the father of information theory, developed a theory that for the first time treated communication as a rigorously stated mathematical problem and provided communications engineers with a technique for determining the capacities of communications channels in terms of bits.

Although the bit has been the smallest unit of storage used in computing so far, much research is being conducted on qubits, the basic unit of information in quantum computing (which is based on phenomena that occur at the atomic and subatomic levels). Because a qubit can exist in a superposition of its two states rather than in just one of them, a system of qubits can, for certain problems, hold and process exponentially more information than the same number of conventional bits.

Created March 4, 2005. Updated April 5, 2006.
Copyright © 2005 - 2006 The Linux Information Project. All Rights Reserved.