Bit

Encyclopedia

A

in computing

and telecommunication

s; it is the amount of information stored by a digital device or other physical system that exists in one of two possible distinct states

. These may be the two stable states of a flip-flop

, two positions of an electrical switch, two distinct voltage

or current

levels allowed by a circuit, two distinct levels of light intensity

, two directions of magnetization

or polarization

, etc.

In computing

, a bit can also be defined as a variable or computed quantity that can have only two possible values

. These two values are often interpreted as binary digits and are usually denoted by the Arabic numerical digit

s 0 and 1. The two values can also be interpreted as logical values (

or device is a matter of convention, and different assignments may be used even within the same device or program

. The length of a binary number may be referred to as its "bit-length

."

In information theory

, one bit is typically defined as the uncertainty of a binary random variable that is 0 or 1 with equal probability, or the information that is gained when the value of such a variable becomes known.

In quantum computing, a

that can exist in superposition

of two bit values, "true" and "false".

The symbol for bit, as a unit of information, is either simply "bit" (recommended by the ISO/IEC standard 80000-13 (2008)

) or lowercase "b" (recommended by the IEEE 1541 Standard (2002)).

s invented by Basile Bouchon

and Jean-Baptiste Falcon (1732), developed by Joseph Marie Jacquard

(1804), and later adopted by Semen Korsakov

, Charles Babbage

, Hermann Hollerith, and early computer manufacturers like IBM

. Another variant of that idea was the perforated paper tape. In all those systems, the medium (card or tape) conceptually carried an array of hole positions; each position could be either punched through or not, thus potentially carrying one bit of information. The encoding of text by bits was also used in Morse code

(1844) and early digital communications machines such as teletypes and stock ticker machines (1870).

Ralph Hartley

suggested the use of a logarithmic measure of information in 1928. Claude E. Shannon first used the word

had written in 1936 of "bits of information" that could be stored on the punched card

s used in the mechanical computers of that time. The first programmable computer built by Konrad Zuse

used binary notation for numbers.

or current

pulse, or by the electrical state of a flip-flop circuit

. For devices using positive logic, a digit value of 1 (true value or high) is represented by a positive voltage relative to the electrical ground voltage (up to 5 volt

s in the case of TTL

designs), while a digit value of 0 (false value or low) is represented by 0 volts.

, a bit was often stored as the position of a mechanical lever or gear, or the presence or absence of a hole at a specific point of a paper card

or tape

. The first electrical devices for discrete logic (such as elevator

and traffic light

control circuits, telephone switches, and Konrad Zuse's computer) represented bits as the states of electrical relays which could be either "open" or "closed". When relays were replaced by vacuum tube

s, starting in the 1940s, computer builders experimented with a variety of storage methods, such as pressure pulses traveling down a mercury delay line, charges stored on the inside surface of a cathode-ray tube, or opaque spots printed on glass discs

by photolithographic techniques.

In the 1950s and 1960s, these methods were largely supplanted by magnetic storage

devices such as magnetic core memory

, magnetic tape

s, drums, and disk

s, where a bit was represented by the polarity of magnetization

of a certain area of a ferromagnetic film. The same principle was later used in the magnetic bubble memory developed in the 1980s, and is still found in various magnetic strip items such as metro

tickets and some credit card

s.

In modern semiconductor memory

, such as dynamic random access memory

or flash memory

, the two values of a bit may be represented by two levels of electric charge

stored in a capacitor

. In programmable logic array

s and certain types of read-only memory

, a bit may be represented by the presence or absence of a conducting path at a certain point of a circuit. In optical disc

s, a bit is encoded as the presence or absence of a microscopic pit on a reflective surface. In one-dimensional bar codes, bits are encoded as the thickness of alternating black and white lines.

capacity to store binary code (0 or 1, up or down, current or not, etc). Information

technology. Using an analogy, the hardware

binary digits refer to the amount of storage space available (like the number of buckets available to store things), and the information content the filling, which comes in different levels of granularity (fine or coarse, that is, compressed or uncompressed information). When the granularity is finer (when information is more compressed), the same bucket can hold more.

For example, it is estimated that the combined technological capacity of the world to store information provides 1,300 exabytes of hardware digits in 2007. However, when this storage space is filled and the corresponding content is optimally compressed, this only represents 295 exabytes of information.

When optimally compressed, the resulting carrying capacity approaches Shannon information or information entropy

.

which are defined as multiples of bits, such as byte

(8 bits), kilobit

(either 1000 or 2

(either or 8×2

Computers usually manipulate bits in groups of a fixed size, conventionally named "words". The number of bits in a word varies with the computer model; typically between 8 to 80 bits; or even more in some specialized machines.

The International Electrotechnical Commission

's standard IEC 60027

specifies that the symbol for binary digit should be "bit", and this should be used in all multiples, such as "kbit" (for kilobit). However, the letter "b" (in lower case) is widely used too. The letter "B" (upper case) is both the standard and customary symbol for byte.

In telecommunications (including computer network

s), data transfer rates are usually measured in bits per second (bit/s) or its multiples, such as kbit/s. (This unit is not to be confused with baud

.)

A millibit is a (rare) unit of information equal to one thousandth of a bit.

computer processor

instructions (such as

In the 1980s, when bitmap

ped computer displays became popular, some computers provided specialized bit block transfer ("bitblt" or "blit") instructions to set or copy the bits that corresponded to a given rectangular area on the screen.

In most computers and programming languages, when a bit within a group of bits such as a byte or word is to be referred to, it is usually specified by a number from 0 (not 1) upwards corresponding to its position within the byte or word. However, 0 can refer to either the most significant bit

or to the least significant bit

depending on the context, so the convention of use must be known.

; and the

2 (≈ 0.693) nats, or log

**bit**is the basic unit of informationInformation

Information in its most restricted technical sense is a message or collection of messages that consists of an ordered sequence of symbols, or it is the meaning that can be interpreted from such a message or collection of messages. Information can be recorded or transmitted. It can be recorded as...

in computing

Computing

Computing is usually defined as the activity of using and improving computer hardware and software. It is the computer-specific part of information technology...

and telecommunication

Telecommunication

Telecommunication is the transmission of information over significant distances to communicate. In earlier times, telecommunications involved the use of visual signals, such as beacons, smoke signals, semaphore telegraphs, signal flags, and optical heliographs, or audio messages via coded...

s; it is the amount of information stored by a digital device or other physical system that exists in one of two possible distinct states

State (computer science)

In computer science and automata theory, a state is a unique configuration of information in a program or machine. It is a concept that occasionally extends into some forms of systems programming such as lexers and parsers....

. These may be the two stable states of a flip-flop

Flip-flop (electronics)

In electronics, a flip-flop or latch is a circuit that has two stable states and can be used to store state information. The circuit can be made to change state by signals applied to one or more control inputs and will have one or two outputs. It is the basic storage element in sequential logic...

, two positions of an electrical switch, two distinct voltage

Voltage

Voltage, otherwise known as electrical potential difference or electric tension is the difference in electric potential between two points — or the difference in electric potential energy per unit charge between two points...

or current

Electric current

Electric current is a flow of electric charge through a medium.This charge is typically carried by moving electrons in a conductor such as wire...

levels allowed by a circuit, two distinct levels of light intensity

Irradiance

Irradiance is the power of electromagnetic radiation per unit area incident on a surface. Radiant emittance or radiant exitance is the power per unit area radiated by a surface. The SI units for all of these quantities are watts per square meter , while the cgs units are ergs per square centimeter...

, two directions of magnetization

Magnetism

Magnetism is a property of materials that respond at an atomic or subatomic level to an applied magnetic field. Ferromagnetism is the strongest and most familiar type of magnetism. It is responsible for the behavior of permanent magnets, which produce their own persistent magnetic fields, as well...

or polarization

Electrical polarity

Electrical polarity is present in every electrical circuit. Electrons flow from the negative pole to the positive pole. In a direct current circuit, one pole is always negative, the other pole is always positive and the electrons flow in one direction only...

, etc.

In computing

Computing

Computing is usually defined as the activity of using and improving computer hardware and software. It is the computer-specific part of information technology...

, a bit can also be defined as a variable or computed quantity that can have only two possible values

Value (computer science)

In computer science, a value is an expression which cannot be evaluated any further . The members of a type are the values of that type. For example, the expression "1 + 2" is not a value as it can be reduced to the expression "3"...

. These two values are often interpreted as binary digits and are usually denoted by the Arabic numerical digit

Numerical digit

A digit is a symbol used in combinations to represent numbers in positional numeral systems. The name "digit" comes from the fact that the 10 digits of the hands correspond to the 10 symbols of the common base 10 number system, i.e...

s 0 and 1. The two values can also be interpreted as logical values (

*true*/*false*,*yes*/*no*), algebraic signs (*+*/*−*), activation states (*on*/*off*), or any other two-valued attribute. The correspondence between these values and the physical states of the underlying storageData storage device

thumb|200px|right|A reel-to-reel tape recorder .The magnetic tape is a data storage medium. The recorder is data storage equipment using a portable medium to store the data....

or device is a matter of convention, and different assignments may be used even within the same device or program

Computer program

A computer program is a sequence of instructions written to perform a specified task with a computer. A computer requires programs to function, typically executing the program's instructions in a central processor. The program has an executable form that the computer can use directly to execute...

. The length of a binary number may be referred to as its "bit-length

Bit-length

The length, in integers, of a binary number. The term "bit" is an abbreviation of "binary digits."At their most fundamental level, digital computers and telecommunications devices can process only data that has been expressed in binary format...

."

In information theory

Information theory

Information theory is a branch of applied mathematics and electrical engineering involving the quantification of information. Information theory was developed by Claude E. Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and...

, one bit is typically defined as the uncertainty of a binary random variable that is 0 or 1 with equal probability, or the information that is gained when the value of such a variable becomes known.

In quantum computing, a

*quantum bit*or*qubit*

is a quantum systemQubit

In quantum computing, a qubit or quantum bit is a unit of quantum information—the quantum analogue of the classical bit—with additional dimensions associated to the quantum properties of a physical atom....

Quantum mechanics

Quantum mechanics, also known as quantum physics or quantum theory, is a branch of physics providing a mathematical description of much of the dual particle-like and wave-like behavior and interactions of energy and matter. It departs from classical mechanics primarily at the atomic and subatomic...

that can exist in superposition

Quantum superposition

Quantum superposition is a fundamental principle of quantum mechanics. It holds that a physical system exists in all its particular, theoretically possible states simultaneously; but, when measured, it gives a result corresponding to only one of the possible configurations.Mathematically, it...

of two bit values, "true" and "false".

The symbol for bit, as a unit of information, is either simply "bit" (recommended by the ISO/IEC standard 80000-13 (2008)

ISO/IEC 80000

International standard ISO 80000 or IEC 80000—depending on which of the two international standards bodies International Organization for Standardization and International Electrotechnical Commission is in charge of each respective part—is a style guide for the use of physical quantities and units...

) or lowercase "b" (recommended by the IEEE 1541 Standard (2002)).

## History

The encoding of data by discrete bits was used in the punched cardPunched card

A punched card, punch card, IBM card, or Hollerith card is a piece of stiff paper that contains digital information represented by the presence or absence of holes in predefined positions...

s invented by Basile Bouchon

Basile Bouchon

Basile Bouchon was a textile worker in the silk center in Lyon who invented a way to control a loom with a perforated paper tape in 1725. The son of an organ maker, Bouchon partially automated the tedious setting up process of the drawloom in which an operator lifted the warp threads using...

and Jean-Baptiste Falcon (1732), developed by Joseph Marie Jacquard

Joseph Marie Jacquard

Joseph Marie Charles dit Jacquard played an important role in the development of the earliest programmable loom , which in turn played an important role in the development of other programmable machines, such as computers.- Early life :Jean Jacquard’s name was not really...

(1804), and later adopted by Semen Korsakov

Semen Korsakov

Semen Nikolaevich Korsakov was a Russian government official, noted both as a homeopath and an inventor who was involved with an early version of information technology.-Biography:...

, Charles Babbage

Charles Babbage

Charles Babbage, FRS was an English mathematician, philosopher, inventor and mechanical engineer who originated the concept of a programmable computer...

, Hermann Hollerith, and early computer manufacturers like IBM

IBM

International Business Machines Corporation or IBM is an American multinational technology and consulting corporation headquartered in Armonk, New York, United States. IBM manufactures and sells computer hardware and software, and it offers infrastructure, hosting and consulting services in areas...

. Another variant of that idea was the perforated paper tape. In all those systems, the medium (card or tape) conceptually carried an array of hole positions; each position could be either punched through or not, thus potentially carrying one bit of information. The encoding of text by bits was also used in Morse code

Morse code

Morse code is a method of transmitting textual information as a series of on-off tones, lights, or clicks that can be directly understood by a skilled listener or observer without special equipment...

(1844) and early digital communications machines such as teletypes and stock ticker machines (1870).

Ralph Hartley

Ralph Hartley

Ralph Vinton Lyon Hartley was an electronics researcher. He invented the Hartley oscillator and the Hartley transform, and contributed to the foundations of information theory.-Biography:...

suggested the use of a logarithmic measure of information in 1928. Claude E. Shannon first used the word

**in his seminal 1948 paper***bit**A Mathematical Theory of Communication*

. He attributed its origin to John W. Tukey, who had written a Bell Labs memo on 9 January 1947 in which he contracted "binary digit" to simply "bit". Interestingly, Vannevar BushA Mathematical Theory of Communication

"A Mathematical Theory of Communication" is an influential 1948 article by mathematician Claude E. Shannon. As of November 2011, Google Scholar has listed more than 48,000 unique citations of the article and the later-published book version...

Vannevar Bush

Vannevar Bush was an American engineer and science administrator known for his work on analog computing, his political role in the development of the atomic bomb as a primary organizer of the Manhattan Project, the founding of Raytheon, and the idea of the memex, an adjustable microfilm viewer...

had written in 1936 of "bits of information" that could be stored on the punched card

Punched card

A punched card, punch card, IBM card, or Hollerith card is a piece of stiff paper that contains digital information represented by the presence or absence of holes in predefined positions...

s used in the mechanical computers of that time. The first programmable computer built by Konrad Zuse

Konrad Zuse

Konrad Zuse was a German civil engineer and computer pioneer. His greatest achievement was the world's first functional program-controlled Turing-complete computer, the Z3, which became operational in May 1941....

used binary notation for numbers.

### Transmission and processing

Bits can be implemented in many forms. In most modern computing devices, a bit is usually represented by an electrical voltageVoltage

Voltage, otherwise known as electrical potential difference or electric tension is the difference in electric potential between two points — or the difference in electric potential energy per unit charge between two points...

or current

Electric current

Electric current is a flow of electric charge through a medium.This charge is typically carried by moving electrons in a conductor such as wire...

pulse, or by the electrical state of a flip-flop circuit

Flip-flop (electronics)

In electronics, a flip-flop or latch is a circuit that has two stable states and can be used to store state information. The circuit can be made to change state by signals applied to one or more control inputs and will have one or two outputs. It is the basic storage element in sequential logic...

. For devices using positive logic, a digit value of 1 (true value or high) is represented by a positive voltage relative to the electrical ground voltage (up to 5 volt

Volt

The volt is the SI derived unit for electric potential, electric potential difference, and electromotive force. The volt is named in honor of the Italian physicist Alessandro Volta , who invented the voltaic pile, possibly the first chemical battery.- Definition :A single volt is defined as the...

s in the case of TTL

Transistor-transistor logic

Transistor–transistor logic is a class of digital circuits built from bipolar junction transistors and resistors. It is called transistor–transistor logic because both the logic gating function and the amplifying function are performed by transistors .TTL is notable for being a widespread...

designs), while a digit value of 0 (false value or low) is represented by 0 volts.

### Storage

In the earliest non-electronic information processing devices, such as Jacquard's loom or Babbage's Analytical EngineAnalytical engine

The Analytical Engine was a proposed mechanical general-purpose computer designed by English mathematician Charles Babbage. It was first described in 1837 as the successor to Babbage's difference engine, a design for a mechanical calculator...

, a bit was often stored as the position of a mechanical lever or gear, or the presence or absence of a hole at a specific point of a paper card

Punched card

A punched card, punch card, IBM card, or Hollerith card is a piece of stiff paper that contains digital information represented by the presence or absence of holes in predefined positions...

or tape

Punched tape

Punched tape or paper tape is an obsolete form of data storage, consisting of a long strip of paper in which holes are punched to store data...

. The first electrical devices for discrete logic (such as elevator

Elevator

An elevator is a type of vertical transport equipment that efficiently moves people or goods between floors of a building, vessel or other structures...

and traffic light

Traffic light

Traffic lights, which may also be known as stoplights, traffic lamps, traffic signals, signal lights, robots or semaphore, are signalling devices positioned at road intersections, pedestrian crossings and other locations to control competing flows of traffic...

control circuits, telephone switches, and Konrad Zuse's computer) represented bits as the states of electrical relays which could be either "open" or "closed". When relays were replaced by vacuum tube

Vacuum tube

In electronics, a vacuum tube, electron tube , or thermionic valve , reduced to simply "tube" or "valve" in everyday parlance, is a device that relies on the flow of electric current through a vacuum...

s, starting in the 1940s, computer builders experimented with a variety of storage methods, such as pressure pulses traveling down a mercury delay line, charges stored on the inside surface of a cathode-ray tube, or opaque spots printed on glass discs

Optical disc

In computing and optical disc recording technologies, an optical disc is a flat, usually circular disc which encodes binary data in the form of pits and lands on a special material on one of its flat surfaces...

by photolithographic techniques.

In the 1950s and 1960s, these methods were largely supplanted by magnetic storage

Magnetic storage

Magnetic storage and magnetic recording are terms from engineering referring to the storage of data on a magnetized medium. Magnetic storage uses different patterns of magnetization in a magnetizable material to store data and is a form of non-volatile memory. The information is accessed using...

devices such as magnetic core memory

Magnetic core memory

Magnetic-core memory was the predominant form of random-access computer memory for 20 years . It uses tiny magnetic toroids , the cores, through which wires are threaded to write and read information. Each core represents one bit of information...

, magnetic tape

Magnetic tape

Magnetic tape is a medium for magnetic recording, made of a thin magnetizable coating on a long, narrow strip of plastic. It was developed in Germany, based on magnetic wire recording. Devices that record and play back audio and video using magnetic tape are tape recorders and video tape recorders...

s, drums, and disk

Disk storage

Disk storage or disc storage is a general category of storage mechanisms, in which data are digitally recorded by various electronic, magnetic, optical, or mechanical methods on a surface layer deposited of one or more planar, round and rotating disks...

s, where a bit was represented by the polarity of magnetization

Magnetism

Magnetism is a property of materials that respond at an atomic or subatomic level to an applied magnetic field. Ferromagnetism is the strongest and most familiar type of magnetism. It is responsible for the behavior of permanent magnets, which produce their own persistent magnetic fields, as well...

of a certain area of a ferromagnetic film. The same principle was later used in the magnetic bubble memory developed in the 1980s, and is still found in various magnetic strip items such as metro

Rapid transit

A rapid transit, underground, subway, elevated railway, metro or metropolitan railway system is an electric passenger railway in an urban area with a high capacity and frequency, and grade separation from other traffic. Rapid transit systems are typically located either in underground tunnels or on...

tickets and some credit card

Credit card

A credit card is a small plastic card issued to users as a system of payment. It allows its holder to buy goods and services based on the holder's promise to pay for these goods and services...

s.

In modern semiconductor memory

Semiconductor memory

Semiconductor memory is an electronic data storage device, often used as computer memory, implemented on a semiconductor-based integrated circuit. Examples of semiconductor memory include non-volatile memory such as Read-only memory , magnetoresistive random access memory , and flash memory...

, such as dynamic random access memory

Dynamic random access memory

Dynamic random-access memory is a type of random-access memory that stores each bit of data in a separate capacitor within an integrated circuit. The capacitor can be either charged or discharged; these two states are taken to represent the two values of a bit, conventionally called 0 and 1...

or flash memory

Flash memory

Flash memory is a non-volatile computer storage chip that can be electrically erased and reprogrammed. It was developed from EEPROM and must be erased in fairly large blocks before these can be rewritten with new data...

, the two values of a bit may be represented by two levels of electric charge

Electric charge

Electric charge is a physical property of matter that causes it to experience a force when near other electrically charged matter. Electric charge comes in two types, called positive and negative. Two positively charged substances, or objects, experience a mutual repulsive force, as do two...

stored in a capacitor

Capacitor

A capacitor is a passive two-terminal electrical component used to store energy in an electric field. The forms of practical capacitors vary widely, but all contain at least two electrical conductors separated by a dielectric ; for example, one common construction consists of metal foils separated...

. In programmable logic array

Programmable logic array

A programmable logic array is a kind of programmable logic device used to implement combinational logic circuits. The PLA has a set of programmable AND gate planes, which link to a set of programmable OR gate planes, which can then be conditionally complemented to produce an output...

s and certain types of read-only memory

Read-only memory

Read-only memory is a class of storage medium used in computers and other electronic devices. Data stored in ROM cannot be modified, or can be modified only slowly or with difficulty, so it is mainly used to distribute firmware .In its strictest sense, ROM refers only...

, a bit may be represented by the presence or absence of a conducting path at a certain point of a circuit. In optical disc

Optical disc

In computing and optical disc recording technologies, an optical disc is a flat, usually circular disc which encodes binary data in the form of pits and lands on a special material on one of its flat surfaces...

s, a bit is encoded as the presence or absence of a microscopic pit on a reflective surface. In one-dimensional bar codes, bits are encoded as the thickness of alternating black and white lines.

## Information capacity and information compression

When the information*capacity*of a storage system or a communication channel is presented in bits or bits per second, this often refers to binary digits, which is a hardwareHardware

Hardware is a general term for equipment such as keys, locks, hinges, latches, handles, wire, chains, plumbing supplies, tools, utensils, cutlery and machine parts. Household hardware is typically sold in hardware stores....

capacity to store binary code (0 or 1, up or down, current or not, etc). Information

*capacity*of a storage system is only an upper bound to the actual*quantity of information*stored therein. If the two possible values of one bit of storage are not equally likely, that bit of storage will contain less than one bit of information. Indeed, if the value is completely predictable, then the reading of that value will provide no information at all (zero entropic bits, because no resolution of uncertainty and therefore no information). If a computer file that uses*n*bits of storage contains only*m*<*n*bits of information, then that information can in principle be encoded in about*m*bits, at least on the average. This principle is the basis of data compressionLossless data compression

Lossless data compression is a class of data compression algorithms that allows the exact original data to be reconstructed from the compressed data. The term lossless is in contrast to lossy data compression, which only allows an approximation of the original data to be reconstructed, in exchange...

technology. Using an analogy, the hardware

Hardware

Hardware is a general term for equipment such as keys, locks, hinges, latches, handles, wire, chains, plumbing supplies, tools, utensils, cutlery and machine parts. Household hardware is typically sold in hardware stores....

binary digits refer to the amount of storage space available (like the number of buckets available to store things), and the information content the filling, which comes in different levels of granularity (fine or coarse, that is, compressed or uncompressed information). When the granularity is finer (when information is more compressed), the same bucket can hold more.

For example, it is estimated that the combined technological capacity of the world to store information provides 1,300 exabytes of hardware digits in 2007. However, when this storage space is filled and the corresponding content is optimally compressed, this only represents 295 exabytes of information.

When optimally compressed, the resulting carrying capacity approaches Shannon information or information entropy

Information entropy

In information theory, entropy is a measure of the uncertainty associated with a random variable. In this context, the term usually refers to the Shannon entropy, which quantifies the expected value of the information contained in a message, usually in units such as bits...

.

## Multiple bits

There are several units of informationUnits of information

In computing and telecommunications, a unit of information is the capacity of some standard data storage system or communication channel, used to measure the capacities of other systems and channels...

which are defined as multiples of bits, such as byte

Byte

The byte is a unit of digital information in computing and telecommunications that most commonly consists of eight bits. Historically, a byte was the number of bits used to encode a single character of text in a computer and for this reason it is the basic addressable element in many computer...

(8 bits), kilobit

Kilobit

The kilobit is a multiple of the unit bit for digital information or computer storage. The prefix kilo is defined in the International System of Units as a multiplier of 103 , and therefore,...

(either 1000 or 2

^{10}= 1024 bits), megabyteMegabyte

The megabyte is a multiple of the unit byte for digital information storage or transmission with two different values depending on context: bytes generally for computer memory; and one million bytes generally for computer storage. The IEEE Standards Board has decided that "Mega will mean 1 000...

(either or 8×2

^{20}= ), etc.Computers usually manipulate bits in groups of a fixed size, conventionally named "words". The number of bits in a word varies with the computer model; typically between 8 to 80 bits; or even more in some specialized machines.

The International Electrotechnical Commission

International Electrotechnical Commission

The International Electrotechnical Commission is a non-profit, non-governmental international standards organization that prepares and publishes International Standards for all electrical, electronic and related technologies – collectively known as "electrotechnology"...

's standard IEC 60027

IEC 60027

IEC 60027 is the International Electrotechnical Commission's standard on Letter symbols to be used in electrical technology...

specifies that the symbol for binary digit should be "bit", and this should be used in all multiples, such as "kbit" (for kilobit). However, the letter "b" (in lower case) is widely used too. The letter "B" (upper case) is both the standard and customary symbol for byte.

In telecommunications (including computer network

Computer network

A computer network, often simply referred to as a network, is a collection of hardware components and computers interconnected by communication channels that allow sharing of resources and information....

s), data transfer rates are usually measured in bits per second (bit/s) or its multiples, such as kbit/s. (This unit is not to be confused with baud

Baud

In telecommunications and electronics, baud is synonymous to symbols per second or pulses per second. It is the unit of symbol rate, also known as baud rate or modulation rate; the number of distinct symbol changes made to the transmission medium per second in a digitally modulated signal or a...

.)

A millibit is a (rare) unit of information equal to one thousandth of a bit.

## Bit-based computing

Certain bitwiseBitwise operation

A bitwise operation operates on one or more bit patterns or binary numerals at the level of their individual bits. This is used directly at the digital hardware level as well as in microcode, machine code and certain kinds of high level languages...

computer processor

Central processing unit

The central processing unit is the portion of a computer system that carries out the instructions of a computer program, to perform the basic arithmetical, logical, and input/output operations of the system. The CPU plays a role somewhat analogous to the brain in the computer. The term has been in...

instructions (such as

*bit set*) operate at the level of manipulating bits rather than manipulating data interpreted as an aggregate of bits.In the 1980s, when bitmap

Bitmap

In computer graphics, a bitmap or pixmap is a type of memory organization or image file format used to store digital images. The term bitmap comes from the computer programming terminology, meaning just a map of bits, a spatially mapped array of bits. Now, along with pixmap, it commonly refers to...

ped computer displays became popular, some computers provided specialized bit block transfer ("bitblt" or "blit") instructions to set or copy the bits that corresponded to a given rectangular area on the screen.

In most computers and programming languages, when a bit within a group of bits such as a byte or word is to be referred to, it is usually specified by a number from 0 (not 1) upwards corresponding to its position within the byte or word. However, 0 can refer to either the most significant bit

Most significant bit

In computing, the most significant bit is the bit position in a binary number having the greatest value...

or to the least significant bit

Least significant bit

In computing, the least significant bit is the bit position in a binary integer giving the units value, that is, determining whether the number is even or odd. The lsb is sometimes referred to as the right-most bit, due to the convention in positional notation of writing less significant digits...

depending on the context, so the convention of use must be known.

## Other information units

Other units of information, sometimes used in information theory, include the*natural digit*also called a*nat*

orNat (information)

A nat is a logarithmic unit of information or entropy, based on natural logarithms and powers of e, rather than the powers of 2 and base 2 logarithms which define the bit. The nat is the natural unit for information entropy...

*nit*and defined as logLogarithm

The logarithm of a number is the exponent by which another fixed value, the base, has to be raised to produce that number. For example, the logarithm of 1000 to base 10 is 3, because 1000 is 10 to the power 3: More generally, if x = by, then y is the logarithm of x to base b, and is written...

_{2}*e*(≈ 1.443) bits, where*e*is the base of the natural logarithmsE (mathematical constant)

The mathematical constant ' is the unique real number such that the value of the derivative of the function at the point is equal to 1. The function so defined is called the exponential function, and its inverse is the natural logarithm, or logarithm to base...

; and the

*dit*,*ban*

, orBan (information)

A ban, sometimes called a hartley or a dit , is a logarithmic unit which measures information or entropy, based on base 10 logarithms and powers of 10, rather than the powers of 2 and base 2 logarithms which define the bit. As a bit corresponds to a binary digit, so a ban is a decimal digit...

*Hartley*, defined as log_{2}10 (≈ 3.322) bits. Conversely, one bit of information corresponds to about lnNatural logarithm

The natural logarithm is the logarithm to the base e, where e is an irrational and transcendental constant approximately equal to 2.718281828...

2 (≈ 0.693) nats, or log

_{10}2 (≈ 0.301) Hartleys. Some authors also define a**binit**as an arbitrary information unit equivalent to some fixed but unspecified number of bits.## See also

- ByteByteThe byte is a unit of digital information in computing and telecommunications that most commonly consists of eight bits. Historically, a byte was the number of bits used to encode a single character of text in a computer and for this reason it is the basic addressable element in many computer...
- Integer (computer science)Integer (computer science)In computer science, an integer is a datum of integral data type, a data type which represents some finite subset of the mathematical integers. Integral data types may be of different sizes and may or may not be allowed to contain negative values....
- Primitive data type
- BitstreamBitstreamA bitstream or bit stream is a time series of bits.A bytestream is a series of bytes, typically of 8 bits each, and can be regarded as a special case of a bitstream....
- Entropy (information theory)
- Binary numeral systemBinary numeral systemThe binary numeral system, or base-2 number system, represents numeric values using two symbols, 0 and 1. More specifically, the usual base-2 system is a positional notation with a radix of 2...
- Ternary numeral systemTernary numeral systemTernary is the base- numeral system. Analogous to a bit, a ternary digit is a trit . One trit contains \log_2 3 bits of information...
- Bit (Tron character)