Units of information


In computing and telecommunications, a unit of information is the capacity of some standard data storage system or communication channel, used to measure the capacities of other systems and channels. In information theory, units of information are also used to measure the entropy of random variables and information contained in messages.
The most commonly used units of data storage capacity are the bit, the capacity of a system that has only two states, and the byte, which is equivalent to eight bits. Larger units can be formed from these with the SI prefixes (powers of ten) or the newer IEC binary prefixes (powers of two).

Primary units

In 1928, Ralph Hartley observed a fundamental storage principle, which was further formalized by Claude Shannon in 1945: the information that can be stored in a system is proportional to the logarithm of the number N of its possible states, denoted log_b N. Changing the base of the logarithm from b to a different number c has the effect of multiplying the value of the logarithm by a fixed constant, namely log_c b, since log_c N = (log_c b) · log_b N.
Therefore, the choice of the base b determines the unit used to measure information. In particular, if b is a positive integer, then the unit is the amount of information that can be stored in a system with N possible states.
When b is 2, the unit is the shannon, equal to the information content of one "bit". A system with 8 possible states, for example, can store up to log₂ 8 = 3 bits of information. Other units that have been named include the trit (one base-3 digit, b = 3), the nat (based on natural logarithms, b = e), and the ban or hartley (one decimal digit, b = 10).
The trit, ban, and nat are rarely used to measure storage capacity; but the nat, in particular, is often used in information theory, because natural logarithms are mathematically more convenient than logarithms in other bases.
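As a rough illustration (a minimal Python sketch, not part of any standard; the helper name information_content is ours), these units and the base-change rule can be checked numerically:

```python
import math

def information_content(num_states: int, base: float) -> float:
    """Information needed to distinguish num_states equally likely
    states, in the unit defined by the logarithm base."""
    return math.log(num_states, base)

# A system with 8 states stores log2(8) = 3 bits (shannons).
print(information_content(8, 2))        # 3.0 (up to rounding)
print(information_content(8, math.e))   # ~2.079 nats
print(information_content(8, 3))        # ~1.893 trits
print(information_content(8, 10))       # ~0.903 bans

# Base change multiplies by a constant: log_c N = (log_c b) * log_b N.
bits = information_content(8, 2)
nats = bits * math.log(2)               # log_e 2 converts bits to nats
assert abs(nats - information_content(8, math.e)) < 1e-12
```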

Units derived from bit

Several conventional names are used for collections or groups of bits.

Byte

Historically, a byte was the number of bits used to encode a character of text in a computer, a number that depended on the hardware architecture; but today it almost always means eight bits – that is, an octet. A byte can represent 256 distinct values, such as non-negative integers from 0 to 255, or signed integers from −128 to 127. The IEEE 1541-2002 standard specifies "B" as the symbol for byte. Bytes, or multiples thereof, are almost always used to specify the sizes of computer files and the capacity of storage units. Most modern computers and peripheral devices are designed to manipulate data in whole bytes or groups of bytes, rather than individual bits.
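A small Python sketch (ours, not from any standard) showing the 256 values of a byte and the two common interpretations of the same bit pattern:

```python
import struct

# One byte holds 8 bits, so it can represent 2**8 = 256 distinct values.
assert 2 ** 8 == 256

value = 0b10110101                 # 181 when read as an unsigned byte
# The same bit pattern read as a signed (two's-complement) byte:
signed = value - 256 if value >= 128 else value
print(value, signed)               # 181 -75

# The standard library performs the same reinterpretation:
(reinterpreted,) = struct.unpack("b", struct.pack("B", value))
assert reinterpreted == signed
```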

Nibble

A group of four bits, or half a byte, is sometimes called a nibble or nybble. This unit is most often used in the context of hexadecimal number representations, since a nibble has the same amount of information as one hexadecimal digit.
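For example (a minimal sketch of the usual masking idiom), the two nibbles of a byte map one-to-one onto the two hexadecimal digits of its representation:

```python
byte = 0xB5                        # bit pattern 1011 0101

high_nibble = (byte >> 4) & 0xF    # 0b1011 = 11, hex digit 'B'
low_nibble = byte & 0xF            # 0b0101 = 5,  hex digit '5'

# Each nibble is exactly one hexadecimal digit:
assert f"{high_nibble:X}{low_nibble:X}" == f"{byte:02X}" == "B5"
```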

Crumb

A pair of bits, or a quarter byte, was sometimes called a crumb, a term occasionally used in early 8-bit computing. It is now largely defunct.

Word, block, and page

Computers usually manipulate bits in groups of a fixed size, conventionally called words. The number of bits in a word is usually defined by the size of the registers in the computer's CPU, or by the number of data bits that are fetched from its main memory in a single operation. In the IA-32 architecture, more commonly known as x86-32, a word is 16 bits, but other past and current architectures have used words of 4, 8, 9, 12, 13, 16, 18, 20, 21, 22, 24, 25, 26, 29, 30, 31, 32, 33, 35, 36, 38, 39, 40, 42, 44, 48, 50, 52, 54, 56, 60, 64, 72, or 80 bits, among others.
Some machine instructions and computer number formats use two words (a "double word" or dword), or four words (a "quad word" or quad).
Computer memory caches usually operate on blocks of memory that consist of several consecutive words. These units are customarily called cache blocks, or, in CPU caches, cache lines.
Virtual memory systems partition the computer's main storage into even larger units, traditionally called pages.
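A hedged Python sketch for inspecting these sizes on the machine at hand (pointer width is only a common proxy for the native word size, and os.sysconf is POSIX-only):

```python
import os
import struct

# Pointer width in bits, a common proxy for the native word size
# (typically 64 on current desktop CPUs, 32 on older ones).
word_bits = struct.calcsize("P") * 8
print(f"word size: {word_bits} bits")

# Virtual-memory page size as reported by the operating system
# (often 4096 bytes on x86 systems; POSIX-only call).
if hasattr(os, "sysconf"):
    print(f"page size: {os.sysconf('SC_PAGE_SIZE')} bytes")
```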

Systematic multiples

Terms for large quantities of bits can be formed using the standard range of SI prefixes for powers of 10, e.g., kilo = 10³ = 1000, mega = 10⁶ = 1,000,000, and giga = 10⁹ = 1,000,000,000. These prefixes are more often used for multiples of bytes, as in kilobyte, megabyte, and gigabyte.
However, for technical reasons, the capacities of computer memories and some storage units are often multiples of some large power of two, such as 2²⁸ = 268,435,456 bytes. To avoid such unwieldy numbers, people have often repurposed the SI prefixes to mean the nearest power of two, e.g., using the prefix kilo for 2¹⁰ = 1024, mega for 2²⁰ = 1,048,576, giga for 2³⁰ = 1,073,741,824, and so on. For example, a random access memory chip with a capacity of 2²⁸ bytes would be referred to as a 256-megabyte chip. The table below illustrates these differences.
Symbol  Prefix  SI meaning       Binary meaning   Size difference
k       kilo    10³ = 1000¹      2¹⁰ = 1024¹      2.40%
M       mega    10⁶ = 1000²      2²⁰ = 1024²      4.86%
G       giga    10⁹ = 1000³      2³⁰ = 1024³      7.37%
T       tera    10¹² = 1000⁴     2⁴⁰ = 1024⁴      9.95%
P       peta    10¹⁵ = 1000⁵     2⁵⁰ = 1024⁵      12.59%
E       exa     10¹⁸ = 1000⁶     2⁶⁰ = 1024⁶      15.29%
Z       zetta   10²¹ = 1000⁷     2⁷⁰ = 1024⁷      18.06%
Y       yotta   10²⁴ = 1000⁸     2⁸⁰ = 1024⁸      20.89%
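The "size difference" column can be reproduced with a few lines of Python (a sketch; the percentage is how much the binary interpretation exceeds the decimal one):

```python
prefixes = ["kilo", "mega", "giga", "tera", "peta", "exa", "zetta", "yotta"]
for n, name in enumerate(prefixes, start=1):
    decimal = 1000 ** n            # SI meaning
    binary = 1024 ** n             # binary meaning
    diff = (binary / decimal - 1) * 100
    print(f"{name:>5}: 2^{10 * n} exceeds 10^{3 * n} by {diff:.2f}%")
```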

In the past, uppercase K was sometimes used instead of lowercase k to indicate 1024 instead of 1000. However, this usage was never consistently applied.
On the other hand, for external storage systems the SI prefixes were commonly used with their decimal values. There have been many attempts to resolve the confusion by providing alternative notations for power-of-two multiples. In 1998 the International Electrotechnical Commission issued a standard for this purpose, namely a series of binary prefixes that use 1024 instead of 1000 as the main radix: kibi (Ki) for 2¹⁰, mebi (Mi) for 2²⁰, gibi (Gi) for 2³⁰, tebi (Ti) for 2⁴⁰, and so on.
The JEDEC memory standards, however, define uppercase K, M, and G to mean the binary powers 2¹⁰, 2²⁰, and 2³⁰, reflecting common usage.
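A short Python sketch (our own helper, format_size, not a library function) that renders a byte count with either convention:

```python
def format_size(num_bytes: int, binary: bool = True) -> str:
    """Render a byte count with IEC binary prefixes (KiB, MiB, ...)
    or SI decimal prefixes (kB, MB, ...)."""
    step = 1024 if binary else 1000
    units = ["KiB", "MiB", "GiB", "TiB"] if binary else ["kB", "MB", "GB", "TB"]
    size = float(num_bytes)
    unit = "B"
    for next_unit in units:
        if size < step:
            break
        size /= step
        unit = next_unit
    return f"{size:.2f} {unit}"

print(format_size(2 ** 28))                # 256.00 MiB
print(format_size(2 ** 28, binary=False))  # 268.44 MB
```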

Size examples

Several other units of information storage have been named; some of these names are jargon, obsolete, or used only in very restricted contexts.