Bandwidth (computing)


In computing, bandwidth is the maximum rate of data transfer across a given path. Bandwidth may be characterized as network bandwidth, data bandwidth, or digital bandwidth.
This definition of bandwidth is in contrast to the fields of signal processing, wireless communications, modem data transmission, digital communications, and electronics, in which bandwidth refers to analog signal bandwidth measured in hertz, meaning the frequency range between the lowest and highest attainable frequencies while meeting a well-defined impairment level in signal power. The actual bit rate that can be achieved depends not only on the signal bandwidth but also on the noise on the channel.

Network bandwidth capacity

The term bandwidth sometimes defines the net bit rate (peak bit rate, information rate, or physical-layer useful bit rate), channel capacity, or the maximum throughput of a logical or physical communication path in a digital communication system. For example, bandwidth tests measure the maximum throughput of a computer network. The maximum rate that can be sustained on a link is limited by the Shannon–Hartley channel capacity for these communication systems, which depends on the bandwidth in hertz and the noise on the channel.
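The Shannon–Hartley limit mentioned above, C = B·log₂(1 + S/N), can be evaluated directly. A minimal sketch (the telephone-line figures below are illustrative assumptions, not from this article):

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley channel capacity in bit/s: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Example: a telephone-line-like channel with ~3.1 kHz of analog
# bandwidth and ~30 dB signal-to-noise ratio (assumed figures).
snr = 10 ** (30 / 10)               # 30 dB -> linear power ratio of 1000
capacity = shannon_capacity(3100, snr)
print(f"{capacity / 1e3:.1f} kbit/s")  # about 30.9 kbit/s
```

Note how the achievable bit rate depends on both the analog bandwidth in hertz and the noise, as the text states.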

Network bandwidth consumption

The consumed bandwidth in bit/s corresponds to achieved throughput or goodput, i.e., the average rate of successful data transfer through a communication path. The consumed bandwidth can be affected by technologies such as bandwidth shaping, bandwidth management, bandwidth throttling, bandwidth caps, and bandwidth allocation. A bit stream's bandwidth is proportional to the average consumed signal bandwidth in hertz during a studied time interval.
Channel bandwidth may be confused with useful data throughput. For example, a channel advertised at x bit/s may not actually carry user data at x bit/s, since protocols, encryption, and other factors can add appreciable overhead. For instance, much internet traffic uses the Transmission Control Protocol (TCP), which requires a three-way handshake for each transaction. Although in many modern implementations the protocol is efficient, it does add significant overhead compared to simpler protocols. Also, data packets may be lost, which further reduces the useful data throughput. In general, any effective digital communication needs a framing protocol; overhead and effective throughput depend on the implementation. Useful throughput is less than or equal to the actual channel capacity minus implementation overhead.
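The gap between channel rate and useful throughput can be sketched with a simple per-packet model. This is an illustrative simplification, not a full protocol model; the frame and header sizes below are typical Ethernet/TCP figures assumed for the example:

```python
def goodput(link_rate_bps: float, payload_bytes: int,
            overhead_bytes: int, loss_rate: float = 0.0) -> float:
    """Effective throughput after per-packet header overhead and packet loss.

    A simplified model: useful rate = link rate * payload fraction * delivery rate.
    """
    efficiency = payload_bytes / (payload_bytes + overhead_bytes)
    return link_rate_bps * efficiency * (1 - loss_rate)

# Assumed example: 100 Mbit/s link, 1460 bytes of TCP payload per
# 1500-byte frame (~40 bytes of IP+TCP headers), 1% packet loss.
print(f"{goodput(100e6, 1460, 40, 0.01) / 1e6:.1f} Mbit/s")  # about 96.4 Mbit/s
```

Even this rough model shows useful throughput falling below the nominal channel rate once overhead and loss are accounted for.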

Asymptotic bandwidth

The asymptotic bandwidth of a network is the measure of maximum throughput for a greedy source, for example as the message size grows toward its maximum.
Asymptotic bandwidths are usually estimated by sending a number of very large messages through the network and measuring the end-to-end throughput. Like other bandwidths, the asymptotic bandwidth is measured in multiples of bits per second. Since bandwidth spikes can skew the measurement, carriers often use the 95th percentile method: bandwidth usage is measured continuously, and the top 5 percent of samples are discarded.

Multimedia bandwidth

Digital bandwidth may also refer to the multimedia bit rate or average bitrate after multimedia data compression, defined as the total amount of data divided by the playback time.
Due to the impractically high bandwidth requirements of uncompressed digital media, the required multimedia bandwidth can be significantly reduced with data compression. The most widely used data compression technique for media bandwidth reduction is the discrete cosine transform (DCT), first proposed by Nasir Ahmed in the early 1970s. DCT compression significantly reduces the memory and bandwidth required for digital signals and can achieve a data compression ratio of up to 100:1 compared to uncompressed media.
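The definition above, total data divided by playback time, translates directly into code. The file size and duration here are assumed example figures:

```python
def average_bitrate(total_bytes: float, playback_seconds: float) -> float:
    """Average multimedia bit rate in bit/s: total data divided by playback time."""
    return total_bytes * 8 / playback_seconds

# Assumed example: a 180 MB compressed media file with one hour of playback.
rate = average_bitrate(180e6, 3600)
print(f"{rate / 1e3:.0f} kbit/s")  # 400 kbit/s
```

The factor of 8 converts bytes of stored data into the bits-per-second units used for bandwidth.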

Bandwidth in web hosting

In web hosting, the term bandwidth is often incorrectly used to describe the amount of data transferred to or from a website or server within a prescribed period of time, for example bandwidth consumption accumulated over a month, measured in gigabytes per month. The more accurate phrase for this maximum amount of data transferred per month or other given period is monthly data transfer.
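The distinction matters because a monthly data-transfer volume implies only a modest sustained bit rate. A quick conversion, assuming a 30-day month and decimal gigabytes (both assumptions, not stated in the text):

```python
def sustained_rate_mbps(monthly_gb: float) -> float:
    """Average bit rate (Mbit/s) needed to move a monthly data volume.

    Assumes a 30-day month and decimal units (1 GB = 10^9 bytes).
    """
    bits = monthly_gb * 1e9 * 8
    seconds = 30 * 24 * 3600
    return bits / seconds / 1e6

# A hypothetical 1000 GB/month transfer allowance:
print(f"{sustained_rate_mbps(1000):.2f} Mbit/s")  # about 3.09 Mbit/s
```

This shows why "gigabytes per month" describes a transfer volume rather than a bandwidth in the bit-rate sense.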
A similar situation can occur for end user ISPs as well, especially where network capacity is limited.

Internet connection bandwidth

This table shows the maximum bandwidth of common Internet access technologies:
56 kbit/s – Modem / dialup
1.5 Mbit/s – ADSL Lite
1.544 Mbit/s – T1/DS1
2.048 Mbit/s – E1 / E-carrier
4 Mbit/s – ADSL1
10 Mbit/s – Ethernet
11 Mbit/s – Wireless 802.11b
24 Mbit/s – ADSL2+
44.736 Mbit/s – T3/DS3
54 Mbit/s – Wireless 802.11g
100 Mbit/s – Fast Ethernet
155 Mbit/s – OC3
600 Mbit/s – Wireless 802.11n
622 Mbit/s – OC12
1 Gbit/s – Gigabit Ethernet
1.3 Gbit/s – Wireless 802.11ac
2.5 Gbit/s – OC48
5 Gbit/s – SuperSpeed USB
7 Gbit/s – Wireless 802.11ad
9.6 Gbit/s – OC192
10 Gbit/s – 10 Gigabit Ethernet, SuperSpeed USB 10 Gbit/s
20 Gbit/s – SuperSpeed USB 20 Gbit/s
40 Gbit/s – Thunderbolt 3
100 Gbit/s – 100 Gigabit Ethernet

Edholm's law

Edholm's law, proposed by and named after Phil Edholm in 2004, holds that the bandwidth of telecommunication networks doubles every 18 months, a trend that has held since the 1970s. The trend is evident in the cases of the Internet, cellular networks, wireless LANs, and wireless personal area networks.
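An 18-month doubling period implies exponential growth that is easy to project. A minimal sketch (the 56 kbit/s starting point is an assumed example, echoing the dial-up rate from the table above):

```python
def edholm_projection(initial_bps: float, years: float) -> float:
    """Bandwidth after `years`, doubling every 18 months (1.5 years)."""
    return initial_bps * 2 ** (years / 1.5)

# Starting from an assumed 56 kbit/s, nine years is six doublings:
print(f"{edholm_projection(56e3, 9) / 1e6:.2f} Mbit/s")  # 3.58 Mbit/s
```

Six doublings in nine years multiplies the rate by 64, which is what makes the trend so striking over decades.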
The MOSFET is the most important factor enabling the rapid increase in bandwidth. The MOSFET was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959, and went on to become the basic building block of modern telecommunications technology. Continuous MOSFET scaling, along with various advances in MOS technology, has enabled both Moore's law and Edholm's law.