Internet traffic


Internet traffic is the flow of data within the entire Internet, or in certain network links of its constituent networks. Common measurements of traffic are total volume, in units of multiples of the byte, or transmission rates in bytes per unit of time.
As the topology of the Internet is not hierarchical, no single point of measurement is possible for total Internet traffic. Traffic data may be obtained from peering points of the Tier 1 network providers for indications of volume and growth. Such data, however, excludes traffic that remains within a single service provider's network as well as traffic that crosses private peering points.

Traffic sources

File sharing constitutes a large fraction of Internet traffic. The prevalent technology for file sharing is the BitTorrent protocol, a peer-to-peer system mediated through indexing sites that provide resource directories. The traffic patterns of P2P systems are often described as problematic and a cause of congestion. According to a 2013 Sandvine study, BitTorrent's share of Internet traffic had fallen to 7.4% overall, down from 31% in 2008.

Traffic management

The Internet does not employ any formally centralized facilities for traffic management. Its progenitor networks, especially the ARPANET, established early backbone infrastructure that carried traffic between major interchange centers, resulting in a tiered, hierarchical system of Internet service providers in which the Tier 1 networks provided traffic exchange through settlement-free peering and routed traffic to lower tiers of ISPs. The dynamic growth of the worldwide network resulted in ever-increasing interconnections at all peering levels of the Internet, so that a robust system developed that could mediate link failures, bottlenecks, and other congestion at many levels.
Economic traffic management is a term sometimes used to describe practices, such as seeding, that reward contribution within peer-to-peer file sharing and in the distribution of digital content in general.

Internet use tax

A tax on Internet use proposed in Hungary would have levied 150 forints per gigabyte of data traffic, in a move intended both to reduce Internet traffic and to let companies offset corporate income tax against the new levy. Hungarian traffic reached 1.15 billion gigabytes in 2013, with a further 18 million gigabytes generated by mobile devices. According to the consultancy firm eNet, this would have resulted in extra revenue of 175 billion forints under the new tax.
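As a rough check of eNet's figure, the implied revenue follows directly from the traffic volumes and the per-gigabyte rate quoted above (the figures are those in the text; the calculation itself is only illustrative):

```python
# Rough check of eNet's revenue estimate for Hungary's proposed data tax.
fixed_gb = 1.15e9      # fixed-line traffic in 2013, gigabytes
mobile_gb = 18e6       # mobile-device traffic, gigabytes
rate_huf_per_gb = 150  # proposed levy, forints per gigabyte

revenue_huf = (fixed_gb + mobile_gb) * rate_huf_per_gb
print(round(revenue_huf / 1e9, 1))  # → 175.2 (billion forints)
```

The result, about 175 billion forints, matches the estimate attributed to eNet.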
According to Yahoo News, economy minister Mihály Varga defended the move saying "the tax was fair as it reflected a shift by consumers to the Internet away from phone lines" and that "150 forints on each transferred gigabyte of data – was needed to plug holes in the 2015 budget of one of the EU’s most indebted nations".
Critics argued that the proposed tax would prove disadvantageous to the country's economic development, limit access to information, and hinder freedom of expression. Approximately 36,000 people signed up for a Facebook event to be held outside the Economy Ministry to protest against the possible tax.

Traffic classification

Traffic classification describes the methods of classifying traffic by passively observing features in the traffic, in line with particular classification goals. Some methods have only a coarse classification goal, for example whether the traffic is bulk transfer, peer-to-peer file sharing, or transaction-oriented. Others set a finer-grained goal, for instance the exact application represented by the traffic. Traffic features used in classification include port numbers, application payload, temporal characteristics, packet sizes, and other characteristics of the traffic. Methods for classifying Internet traffic range widely, from exact matching on features such as port number or payload, to heuristics, to statistical machine learning.
Accurate network traffic classification is fundamental to many Internet activities, from security monitoring to accounting, and from quality of service to providing operators with useful forecasts for long-term provisioning. Yet classification schemes are extremely difficult to operate accurately because of the shortage of knowledge available to the network. For example, the information in packet headers is often insufficient to allow for a precise methodology; consequently, the accuracy of traditional methods is between 50% and 70%.

Bayesian analysis techniques

This approach applies supervised machine learning to classify network traffic. Data are hand-classified into one of a number of categories, and a combination of the category labels and descriptions of the classified flows is used to train the classifier. Beyond the basic technique, initial assumptions are stated and two refinements are applied in practice; one improves the quality and separation of the input data, leading to an increase in the accuracy of the Naive Bayes classifier.
The basis of the categorization work is to classify the type of Internet traffic. This is done by grouping common applications into categories, e.g., "normal" versus "malicious", or by more complex definitions, e.g., the identification of specific applications or specific Transmission Control Protocol implementations. Adapted from Logg et al.
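As an illustration of the general technique (not the specific classifier of the work described above), a minimal Gaussian Naive Bayes flow classifier might look like the following. It assumes each flow is summarized by two hypothetical numeric features, mean packet size and mean inter-arrival time; the training flows and labels are invented for the example:

```python
import math
from collections import defaultdict

# Minimal Gaussian Naive Bayes sketch for flow classification. Each flow is
# summarized by two hypothetical features: mean packet size (bytes) and mean
# packet inter-arrival time (ms). Training data are invented, not measured.
train = [
    ((1400.0, 2.0), "bulk"),          # large packets, steady arrivals
    ((1350.0, 3.0), "bulk"),
    ((90.0, 150.0), "interactive"),   # small packets, bursty arrivals
    ((110.0, 200.0), "interactive"),
]

# Collect per-class samples of each feature.
samples = defaultdict(lambda: defaultdict(list))
for features, label in train:
    for i, x in enumerate(features):
        samples[label][i].append(x)

def log_gaussian(x, values):
    """Log density of x under a Gaussian fitted to the sample values."""
    mu = sum(values) / len(values)
    var = sum((v - mu) ** 2 for v in values) / len(values) + 1e-6
    return -((x - mu) ** 2) / (2 * var) - 0.5 * math.log(2 * math.pi * var)

def classify(features):
    """Pick the class maximizing the naive (independent-feature) log score."""
    def score(label):
        return sum(log_gaussian(x, samples[label][i])
                   for i, x in enumerate(features))
    return max(samples, key=score)

print(classify((1380.0, 2.5)))   # → bulk
print(classify((100.0, 170.0)))  # → interactive
```

Working in log space avoids the numeric underflow that multiplying many small likelihoods would cause; the refinements mentioned above (better feature quality and separation) would enter through the choice and preprocessing of the flow features.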

Survey

Traffic classification is a major component of automated intrusion detection systems. It is used to identify patterns, to earmark network resources for priority customers, or to identify customer use of network resources that in some way contravenes the operator's terms of service.
Commonly deployed Internet Protocol traffic classification techniques are based approximately on direct inspection of each packet's contents at some point on the network. Successive IP packets having the same 5-tuple of protocol type, source address:port, and destination address:port are considered to belong to a flow whose controlling application we wish to determine. Simple classification infers the controlling application's identity by assuming that most applications consistently use well-known TCP or UDP port numbers. However, many applications increasingly use unpredictable port numbers. As a result, more sophisticated classification techniques infer application type by looking for application-specific data within the TCP or User Datagram Protocol payloads.
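A minimal sketch of the simple port-based approach might look like this; the port-to-application table lists a few IANA well-known ports and is illustrative, not exhaustive:

```python
# Port-based flow classification sketch. A flow is identified by the 5-tuple
# (protocol, source address, source port, destination address, destination
# port); the mapping below covers only a few IANA well-known ports.
WELL_KNOWN_PORTS = {
    22: "ssh",
    25: "smtp",
    53: "dns",
    80: "http",
    443: "https",
}

def classify_flow(proto, src_addr, src_port, dst_addr, dst_port):
    """Guess the controlling application from the flow's 5-tuple.

    Checks both endpoints, since the well-known port is usually on the
    server side. Returns "unknown" when neither port is recognized, as
    happens for applications that use unpredictable port numbers.
    """
    for port in (dst_port, src_port):
        if port in WELL_KNOWN_PORTS:
            return WELL_KNOWN_PORTS[port]
    return "unknown"

print(classify_flow("tcp", "10.0.0.5", 49152, "93.184.216.34", 443))  # → https
print(classify_flow("udp", "10.0.0.5", 61000, "10.0.0.9", 6881))      # → unknown
```

The second call illustrates the weakness noted above: an application on an unregistered port yields no match, which is what motivates payload-based and statistical techniques.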

Global Internet traffic

Aggregating from multiple sources and applying usage and bitrate assumptions, Cisco Systems, a major network systems company, has published the following historical Internet Protocol and Internet traffic figures:

Year    IP Traffic (PB/month)    Fixed Internet traffic (PB/month)    Mobile Internet traffic (PB/month)
1990    0.001      0.001      n/a
1991    0.002      0.002      n/a
1992    0.005      0.004      n/a
1993    0.01       0.01       n/a
1994    0.02       0.02       n/a
1995    0.18       0.17       n/a
1996    1.9        1.8        n/a
1997    5.4        5.0        n/a
1998    12         11         n/a
1999    28         26         n/a
2000    84         75         n/a
2001    197        175        n/a
2002    405        356        n/a
2003    784        681        n/a
2004    1,477      1,267      n/a
2005    2,426      2,055      0.9
2006    3,992      3,339      4
2007    6,430      5,219      15
2008    10,174     8,140      33
2009    14,686     10,942     91
2010    20,151     14,955     237
2011    30,734     23,288     597
2012    43,570     31,339     885
2013    51,168     34,952     1,480
2014    59,848     39,909     2,514
2015    72,521     49,494     3,685
2016    96,054     65,942     7,201
2017    122,000    85,000     12,000

"Fixed Internet traffic" presumably refers to traffic from residential and commercial subscribers to ISPs, cable companies, and other service providers. "Mobile Internet traffic" presumably refers to backhaul traffic from cellphone towers and providers. The overall "IP traffic" figures, which can be 30% higher than the sum of the other two, presumably factor in traffic in the core of the national backbone, whereas the other figures seem to be derived principally from the network periphery.
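The gap between total IP traffic and the sum of the fixed and mobile figures can be checked against the table; taking the 2009 row as an example:

```python
# Check how much the 2009 total IP traffic figure exceeds the sum of the
# fixed and mobile figures (values in PB/month, from the table above).
ip_total, fixed, mobile = 14_686, 10_942, 91

excess = ip_total / (fixed + mobile) - 1
print(round(excess * 100))  # → 33 (percent above fixed + mobile)
```

So in 2009 the total exceeds the periphery-derived figures by roughly a third, consistent with the "can be 30% higher" observation; the ratio varies year to year.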
Cisco also publishes 5-year projections.

Year    Fixed Internet traffic (EB/month)    Mobile Internet traffic (EB/month)
2018    107    19
2019    137    29
2020    174    41
2021    219    57
2022    273    77

Internet backbone traffic in the United States

The following data for the Internet backbone in the US comes from the Minnesota Internet Traffic Studies (MINTS):
Year    Traffic (TB/month)
1990    1
1991    2
1992    4
1993    8
1994    16
1995    n/a
1996    1,500
1997    2,500-4,000
1998    5,000-8,000
1999    10,000-16,000
2000    20,000-35,000
2001    40,000-70,000
2002    80,000-140,000
2003    n/a
2004    n/a
2005    n/a
2006    450,000-800,000
2007    750,000-1,250,000
2008    1,200,000-1,800,000
2009    1,900,000-2,400,000
2010    2,600,000-3,100,000
2011    3,400,000-4,100,000

The Cisco data can be seven times higher than the Minnesota Internet Traffic Studies data not only because the Cisco figures are estimates for the global Internet, not just the domestic US Internet, but also because Cisco counts "general IP traffic". The MINTS estimate of US national backbone traffic for 2004, which may be interpolated as 200 petabytes/month, is a plausible three-fold multiple of the traffic of the largest US backbone carrier, Level 3 Communications, which claims an average traffic level of 60 petabytes/month.

Edholm's law

The bandwidth of telecommunication networks has been doubling every 18 months, an observation expressed as Edholm's law. This follows advances in semiconductor technology, such as metal-oxide-semiconductor (MOS) scaling, exemplified by the MOSFET, which has shown similar scaling described by Moore's law. In the 1980s, fiber-optic technology using laser light as the information carrier accelerated the transmission speed and bandwidth of telecommunication circuits, which has led to communication networks achieving terabit-per-second transmission speeds.
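A doubling every 18 months compounds quickly; a short calculation shows that it implies roughly a hundredfold increase in bandwidth per decade:

```python
# Compound growth implied by Edholm's law: bandwidth doubles every 18 months.
MONTHS_PER_DOUBLING = 18

def growth_factor(years):
    """Bandwidth multiple accumulated after the given number of years."""
    return 2 ** (years * 12 / MONTHS_PER_DOUBLING)

print(round(growth_factor(10)))  # → 102 (about 100x per decade)
```

At that rate, a network that carried gigabits per second in one decade would be expected to carry on the order of terabits per second two decades later, consistent with the trajectory described above.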