Packet loss in the context of Throughput

⭐ Core Definition: Packet loss

Packet loss occurs when one or more packets of data travelling across a computer network fail to reach their destination. Packet loss is caused either by errors in data transmission, typically across wireless networks, or by network congestion. It is measured as a percentage of packets lost with respect to packets sent.
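
As a quick illustration of that measurement, the sketch below computes the loss percentage from counters of packets sent and received; the function name and the example counts are hypothetical.

```python
def packet_loss_percent(packets_sent: int, packets_received: int) -> float:
    """Packet loss as a percentage of packets sent (illustrative helper)."""
    if packets_sent <= 0:
        raise ValueError("packets_sent must be positive")
    lost = packets_sent - packets_received
    return 100.0 * lost / packets_sent

# Example: 10,000 packets sent, 9,950 delivered -> 0.5% loss
print(packet_loss_percent(10_000, 9_950))
```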

The Transmission Control Protocol (TCP) detects packet loss and performs retransmissions to ensure reliable delivery. TCP also interprets packet loss as a sign of congestion and intentionally reduces the connection's throughput in response.
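
One widely cited rule of thumb for this loss/throughput relationship is the Mathis et al. approximation, throughput ≈ (MSS / RTT) · (C / √p) with C ≈ √(3/2). The sketch below simply evaluates that formula; it is an illustrative upper-bound estimate, not a model of any particular TCP implementation.

```python
import math

def tcp_throughput_estimate(mss_bytes: float, rtt_s: float, loss_rate: float) -> float:
    """Mathis et al. approximation: throughput ~ (MSS / RTT) * (C / sqrt(p)).

    Returns an upper-bound estimate in bytes per second. Illustrative only;
    real TCP stacks (Reno, CUBIC, BBR, ...) differ in detail.
    """
    if not 0.0 < loss_rate < 1.0:
        raise ValueError("loss_rate must be strictly between 0 and 1")
    c = math.sqrt(1.5)  # model constant, roughly 1.22
    return (mss_bytes / rtt_s) * (c / math.sqrt(loss_rate))

# Example: 1460-byte MSS, 50 ms RTT, 0.1% loss -> roughly 1.1 MB/s
print(tcp_throughput_estimate(1460, 0.050, 0.001))
```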

In this Dossier

Packet loss in the context of Quality of service

Quality of service (QoS) is the description or measurement of the overall performance of a service, such as a telephony or computer network, or a cloud computing service, particularly the performance seen by the users of the network. To quantitatively measure quality of service, several related aspects of the network service are often considered, such as packet loss, bit rate, throughput, transmission delay, availability, jitter, etc.
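
To make those metrics concrete, the sketch below summarizes packet loss ratio, throughput, and jitter from a small packet trace. The trace format and the simple jitter definition (mean variation between consecutive inter-arrival gaps, rather than a smoothed estimator such as RFC 3550's) are assumptions made for illustration.

```python
from statistics import mean

def qos_summary(sent: int, arrivals: list[tuple[float, int]]) -> dict:
    """Summarize basic QoS metrics from a packet trace.

    `sent` is the number of packets transmitted; `arrivals` lists
    (arrival_time_seconds, payload_bytes) for the packets that were delivered.
    """
    times = [t for t, _ in arrivals]
    gaps = [b - a for a, b in zip(times, times[1:])]
    duration = times[-1] - times[0] if len(times) > 1 else 0.0
    return {
        "loss_ratio": (sent - len(arrivals)) / sent,
        "throughput_bps": 8 * sum(size for _, size in arrivals) / duration if duration else 0.0,
        "jitter_s": mean(abs(b - a) for a, b in zip(gaps, gaps[1:])) if len(gaps) > 1 else 0.0,
    }

# Five packets sent, four delivered: 20% loss, ~610 kbit/s, a few ms of jitter.
trace = [(0.000, 1200), (0.021, 1200), (0.039, 1200), (0.063, 1200)]
print(qos_summary(sent=5, arrivals=trace))
```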

In the field of computer networking and other packet-switched telecommunication networks, quality of service refers to traffic prioritization and resource reservation control mechanisms rather than the achieved service quality. Quality of service is the ability to provide different priorities to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow.

View the full Wikipedia page for Quality of service

Packet loss in the context of Virtual circuit

A virtual circuit (VC) is a means of transporting data over a packet-switched network in which a connection is first established across the network between two endpoints. Rather than reserving a fixed data rate per connection, as in circuit switching, the network takes advantage of statistical multiplexing on its transmission links, an intrinsic feature of packet switching.

A 1978 standardization of virtual circuits by the CCITT imposes per-connection flow controls at all user-to-network and network-to-network interfaces. This permits participation in congestion control and reduces the likelihood of packet loss in a heavily loaded network. Some virtual circuit protocols provide a reliable communication service through data retransmissions triggered by error detection and automatic repeat request (ARQ).
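
The retransmission idea behind ARQ can be sketched as a stop-and-wait loop: each frame is resent until it is acknowledged or a retry limit is reached. The simulated lossy channel, the retry limit, and the function name below are hypothetical simplifications; real ARQ also needs sequence numbering and timers.

```python
import random

def send_with_arq(frames, loss_probability=0.3, max_retries=5, rng=None):
    """Stop-and-wait ARQ sketch: retransmit each frame until it is acknowledged."""
    rng = rng or random.Random(42)  # fixed seed so the demo is repeatable
    delivered = []
    for seq, frame in enumerate(frames):
        for attempt in range(1, max_retries + 1):
            # Simulate the frame (or its ACK) being lost in transit.
            if rng.random() >= loss_probability:
                delivered.append(frame)
                print(f"frame {seq} acknowledged after {attempt} attempt(s)")
                break
        else:
            print(f"frame {seq} dropped after {max_retries} attempts")
    return delivered

send_with_arq(["hello", "virtual", "circuit"])
```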

View the full Wikipedia page for Virtual circuit

Packet loss in the context of Congestion control

In computer networking and queueing theory, network congestion is the reduced quality of service that occurs when a network node or link carries or processes more load than its capacity. Typical effects include queueing delay, packet loss, or the blocking of new connections. A consequence of congestion is that an incremental increase in offered load yields only a small increase in network throughput, or even a decrease.

Network protocols that use aggressive retransmissions to compensate for packet loss due to congestion can increase congestion, even after the initial load has been reduced to a level that would not normally have induced network congestion. Such networks exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse.
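
The classic TCP response to loss-as-a-congestion-signal is additive increase / multiplicative decrease (AIMD): grow the congestion window slowly while no loss is seen and cut it sharply when loss occurs, which avoids the aggressive-retransmission spiral described above. The sketch below traces a simplified AIMD window, omitting slow start, timeouts, and fast recovery.

```python
def aimd(rounds: int, loss_events: set[int], cwnd: float = 1.0,
         increase: float = 1.0, decrease: float = 0.5) -> list[float]:
    """Trace an additive-increase / multiplicative-decrease congestion window."""
    history = []
    for r in range(rounds):
        if r in loss_events:
            cwnd = max(1.0, cwnd * decrease)  # back off: loss is read as congestion
        else:
            cwnd += increase                  # probe gently for more bandwidth
        history.append(cwnd)
    return history

# Losses in rounds 6 and 12 produce the familiar sawtooth pattern.
print(aimd(rounds=16, loss_events={6, 12}))
```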

View the full Wikipedia page for Congestion control