Retransmission (data networks) in the context of Automatic repeat request

⭐ Core Definition: Retransmission (data networks)

Retransmission, essentially identical to automatic repeat request (ARQ), is the resending of packets that have been damaged or lost. It is one of the basic mechanisms used by protocols operating over a packet-switched computer network to provide reliable communication (such as that provided by a reliable byte stream, for example TCP).

Such networks are usually "unreliable": they offer no guarantee that they will not delay, damage, or lose packets, or deliver them out of order. Protocols that provide reliable communication over such networks use a combination of acknowledgments (an explicit receipt sent by the destination of the data), retransmission of missing or damaged packets (usually triggered by a timeout), and checksums to provide that reliability.
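As a rough sketch of how these pieces fit together, the fragment below shows a hypothetical packet format and the receiver-side check: a CRC32 checksum detects damage, and a damaged packet is simply not acknowledged, so the sender's timeout eventually triggers a retransmission. The packet layout and function names here are invented for illustration.

```python
import zlib

def make_packet(seq: int, payload: bytes) -> bytes:
    """Hypothetical layout: 4-byte sequence number, 4-byte CRC32, payload."""
    return seq.to_bytes(4, "big") + zlib.crc32(payload).to_bytes(4, "big") + payload

def check_packet(packet: bytes):
    """Return (seq, payload) if the checksum matches, otherwise None."""
    seq = int.from_bytes(packet[0:4], "big")
    expected = int.from_bytes(packet[4:8], "big")
    payload = packet[8:]
    if zlib.crc32(payload) != expected:
        return None   # damaged in transit: send no ACK, let the sender time out
    return seq, payload   # intact: the receiver would now acknowledge seq
```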

👉 Retransmission (data networks) in the context of Automatic repeat request

Automatic repeat request (ARQ), also known as automatic repeat query, is an error-control method for data transmission that uses acknowledgements (messages sent by the receiver indicating that it has correctly received a message) and timeouts (specified periods allowed to elapse before an acknowledgment is expected to arrive). If the sender does not receive an acknowledgment before the timeout, it retransmits the message until it receives an acknowledgment or exceeds a predefined number of retransmissions.

ARQ is used to achieve reliable data transmission over an unreliable communication channel. ARQ is appropriate if the communication channel has varying or unknown capacity.
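A stop-and-wait flavor of ARQ can be sketched in a few lines of Python over UDP; the timeout value, retry limit, and ACK format below are illustrative assumptions rather than anything ARQ itself prescribes.

```python
import socket

TIMEOUT_S = 0.5    # illustrative; real stacks derive this from the measured RTT
MAX_SENDS = 5      # illustrative cap on total transmission attempts

def send_with_arq(sock: socket.socket, packet: bytes, dest) -> bool:
    """Send over UDP, retransmitting until an ACK arrives or attempts run out."""
    sock.settimeout(TIMEOUT_S)
    for _ in range(MAX_SENDS):
        sock.sendto(packet, dest)
        try:
            reply, _ = sock.recvfrom(64)   # block until a reply or the timeout
            if reply == b"ACK":
                return True                # acknowledged: done
        except socket.timeout:
            pass                           # no ACK in time: loop and retransmit
    return False                           # retry budget exhausted: report failure
```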

👉 Retransmission (data networks) in the context of Transmission Control Protocol

The Transmission Control Protocol (TCP) is one of the main protocols of the Internet protocol suite. It originated in the initial network implementation in which it complemented the Internet Protocol (IP). Therefore, the entire suite is commonly referred to as TCP/IP. TCP provides reliable, ordered, and error-checked delivery of a stream of octets (bytes) between applications running on hosts communicating via an IP network. Major internet applications such as the World Wide Web, email, remote administration, file transfer and streaming media rely on TCP, which is part of the transport layer of the TCP/IP suite. SSL/TLS often runs on top of TCP. Today, TCP remains a core protocol for most Internet communication, ensuring reliable data transfer across diverse networks.

TCP is connection-oriented: the sender and receiver must first establish a connection based on agreed parameters, which they do through a three-way handshake. The server must be listening (passive open) for connection requests from clients before a connection is established. The three-way handshake (active open), retransmission, and error detection add to reliability but lengthen latency. Applications that do not require a reliable data stream may use the User Datagram Protocol (UDP) instead, which provides a connectionless datagram service that prioritizes time over reliability. TCP also employs network congestion avoidance. However, TCP has vulnerabilities, including denial of service, connection hijacking, TCP veto, and reset attacks.
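Applications do not implement the handshake themselves; the operating system performs it when a client connects to a listening server. A minimal sketch using Python's standard socket module (the loopback address and port are arbitrary choices for illustration):

```python
import socket

# Passive open: the server binds and listens, ready to accept connections.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 50007))
server.listen()

# Active open: connect() makes the kernel perform the SYN, SYN-ACK, ACK
# exchange; it does not return until the handshake has completed.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 50007))

conn, addr = server.accept()    # the queued, fully established connection
client.sendall(b"hello")        # from here on: reliable, ordered byte stream
print(conn.recv(5))             # b'hello'
conn.close(); client.close(); server.close()
```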

👉 Retransmission (data networks) in the context of Congestion control

In computer networking and queueing theory, network congestion is the reduced quality of service that occurs when a network node or link carries or processes more load than its capacity. Typical effects include queueing delay, packet loss, and the blocking of new connections. A consequence of congestion is that an incremental increase in offered load yields only a small increase in network throughput, or even a decrease.

Network protocols that use aggressive retransmissions to compensate for packet loss due to congestion can increase congestion, even after the initial load has been reduced to a level that would not normally have induced network congestion. Such networks exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse.
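One standard countermeasure, used by TCP's retransmission timer among others, is exponential backoff: each unacknowledged retransmission waits longer than the last, so a congested path sees progressively less retransmission traffic instead of more. A minimal sketch, where the base timeout, cap, and retry count are illustrative values:

```python
def backoff_schedule(base_s: float = 0.5, cap_s: float = 60.0, retries: int = 8):
    """Yield successive retransmission timeouts, doubling up to a cap.

    Backing off exponentially means a persistently congested network
    receives less retransmission load over time, helping it drain
    rather than being pushed into congestive collapse.
    """
    timeout = base_s
    for _ in range(retries):
        yield timeout
        timeout = min(timeout * 2.0, cap_s)

print(list(backoff_schedule()))
# [0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0]
```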
