Reliability (computer networking)

⭐ Core Definition: Reliability (computer networking)

In computer networking, a reliable protocol is a communication protocol that notifies the sender whether or not the delivery of data to intended recipients was successful. Reliability is a synonym for assurance, the term used by the ITU and ATM Forum, and it is the basis of fault-tolerant messaging.

Reliable protocols typically incur more overhead than unreliable protocols and, as a result, run more slowly and scale less well. This is often not an issue for unicast protocols, but it may become a problem for reliable multicast protocols.

Reliability (computer networking) in the context of Virtual circuit

A virtual circuit (VC) is a means of transporting data over a data network, based on packet switching and in which a connection is first established across the network between two endpoints. The network, rather than having a fixed data rate reservation per connection as in circuit switching, takes advantage of the statistical multiplexing on its transmission links, an intrinsic feature of packet switching.

A 1978 standardization of virtual circuits by the CCITT imposes per-connection flow controls at all user-to-network and network-to-network interfaces. This permits participation in congestion control and reduces the likelihood of packet loss in a heavily loaded network. Some virtual circuit protocols provide a reliable communication service through the use of data retransmissions invoked by error detection and automatic repeat request (ARQ).
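
As a rough illustration, the Python sketch below models a single virtual-circuit switch: connection setup installs a forwarding-table entry, after which each data packet carries only a short circuit identifier that the switch swaps and forwards. The class and names are invented for this sketch and are not taken from any particular standard.

# Toy virtual-circuit switch (illustrative only). Setup installs a per-connection
# table entry; data packets then carry just a small VC identifier, which the
# switch swaps while forwarding. No per-packet route computation is needed.

class VCSwitch:
    def __init__(self):
        # (incoming port, incoming VC id) -> (outgoing port, outgoing VC id)
        self.table = {}

    def setup(self, in_port, in_vc, out_port, out_vc):
        """Install a per-connection entry during call establishment."""
        self.table[(in_port, in_vc)] = (out_port, out_vc)

    def forward(self, in_port, in_vc, payload):
        """Label-swap and forward one packet along the established circuit."""
        out_port, out_vc = self.table[(in_port, in_vc)]
        return out_port, out_vc, payload

switch = VCSwitch()
switch.setup(in_port=1, in_vc=7, out_port=3, out_vc=12)  # connection establishment
print(switch.forward(1, 7, b"data"))                     # -> (3, 12, b'data')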

Reliability (computer networking) in the context of User Datagram Protocol

In computer networking, the User Datagram Protocol (UDP) is one of the core communication protocols of the Internet protocol suite used to send messages (transported as datagrams in packets) to other hosts on an Internet Protocol (IP) network. Within an IP network, UDP does not require prior communication to set up communication channels or data paths.

UDP is a connectionless protocol, meaning that messages are sent without negotiating a connection and that UDP does not keep track of what it has sent. UDP provides checksums for data integrity and port numbers for addressing different functions at the source and destination of the datagram. It has no handshaking dialogues and thus exposes the user's program to any unreliability of the underlying network; there is no guarantee of delivery, ordering, or duplicate protection. If error-correction facilities are needed at the network interface level, an application may instead use the Transmission Control Protocol (TCP) or the Stream Control Transmission Protocol (SCTP), which are designed for this purpose.
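
A minimal sketch using Python's standard socket module shows the connectionless behavior described above; the loopback address and port number are arbitrary choices for this example.

import socket

# Connectionless UDP exchange on the loopback interface. No connection is
# negotiated before sendto(), and nothing guarantees delivery: if the datagram
# were lost, the receiver's recvfrom() below would simply raise socket.timeout.

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9999))
receiver.settimeout(1.0)                       # give up rather than wait forever

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"ping", ("127.0.0.1", 9999))    # no handshake precedes this send

data, addr = receiver.recvfrom(4096)
print(data, addr)                              # arrives only because loopback is lossless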

Reliability (computer networking) in the context of Transmission Control Protocol

The Transmission Control Protocol (TCP) is one of the main protocols of the Internet protocol suite. It originated in the initial network implementation in which it complemented the Internet Protocol (IP). Therefore, the entire suite is commonly referred to as TCP/IP. TCP provides reliable, ordered, and error-checked delivery of a stream of octets (bytes) between applications running on hosts communicating via an IP network. Major internet applications such as the World Wide Web, email, remote administration, file transfer and streaming media rely on TCP, which is part of the transport layer of the TCP/IP suite. SSL/TLS often runs on top of TCP. Today, TCP remains a core protocol for most Internet communication, ensuring reliable data transfer across diverse networks.

TCP is connection-oriented, meaning that the sender and receiver must first establish a connection based on agreed parameters, which they do through a three-way handshake procedure. The server must be listening (passive open) for connection requests from clients before a connection is established. The three-way handshake (active open), retransmission, and error detection add to reliability but lengthen latency. Applications that do not require reliable data stream service may use the User Datagram Protocol (UDP) instead, which provides a connectionless datagram service that prioritizes time over reliability. TCP also employs network congestion avoidance. Nonetheless, TCP has vulnerabilities, including denial of service, connection hijacking, TCP veto, and reset attacks.
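
A minimal sketch with Python's standard socket module: the server performs the passive open with listen(), and the kernel carries out the three-way handshake when the client calls connect(). The loopback address and port are arbitrary choices for this example.

import socket

# The server must be listening (passive open) before the client's active open;
# connect() below triggers the kernel's SYN, SYN-ACK, ACK exchange on loopback.

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 8888))
server.listen(1)                          # passive open: wait for connection requests

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 8888))       # active open: three-way handshake happens here

conn, addr = server.accept()              # handshake already completed by the kernel
client.sendall(b"hello")                  # reliable, ordered byte stream
print(conn.recv(4096))                    # -> b'hello'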

Reliability (computer networking) in the context of End-to-end principle

The end-to-end (E2E) principle is a design principle in computer networking that requires application-specific features (such as reliability and security) to be implemented in the communicating end nodes of the network, instead of in the network itself. Intermediary nodes (such as gateways and routers) that exist to establish the network may still implement these features to improve efficiency but do not guarantee end-to-end functionality.
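
As an illustration of placing reliability at the end nodes, the Python sketch below has the sending application attach a SHA-256 digest that the receiving application verifies on arrival; the network in between never inspects it. The function names are invented for this example, and a real application would typically pair the check with an application-level retransmission request.

import hashlib

# End-to-end integrity check (illustrative). The sender prepends a digest; the
# receiver recomputes and compares it. Corruption anywhere along the path is
# caught at the endpoint, regardless of what intermediary nodes did or did not do.

def make_message(payload: bytes) -> bytes:
    return hashlib.sha256(payload).digest() + payload

def check_message(message: bytes) -> bytes:
    digest, payload = message[:32], message[32:]
    if hashlib.sha256(payload).digest() != digest:
        raise ValueError("corrupted in transit; ask the sender to retransmit")
    return payload

msg = make_message(b"important data")
print(check_message(msg))                 # -> b'important data'

corrupted = bytearray(msg)
corrupted[-1] ^= 0xFF                     # simulate damage inside the network
try:
    check_message(bytes(corrupted))
except ValueError as e:
    print(e)                              # the end node detects what the network missed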

The essence of what would later be called the end-to-end principle was contained in the work of Donald Davies on packet-switched networks in the 1960s. Louis Pouzin pioneered the use of the end-to-end strategy in the CYCLADES network in the 1970s. The principle was first articulated explicitly in 1981 by Saltzer, Reed, and Clark. The meaning of the end-to-end principle has been continuously reinterpreted ever since its initial articulation. Noteworthy formulations of the end-to-end principle can be found before the seminal 1981 Saltzer, Reed, and Clark paper.

Reliability (computer networking) in the context of Transport layer

In computer networking, the transport layer is a conceptual division of methods in the layered architecture of protocols in the network stack in the Internet protocol suite and the OSI model. The protocols of this layer provide end-to-end communication services for applications. It provides services such as connection-oriented communication, reliability, flow control, and multiplexing.
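
The sketch below illustrates just the multiplexing service in Python: destination port numbers let a single host hand arriving segments to different applications. The handler table and port assignments are illustrative only.

# Toy port-based demultiplexer. Each application registers under a port number;
# an arriving segment is delivered to whichever handler is bound to its
# destination port, so many applications can share one host address.

handlers = {
    53: lambda data: f"DNS handler got {data!r}",
    80: lambda data: f"HTTP handler got {data!r}",
}

def demultiplex(dst_port: int, data: bytes) -> str:
    """Deliver an arriving segment to the application bound to its port."""
    try:
        return handlers[dst_port](data)
    except KeyError:
        return f"no listener on port {dst_port}: drop the segment"

print(demultiplex(53, b"query"))
print(demultiplex(80, b"GET /"))
print(demultiplex(9999, b"???"))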

The transport layer of the Internet protocol suite, which is the foundation of the Internet, differs in implementation details and semantics from that of the OSI model of general networking. The protocols in use today in this layer for the Internet all originated in the development of TCP/IP. In the OSI model, the transport layer is often referred to as Layer 4, or L4; numbered layers are not used in TCP/IP.

Reliability (computer networking) in the context of Automatic repeat request

Automatic repeat request (ARQ), also known as automatic repeat query, is an error-control method for data transmission that uses acknowledgements (messages sent by the receiver indicating that it has correctly received a message) and timeouts (specified periods of time allowed to elapse before an acknowledgment is to be received). If the sender does not receive an acknowledgment before the timeout, it re-transmits the message until it receives an acknowledgment or exceeds a predefined number of retransmissions.

ARQ is used to achieve reliable data transmission over an unreliable communication channel. ARQ is appropriate if the communication channel has varying or unknown capacity.
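
The Python sketch below simulates stop-and-wait ARQ over a deliberately lossy channel: the sender retransmits whenever an acknowledgement fails to arrive, up to a retry budget. The loss rate, retry limit, and function names are invented parameters for this simulation.

import random

random.seed(0)  # seeded so the demo output is reproducible

def lossy_channel(frame, loss_rate=0.3):
    """Deliver the frame, or lose it with probability loss_rate."""
    return frame if random.random() > loss_rate else None

def send_reliably(frame, max_retries=5):
    """Stop-and-wait ARQ: transmit, await an ACK, retransmit on timeout."""
    for attempt in range(1, max_retries + 1):
        delivered = lossy_channel(frame)                         # data may be lost
        ack = lossy_channel(b"ACK") if delivered is not None else None
        if ack is not None:                                      # ACK before timeout
            return attempt
        # no acknowledgement within the timeout: fall through and retransmit
    raise TimeoutError(f"gave up after {max_retries} retransmissions")

print("delivered after", send_reliably(b"frame 0"), "attempt(s)")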
