Computer networking in the context of Congestion control




⭐ Core Definition: Computer networking

In computer science, computer engineering, and telecommunications, a network is a group of computers and peripherals, known as hosts, that exchange data with one another via communication protocols, as facilitated by networking hardware.

Within a computer network, hosts are identified by network addresses, which allow networking hardware to locate and identify them. Hosts may also have hostnames, memorable labels for the host nodes, which can be mapped to network addresses using a hosts file or a name server such as the Domain Name System (DNS). The physical medium that supports information exchange includes wired media such as copper cables and optical fibers, as well as wireless radio-frequency media. The arrangement of hosts and hardware within a network architecture is known as the network topology.

Computer networking in the context of Congestion control

Network congestion in computer networking and queueing theory is the reduced quality of service that occurs when a network node or link is carrying or processing more load than its capacity. Typical effects include queueing delay, packet loss or the blocking of new connections. A consequence of congestion is that an incremental increase in offered load leads either only to a small increase or even a decrease in network throughput.

Network protocols that use aggressive retransmissions to compensate for packet loss due to congestion can increase congestion, even after the initial load has been reduced to a level that would not normally have induced network congestion. Such networks exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse.
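The feedback behavior described above is what congestion-control algorithms are designed to avoid. A minimal sketch of the additive-increase/multiplicative-decrease (AIMD) rule that TCP-style congestion control uses, where the capacity value and loss model are illustrative assumptions rather than anything from the text:

```python
# Toy sketch of additive-increase/multiplicative-decrease (AIMD).
# The capacity threshold and loss model here are illustrative assumptions.

def aimd(rounds, capacity=100.0, increase=1.0, decrease=0.5):
    """Return the congestion window after each round of a simple AIMD loop."""
    window = 1.0
    history = []
    for _ in range(rounds):
        if window > capacity:      # offered load exceeds capacity:
            window *= decrease     # treat as a loss -> multiplicative decrease
        else:
            window += increase     # no loss -> additive increase
        history.append(window)
    return history

trace = aimd(300)
# The window oscillates in a sawtooth around the link capacity
# instead of growing without bound.
print(max(trace), min(trace[150:]))
```

The multiplicative back-off is what prevents the two-stable-state behavior the text describes: senders retreat quickly when loss signals congestion, rather than retransmitting aggressively into an already saturated link.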

In this Dossier

Computer networking in the context of Connection-oriented communication

In telecommunications and computer networking, connection-oriented communication is a communication protocol where a communication session or a semi-permanent connection is established before any useful data can be transferred. The established connection ensures that data is delivered in the correct order to the upper communication layer. The alternative is called connectionless communication, such as the datagram mode communication used by Internet Protocol (IP) and User Datagram Protocol (UDP), where data may be delivered out of order, since different network packets are routed independently and may be delivered over different paths.

Connection-oriented communication may be implemented with a circuit-switched connection or a packet-mode virtual circuit connection. In the latter case, it may use a transport layer virtual circuit protocol such as the Transmission Control Protocol (TCP), which delivers data in order even though the lower-layer switching is connectionless; or it may be a data link layer or network layer switching mode in which all packets belonging to the same traffic stream are delivered over the same path, with traffic flows identified by a connection identifier, reducing the overhead of per-packet routing decisions for the network.
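The in-order guarantee can be sketched without a real protocol stack. In this toy model, packets routed independently arrive out of order (the connectionless view), and a connection-oriented layer buffers them and releases data to the application only in sequence-number order:

```python
# Sketch (not a real protocol stack): contrasts datagram delivery, where
# packets may arrive out of order, with a connection-oriented layer that
# releases data to the upper layer in sequence-number order.

def deliver_in_order(arrivals):
    """Reorder (seq, payload) packets, releasing contiguous runs like TCP."""
    expected, buffer, delivered = 0, {}, []
    for seq, payload in arrivals:
        buffer[seq] = payload
        while expected in buffer:            # release any contiguous run
            delivered.append(buffer.pop(expected))
            expected += 1
    return delivered

# Packets 0..3 routed independently and arriving out of order:
arrivals = [(2, "c"), (0, "a"), (3, "d"), (1, "b")]
print(deliver_in_order(arrivals))   # connection-oriented view: in order
print([p for _, p in arrivals])     # connectionless view: as received
```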

View the full Wikipedia page for Connection-oriented communication
↑ Return to Menu

Computer networking in the context of Quality of service

Quality of service (QoS) is the description or measurement of the overall performance of a service, such as a telephony or computer network, or a cloud computing service, particularly the performance seen by the users of the network. To quantitatively measure quality of service, several related aspects of the network service are often considered, such as packet loss, bit rate, throughput, transmission delay, availability, jitter, etc.

In the field of computer networking and other packet-switched telecommunication networks, quality of service refers to traffic prioritization and resource reservation control mechanisms rather than the achieved service quality. Quality of service is the ability to provide different priorities to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow.
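Two of the metrics named above, packet loss and jitter, can be computed from per-packet delay measurements. A small illustrative sketch, with made-up sample values and jitter taken as the mean absolute variation between consecutive delays:

```python
# Illustrative computation of two QoS metrics from per-packet one-way
# delays in milliseconds (None marks a lost packet). Values are made up.

def qos_metrics(delays_ms):
    received = [d for d in delays_ms if d is not None]
    loss_rate = 1 - len(received) / len(delays_ms)
    # Jitter here: mean absolute variation between consecutive delays.
    diffs = [abs(b - a) for a, b in zip(received, received[1:])]
    jitter = sum(diffs) / len(diffs)
    return loss_rate, jitter

delays = [20.0, 22.0, None, 21.0, 25.0]     # one packet of five lost
loss, jitter = qos_metrics(delays)
print(f"loss={loss:.0%} jitter={jitter:.2f} ms")
```

Real QoS mechanisms act on measurements like these, for example by prioritizing a flow whose jitter bound is about to be violated.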

View the full Wikipedia page for Quality of service
↑ Return to Menu

Computer networking in the context of Hostname

In computer networking, a hostname (archaically nodename) is a label that is assigned to a device connected to a computer network and that is used to identify the device in various forms of electronic communication, such as the World Wide Web. Hostnames may be simple names consisting of a single word or phrase, or they may be structured. Each hostname usually has at least one numeric network address associated with it for routing packets; hosts may have multiple addresses for performance and other reasons.
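The hostname-to-address mapping is exposed directly by the standard resolver, which consults the hosts file and/or DNS depending on OS configuration. A minimal sketch using Python's standard library:

```python
import socket

# Map a hostname to a numeric network address via the standard resolver
# (hosts file and/or DNS, whichever the operating system consults).
def resolve(hostname):
    return socket.gethostbyname(hostname)

print(resolve("localhost"))   # typically 127.0.0.1, via the hosts file
```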

View the full Wikipedia page for Hostname
↑ Return to Menu

Computer networking in the context of Network layer

In the seven-layer OSI model of computer networking, the network layer is layer 3. The network layer is responsible for packet forwarding including routing through intermediate routers.

View the full Wikipedia page for Network layer
↑ Return to Menu

Computer networking in the context of Proxy server

In computer networking, a proxy server is a server application that acts as an intermediary between a client requesting a resource and the server then providing that resource.

Instead of connecting directly to a server that can fulfill a request for a resource, such as a file or web page, the client directs the request to the proxy server, which evaluates the request and performs the required network transactions. This serves as a method to simplify or control the complexity of the request, or provide additional benefits such as load balancing, privacy, or security. Proxies were devised to add structure and encapsulation to distributed systems. A proxy server thus functions on behalf of the client when requesting service, potentially masking the true origin of the request to the resource server.
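The intermediary pattern described above can be sketched in-process, with no real sockets. Everything here is illustrative: a plain function stands in for the origin server, and the proxy demonstrates two of the benefits named in the text, policy control and caching:

```python
# In-process sketch of the proxy pattern (no real networking). The client
# hands its request to an intermediary, which can filter, cache, and
# forward it to the origin on the client's behalf. Names are illustrative.

def origin_server(path):
    return f"content of {path}"

class ProxyServer:
    def __init__(self, origin, blocked=()):
        self.origin = origin
        self.blocked = set(blocked)
        self.cache = {}

    def request(self, path):
        if path in self.blocked:        # policy control at the intermediary
            return "403 Forbidden"
        if path not in self.cache:      # forward to the origin only on a miss
            self.cache[path] = self.origin(path)
        return self.cache[path]

proxy = ProxyServer(origin_server, blocked={"/secret"})
print(proxy.request("/index.html"))
print(proxy.request("/secret"))
```

From the origin server's point of view, every request appears to come from the proxy, which is the origin-masking property the text mentions.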

View the full Wikipedia page for Proxy server
↑ Return to Menu

Computer networking in the context of Point-to-point (telecommunications)

In telecommunications, a point-to-point connection is a communications connection between two communication endpoints or nodes. An example is a telephone call, in which one telephone is connected with one other, and what is said by one caller can only be heard by the other. This is contrasted with a point-to-multipoint or broadcast connection, in which many nodes can receive information transmitted by one node. Other examples of point-to-point communications links are leased lines and microwave radio relay.

The term is also used in computer networking and computer architecture to refer to a wire or other connection that links only two computers or circuits, as opposed to other network topologies such as buses or crossbar switches which can connect many communications devices.

View the full Wikipedia page for Point-to-point (telecommunications)
↑ Return to Menu

Computer networking in the context of Upstream (networking)

In computer networking, upstream refers to the direction in which data can be transferred from the client to the server (uploading). This differs greatly from downstream not only in theory and usage, but also in that upstream speeds are usually at a premium. Whereas downstream speed is important to the average home user for purposes of downloading content, uploads are used mainly for web server applications and similar processes where the sending of data is critical. Upstream speeds are also important to users of peer-to-peer software.

ADSL and cable modems are asymmetric, with an upstream data rate much lower than the downstream rate. Symmetric connections such as Symmetric Digital Subscriber Line (SDSL) and T1, however, offer identical upstream and downstream rates.
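The practical effect of that asymmetry is easy to see with back-of-envelope arithmetic. The rates below are illustrative, not taken from the text:

```python
# Effect of asymmetric rates: the same 50 MB file takes far longer to
# upload than to download on an ADSL-style link (rates are illustrative).

def transfer_seconds(size_megabytes, rate_mbps):
    bits = size_megabytes * 8_000_000        # MB -> bits (decimal units)
    return bits / (rate_mbps * 1_000_000)    # Mbit/s -> bit/s

down = transfer_seconds(50, 20.0)   # 20 Mbit/s downstream
up = transfer_seconds(50, 1.0)      # 1 Mbit/s upstream
print(f"download: {down:.0f} s, upload: {up:.0f} s")
```

With a 20:1 rate ratio, the upload takes twenty times as long, which is why upstream capacity matters so much for servers and peer-to-peer users.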

View the full Wikipedia page for Upstream (networking)
↑ Return to Menu

Computer networking in the context of Online journalism

Digital journalism, also known as netizen journalism or online journalism, is a contemporary form of journalism where editorial content is distributed via the Internet, as opposed to publishing via print or broadcast. What constitutes digital journalism is debated amongst scholars. However, the primary product of journalism, news and features on current affairs, is presented solely or in combination as text, audio, video, or interactive forms such as digital storytelling or newsgames, and is disseminated through digital media technology.

Fewer barriers to entry, lowered distribution costs and diverse computer networking technologies have led to the widespread practice of digital journalism. It has democratized the flow of information that was previously controlled by traditional media including newspapers, magazines, radio and television. In the context of digital journalism, online journalists are often expected to possess a wide range of skills, yet there is a significant gap between the perceived and actual performance of these skills, influenced by time pressures and resource allocation decisions.

View the full Wikipedia page for Online journalism
↑ Return to Menu

Computer networking in the context of Network computer

In computer networking, a thin client (sometimes called a slim client or lean client) is a simple, low-performance computer that has been optimized for remote desktop connections to a server-based computing environment. In some cases, they are also referred to as network computers or, in their simplest form, zero clients.

The server performs most of the workload, including launching software applications, processing computations, and handling data storage. This contrasts with a rich client or a traditional personal computer — the former is designed for a client–server model but retains significant local processing power, while the latter performs most of its functions locally.

View the full Wikipedia page for Network computer
↑ Return to Menu

Computer networking in the context of Multicast

In computer networking, multicast is a type of group communication where data transmission is addressed to a group of destination computers simultaneously. Multicast can be one-to-many or many-to-many distribution. Multicast differs from physical layer point-to-multipoint communication.

Group communication may either be application layer multicast or network-assisted multicast, where the latter makes it possible for the source to efficiently send to the group in a single transmission. Copies are automatically created in other network elements, such as routers, switches and cellular network base stations, but only to network segments that currently contain members of the group. Network assisted multicast may be implemented at the data link layer using one-to-many addressing and switching such as Ethernet multicast addressing, Asynchronous Transfer Mode (ATM), point-to-multipoint virtual circuits (P2MP) or InfiniBand multicast. Network-assisted multicast may also be implemented at the Internet layer using IP multicast. In IP multicast the implementation of the multicast concept occurs at the IP routing level, where routers create optimal distribution paths for datagrams sent to a multicast destination address.
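The key efficiency property described above, copies created only toward segments that currently contain group members, can be modeled with a toy router. The segment and group names below are illustrative:

```python
# Toy model of network-assisted multicast: a router duplicates one
# incoming transmission onto exactly those attached segments that
# currently have members of the group, so the source sends only once.

def forward_multicast(group, memberships, segments):
    """Return the segments that receive a copy of a packet for `group`.

    `memberships` maps segment -> set of multicast groups joined there.
    """
    return [s for s in segments if group in memberships.get(s, set())]

memberships = {
    "seg-A": {"239.0.0.1"},
    "seg-B": set(),                      # no members: no copy sent here
    "seg-C": {"239.0.0.1", "239.0.0.2"},
}
print(forward_multicast("239.0.0.1", memberships,
                        ["seg-A", "seg-B", "seg-C"]))
```

In real IP multicast, membership state like this is maintained by protocols such as IGMP, and routers build distribution trees from it.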

View the full Wikipedia page for Multicast
↑ Return to Menu

Computer networking in the context of Reliable delivery

In computer networking, a reliable protocol is a communication protocol that notifies the sender whether or not the delivery of data to intended recipients was successful. Reliability is a synonym for assurance, which is the term used by the ITU and ATM Forum, and leads to fault-tolerant messaging.

Reliable protocols typically incur more overhead than unreliable protocols, and as a result, function more slowly and with less scalability. This often is not an issue for unicast protocols, but it may become a problem for reliable multicast protocols.
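Both the guarantee and the overhead are visible in a minimal stop-and-wait sketch: the sender retransmits each message until it is acknowledged, so everything is delivered, but a lossy channel costs extra attempts. The loss model is a simulation, not a real channel:

```python
import random

# Minimal stop-and-wait sketch of a reliable protocol: retransmit each
# message until the (simulated) lossy channel delivers it and an
# acknowledgement comes back. The loss model is illustrative.

def send_reliably(messages, loss_rate=0.3, rng=None):
    rng = rng or random.Random(42)      # seeded for reproducibility
    delivered, attempts = [], 0
    for msg in messages:
        while True:                     # retransmit until acknowledged
            attempts += 1
            if rng.random() >= loss_rate:   # message and ack got through
                delivered.append(msg)
                break
    return delivered, attempts

delivered, attempts = send_reliably(["a", "b", "c"])
print(delivered, attempts)   # every message arrives; attempts >= 3
```

The gap between `attempts` and `len(messages)` is the retransmission overhead the text refers to, and it grows with the loss rate.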

View the full Wikipedia page for Reliable delivery
↑ Return to Menu

Computer networking in the context of Stream Control Transmission Protocol

The Stream Control Transmission Protocol (SCTP) is a computer networking communications protocol in the transport layer of the Internet protocol suite. Originally intended for Signaling System 7 (SS7) message transport in telecommunication, the protocol provides the message-oriented feature of the User Datagram Protocol (UDP) while ensuring reliable, in-sequence transport of messages with congestion control like the Transmission Control Protocol (TCP). Unlike UDP and TCP, the protocol supports multihoming and redundant paths to increase resilience and reliability.

SCTP is standardized by the Internet Engineering Task Force (IETF) in RFC 9260. The SCTP reference implementation was released as part of FreeBSD version 7 and has since been widely ported to other platforms.

View the full Wikipedia page for Stream Control Transmission Protocol
↑ Return to Menu

Computer networking in the context of Reliable byte stream

A reliable byte stream is a common service paradigm in computer networking; it refers to a byte stream in which the bytes which emerge from the communication channel at the recipient are exactly the same, and in exactly the same order, as they were when the sender inserted them into the channel.

The classic example of a reliable byte stream communication protocol is the Transmission Control Protocol, one of the major building blocks of the Internet.
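One consequence of the byte-stream model is that TCP preserves bytes and their order but not message boundaries, so applications layered on top often add their own framing. A common sketch is a length prefix per message (here 4 bytes, big-endian, an illustrative choice):

```python
import struct

# A reliable byte stream preserves bytes and order but not message
# boundaries, so applications on TCP often add framing. This sketch
# uses a 4-byte big-endian length prefix per message.

def frame(messages):
    return b"".join(struct.pack("!I", len(m)) + m for m in messages)

def unframe(stream):
    messages, offset = [], 0
    while offset < len(stream):
        (length,) = struct.unpack_from("!I", stream, offset)
        offset += 4
        messages.append(stream[offset:offset + length])
        offset += length
    return messages

wire = frame([b"hello", b"world"])   # one contiguous byte stream
print(unframe(wire))                 # message boundaries recovered
```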

View the full Wikipedia page for Reliable byte stream
↑ Return to Menu

Computer networking in the context of Frame (networking)

A frame is a digital data transmission unit in computer networking and telecommunications. In packet switched systems, a frame is a simple container for a single network packet. In other telecommunications systems, a frame is a repeating structure supporting time-division multiplexing.

A frame typically includes frame synchronization features consisting of a sequence of bits or symbols that indicate to the receiver the beginning and end of the payload data within the stream of symbols or bits it receives. If a receiver is connected to the system during frame transmission, it ignores the data until it detects a new frame synchronization sequence.
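The synchronization problem described above, finding frame edges in a raw symbol stream, is classically solved with a reserved flag byte plus byte stuffing so the flag can never appear inside a payload. A sketch in the style of HDLC framing (flag `0x7E`, escape `0x7D`), shown here as an illustration rather than a full implementation of any one standard:

```python
# Sketch of HDLC-style frame delimiting: a flag byte marks frame
# boundaries, and any flag/escape byte inside the payload is
# byte-stuffed so the receiver can always locate frame edges.

FLAG, ESC = 0x7E, 0x7D

def stuff(payload):
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])   # escape, then transform
        else:
            out.append(b)
    out.append(FLAG)
    return bytes(out)

def unstuff(frame_bytes):
    out, esc = bytearray(), False
    for b in frame_bytes[1:-1]:             # strip the two flag bytes
        if esc:
            out.append(b ^ 0x20)
            esc = False
        elif b == ESC:
            esc = True
        else:
            out.append(b)
    return bytes(out)

payload = bytes([0x01, 0x7E, 0x7D, 0x02])   # payload containing both specials
print(unstuff(stuff(payload)) == payload)
```

A receiver that joins mid-transmission simply discards bytes until the next flag, which is exactly the resynchronization behavior the text describes.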

View the full Wikipedia page for Frame (networking)
↑ Return to Menu