Channel capacity in the context of Automatic repeat request


⭐ Core Definition: Channel capacity

Channel capacity, in electrical engineering, computer science, and information theory, is the theoretical maximum rate at which information can be reliably transmitted over a communication channel.

By the noisy-channel coding theorem, the channel capacity of a given channel is the highest information rate (in units of information per unit time) that can be achieved with arbitrarily small error probability.
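
To make the definition concrete, here is a minimal Python sketch (not from the source) computing the capacity of the textbook binary symmetric channel with crossover probability p, for which C = 1 − H2(p) bits per channel use; the theorem says any rate below this value is achievable with arbitrarily small error probability.

```python
import math

def binary_entropy(p: float) -> float:
    """Binary entropy H2(p) in bits (shannons)."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p: float) -> float:
    """Capacity of a binary symmetric channel: C = 1 - H2(p) bits per use."""
    return 1.0 - binary_entropy(p)

for p in (0.0, 0.01, 0.11, 0.5):
    print(f"p = {p:4.2f}  ->  C = {bsc_capacity(p):.4f} bit/use")
```

At p = 0.5 the channel output is completely random and the capacity drops to zero; at p = 0 the channel is noiseless and every use carries a full bit.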

👉 Channel capacity in the context of Automatic repeat request

Automatic repeat request (ARQ), also known as automatic repeat query, is an error-control method for data transmission that uses acknowledgements (messages sent by the receiver indicating that it has correctly received a message) and timeouts (specified periods of time allowed to elapse before an acknowledgment is to be received). If the sender does not receive an acknowledgment before the timeout, it re-transmits the message until it receives an acknowledgment or exceeds a predefined number of retransmissions.

ARQ is used to achieve reliable data transmission over an unreliable communication channel. ARQ is appropriate if the communication channel has varying or unknown capacity.
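
The mechanism described above can be sketched in a few lines. The following Python fragment is illustrative only: the send and recv_ack callables and the timeout and retry values are hypothetical placeholders, not part of any specific ARQ standard.

```python
TIMEOUT_S = 0.5     # time allowed to elapse before an ACK must arrive (assumed value)
MAX_RETRIES = 5     # predefined limit on retransmissions (assumed value)

def arq_send(message: bytes, seq: int, send, recv_ack) -> bool:
    """Stop-and-wait ARQ sender: transmit, then retransmit on timeout
    until the matching ACK arrives or MAX_RETRIES is exceeded."""
    for _attempt in range(1 + MAX_RETRIES):
        send(seq, message)                  # (re)transmit the frame
        ack = recv_ack(timeout=TIMEOUT_S)   # ACK sequence number, or None on timeout
        if ack == seq:                      # receiver confirmed correct receipt
            return True
        # timeout or mismatched ACK: loop and retransmit
    return False                            # give up; report failure to the caller
```

Because the sender's effective rate adapts to however many retransmissions the channel forces, this is precisely why ARQ suits channels whose capacity varies or is unknown.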

In this Dossier

Channel capacity in the context of Communication channel

A communication channel refers either to a physical transmission medium such as a wire, or to a logical connection over a multiplexed medium such as a radio channel in telecommunications and computer networking. A channel is used for information transfer of, for example, a digital bit stream, from one or several senders to one or several receivers. A channel has a certain capacity for transmitting information, often measured by its bandwidth in Hz or its data rate in bits per second.

Communicating an information signal across distance requires some form of pathway or medium. These pathways, called communication channels, use two types of media: transmission line-based telecommunications cable (e.g., twisted-pair, coaxial, and fiber-optic cable) and broadcast (e.g., microwave, satellite, radio, and infrared).
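
For a bandlimited channel with additive white Gaussian noise, the bandwidth in Hz and the achievable data rate in bits per second are tied together by the Shannon–Hartley theorem, C = B · log2(1 + S/N). A quick sketch (the bandwidth and SNR figures below are illustrative assumptions):

```python
import math

def shannon_hartley(bandwidth_hz: float, snr_linear: float) -> float:
    """Capacity C = B * log2(1 + S/N) in bit/s for a bandlimited AWGN channel."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# e.g. a voice-grade channel: 3.1 kHz of bandwidth at 30 dB SNR (assumed figures)
snr = 10 ** (30 / 10)                               # convert dB to a linear power ratio
print(f"{shannon_hartley(3100, snr):,.0f} bit/s")   # ~30,900 bit/s
```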

View the full Wikipedia page for Communication channel

Channel capacity in the context of Information theory

Information theory is the mathematical study of the quantification, storage, and communication of information. The field was established and formalized by Claude Shannon in the 1940s, though early contributions were made in the 1920s through the works of Harry Nyquist and Ralph Hartley. It is at the intersection of electronic engineering, mathematics, statistics, computer science, neurobiology, physics, and electrical engineering.

A key measure in information theory is entropy. Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a fair coin flip (which has two equally likely outcomes) provides less information (lower entropy, less uncertainty) than identifying the outcome from a roll of a die (which has six equally likely outcomes). Some other important measures in information theory are mutual information, channel capacity, error exponents, and relative entropy. Important sub-fields of information theory include source coding, algorithmic complexity theory, algorithmic information theory and information-theoretic security.
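
The coin-versus-die comparison is easy to verify numerically. A minimal sketch:

```python
import math

def entropy(probs) -> float:
    """Shannon entropy H = -sum(p * log2(p)) in shannons (bits)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

coin = [0.5] * 2    # fair coin: two equally likely outcomes
die  = [1 / 6] * 6  # fair die: six equally likely outcomes

print(f"H(coin) = {entropy(coin):.3f} Sh")  # 1.000 Sh
print(f"H(die)  = {entropy(die):.3f} Sh")   # 2.585 Sh
```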

View the full Wikipedia page for Information theory

Channel capacity in the context of Redundancy (information theory)

In information theory, redundancy measures the fractional difference between the entropy H(X) of an ensemble X and its maximum possible value log2(|A_X|), where A_X is the alphabet of X. Informally, it is the amount of wasted "space" used to transmit certain data. Data compression is a way to reduce or eliminate unwanted redundancy, while forward error correction is a way of adding desired redundancy for purposes of error detection and correction when communicating over a noisy channel of limited capacity.
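
As a quick numeric illustration (the source distribution below is an invented example), the fractional redundancy R = 1 − H(X)/log2(|A_X|) can be computed directly:

```python
import math

def entropy(probs) -> float:
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def redundancy(probs) -> float:
    """R = 1 - H(X) / log2(|A_X|): the fraction of each symbol's 'space'
    that carries no information."""
    h_max = math.log2(len(probs))   # maximum entropy: all symbols equiprobable
    return 1.0 - entropy(probs) / h_max

# A heavily skewed 4-symbol source wastes most of its 2 bits per symbol:
print(f"{redundancy([0.85, 0.05, 0.05, 0.05]):.1%}")  # ~57.6%
```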

View the full Wikipedia page for Redundancy (information theory)

Channel capacity in the context of Shannon (unit)

The shannon (symbol: Sh) is a unit of information named after Claude Shannon, the founder of information theory. IEC 80000-13 defines the shannon as the information content associated with an event when the probability of the event occurring is 1/2. It is understood as such within the realm of information theory, and is conceptually distinct from the bit, a term used in data processing and storage to denote a single instance of a binary signal. A sequence of n binary symbols (such as contained in computer memory or a binary data transmission) is properly described as consisting of n bits, but the information content of those n symbols may be more or less than n shannons depending on the a priori probability of the actual sequence of symbols.

The shannon also serves as a unit of the information entropy of an event, defined as the expected value of the information content of the event (i.e., the probability-weighted average of the information content of all potential events). Unlike information content, entropy has an upper bound for a given number of possible outcomes, reached when those outcomes are equiprobable; the maximum entropy of n bits is n Sh. A further quantity for which the shannon is used is channel capacity: generally, the maximum expected information content that can be transferred over a channel with negligible probability of error, typically expressed as an information rate.
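
The distinction between bits stored and shannons conveyed can be made concrete. In this sketch (the source statistics are invented for illustration), a particular 8-bit sequence carries far more or far less than 8 Sh depending on its a priori probability:

```python
import math

def information_content(p: float) -> float:
    """Self-information -log2(p), in shannons, of an event with probability p."""
    return -math.log2(p)

# A biased binary source emits 1 with probability 0.9, independently.
p_likely   = 0.9 ** 8   # probability of the sequence 11111111
p_unlikely = 0.1 ** 8   # probability of the sequence 00000000

print(f"11111111: {information_content(p_likely):.2f} Sh in 8 bits")    # ~1.22 Sh
print(f"00000000: {information_content(p_unlikely):.2f} Sh in 8 bits")  # ~26.58 Sh
```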

View the full Wikipedia page for Shannon (unit)

Channel capacity in the context of Error exponent

In information theory, the error exponent of a channel code or source code over the block length of the code is the rate at which the error probability decays exponentially with the block length of the code. Formally, it is defined as the limiting ratio of the negative logarithm of the error probability to the block length of the code for large block lengths. For example, if the probability of error P_err of a decoder drops as e^(−nα), where n is the block length, the error exponent is α. In this example, −ln(P_err)/n approaches α for large n. Many information-theoretic theorems are of an asymptotic nature: for example, the channel coding theorem states that for any rate less than the channel capacity, the probability of error of the channel code can be made to go to zero as the block length goes to infinity. In practical situations, there are limits on communication delay and the block length must be finite. Therefore, it is important to study how the probability of error drops as the block length grows.
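
The decay can be watched numerically. The sketch below uses an n-fold repetition code over a binary symmetric channel with majority-vote decoding, an illustrative (and far from optimal) choice of code; the ratio −ln(P_err)/n settles toward a constant as n grows, and that constant is the code's error exponent:

```python
import math
from math import comb

def repetition_error_prob(n: int, p: float) -> float:
    """Exact block-error probability of an n-fold repetition code over a
    BSC(p) with majority-vote decoding: more than n/2 bits flipped."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

p = 0.1
for n in (11, 101, 301):    # odd lengths avoid decoding ties
    pe = repetition_error_prob(n, p)
    print(f"n = {n:3d}  P_err = {pe:.3e}  -ln(P_err)/n = {-math.log(pe) / n:.4f}")
```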

View the full Wikipedia page for Error exponent

Channel capacity in the context of Neural coding

Neural coding (or neural representation) refers to the relationship between a stimulus and its respective neuronal responses, and to the signalling relationships among networks of neurons in an ensemble. Action potentials, the primary carriers of information in biological neural networks, are generally uniform regardless of the type of stimulus or the specific type of neuron. This uniformity, together with the indiscriminate process of summation, sits uneasily with the specificity that neurons demonstrate at the presynaptic terminal, and with the brain's broad capacity for complex neuronal processing and regional specialisation, whose brain-wide integration is seen as fundamental to complex faculties such as intelligence, consciousness, social interaction, reasoning, and motivation. As such, theoretical frameworks that relate the encoding of action-potential sequences to observed patterns are seen as fundamental to neuroscientific understanding.

View the full Wikipedia page for Neural coding

Channel capacity in the context of Time division multiple access

Time-division multiple access (TDMA) is a channel access method for shared-medium networks. It allows several users to share the same frequency channel by dividing the signal into different time slots. The users transmit in rapid succession, one after the other, each using its own time slot. This allows multiple stations to share the same transmission medium (e.g., radio frequency channel) while using only a part of its channel capacity. Dynamic TDMA is a TDMA variant that dynamically reserves a variable number of time slots in each frame for variable bit-rate data streams, based on the traffic demand of each data stream.
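
A fixed-assignment TDMA schedule is simple enough to sketch directly. In this toy Python fragment (the station names and slot counts are invented for illustration), each frame holds one slot per user, so each station gets exactly 1/N of the channel capacity:

```python
USERS = ["A", "B", "C", "D"]    # stations sharing one frequency channel
SLOTS_PER_FRAME = len(USERS)    # fixed assignment: one slot per user per frame

def slot_owner(time_slot: int) -> str:
    """Return the station that transmits in a given absolute slot number."""
    return USERS[time_slot % SLOTS_PER_FRAME]

for t in range(8):              # two frames of four slots each
    print(f"slot {t}: station {slot_owner(t)} transmits")
```

Dynamic TDMA, mentioned above, would instead recompute the slot-to-user mapping each frame based on traffic demand.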

TDMA is used in digital 2G cellular systems such as Global System for Mobile Communications (GSM), IS-136, Personal Digital Cellular (PDC) and iDEN, in the Maritime Automatic Identification System, and in the Digital Enhanced Cordless Telecommunications (DECT) standard for portable phones. TDMA was first used in satellite communication systems by Western Union in its Westar 3 communications satellite in 1979. It is now used extensively in satellite communications, combat-net radio systems, and passive optical networks (PONs) for upstream traffic from premises to the operator.

View the full Wikipedia page for Time division multiple access