Latency (engineering) in the context of Network latency


⭐ Core Definition: Latency (engineering)

Latency, from a general point of view, is a time delay between the cause and the effect of some physical change in the system being observed. Lag, as it is known in gaming circles, refers to the latency between the input to a simulation and the visual or auditory response, often caused by network delay in online games. The original meaning of "latency", as used widely in psychology, medicine and most other disciplines, derives from "latent", a word of Latin origin meaning "hidden". Its different and relatively recent meaning (the subject of this topic) of "lateness" or "delay" appears to derive from its superficial similarity to the word "late", from the Old English "læt".

Latency is physically a consequence of the limited velocity at which any physical interaction can propagate. The magnitude of this velocity is always less than or equal to the speed of light. Therefore, every physical system with any physical separation (distance) between cause and effect will experience some sort of latency, regardless of the nature of the stimulation to which it has been exposed.
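Because no signal can outrun light, any physical separation puts a hard floor under latency. A minimal sketch of that floor, using an illustrative New York–London distance (the figures are lower bounds; real links route indirectly and propagate slower than c):

```python
# Minimum one-way latency imposed by the finite speed of signal
# propagation. These are lower bounds: real links take indirect
# routes and propagate slower than light in vacuum.

C = 299_792_458  # speed of light in vacuum, m/s

def min_latency_ms(distance_m: float, velocity: float = C) -> float:
    """Lower-bound one-way delay in milliseconds."""
    return distance_m / velocity * 1000

# New York to London, great-circle distance roughly 5,570 km:
print(round(min_latency_ms(5_570_000), 2))              # ~18.58 ms at c
# Light in optical fiber travels at roughly 2/3 of c:
print(round(min_latency_ms(5_570_000, 2 * C / 3), 2))   # ~27.87 ms
```

No engineering effort can push a one-way delay below this bound; optimization can only approach it.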


Latency (engineering) in the context of Speed of light

The speed of light in vacuum, often called simply speed of light and commonly denoted c, is a universal physical constant exactly equal to 299,792,458 metres per second (approximately 1.08 billion kilometres per hour; 671 million miles per hour). It is exact because, by international agreement, a metre is defined as the length of the path travelled by light in vacuum during a time interval of 1/299,792,458 of a second. The speed of light is the same for all observers, no matter their relative velocity. It is the upper limit for the speed at which information, matter, or energy can travel through space.

All forms of electromagnetic radiation, including visible light, travel in vacuum at the speed c. For many practical purposes, light and other electromagnetic waves will appear to propagate instantaneously, but for long distances and sensitive measurements, their finite speed has noticeable effects. Much starlight viewed on Earth is from the distant past, allowing humans to study the history of the universe by viewing distant objects. When communicating with distant space probes, it can take hours for signals to travel. In computing, the speed of light fixes the ultimate minimum communication delay. The speed of light can be used in time of flight measurements to measure large distances to extremely high precision.
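Time-of-flight measurement inverts the latency relationship: instead of treating delay as a cost, it measures the delay to recover the distance. A minimal sketch, using an illustrative lunar-ranging round-trip time of about 2.56 seconds:

```python
# Time-of-flight ranging: measure the round-trip time of a light
# pulse and recover the distance. The same principle underlies
# lidar and lunar laser ranging.

C = 299_792_458  # speed of light in vacuum, m/s

def distance_from_round_trip(rtt_s: float) -> float:
    """Distance in metres recovered from a round-trip time in seconds."""
    return C * rtt_s / 2  # halve: the pulse covers the distance twice

# A laser pulse bounced off the Moon returns after roughly 2.56 s:
print(f"{distance_from_round_trip(2.56) / 1000:.0f} km")  # ~383734 km
```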

View the full Wikipedia page for Speed of light

Latency (engineering) in the context of Satellite internet constellation

A satellite internet constellation is a constellation of artificial satellites providing satellite internet service. In particular, the term has come to refer to a new generation of very large constellations (sometimes referred to as megaconstellations) orbiting in low Earth orbit (LEO) to provide low-latency, high bandwidth (broadband) internet service. As of 2020, 63 percent of rural households worldwide lacked internet access due to the infrastructure requirements of underground cables and network towers. Satellite internet constellations offer a low-cost solution for expanding coverage.
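The latency advantage of LEO over traditional geostationary satellites follows directly from altitude. A back-of-the-envelope sketch, assuming the best case of a satellite directly overhead (real paths are longer and add processing and routing delay):

```python
# Best-case propagation round trip (ground -> satellite -> ground)
# for a satellite directly overhead, ignoring processing delays.

C = 299_792_458  # speed of light, m/s

def round_trip_ms(altitude_km: float) -> float:
    """Minimum up-and-back propagation delay in milliseconds."""
    return 2 * altitude_km * 1000 / C * 1000

print(round(round_trip_ms(550), 1))     # LEO at ~550 km:    ~3.7 ms
print(round(round_trip_ms(35_786), 1))  # GEO at 35,786 km: ~238.7 ms
```

The two-orders-of-magnitude gap in altitude is why LEO constellations can offer latency competitive with terrestrial broadband while geostationary links cannot.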

View the full Wikipedia page for Satellite internet constellation

Latency (engineering) in the context of Network delay

Network delay is a design and performance characteristic of a telecommunications network. It specifies the latency for a bit of data to travel across the network from one communication endpoint to another. It is typically measured in multiples or fractions of a second. Delay may differ slightly, depending on the location of the specific pair of communicating endpoints. Engineers usually report both the maximum and average delay, and they divide the delay into several parts:

A certain minimum level of delay is experienced by signals due to the time it takes to transmit a packet serially through a link. This delay is extended by more variable levels of delay due to network congestion. IP network delays can range from a few milliseconds to several hundred milliseconds.
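The fixed component described above, often called serialization (or transmission) delay, is simply packet size divided by link rate. A minimal sketch:

```python
# Serialization delay: the time to clock a packet's bits onto the
# wire, one fixed component of total network delay.

def serialization_delay_ms(packet_bytes: int, link_bps: float) -> float:
    """Time in milliseconds to transmit a packet serially on a link."""
    return packet_bytes * 8 / link_bps * 1000

# A 1500-byte Ethernet frame on links of different speeds:
print(serialization_delay_ms(1500, 10e6))  # 10 Mbit/s -> 1.2 ms
print(serialization_delay_ms(1500, 1e9))   # 1 Gbit/s  -> 0.012 ms
```

Congestion-induced queueing delay then stacks on top of this fixed floor, which is why measured delays vary while never dropping below it.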

View the full Wikipedia page for Network delay

Latency (engineering) in the context of Lag (video games)

In computing, lag is the delay (latency) between the action of the user (input) and the reaction of the server supporting the task, whose response must then travel back to the client.

The player's ability to tolerate lag depends on the type of game being played. For instance, a strategy game or a turn-based game with a slow pace may have a high threshold or even be mostly unaffected by high lag. A game with twitch gameplay such as a first-person shooter or a fighting game with a considerably faster pace may require a significantly lower lag to provide satisfying gameplay.

View the full Wikipedia page for Lag (video games)

Latency (engineering) in the context of Kuiper Systems

Amazon Leo, formerly known as Project Kuiper, is a subsidiary of Amazon established in 2019 to deploy a large satellite internet constellation providing low-latency broadband connectivity. The project's original codename was inspired by the Kuiper belt. The service was rebranded as Amazon Leo in November 2025.

In July 2020, the Federal Communications Commission authorized Amazon to deploy 3,236 satellites into low Earth orbit. Deployment is planned in five phases, with service expected to begin after the first 578 satellites reach orbit. Under the terms of its license, Amazon must launch and operate half of the constellation by July 30, 2026, and the remainder by July 30, 2029.

View the full Wikipedia page for Kuiper Systems

Latency (engineering) in the context of Intelligent vehicular ad hoc network

Intelligent vehicular ad hoc networks (InVANETs) use WiFi IEEE 802.11p (the WAVE standard) for effective communication between vehicles with dynamic mobility. They can enable services such as media communication between vehicles, as well as methods to track automotive vehicles. InVANET is not foreseen to replace current mobile (cellular phone) communication standards.

"Older" designs within the IEEE 802.11 scope may refer just to IEEE 802.11b/g; more recent designs refer to the latest issues of IEEE 802.11p (WAVE, draft status). Due to inherent lag times, only the latter within the IEEE 802.11 scope is capable of coping with the typical dynamics of vehicle operation.

View the full Wikipedia page for Intelligent vehicular ad hoc network

Latency (engineering) in the context of NearLink

NearLink (Chinese: 星闪; also known as SparkLink and formerly Greentooth) is a short-range wireless technology protocol developed by the NearLink Alliance, which is led by Huawei and was set up on September 22, 2020. As of September 2023, the Alliance had more than 300 enterprises and institutions on board, including automotive manufacturers, chip and module manufacturers, application developers, ICT companies, and research institutions.

On November 4, 2022, the Alliance released the SparkLink Short-range Wireless Communications Standard 1.0, which incorporates two modes of access, namely, SparkLink Low Energy (SLE) and SparkLink Basic (SLB), to integrate the features of traditional wireless technologies, such as Bluetooth and Wi-Fi, with enhanced prerequisites for latency, power consumption, coverage, and security.

View the full Wikipedia page for NearLink

Latency (engineering) in the context of Hardware acceleration

Hardware acceleration is the use of computer hardware, known as a hardware accelerator, to perform specific functions faster than can be done by software running on a general-purpose central processing unit (CPU). Any transformation of data that can be calculated by software running on a CPU can also be calculated by an appropriate hardware accelerator, or by a combination of both.

To perform computing tasks more efficiently, generally one can invest time and money in improving the software, improving the hardware, or both. There are various approaches with advantages and disadvantages in terms of decreased latency, increased throughput, and reduced energy consumption.

View the full Wikipedia page for Hardware acceleration

Latency (engineering) in the context of Access time

Access time is the time delay or latency between a request to an electronic system, and the access being initiated or the requested data returned.

In computer and software systems, it is the time interval between the point where an instruction control unit initiates a call to retrieve data or a request to store data, and the point at which delivery of the data is completed or the storage is started. Note that in distributed software systems or other systems with stochastic processes, access time or latency should be measured at the 99th percentile.
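A tail percentile like the 99th matters because a handful of slow requests can hide behind a healthy-looking mean. A minimal nearest-rank sketch on simulated access times (the workload shape is an assumption for illustration):

```python
# Why access time is reported at a high percentile: a few slow
# outliers barely move the mean but dominate the tail.
import math
import random

def percentile(samples, p):
    """Nearest-rank percentile: smallest value >= p% of the samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Simulated access times: mostly ~5 ms, with 1% slow 50 ms outliers.
random.seed(0)
times_ms = [random.gauss(5, 1) for _ in range(990)] + [50.0] * 10
print(f"mean = {sum(times_ms) / len(times_ms):.1f} ms")
print(f"p99  = {percentile(times_ms, 99):.1f} ms")  # the tail the mean hides
```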

View the full Wikipedia page for Access time

Latency (engineering) in the context of Voice activity detection

Voice activity detection (VAD), also known as speech activity detection or speech detection, is the detection of the presence or absence of human speech, used in speech processing. The main uses of VAD are in speaker diarization, speech coding and speech recognition. It can facilitate speech processing, and can also be used to deactivate some processes during non-speech section of an audio session: it can avoid unnecessary coding/transmission of silence packets in Voice over Internet Protocol (VoIP) applications, saving on computation and on network bandwidth.

VAD is an important enabling technology for a variety of speech-based applications. Therefore, various VAD algorithms have been developed that provide varying features and compromises between latency, sensitivity, accuracy and computational cost. Some VAD algorithms also provide further analysis, for example whether the speech is voiced, unvoiced or sustained. Voice activity detection is usually independent of language.
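The latency/accuracy trade-off described above is visible even in the simplest family of VAD algorithms. A minimal energy-threshold sketch (an illustrative assumption; production VADs use spectral features, statistical models, or neural networks, and the frame length and threshold here are arbitrary):

```python
# Minimal energy-based VAD: label each fixed-length frame as speech
# or silence by comparing its mean-square energy to a threshold.
# Longer frames smooth out noise but add latency.

def energy_vad(samples, frame_len=160, threshold=0.01):
    """Return one speech/silence boolean per complete frame."""
    labels = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        energy = sum(s * s for s in frame) / frame_len  # mean-square energy
        labels.append(energy > threshold)
    return labels

# Silence followed by a louder "speech" burst:
signal = [0.001] * 320 + [0.3, -0.3] * 160
print(energy_vad(signal))  # [False, False, True, True]
```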

View the full Wikipedia page for Voice activity detection

Latency (engineering) in the context of 3D XPoint

3D XPoint (pronounced three-D cross point) is a discontinued non-volatile memory (NVM) technology developed jointly by Intel and Micron Technology. It was announced in July 2015 and was available on the open market under the brand name Optane (Intel) from April 2017 to July 2022. Bit storage is based on a change of bulk resistance, in conjunction with a stackable cross-grid data access array, using a technology known as Ovonic Threshold Switch (OTS). Initial prices were less than dynamic random-access memory (DRAM) but more than flash memory.

As a non-volatile memory, 3D XPoint had a number of features that distinguish it from other currently available RAM and NVRAM. Although the first generations of 3D XPoint were not especially large or fast, 3D XPoint was used to create some of the fastest SSDs available as of 2019, with low write latency. As the memory was inherently fast, and byte-addressable, techniques such as read-modify-write and caching used to enhance traditional SSDs are not needed to obtain high performance. In addition, chipsets such as Cascade Lake were designed with inbuilt support for 3D XPoint, which allowed it to be used as a caching or acceleration disk, and it was also fast enough to be used as non-volatile RAM (NVRAM) or persistent memory in a DIMM package.
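The benefit of byte-addressability can be sketched with a toy model: a block-oriented device must read, modify, and rewrite a whole block to change one byte, while a byte-addressable memory writes it in place. The block size and classes below are illustrative assumptions, not a model of any specific product:

```python
# Toy comparison: updating a single byte on a block device versus a
# byte-addressable memory. The block device must read-modify-write
# an entire block; the byte-addressable memory touches one byte.

BLOCK = 4096  # illustrative flash page size in bytes

class BlockDevice:
    def __init__(self, size):
        self.data = bytearray(size)
        self.bytes_moved = 0
    def write_byte(self, addr, value):
        start = addr - addr % BLOCK
        block = bytearray(self.data[start:start + BLOCK])  # read block
        block[addr - start] = value                        # modify
        self.data[start:start + BLOCK] = block             # write back
        self.bytes_moved += 2 * BLOCK

class ByteAddressable:
    def __init__(self, size):
        self.data = bytearray(size)
        self.bytes_moved = 0
    def write_byte(self, addr, value):
        self.data[addr] = value                            # write in place
        self.bytes_moved += 1

for dev in (BlockDevice(8192), ByteAddressable(8192)):
    dev.write_byte(5000, 0xFF)
    print(type(dev).__name__, dev.bytes_moved)  # 8192 vs 1 bytes moved
```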

View the full Wikipedia page for 3D XPoint

Latency (engineering) in the context of Write buffer

A write buffer is a type of data buffer that can be used to hold data being written from the cache to main memory or to the next cache in the memory hierarchy to improve performance and reduce latency. It is used in certain CPU cache architectures like Intel's x86 and AMD64. In multi-core systems, write buffers destroy sequential consistency. Some software disciplines, like C11's data-race-freedom, are sufficient to regain a sequentially consistent view of memory.
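The consistency hazard above comes from the window between a store entering the buffer and reaching memory: the writing core sees its own store immediately (store forwarding), while the rest of the system still sees the old value. A toy model, not any specific CPU's design:

```python
# Toy write buffer: stores are absorbed into a small buffer and
# drained to memory later. The local core's loads check the buffer
# first (store forwarding); other observers read stale memory until
# the drain, which is the window that breaks sequential consistency.
from collections import OrderedDict

class WriteBuffer:
    def __init__(self, memory, capacity=4):
        self.memory = memory          # backing store, e.g. a dict
        self.buf = OrderedDict()      # pending stores, oldest first
        self.capacity = capacity

    def store(self, addr, value):
        self.buf[addr] = value
        self.buf.move_to_end(addr)    # coalesce: newest store wins
        if len(self.buf) > self.capacity:
            self.drain(1)             # buffer full: flush oldest entry

    def load(self, addr):
        # Local core sees its own buffered store before it hits memory.
        return self.buf.get(addr, self.memory.get(addr, 0))

    def drain(self, n=None):
        for _ in range(n if n is not None else len(self.buf)):
            if not self.buf:
                break
            addr, value = self.buf.popitem(last=False)  # oldest first
            self.memory[addr] = value

mem = {}
wb = WriteBuffer(mem)
wb.store("x", 1)
print(wb.load("x"), mem.get("x", 0))  # 1 0 -> local vs. global views differ
wb.drain()
print(mem["x"])                       # 1
```

Memory barriers in real hardware correspond to forcing a drain before subsequent loads proceed.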

A variation of write-through caching is called buffered write-through.

View the full Wikipedia page for Write buffer