Audio compression (data) in the context of MPEG




Core Definition: Audio compression (data)

In information theory, data compression, source coding, or bit-rate reduction is the process of encoding information using fewer bits than the original representation. Any particular compression is either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy. No information is lost in lossless compression. Lossy compression reduces bits by removing unnecessary or less important information. Typically, a device that performs data compression is referred to as an encoder, and one that performs the reversal of the process (decompression) as a decoder.

The process of reducing the size of a data file is often referred to as data compression. In the context of data transmission, it is called source coding: encoding is done at the source of the data before it is stored or transmitted. Source coding should not be confused with channel coding, used for error detection and correction, or with line coding, the means for mapping data onto a signal.
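The lossless/lossy distinction above can be made concrete with run-length encoding, one of the simplest lossless schemes: runs of a repeated symbol (statistical redundancy) are collapsed into counts, and decoding recovers the input exactly. A minimal sketch, not the method of any particular standard:

```python
def rle_encode(data: str) -> list[tuple[str, int]]:
    """Lossless run-length encoding: collapse runs of a repeated
    symbol into (symbol, count) pairs."""
    runs: list[tuple[str, int]] = []
    for ch in data:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def rle_decode(runs: list[tuple[str, int]]) -> str:
    """Decompression: expand each (symbol, count) pair. No information
    is lost, so the round trip is exact."""
    return "".join(ch * n for ch, n in runs)
```

A lossy scheme, by contrast, would discard detail the decoder cannot restore, trading exactness for smaller output.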

Audio compression (data) in the context of MPEG

The Moving Picture Experts Group (MPEG) is an alliance of working groups established jointly by ISO and IEC that sets standards for media coding, including compression coding of audio, video, graphics, and genomic data; and transmission and file formats for various applications. Together with JPEG, MPEG is organized under ISO/IEC JTC 1/SC 29, Coding of audio, picture, multimedia and hypermedia information (ISO/IEC Joint Technical Committee 1, Subcommittee 29).

MPEG formats are used in various multimedia systems. The best-known older MPEG media formats typically use MPEG-1, MPEG-2, and MPEG-4 AVC media coding together with MPEG-2 systems transport streams and program streams. Newer systems typically use the MPEG base media file format and dynamic adaptive streaming (a.k.a. MPEG-DASH).


Audio compression (data) in the context of Audio files

An audio file format is a file format for storing digital audio data on a computer system. The bit layout of the audio data (excluding metadata) is called the audio coding format and can be uncompressed, or compressed to reduce the file size, often using lossy compression. The data can be a raw bitstream in an audio coding format, but it is usually embedded in a container format or an audio data format with a defined storage layer.
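Python's standard-library `wave` module can illustrate the container/coding-format split: the RIFF/WAVE container stores the sample rate, sample width, and channel count as metadata, while the payload here is plain uncompressed PCM. A sketch assuming an output file named `tone.wav`:

```python
import math
import struct
import wave

# One second of a 440 Hz tone as 16-bit mono PCM.
RATE = 44100
samples = [int(32767 * 0.5 * math.sin(2 * math.pi * 440 * t / RATE))
           for t in range(RATE)]

# The WAV container records the storage-layer metadata; the audio
# coding format inside it is uncompressed linear PCM.
with wave.open("tone.wav", "wb") as wf:
    wf.setnchannels(1)   # mono
    wf.setsampwidth(2)   # 16-bit samples
    wf.setframerate(RATE)
    wf.writeframes(struct.pack("<%dh" % len(samples), *samples))
```

Compressed formats keep the same separation: the bitstream (e.g. MP3 or Vorbis data) is usually wrapped in a container that carries the metadata.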

View the full Wikipedia page for Audio files

Audio compression (data) in the context of N. Ahmed

Nasir Ahmed (born 1940) is an American electrical engineer and computer scientist. He is Professor Emeritus of Electrical and Computer Engineering at the University of New Mexico (UNM). He is best known for inventing the discrete cosine transform (DCT) in the early 1970s. The DCT is the most widely used data compression transformation, the basis for most digital media standards (image, video and audio), and is commonly used in digital signal processing. He also described the discrete sine transform (DST), which is related to the DCT.
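Ahmed's DCT-II can be written straight from its definition, X[k] = sum over n of x[n]·cos((π/N)(n + 1/2)k); a naive, unnormalized O(N²) sketch (production code would use a fast library routine instead):

```python
import math

def dct2(x):
    """Naive, unnormalized DCT-II:
    X[k] = sum_n x[n] * cos(pi/N * (n + 0.5) * k)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

# Why the DCT compresses well: a slowly varying signal concentrates
# almost all of its energy in a few low-index coefficients, so the
# rest can be coarsely quantized or dropped.
signal = [math.cos(2 * math.pi * n / 32) for n in range(32)]
coeffs = dct2(signal)
```

For this single-cycle cosine, the coefficient at k = 2 dominates and the others carry only small leakage, which is the energy-compaction property that media codecs exploit.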

View the full Wikipedia page for N. Ahmed

Audio compression (data) in the context of Transient (acoustics)

In acoustics and audio, a transient is a high-amplitude, short-duration sound at the beginning of a waveform that occurs in phenomena such as musical sounds, noises or speech. Transients do not necessarily directly depend on the frequency of the tone they initiate. A transient contains a high degree of non-periodic components and a higher magnitude of high frequencies than the harmonic content of the sound.

Transients are more difficult to encode with many audio compression algorithms, causing pre-echo.
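Codecs typically limit pre-echo by switching to shorter transform windows when a transient is detected. A toy energy-ratio detector sketches the idea; the frame size and threshold here are illustrative assumptions, not values from any real codec:

```python
import math

def detect_transient(samples, frame=64, ratio=4.0):
    """Flag frame indices where short-time energy jumps sharply versus
    the previous frame -- a crude stand-in for the transient detectors
    codecs use to trigger window switching and limit pre-echo."""
    energies = [sum(s * s for s in samples[i:i + frame])
                for i in range(0, len(samples) - frame + 1, frame)]
    return [i for i in range(1, len(energies))
            if energies[i] > ratio * max(energies[i - 1], 1e-12)]

# Quiet noise floor for 256 samples, then a sudden loud burst.
signal = [0.001 * math.sin(0.1 * n) for n in range(256)]
signal += [0.9 * math.sin(0.5 * n) for n in range(256)]
hits = detect_transient(signal)  # flags the frame where the burst begins
```

Real detectors are more elaborate (perceptual weighting, high-pass filtering), but the core signal is the same: a sudden rise in energy that a long transform window would smear backwards in time.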

View the full Wikipedia page for Transient (acoustics)

Audio compression (data) in the context of Dolby Digital

Dolby Digital, originally synonymous with Dolby AC-3 (see below), is the name for a family of audio compression technologies developed by Dolby Laboratories. Called Dolby Stereo Digital until 1995, it uses lossy compression (except for Dolby TrueHD). The first use of Dolby Digital was to provide digital sound in cinemas from 35 mm film prints. It has since also been used for TV broadcast, radio broadcast via satellite, digital video streaming, DVDs, Blu-ray discs and game consoles.

Dolby AC-3 was the original version of the Dolby Digital codec. The basis of the Dolby AC-3 multi-channel audio coding standard is the modified discrete cosine transform (MDCT), a lossy audio compression algorithm. It is a modification of the discrete cosine transform (DCT) algorithm, which was proposed by Nasir Ahmed in 1972 for image compression. The DCT was adapted into the MDCT by J.P. Princen, A.W. Johnson and Alan B. Bradley at the University of Surrey in 1987.
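The MDCT's defining property, time-domain alias cancellation (TDAC), can be checked with a naive O(N²) implementation of the textbook formulas; this is a sketch under the standard definitions, not AC-3's optimized transform:

```python
import math

def mdct(x):
    """MDCT: 2N time samples -> N coefficients.
    X[k] = sum_n x[n] * cos(pi/N * (n + 0.5 + N/2) * (k + 0.5))"""
    N = len(x) // 2
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                for n in range(2 * N))
            for k in range(N)]

def imdct(X):
    """Inverse MDCT: N coefficients -> 2N samples containing time-domain
    aliasing; overlap-adding consecutive windowed blocks cancels it."""
    N = len(X)
    return [(2 / N) * sum(X[k] * math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                          for k in range(N))
            for n in range(2 * N)]

# Demonstrate alias cancellation with 50%-overlapped blocks and a sine
# window, which satisfies the Princen-Bradley condition.
N = 8
x = [math.sin(0.3 * n) + 0.1 * n for n in range(4 * N)]
w = [math.sin(math.pi / (2 * N) * (n + 0.5)) for n in range(2 * N)]
recon = [0.0] * (4 * N)
for start in (0, N, 2 * N):
    analysis = [w[n] * x[start + n] for n in range(2 * N)]  # window, then MDCT
    synthesis = imdct(mdct(analysis))
    for n in range(2 * N):
        recon[start + n] += w[n] * synthesis[n]             # window again, overlap-add
```

Each block alone reconstructs its input only up to an aliasing term; the overlap-add of adjacent blocks cancels that term exactly, which is what makes the 50%-overlap MDCT critically sampled and so attractive for audio coding.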

View the full Wikipedia page for Dolby Digital

Audio compression (data) in the context of TOSLINK

TOSLINK (Toshiba Link) is a standardized optical fiber connector system, generically known as optical audio. Its most common use is in consumer audio equipment, where the digital optical socket carries a stream of digital audio signals from source equipment (CD player, DVD player, Digital Audio Tape recorder, computer, video game console) to an AV receiver that can decode two channels of uncompressed pulse-code modulated (PCM) audio, or compressed 5.1 or 7.1 surround sound signals such as Dolby Digital and DTS. Unlike an HDMI cable, a TOSLINK optical fiber connection does not have the bandwidth to carry the uncompressed audio signals of Dolby TrueHD and DTS-HD Master Audio, but it can carry up to 8 channels of PCM audio when used to connect two devices that use the ADAT Lightpipe standard.

Although the TOSLINK connector supports several media formats and physical standards, the most common digital audio connectors are the rectangular EIAJ/JEITA RC-5720 (also CP-1201 and JIS C5974-1993 F05). In a TOSLINK connector, the optical signal appears as a red light, with a peak wavelength of 650 nm. Depending on the type of modulated signal being carried, other optical wavelengths can be present.

View the full Wikipedia page for TOSLINK

Audio compression (data) in the context of Ogg Vorbis

Vorbis is a free and open-source software project headed by the Xiph.Org Foundation. The project produces an audio coding format and software reference encoder/decoder (codec) for lossy audio compression, libvorbis. Vorbis is most commonly used in conjunction with the Ogg container format and it is therefore often referred to as Ogg Vorbis.

Version 1.0 of Vorbis was released in May 2000. Since 2013, the Xiph.Org Foundation has stated that the use of Vorbis should be deprecated in favor of the Opus codec, an improved and more efficient format that has also been developed by Xiph.Org.

View the full Wikipedia page for Ogg Vorbis

Audio compression (data) in the context of Audio codec

An audio codec is a device or computer program capable of encoding or decoding a digital audio data stream. In software, an audio codec is a computer program implementing an algorithm that compresses and decompresses digital audio data according to a given audio file or streaming media audio coding format. The objective of the algorithm is to represent the high-fidelity audio signal with a minimum number of bits while retaining quality. This can effectively reduce the storage space and the bandwidth required for transmission of the stored audio file. Most software codecs are implemented as libraries that interface with one or more multimedia players. Most modern audio compression algorithms are based on modified discrete cosine transform (MDCT) coding and linear predictive coding (LPC).

In hardware, audio codec refers to a single device that encodes analog audio as digital signals and decodes digital back into analog. In other words, it contains both an analog-to-digital converter (ADC) and digital-to-analog converter (DAC) running off the same clock signal. This is used in sound cards that support both audio in and out, for instance. Hardware audio codecs send and receive digital data using buses such as AC'97, SoundWire, I²S, SPI, I²C, etc. Most commonly the digital data is linear PCM, and this is the only format that most codecs support, but some legacy codecs support other formats such as G.711 for telephony.
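That ADC/DAC pairing can be modeled in a few lines of Python; `adc_16bit` and `dac` are hypothetical names for a sketch of linear 16-bit PCM quantization, not any device's actual interface:

```python
import math

def adc_16bit(analog, rate=8000, duration=0.01):
    """Crude ADC model: sample a continuous signal (a function of time
    in seconds, range [-1, 1]) at `rate` Hz and quantize each sample
    to a signed 16-bit linear PCM value."""
    n = int(rate * duration)
    return [max(-32768, min(32767, round(analog(i / rate) * 32767)))
            for i in range(n)]

def dac(samples):
    """Matching DAC model: map 16-bit PCM back to floats.
    The only error is the quantization step introduced by the ADC."""
    return [s / 32767 for s in samples]

pcm = adc_16bit(lambda t: 0.5 * math.sin(2 * math.pi * 440 * t))
restored = dac(pcm)  # within one quantization step (~1/32767) of the input
```

A real codec chip adds anti-aliasing and reconstruction filters around these two stages, but the core operation is exactly this sample-and-quantize round trip.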

View the full Wikipedia page for Audio codec

Audio compression (data) in the context of Speex

The Speex project is an attempt to create a free software speech codec, unencumbered by patent restrictions. Speex is licensed under the BSD License and is used with the Xiph.org Foundation's Ogg container format.

The Speex coder uses the Ogg bitstream format, and the Speex designers see their project as complementary to the Vorbis general-purpose audio compression project.

View the full Wikipedia page for Speex

Audio compression (data) in the context of CELT

Constrained Energy Lapped Transform (CELT) is an open, royalty-free lossy audio compression format and a free software codec with especially low algorithmic delay for use in low-latency audio communication. The algorithms are openly documented and may be used free of software patent restrictions. Development of the format was maintained by the Xiph.Org Foundation (as part of the Ogg codec family) and later coordinated by the Opus working group of the Internet Engineering Task Force (IETF).

CELT was meant to bridge the gap between Vorbis and Speex for applications where both high quality audio and low delay are desired. It is suitable for both speech and music. It borrows ideas from the CELP algorithm, but avoids some of its limitations by operating in the frequency domain exclusively.

View the full Wikipedia page for CELT

Audio compression (data) in the context of AAC-LD

The MPEG-4 Low Delay Audio Coder (a.k.a. AAC Low Delay, or AAC-LD) is an audio compression standard designed to combine the advantages of perceptual audio coding with the low delay necessary for two-way communication. It is closely derived from the MPEG-2 Advanced Audio Coding (AAC) standard. It was published in MPEG-4 Audio Version 2 (ISO/IEC 14496-3:1999/Amd 1:2000) and in its later revisions.

AAC-LD uses a version of the modified discrete cosine transform (MDCT) audio coding technique called the LD-MDCT. AAC-LD is widely used by Apple as the voice-over-IP (VoIP) speech codec in FaceTime.

View the full Wikipedia page for AAC-LD