Video compression in the context of Motion compensation

⭐ Core Definition: Video compression

In information theory, data compression, source coding, or bit-rate reduction is the process of encoding information using fewer bits than the original representation. Any particular compression is either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy. No information is lost in lossless compression. Lossy compression reduces bits by removing unnecessary or less important information. Typically, a device that performs data compression is referred to as an encoder, and one that performs the reversal of the process (decompression) as a decoder.

The process of reducing the size of a data file is often referred to as data compression. In the context of data transmission, it is called source coding: encoding is done at the source of the data before it is stored or transmitted. Source coding should not be confused with channel coding, for error detection and correction or line coding, the means for mapping data onto a signal.
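As a toy illustration of the lossless case (not part of any particular standard), the Python sketch below run-length encodes a string into (symbol, count) pairs and decodes it back exactly; the function names are ours, chosen for clarity.

    def rle_encode(data: str) -> list:
        """Lossless run-length encoding: collapse repeated symbols into (symbol, count) pairs."""
        runs = []
        for ch in data:
            if runs and runs[-1][0] == ch:
                runs[-1] = (ch, runs[-1][1] + 1)
            else:
                runs.append((ch, 1))
        return runs

    def rle_decode(runs: list) -> str:
        """Exact inverse of rle_encode: no information is lost."""
        return "".join(ch * count for ch, count in runs)

    original = "aaaabbbcc"
    encoded = rle_encode(original)        # [('a', 4), ('b', 3), ('c', 2)]
    assert rle_decode(encoded) == original

A lossy scheme, by contrast, discards detail that the decoder cannot restore, trading fidelity for a smaller representation.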

👉 Video compression in the context of Motion compensation

Motion compensation in computing is an algorithmic technique used to predict a frame in a video given the previous and/or future frames by accounting for motion of the camera and/or objects in the video. It is employed in the encoding of video data for video compression, for example in the generation of MPEG-2 files. Motion compensation describes a picture in terms of the transformation of a reference picture to the current picture. The reference picture may be previous in time or even from the future. When images can be accurately synthesized from previously transmitted/stored images, the compression efficiency can be improved.

Motion compensation is one of the two key video compression techniques used in video coding standards, along with the discrete cosine transform (DCT). Most video coding standards, such as the H.26x and MPEG formats, typically use motion-compensated DCT hybrid coding, known as block motion compensation (BMC) or motion-compensated DCT (MC DCT).
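The sketch below is a simplified, illustrative block-matching motion compensator in Python (with NumPy), not the exact procedure of any standard: for each block of the current frame it searches a small window in the reference frame for the best match (minimum sum of absolute differences) and copies it into the prediction. A real encoder would then transform and code only the residual and the motion vectors.

    import numpy as np

    def match_block(ref, block, top, left, search=4):
        """Find the motion vector (dy, dx) minimising the sum of absolute
        differences between `block` and a same-sized block in `ref`."""
        h, w = block.shape
        best_cost, best_mv = np.inf, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + dy, left + dx
                if 0 <= y and 0 <= x and y + h <= ref.shape[0] and x + w <= ref.shape[1]:
                    cost = np.abs(ref[y:y + h, x:x + w].astype(int) - block.astype(int)).sum()
                    if cost < best_cost:
                        best_cost, best_mv = cost, (dy, dx)
        return best_mv

    def predict_frame(ref, cur, block_size=8):
        """Motion-compensated prediction of the current frame `cur` from the reference `ref`."""
        pred = np.zeros_like(cur)
        for top in range(0, cur.shape[0], block_size):
            for left in range(0, cur.shape[1], block_size):
                block = cur[top:top + block_size, left:left + block_size]
                dy, dx = match_block(ref, block, top, left)
                h, w = block.shape
                pred[top:top + h, left:left + w] = ref[top + dy:top + dy + h, left + dx:left + dx + w]
        return pred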

In this Dossier

Video compression in the context of Video coding format

A video coding format (or sometimes video compression format) is an encoded format of digital video content, such as in a data file or bitstream. It typically uses a standardized video compression algorithm, most commonly based on discrete cosine transform (DCT) coding and motion compensation. A software or hardware component that compresses or decompresses a specific video coding format is known as a video codec.

Some video coding formats are documented by a detailed technical specification document known as a video coding specification. Some such specifications are written and approved by standardization organizations as technical standards, and are thus known as video coding standards. There are de facto standards and formal standards.
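To make the DCT part concrete, here is a small, self-contained Python/NumPy sketch of the orthonormal 8x8 DCT-II and its inverse, the kind of transform stage such formats build on; the helper names are ours, and real codecs use fast integer approximations rather than floating-point matrices.

    import numpy as np

    def dct_matrix(n=8):
        """Orthonormal DCT-II basis: row k, column i = c(k) * cos((2i + 1) * k * pi / (2n))."""
        k = np.arange(n).reshape(-1, 1)
        i = np.arange(n).reshape(1, -1)
        m = np.cos((2 * i + 1) * k * np.pi / (2 * n))
        m[0, :] *= 1 / np.sqrt(2)
        return m * np.sqrt(2.0 / n)

    def dct2(block):
        """2-D DCT of a square block: transform rows, then columns."""
        c = dct_matrix(block.shape[0])
        return c @ block @ c.T

    def idct2(coeffs):
        """Inverse 2-D DCT: the basis is orthonormal, so the inverse is the transpose."""
        c = dct_matrix(coeffs.shape[0])
        return c.T @ coeffs @ c

    # A smooth block concentrates its energy in the low-frequency corner, which is
    # why quantising or discarding high-frequency coefficients compresses well.
    block = np.tile(np.linspace(0, 255, 8), (8, 1))
    coeffs = dct2(block - 128)               # level shift before the transform
    assert np.allclose(idct2(coeffs) + 128, block)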

View the full Wikipedia page for Video coding format

Video compression in the context of Color image pipeline

An image pipeline or video pipeline is the set of components commonly used between an image source (such as a camera, a scanner, or the rendering engine in a computer game) and an image renderer (such as a television set, a computer screen, a computer printer or cinema screen), or for performing any intermediate digital image processing consisting of two or more separate processing blocks. An image/video pipeline may be implemented as computer software, in a digital signal processor, on an FPGA, or as a fixed-function ASIC. In addition, analog circuits can be used to do many of the same functions.

Typical components include image sensor corrections (including debayering or applying a Bayer filter), noise reduction, image scaling, gamma correction, image enhancement, colorspace conversion (between formats such as RGB, YUV or YCbCr), chroma subsampling, framerate conversion, image compression/video compression (such as JPEG), and computer data storage/data transmission.
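As an illustration of two of those stages, the Python/NumPy sketch below converts RGB to YCbCr using the full-range BT.601 coefficients (one common choice among several) and then applies 4:2:0 chroma subsampling by averaging 2x2 blocks; the function names are illustrative.

    import numpy as np

    def rgb_to_ycbcr(rgb):
        """Full-range BT.601 RGB -> YCbCr conversion (one common variant)."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y  =  0.299    * r + 0.587    * g + 0.114    * b
        cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
        cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
        return np.stack([y, cb, cr], axis=-1)

    def subsample_420(chroma):
        """4:2:0 subsampling: average each 2x2 block, halving resolution in both axes."""
        h, w = chroma.shape[0] // 2 * 2, chroma.shape[1] // 2 * 2
        c = chroma[:h, :w]
        return (c[0::2, 0::2] + c[0::2, 1::2] + c[1::2, 0::2] + c[1::2, 1::2]) / 4.0

    rgb = np.random.randint(0, 256, (64, 64, 3)).astype(float)
    ycbcr = rgb_to_ycbcr(rgb)
    cb_small = subsample_420(ycbcr[..., 1])   # 32x32: a quarter of the chroma samples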

View the full Wikipedia page for Color image pipeline

Video compression in the context of Digital cable

Digital cable is the distribution of cable television using digital data and video compression. The technology was first developed by General Instrument. By 2000, most cable companies offered digital features, eventually replacing their previous analog-based cable systems by the mid-2010s. During the late 2000s, broadcast television converted to the digital HDTV standard, which was incompatible with existing analog cable systems.

In addition to providing high-definition video, digital cable systems provide more services such as pay-per-view programming, cable internet access and cable telephone services. Most digital cable signals are encrypted, which has reduced the incidence of cable television piracy that occurred with analog systems.

View the full Wikipedia page for Digital cable

Video compression in the context of MPEG-4

MPEG-4 is a group of international standards for the compression of digital audio and visual data, multimedia systems, and file storage formats. It was originally introduced in late 1998 as a group of audio and video coding formats and related technology agreed upon by the ISO/IEC Moving Picture Experts Group (MPEG) (ISO/IEC JTC 1/SC29/WG11) under the formal standard ISO/IEC 14496 – Coding of audio-visual objects. Uses of MPEG-4 include compression of audiovisual data for Internet video and CD distribution, voice (telephone, videophone) and broadcast television applications. The MPEG-4 standard was developed by a group led by Touradj Ebrahimi (later the JPEG president) and Fernando Pereira.

View the full Wikipedia page for MPEG-4

Video compression in the context of MPEG-2

MPEG-2 (a.k.a. H.222/H.262 as defined by the ITU) is a standard for "the generic coding of moving pictures and associated audio information". It describes a combination of lossy video compression and lossy audio data compression methods, which permit storage and transmission of movies using currently available storage media and transmission bandwidth. While MPEG-2 is not as efficient as newer standards such as H.264/AVC and H.265/HEVC, backwards compatibility with existing hardware and software means it is still widely used, for example in over-the-air digital television broadcasting and in the DVD-Video standard.

View the full Wikipedia page for MPEG-2

Video compression in the context of Uncompressed video

Uncompressed video is digital video that either has never been compressed or was generated by decompressing previously compressed digital video. It is commonly used by video cameras, video monitors, video recording devices (including general-purpose computers), and in video processors that perform functions such as image resizing, image rotation, deinterlacing, and text and graphics overlay. It is conveyed over various types of baseband digital video interfaces, such as HDMI, DVI, DisplayPort and SDI. Standards also exist for the carriage of uncompressed video over computer networks.

Some HD video cameras output uncompressed video, whereas others compress the video using a lossy compression method such as MPEG or H.264. In any lossy compression process, some of the video information is removed, which creates compression artifacts and reduces the quality of the resulting decompressed video. When editing video, it is preferable to work with video that has never been compressed (or was losslessly compressed), as this maintains the best possible quality; compression is then performed after editing is complete.
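A little arithmetic shows why compression matters here; the Python sketch below computes the raw data rate for one common format (the figures are illustrative and depend on resolution, frame rate, bit depth and chroma format).

    def uncompressed_mbps(width, height, fps, bits_per_pixel):
        """Raw (uncompressed) video data rate in megabits per second."""
        return width * height * bits_per_pixel * fps / 1e6

    # 1080p at 30 frames/s with 8-bit 4:2:2 sampling (16 bits per pixel):
    print(uncompressed_mbps(1920, 1080, 30, 16))   # ~995 Mbit/s of picture data

Compressed delivery formats carry the same picture in a small fraction of that rate.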

View the full Wikipedia page for Uncompressed video

Video compression in the context of MPEG

The Moving Picture Experts Group (MPEG) is an alliance of working groups established jointly by ISO and IEC that sets standards for media coding, including compression coding of audio, video, graphics, and genomic data; and transmission and file formats for various applications. Together with JPEG, MPEG is organized under ISO/IEC JTC 1/SC 29 – Coding of audio, picture, multimedia and hypermedia information (ISO/IEC Joint Technical Committee 1, Subcommittee 29).

MPEG formats are used in various multimedia systems. The best-known older MPEG media formats typically use MPEG-1, MPEG-2, and MPEG-4 AVC media coding together with MPEG-2 systems transport streams and program streams. Newer systems typically use the MPEG base media file format and dynamic streaming (a.k.a. MPEG-DASH).

View the full Wikipedia page for MPEG