Image compression in the context of Data transmission


⭐ Core Definition: Image compression

Image compression is a type of data compression applied to digital images, to reduce their cost for storage or transmission. Algorithms may take advantage of visual perception and the statistical properties of image data to provide superior results compared with generic data compression methods which are used for other digital data.
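
As a rough illustration of that difference, the sketch below compresses the same picture once with a generic byte-oriented compressor (zlib) and once with a perception-aware image codec (JPEG). It assumes the Pillow library and a hypothetical input file photo.png, and is only an illustration of the principle, not a benchmark.

```python
# Minimal sketch: generic compression of raw pixels vs. an image-specific codec.
# Assumes Pillow is installed; "photo.png" is a hypothetical example file.
import io
import zlib

from PIL import Image

img = Image.open("photo.png").convert("RGB")
raw = img.tobytes()                       # uncompressed pixel data

generic = zlib.compress(raw, level=9)     # generic, statistics-only compression

jpeg_buf = io.BytesIO()
img.save(jpeg_buf, format="JPEG", quality=85)  # perception-aware lossy codec

print(f"raw:  {len(raw):>10,} bytes")
print(f"zlib: {len(generic):>10,} bytes (lossless, generic)")
print(f"JPEG: {jpeg_buf.getbuffer().nbytes:>10,} bytes (lossy, image-specific)")
```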

Image compression in the context of Digital imaging

Digital imaging or digital image acquisition is the creation of a digital representation of the visual characteristics of an object, such as a physical scene or the interior structure of an object. The term is often assumed to imply or include the processing, compression, storage, printing and display of such images. A key advantage of a digital image, versus an analog image such as a film photograph, is the ability to digitally propagate copies of the original subject indefinitely without any loss of image quality.

Digital imaging can be classified by the type of electromagnetic radiation or other waves whose variable attenuation, as they pass through or reflect off objects, conveys the information that constitutes the image. In all classes of digital imaging, the information is converted by image sensors into digital signals that are processed by a computer and output as a visible-light image. For example, the medium of visible light allows digital photography (including digital videography) with various kinds of digital cameras (including digital video cameras). X-rays allow digital X-ray imaging (digital radiography, fluoroscopy, and CT), and gamma rays allow digital gamma ray imaging (digital scintigraphy, SPECT, and PET). Sound allows ultrasonography (such as medical ultrasonography) and sonar, and radio waves allow radar. Digital imaging lends itself well to image analysis by software, as well as to image editing (including image manipulation).

Image compression in the context of Digital signal processing

Digital signal processing (DSP) is the use of digital processing, such as by computers or more specialized digital signal processors, to perform a wide variety of signal processing operations. The digital signals processed in this manner are a sequence of numbers that represent samples of a continuous variable in a domain such as time, space, or frequency. In digital electronics, a digital signal is represented as a pulse train, which is typically generated by the switching of a transistor.
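
The sketch below is a minimal illustration (not from the source) of that idea: a continuous-time signal is represented digitally as nothing more than a sequence of numbered samples.

```python
# Minimal sketch of sampling: a continuous signal becomes a sequence of numbers.
import numpy as np

fs = 100.0                        # sampling rate in Hz (samples per second)
t = np.arange(0, 1, 1 / fs)       # sample instants over one second
x = np.sin(2 * np.pi * 5 * t)     # a 5 Hz sine, now just an array of numbers

print(len(x), "samples; first five:", np.round(x[:5], 3))
```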

Digital signal processing and analog signal processing are subfields of signal processing. DSP applications include audio and speech processing, sonar, radar and other sensor array processing, spectral density estimation, statistical signal processing, digital image processing, data compression, video coding, audio coding, image compression, signal processing for telecommunications, control systems, biomedical engineering, and seismology, among others.

Image compression in the context of JPEG

JPEG (/ˈdʒeɪpɛɡ/ JAY-peg, short for Joint Photographic Experts Group and sometimes retroactively referred to as JPEG 1) is a commonly used method of lossy compression for digital images, particularly for those images produced by digital photography. The degree of compression can be adjusted, allowing a selectable trade-off between storage size and image quality. JPEG typically achieves 10:1 compression with a noticeable but widely accepted loss in image quality. Since its introduction in 1992, JPEG has been the most widely used image compression standard in the world, and the most widely used digital image format, with several billion JPEG images produced every day as of 2015.
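
The adjustable trade-off can be seen directly by re-encoding the same image at different quality settings. The sketch below assumes the Pillow library and a hypothetical input file photo.png; the exact sizes depend on the image content.

```python
# Short sketch of JPEG's adjustable size/quality trade-off.
# Assumes Pillow is installed; "photo.png" is a hypothetical example file.
import io

from PIL import Image

img = Image.open("photo.png").convert("RGB")

for quality in (95, 75, 50, 25, 10):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)   # higher quality -> larger file
    print(f"quality={quality:>2}: {buf.getbuffer().nbytes:>8,} bytes")
```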

The Joint Photographic Experts Group created the standard in 1992, based on the discrete cosine transform (DCT) algorithm. JPEG was largely responsible for the proliferation of digital images and digital photos across the Internet and later social media. JPEG compression is used in a number of image file formats. JPEG/Exif is the most common image format used by digital cameras and other photographic image capture devices; along with JPEG/JFIF, it is the most common format for storing and transmitting photographic images on the World Wide Web. These format variations are often not distinguished and are simply called JPEG.
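
The following is a simplified sketch of the DCT step at the heart of JPEG, using SciPy; the real codec additionally applies quantization tables, zig-zag ordering, and entropy coding. Discarding small transform coefficients, as shown here, is what makes the compression lossy.

```python
# Simplified sketch of JPEG-style block transform coding: 2-D DCT of an 8x8
# block, crude coefficient thresholding, then inverse DCT. Not the full codec.
import numpy as np
from scipy.fft import dctn, idctn

# A hypothetical 8x8 block of grayscale pixel values (0-255).
rng = np.random.default_rng(0)
block = rng.integers(100, 140, size=(8, 8)).astype(float)

coeffs = dctn(block - 128, norm="ortho")        # level-shift, then 2-D DCT

# Crude "compression": keep only the larger coefficients, zero the rest.
kept = np.where(np.abs(coeffs) > 5, coeffs, 0)
reconstructed = idctn(kept, norm="ortho") + 128

print("nonzero coefficients kept:", np.count_nonzero(kept), "of 64")
print("max reconstruction error:", round(float(np.abs(block - reconstructed).max()), 2))
```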

Image compression in the context of Color image pipeline

An image pipeline or video pipeline is the set of components commonly used between an image source (such as a camera, a scanner, or the rendering engine in a computer game), and an image renderer (such as a television set, a computer screen, a computer printer or cinema screen), or for performing any intermediate digital image processing consisting of two or more separate processing blocks. An image/video pipeline may be implemented as computer software, in a digital signal processor, on an FPGA, or as fixed-function ASIC. In addition, analog circuits can be used to do many of the same functions.

Typical components include image sensor corrections (including debayering or applying a Bayer filter), noise reduction, image scaling, gamma correction, image enhancement, colorspace conversion (between formats such as RGB, YUV or YCbCr), chroma subsampling, framerate conversion, image compression/video compression (such as JPEG), and computer data storage/data transmission.
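
As a rough sketch of two of the stages listed above, the NumPy code below converts a hypothetical RGB frame to YCbCr (using BT.601 full-range coefficients) and then applies 4:2:0 chroma subsampling by averaging 2x2 blocks of the chroma planes; real pipelines differ in coefficients, rounding, and filtering.

```python
# Minimal sketch of colorspace conversion (RGB -> YCbCr) and 4:2:0 subsampling.
import numpy as np

rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, size=(480, 640, 3)).astype(float)   # H x W x 3 frame
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

# Colorspace conversion: one luma plane plus two chroma-difference planes.
y  = 0.299 * r + 0.587 * g + 0.114 * b
cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b

# 4:2:0 chroma subsampling: average each 2x2 block of the chroma planes.
def subsample(plane):
    h, w = plane.shape
    return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

cb_sub, cr_sub = subsample(cb), subsample(cr)
print(y.shape, cb_sub.shape, cr_sub.shape)   # (480, 640) (240, 320) (240, 320)
```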

Image compression in the context of N. Ahmed

Nasir Ahmed (born 1940) is an American electrical engineer and computer scientist. He is Professor Emeritus of Electrical and Computer Engineering at University of New Mexico (UNM). He is best known for inventing the discrete cosine transform (DCT) in the early 1970s. The DCT is the most widely used data compression transformation, the basis for most digital media standards (image, video and audio) and commonly used in digital signal processing. He also described the discrete sine transform (DST), which is related to the DCT.
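
For reference, the sketch below computes the 1-D DCT-II directly from its textbook definition and checks the result against SciPy's implementation; this is an illustrative definition of the transform, not Ahmed's original algorithm.

```python
# Sketch of the DCT-II from its definition, checked against SciPy.
import numpy as np
from scipy.fft import dct

def dct_ii(x):
    """Unnormalized DCT-II: X_k = 2 * sum_n x_n * cos(pi * (2n + 1) * k / (2N))."""
    x = np.asarray(x, dtype=float)
    n = np.arange(len(x))
    return np.array([2.0 * np.sum(x * np.cos(np.pi * (2 * n + 1) * k / (2 * len(x))))
                     for k in range(len(x))])

signal = np.array([8.0, 16.0, 24.0, 32.0, 40.0, 48.0, 56.0, 64.0])
print(np.allclose(dct_ii(signal), dct(signal, type=2)))   # True
```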

Image compression in the context of Dolby Digital

Dolby Digital, originally synonymous with Dolby AC-3 (see below), is the name for a family of audio compression technologies developed by Dolby Laboratories. Called Dolby Stereo Digital until 1995, it uses lossy compression (except for Dolby TrueHD). The first use of Dolby Digital was to provide digital sound in cinemas from 35 mm film prints. It has since also been used for TV broadcast, radio broadcast via satellite, digital video streaming, DVDs, Blu-ray discs and game consoles.

Dolby AC-3 was the original version of the Dolby Digital codec. The basis of the Dolby AC-3 multi-channel audio coding standard is the modified discrete cosine transform (MDCT), a lossy audio compression algorithm. It is a modification of the discrete cosine transform (DCT) algorithm, which was proposed by Nasir Ahmed in 1972 for image compression. The DCT was adapted into the MDCT by J.P. Princen, A.W. Johnson and Alan B. Bradley at the University of Surrey in 1987.
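
The sketch below computes the MDCT directly from its standard definition: 2N input samples map to N coefficients, and consecutive blocks overlap by 50%. It omits the windowing and overlap-add used in real codecs, and is not the AC-3 implementation itself.

```python
# Sketch of the MDCT from its standard definition (no windowing/overlap-add).
import numpy as np

def mdct(x):
    """X_k = sum_n x_n * cos((pi / N) * (n + 0.5 + N / 2) * (k + 0.5))."""
    x = np.asarray(x, dtype=float)
    N = len(x) // 2
    n = np.arange(2 * N)
    return np.array([np.sum(x * np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5)))
                     for k in range(N)])

block = np.sin(2 * np.pi * np.arange(512) / 64.0)   # a hypothetical audio block
print(mdct(block).shape)                             # (256,) -> half as many coefficients
```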

Image compression in the context of Standard test image

A standard test image is a digital image file used across different institutions to test image processing and image compression algorithms. By using the same standard test images, different labs are able to compare results, both visually and quantitatively.

The images are in many cases chosen to represent natural or typical images that a class of processing techniques would need to deal with. Other test images are chosen because they present a range of challenges to image reconstruction algorithms, such as the reproduction of fine detail and textures, sharp transitions and edges, and uniform regions.
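
One common quantitative comparison is PSNR (peak signal-to-noise ratio) between a reference test image and its processed or compressed version. The sketch below assumes Pillow and two hypothetical files; higher PSNR indicates less distortion.

```python
# Sketch of a PSNR comparison between a reference image and a degraded copy.
import numpy as np
from PIL import Image

ref = np.asarray(Image.open("test_image.png").convert("L"), dtype=float)
deg = np.asarray(Image.open("test_image_compressed.png").convert("L"), dtype=float)

mse = np.mean((ref - deg) ** 2)                       # mean squared error
psnr = 10 * np.log10(255.0 ** 2 / mse) if mse > 0 else float("inf")
print(f"PSNR: {psnr:.2f} dB")
```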

Image compression in the context of WebP

WebP (/ˈwɛbpiː/ WEB-pee) is a raster graphics file format developed by Google and intended as a replacement for the JPEG, PNG, and GIF file formats on the web. It supports image compression (both lossy and lossless), as well as animation and alpha compositing. The sister project for video is called WebM.
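
The two compression modes can be exercised with a few lines of code; the sketch below assumes a Pillow build with WebP support and a hypothetical input file photo.png.

```python
# Small sketch of WebP's lossy and lossless modes via Pillow.
from PIL import Image

img = Image.open("photo.png").convert("RGBA")   # WebP supports an alpha channel

img.save("photo_lossy.webp", format="WEBP", quality=80)       # lossy mode
img.save("photo_lossless.webp", format="WEBP", lossless=True)  # lossless mode
```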

Google announced the WebP format in September 2010; the company released the first stable version of its supporting library in April 2018. WebP has seen widespread adoption across the Internet in order to reduce image size, with all major browsers currently supporting the format.

Image compression in the context of JPEG 2000

JPEG 2000 (JP2) is an image compression standard and coding system. It was developed from 1997 to 2000 by a Joint Photographic Experts Group committee chaired by Touradj Ebrahimi (later the JPEG president), with the intention of superseding their original JPEG standard (created in 1992), which is based on a discrete cosine transform (DCT), with a newly designed, wavelet-based method. The standardized filename extension is .jp2 for ISO/IEC 15444-1 conforming files and .jpx or .jpf for the extended part-2 specifications, published as ISO/IEC 15444-2. The MIME types for JPEG 2000 are defined in RFC 3745. The MIME type for JPEG 2000 (ISO/IEC 15444-1) is image/jp2.
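
The sketch below shows a single-level 2-D wavelet decomposition, the kind of transform JPEG 2000 builds on; the standard itself uses specific CDF wavelets plus EBCOT entropy coding. It assumes the PyWavelets library, Pillow, and a hypothetical input file photo.png.

```python
# Sketch of a one-level 2-D wavelet decomposition with PyWavelets (not JPEG 2000 itself).
import numpy as np
import pywt
from PIL import Image

img = np.asarray(Image.open("photo.png").convert("L"), dtype=float)

# One level yields a low-pass approximation plus three detail subbands.
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
print("approximation:", cA.shape, "details:", cH.shape, cV.shape, cD.shape)
```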

The JPEG 2000 project was motivated by Ricoh's submission in 1995 of the CREW (Compression with Reversible Embedded Wavelets) algorithm to the standardization effort of JPEG LS. Ultimately the LOCO-I algorithm was selected as the basis for JPEG LS, but many of the features of CREW ended up in the JPEG 2000 standard.

Image compression in the context of JPEG XR

JPEG XR (JPEG extended range) is an image compression standard for continuous tone photographic images, based on the HD Photo (formerly Windows Media Photo) specifications that Microsoft originally developed and patented. It supports both lossy and lossless compression, and is the preferred image format for Ecma-388 Open XML Paper Specification documents.

The format is natively supported by Windows Vista and later as well as Internet Explorer 9, 10 and 11. Third-party support for the format includes Adobe AIR, Affinity Photo, Paint.NET, and Sumatra PDF.

Image compression in the context of JBIG

JBIG is an early lossless image compression standard from the Joint Bi-level Image Experts Group, standardized as ISO/IEC standard 11544 and as ITU-T recommendation T.82 in March 1993. It is widely implemented in fax machines. Now that the newer bi-level image compression standard JBIG2 has been released, JBIG is also known as JBIG1. JBIG was designed for compression of binary images, particularly for faxes, but can also be used on other images. In most situations JBIG offers between a 20% and 50% increase in compression efficiency over Fax Group 4 compression, and in some situations, it offers a 30-fold improvement.

JBIG is based on a form of arithmetic coding developed by IBM (known as the Q-coder) that also uses a relatively minor refinement developed by Mitsubishi, resulting in what became known as the QM-coder. It bases the probability estimates for each encoded bit on the values of the previous bits and the values in previous lines of the picture. JBIG also supports progressive transmission, which generally incurs a small overhead in bit rate (around 5%).
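
The sketch below illustrates the context-modelling idea described above, not the QM-coder itself: each pixel's probability of being black is estimated from a small context of neighbouring pixels that the decoder has already seen, and an arithmetic coder would then code the pixel using that probability.

```python
# Sketch of context modelling for a bi-level image: estimate P(pixel = 1)
# from a 3-pixel causal neighbourhood (left, above, above-right).
import numpy as np

rng = np.random.default_rng(0)
img = (rng.random((64, 64)) < 0.1).astype(int)   # hypothetical bi-level image

counts = np.ones((8, 2))   # 3-bit context -> Laplace-smoothed counts of 0s and 1s
for y in range(1, img.shape[0]):
    for x in range(1, img.shape[1] - 1):
        ctx = (img[y, x - 1] << 2) | (img[y - 1, x] << 1) | img[y - 1, x + 1]
        counts[ctx, img[y, x]] += 1

prob_one = counts[:, 1] / counts.sum(axis=1)
for ctx, p in enumerate(prob_one):
    print(f"context {ctx:03b}: P(pixel = 1) = {p:.3f}")
```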

Image compression in the context of JBIG2

JBIG2 is an image compression standard for bi-level images, developed by the Joint Bi-level Image Experts Group. It is suitable for both lossless and lossy compression. According to a press release from the Group, in its lossless mode JBIG2 typically generates files 3–5 times smaller than Fax Group 4 and 2–4 times smaller than JBIG, the previous bi-level compression standard released by the Group. JBIG2 was published in 2000 as the international standard ITU T.88, and in 2001 as ISO/IEC 14492.
