Image compression in the context of "Digital signal processing"

⭐ Core Definition: Image compression

Image compression is a type of data compression applied to digital images, to reduce their cost for storage or transmission. Algorithms may take advantage of visual perception and the statistical properties of image data to provide superior results compared with generic data compression methods which are used for other digital data.
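
To make that contrast concrete, here is a minimal sketch, assuming NumPy and Pillow are installed, that compresses the same synthetic grayscale image twice: once with a generic lossless byte compressor (zlib) and once with an image-aware lossy codec (JPEG). The synthetic test image and the quality setting are illustrative only.

```python
import io
import zlib
import numpy as np
from PIL import Image

# A smooth synthetic "photo": neighbouring pixel values are highly correlated,
# which image codecs are designed to exploit.
yy, xx = np.mgrid[0:256, 0:256]
data = (127.5 + 127.5 * np.sin(xx / 20.0) * np.cos(yy / 25.0)).astype(np.uint8)
img = Image.fromarray(data)                # 8-bit grayscale image

raw = img.tobytes()                        # uncompressed pixel bytes
generic = zlib.compress(raw, 9)            # generic lossless byte compression
buf = io.BytesIO()
img.save(buf, format="JPEG", quality=75)   # perception-aware lossy compression

print(len(raw), len(generic), len(buf.getvalue()))
```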

👉 Image compression in the context of Digital signal processing

Digital signal processing (DSP) is the use of digital processing, such as by computers or more specialized digital signal processors, to perform a wide variety of signal processing operations. The digital signals processed in this manner are a sequence of numbers that represent samples of a continuous variable in a domain such as time, space, or frequency. In digital electronics, a digital signal is represented as a pulse train, which is typically generated by the switching of a transistor.
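
As a minimal illustration, assuming NumPy is available, the sketch below samples a continuous 5 Hz sine wave at 100 Hz; the resulting sequence of numbers is the kind of digital signal a DSP system actually operates on. The frequencies and duration are arbitrary choices.

```python
import numpy as np

fs = 100.0                        # sampling rate in hertz (samples per second)
t = np.arange(0, 1.0, 1.0 / fs)   # discrete sample instants over one second
x = np.sin(2 * np.pi * 5.0 * t)   # samples of the continuous signal sin(2*pi*5*t)

print(x[:8])                      # the sequence of numbers a DSP system processes
```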

Digital signal processing and analog signal processing are subfields of signal processing. DSP applications include audio and speech processing, sonar, radar and other sensor array processing, spectral density estimation, statistical signal processing, digital image processing, data compression, video coding, audio coding, image compression, signal processing for telecommunications, control systems, biomedical engineering, and seismology, among others.

Image compression in the context of Digital imaging

Digital imaging or digital image acquisition is the creation of a digital representation of the visual characteristics of an object, such as a physical scene or the interior structure of an object. The term is often assumed to imply or include the processing, compression, storage, printing and display of such images. A key advantage of a digital image, versus an analog image such as a film photograph, is the ability to digitally propagate copies of the original subject indefinitely without any loss of image quality.

Digital imaging can be classified by the type of electromagnetic radiation or other waves whose variable attenuation, as they pass through or reflect off objects, conveys the information that constitutes the image. In all classes of digital imaging, the information is converted by image sensors into digital signals that are processed by a computer and output as a visible-light image. For example, the medium of visible light allows digital photography (including digital videography) with various kinds of digital cameras (including digital video cameras). X-rays allow digital X-ray imaging (digital radiography, fluoroscopy, and CT), and gamma rays allow digital gamma-ray imaging (digital scintigraphy, SPECT, and PET). Sound allows ultrasonography (such as medical ultrasonography) and sonar, and radio waves allow radar. Digital imaging lends itself well to image analysis by software, as well as to image editing (including image manipulation).
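
The sketch below, assuming NumPy and Pillow, illustrates the basic point that an acquired digital image is simply an array of numbers a computer can process; the tiny synthetic image stands in for real sensor output.

```python
import numpy as np
from PIL import Image

img = Image.new("RGB", (4, 3), color=(200, 120, 40))  # tiny stand-in for a capture
pixels = np.asarray(img)                              # shape (rows, cols, channels)

print(pixels.shape, pixels.dtype)   # (3, 4, 3) uint8 -- purely numeric data
print(pixels[0, 0])                 # one pixel's red, green, blue sample values
```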

Image compression in the context of JPEG

JPEG (/ˈdʒeɪpɛɡ/ JAY-peg, short for Joint Photographic Experts Group and sometimes retroactively referred to as JPEG 1) is a commonly used method of lossy compression for digital images, particularly for those images produced by digital photography. The degree of compression can be adjusted, allowing a selectable trade-off between storage size and image quality. JPEG typically achieves 10:1 compression with noticeable but generally acceptable loss in image quality. Since its introduction in 1992, JPEG has been the most widely used image compression standard in the world, and the most widely used digital image format, with several billion JPEG images produced every day as of 2015.

The Joint Photographic Experts Group created the standard in 1992, based on the discrete cosine transform (DCT) algorithm. JPEG was largely responsible for the proliferation of digital images and digital photos across the Internet and later social media. JPEG compression is used in a number of image file formats. JPEG/Exif is the most common image format used by digital cameras and other photographic image capture devices; along with JPEG/JFIF, it is the most common format for storing and transmitting photographic images on the World Wide Web. These format variations are often not distinguished and are simply called JPEG.
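
The size/quality trade-off can be seen directly by encoding the same image at several quality settings. The sketch below assumes NumPy and Pillow; the synthetic test image and the quality values are illustrative.

```python
import io
import numpy as np
from PIL import Image

# A smooth synthetic color image standing in for a photograph.
yy, xx = np.mgrid[0:256, 0:256]
r = (127.5 + 127.5 * np.sin(xx / 15.0)).astype(np.uint8)
g = (127.5 + 127.5 * np.cos(yy / 20.0)).astype(np.uint8)
b = ((xx + yy) / 2).astype(np.uint8)
img = Image.fromarray(np.dstack([r, g, b]))

for quality in (95, 75, 30):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)  # same image, different settings
    print(f"quality={quality:3d}  bytes={len(buf.getvalue())}")
```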

Image compression in the context of Color image pipeline

An image pipeline or video pipeline is the set of components commonly used between an image source (such as a camera, a scanner, or the rendering engine in a computer game) and an image renderer (such as a television set, a computer screen, a computer printer, or a cinema screen), or for performing any intermediate digital image processing consisting of two or more separate processing blocks. An image/video pipeline may be implemented as computer software, in a digital signal processor, on an FPGA, or as a fixed-function ASIC. In addition, analog circuits can be used to perform many of the same functions.

Typical components include image sensor corrections (including debayering, i.e., demosaicing the data captured through a Bayer filter), noise reduction, image scaling, gamma correction, image enhancement, colorspace conversion (between formats such as RGB, YUV, or YCbCr), chroma subsampling, framerate conversion, image compression/video compression (such as JPEG), and computer data storage/data transmission.
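
As a small illustration of two of the stages named above, the sketch below (assuming NumPy) converts an RGB block to YCbCr using the common BT.601 full-range coefficients and then applies 4:2:0 chroma subsampling; a real pipeline would chain many more stages.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an HxWx3 float RGB array (0..255) to Y, Cb, Cr planes (BT.601)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

def subsample_420(plane):
    """Halve a chroma plane in both directions by 2x2 averaging (4:2:0)."""
    h, w = plane.shape
    return plane[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rgb = np.random.default_rng(1).uniform(0, 255, (8, 8, 3))
y, cb, cr = rgb_to_ycbcr(rgb)
print(y.shape, subsample_420(cb).shape)  # (8, 8) (4, 4): chroma keeps 1/4 the samples
```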

Image compression in the context of N. Ahmed

Nasir Ahmed (born 1940) is an American electrical engineer and computer scientist. He is Professor Emeritus of Electrical and Computer Engineering at the University of New Mexico (UNM). He is best known for inventing the discrete cosine transform (DCT) in the early 1970s. The DCT is the most widely used data compression transform, the basis for most digital media standards (image, video, and audio), and is commonly used in digital signal processing. He also described the discrete sine transform (DST), which is related to the DCT.
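
For reference, the sketch below (assuming NumPy) implements the 1-D type-II DCT directly from its definition and applies it to a short test signal; real codecs apply the transform blockwise in two dimensions and then quantize the coefficients.

```python
import numpy as np

def dct_ii(x):
    """Unnormalized type-II discrete cosine transform of a 1-D array."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi / N * (n + 0.5) * k)) for k in range(N)])

n = np.arange(8)
x = np.cos(np.pi * (n + 0.5) * 2 / 8)   # matches the k = 2 DCT basis vector
print(np.round(dct_ii(x), 3))           # essentially all energy lands in coefficient 2
```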

Image compression in the context of Dolby Digital

Dolby Digital, originally synonymous with Dolby AC-3 (see below), is the name for a family of audio compression technologies developed by Dolby Laboratories. Called Dolby Stereo Digital until 1995, it uses lossy compression (except for Dolby TrueHD). The first use of Dolby Digital was to provide digital sound in cinemas from 35 mm film prints. It has since also been used for TV broadcast, radio broadcast via satellite, digital video streaming, DVDs, Blu-ray discs and game consoles.

Dolby AC-3 was the original version of the Dolby Digital codec. The basis of the Dolby AC-3 multi-channel audio coding standard is the modified discrete cosine transform (MDCT), a lossy audio compression algorithm. It is a modification of the discrete cosine transform (DCT) algorithm, which was proposed by Nasir Ahmed in 1972 for image compression. The DCT was adapted into the MDCT by J.P. Princen, A.W. Johnson and Alan B. Bradley at the University of Surrey in 1987.
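
The sketch below (assuming NumPy) evaluates the MDCT directly from its definition: a block of 2N samples maps to N coefficients. The windowing and overlap-add used by real codecs such as AC-3 are omitted for brevity.

```python
import numpy as np

def mdct(x):
    """Unwindowed MDCT: a block of 2N samples yields N coefficients."""
    N = len(x) // 2
    n = np.arange(2 * N)
    return np.array([
        np.sum(x * np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5)))
        for k in range(N)
    ])

block = np.sin(2 * np.pi * 3 * np.arange(16) / 16)  # 16 input samples
print(np.round(mdct(block), 3))                     # 8 output coefficients
```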

Image compression in the context of Standard test image

A standard test image is a digital image file used across different institutions to test image processing and image compression algorithms. By using the same standard test images, different labs are able to compare results, both visually and quantitatively.

The images are in many cases chosen to represent natural or typical images that a class of processing techniques would need to deal with. Other test images are chosen because they present a range of challenges to image reconstruction algorithms, such as the reproduction of fine detail and textures, sharp transitions and edges, and uniform regions.
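
One common quantitative comparison is the peak signal-to-noise ratio (PSNR) between a reference test image and a processed version. The sketch below (assuming NumPy) computes it; the synthetic reference stands in for a real standard test image.

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in decibels between two equal-shaped images."""
    mse = np.mean((reference.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(2)
reference = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # stand-in test image
degraded = np.clip(reference + rng.normal(0, 5, reference.shape), 0, 255).astype(np.uint8)

print(f"PSNR: {psnr(reference, degraded):.2f} dB")
```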
