Lossless data compression in the context of JBIG2




Core Definition: Lossless data compression

Lossless compression is a class of data compression that allows the original data to be perfectly reconstructed from the compressed data with no loss of information. Lossless compression is possible because most real-world data exhibits statistical redundancy. By contrast, lossy compression permits reconstruction only of an approximation of the original data, though usually with greatly improved compression rates (and therefore reduced media sizes).

By operation of the pigeonhole principle, no lossless compression algorithm can shrink the size of all possible data: since there are fewer short outputs than possible inputs, any algorithm that makes some inputs shorter must make others longer by at least one symbol or bit.
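Both points can be illustrated empirically with a minimal sketch using Python's standard zlib module as the example codec (any general-purpose lossless compressor behaves similarly; the exact sizes are indicative only):

```python
import os
import zlib

redundant = b"green " * 20_000      # highly repetitive: lots of redundancy
random_data = os.urandom(120_000)   # random bytes: no statistical redundancy

# Redundant input shrinks dramatically; random input comes back slightly
# larger, as the pigeonhole argument predicts.
print(len(zlib.compress(redundant, 9)), "<", len(redundant))
print(len(zlib.compress(random_data, 9)), ">", len(random_data))

# Lossless means exactly invertible: the original is recovered bit for bit.
assert zlib.decompress(zlib.compress(redundant)) == redundant
```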

Lossless data compression in the context of JBIG2

JBIG2 is an image compression standard for bi-level images, developed by the Joint Bi-level Image Experts Group. It is suitable for both lossless and lossy compression. According to a press release from the Group, in its lossless mode JBIG2 typically generates files 3–5 times smaller than Fax Group 4 and 2–4 times smaller than JBIG, the previous bi-level compression standard released by the Group. JBIG2 was published in 2000 as the international standard ITU-T T.88, and in 2001 as ISO/IEC 14492.

In this Dossier

Lossless data compression in the context of File format

A file format is the way that information is encoded for storage in a computer file. It may describe the encoding at various levels of abstraction, including low-level bit and byte layout as well as high-level organization such as markup and tabular structure. A file format may be standardized (whether proprietary or open) or it may be an ad hoc convention.

Some file formats are designed for very particular types of data: PNG files, for example, store bitmapped images using lossless data compression. Other file formats, however, are designed for storage of several different types of data: the Ogg format can act as a container for different types of multimedia, including any combination of audio and video, with or without text (such as subtitles) and metadata. A text file can contain any stream of characters, including possible control characters, and is encoded in one of various character-encoding schemes. Some file formats, such as HTML, Scalable Vector Graphics, and the source code of computer software, are text files with defined syntaxes that allow them to be used for specific purposes.


Lossless data compression in the context of PNG

Portable Network Graphics (PNG, officially pronounced /pɪŋ/ PING, colloquially pronounced /ˌpiːɛnˈdʒiː/ PEE-en-JEE) is a raster-graphics file format that supports lossless data compression. PNG was developed as an improved, non-patented replacement for the Graphics Interchange Format (GIF).

PNG supports palette-based images (with palettes of 24-bit RGB or 32-bit RGBA colors), grayscale images (with or without an alpha channel for transparency), and full-color non-palette-based RGB or RGBA images. The PNG working group designed the format for transferring images on the Internet, not for professional-quality print graphics; therefore, non-RGB color spaces such as CMYK are not supported. A PNG file contains a single image in an extensible structure of chunks, encoding the basic pixels and other information such as textual comments and integrity checks documented in RFC 2083.
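To make the chunk layout concrete, here is a minimal sketch in Python that walks the chunks of a PNG file and verifies each chunk's CRC-32, following the structure described in RFC 2083. The file name is hypothetical, and real code would go on to interpret chunk types such as IHDR and IDAT:

```python
import struct
import zlib

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def iter_chunks(path):
    """Yield (chunk_type, data) for each chunk, checking the stored CRC-32."""
    with open(path, "rb") as f:
        if f.read(8) != PNG_SIGNATURE:
            raise ValueError("not a PNG file")
        while True:
            header = f.read(8)       # 4-byte big-endian length + 4-byte type
            if len(header) < 8:
                break                # end of file
            length = struct.unpack(">I", header[:4])[0]
            chunk_type = header[4:8]
            data = f.read(length)
            crc = struct.unpack(">I", f.read(4))[0]
            if crc != zlib.crc32(chunk_type + data):  # CRC covers type + data
                raise ValueError("corrupt chunk")
            yield chunk_type, data
            if chunk_type == b"IEND":  # IEND marks the end of the image
                break

# Hypothetical usage:
for chunk_type, data in iter_chunks("example.png"):
    print(chunk_type.decode("ascii"), len(data))
```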


Lossless data compression in the context of Arithmetic coding

Arithmetic coding (AC) is a form of entropy encoding used in lossless data compression. Normally, a string of characters is represented using a fixed number of bits per character, as in the ASCII code. When a string is converted to arithmetic encoding, frequently used characters will be stored with fewer bits and infrequently used characters will be stored with more bits, resulting in fewer bits used in total. Arithmetic coding differs from other forms of entropy encoding, such as Huffman coding, in that rather than separating the input into component symbols and replacing each with a code, arithmetic coding encodes the entire message into a single number, an arbitrary-precision fraction q, where 0.0 ≤ q < 1.0.
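To make the interval-narrowing idea concrete, here is a toy sketch in Python using exact fractions and a fixed, hypothetical symbol model (practical coders use adaptive models and finite-precision integer arithmetic, but the principle is the same):

```python
from fractions import Fraction

# Hypothetical fixed model: symbol probabilities summing to 1.
PROBS = {"a": Fraction(1, 2), "b": Fraction(1, 4), "!": Fraction(1, 4)}

def cumulative(probs):
    """Map each symbol to its half-open cumulative interval [low, high)."""
    intervals, low = {}, Fraction(0)
    for sym, p in probs.items():
        intervals[sym] = (low, low + p)
        low += p
    return intervals

def encode(message):
    """Narrow [0, 1) once per symbol; any q in the final interval works."""
    low, high = Fraction(0), Fraction(1)
    ivals = cumulative(PROBS)
    for sym in message:
        s_low, s_high = ivals[sym]
        span = high - low
        low, high = low + span * s_low, low + span * s_high
    return low  # one representative q with low <= q < high

def decode(q, length):
    """Invert the narrowing: find which symbol's interval contains q."""
    ivals = cumulative(PROBS)
    out = []
    for _ in range(length):
        for sym, (s_low, s_high) in ivals.items():
            if s_low <= q < s_high:
                out.append(sym)
                q = (q - s_low) / (s_high - s_low)  # rescale back to [0, 1)
                break
    return "".join(out)

msg = "aab!"
q = encode(msg)          # Fraction(11, 64) under the model above
assert decode(q, len(msg)) == msg
```

The width of the final interval equals the product of the symbol probabilities, so more probable messages get wider intervals and therefore shorter binary representations of q.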


Lossless data compression in the context of JBIG

JBIG is an early lossless image compression standard from the Joint Bi-level Image Experts Group, standardized as ISO/IEC standard 11544 and as ITU-T recommendation T.82 in March 1993. It is widely implemented in fax machines. Now that the newer bi-level image compression standard JBIG2 has been released, JBIG is also known as JBIG1. JBIG was designed for compression of binary images, particularly for faxes, but can also be used on other images. In most situations JBIG offers between a 20% and 50% increase in compression efficiency over Fax Group 4 compression, and in some situations, it offers a 30-fold improvement.

JBIG is based on a form of arithmetic coding developed by IBM (known as the Q-coder) that also uses a relatively minor refinement developed by Mitsubishi, resulting in what became known as the QM-coder. It bases the probability estimates for each encoded bit on the values of the previous bits and the values in previous lines of the picture. JBIG also supports progressive transmission, which generally incurs a small overhead in bit rate (around 5%).
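A sketch of this context-modeling idea, assuming a simplified ten-pixel template over the current and two previous lines (JBIG's actual templates, adaptive pixels, and the QM-coder's probability state machine are specified in ITU-T T.82):

```python
from collections import defaultdict

def context(image, x, y):
    """Pack ten already-coded neighbour pixels into one context index."""
    def px(dx, dy):
        cx, cy = x + dx, y + dy
        if cy < 0 or cx < 0 or cx >= len(image[0]):
            return 0                      # treat out-of-image pixels as white
        return image[cy][cx]              # 0 = white, 1 = black
    neighbours = [
        px(-2, 0), px(-1, 0),                                     # current line
        px(-2, -1), px(-1, -1), px(0, -1), px(1, -1), px(2, -1),  # line above
        px(-1, -2), px(0, -2), px(1, -2),                         # two lines up
    ]
    ctx = 0
    for bit in neighbours:
        ctx = (ctx << 1) | bit
    return ctx                            # one of 2**10 = 1024 contexts

# Laplace-smoothed per-context counts stand in for the QM-coder's
# adaptive probability estimator.
counts = defaultdict(lambda: [1, 1])      # [count of 0s, count of 1s]

def p_black(ctx):
    zeros, ones = counts[ctx]
    return ones / (zeros + ones)          # estimate fed to the entropy coder
```

Each pixel is then entropy-coded with the probability attached to its context and the counts are updated, so frequently repeated local patterns quickly become very cheap to code.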


Lossless data compression in the context of Run-length encoding

Run-length encoding (RLE) is a form of lossless data compression in which runs of data (consecutive occurrences of the same data value) are stored as a single occurrence of that data value and a count of its consecutive occurrences, rather than as the original run. For example, a sequence of "green green green green green" in an image built up from colored dots could be shortened to "green x 5".

Run-length encoding is most efficient on data that contains many such runs, for example, simple graphic images such as icons, line drawings, games, and animations. For files that do not have many runs, encoding them with RLE could increase the file size.
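A minimal sketch of the idea in Python (real formats such as PackBits add escape codes, maximum run lengths, and byte-level packing):

```python
def rle_encode(seq):
    """Collapse each run into a (value, count) pair."""
    runs = []
    for value in seq:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1              # extend the current run
        else:
            runs.append([value, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Expand (value, count) pairs back into the original sequence."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

pixels = ["green"] * 5 + ["blue"] * 2
runs = rle_encode(pixels)                 # [('green', 5), ('blue', 2)]
assert rle_decode(runs) == pixels         # lossless round trip
```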


Lossless data compression in the context of DTS-HD Master Audio

DTS-HD Master Audio (DTS-HD MA; known as DTS++ before 2004) is a multi-channel, lossless audio codec developed by DTS as an extension of the lossy DTS Coherent Acoustics codec (DTS CA; usually itself referred to as just DTS). Rather than being an entirely new coding mechanism, DTS-HD MA encodes an audio master in lossy DTS first, then stores a concurrent stream of supplementary data representing whatever the DTS encoder discarded. This gives DTS-HD MA a lossy "core" able to be played back by devices that cannot decode the more complex lossless audio. DTS-HD MA's primary application is audio storage and playback for Blu-ray media; it competes in this respect with Dolby TrueHD, another lossless surround format.
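The "lossy core plus residual" architecture can be illustrated with a toy numeric sketch (this shows only the principle, not the actual DTS bitstream; the sample values and quantization step are hypothetical):

```python
samples = [1003, -2047, 512, 88]          # hypothetical PCM samples
STEP = 16                                 # hypothetical quantization step

core = [s // STEP for s in samples]       # what a legacy (lossy) decoder gets
approx = [c * STEP for c in core]         # the core's reconstruction
residual = [s - a for s, a in zip(samples, approx)]  # the supplementary data

# A full decoder adds the residual back to recover the master bit-for-bit.
restored = [a + r for a, r in zip(approx, residual)]
assert restored == samples
```

A legacy decoder plays only the core; a full decoder combines core and residual to reconstruct the original audio exactly.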


Lossless data compression in the context of Variable-width encoding

In coding theory, variable-length encoding is a type of character encoding scheme in which codewords of differing lengths (bit strings) are used to encode a character set (a repertoire of symbols) for representation in a computer.

Variable-length codes can allow sources to be compressed and decompressed with zero error (lossless data compression) and still be read back symbol by symbol. An independent and identically distributed source may be compressed almost arbitrarily close to its entropy. This is in contrast to fixed-length coding methods, for which data compression is possible only for large blocks of data, and any compression beyond the logarithm of the total number of possibilities comes with a finite (though perhaps arbitrarily small) probability of failure.
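As one concrete instance, here is a minimal sketch of a prefix code built with Huffman's algorithm in Python; the prefix property is what lets the bit stream be read back symbol by symbol with zero error (the exact codewords depend on the assumed symbol frequencies):

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Return {symbol: bitstring} for a prefix code fit to `text`."""
    # Heap entries are [frequency, tiebreaker, {symbol: partial codeword}].
    heap = [[freq, i, {sym: ""}]
            for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        f0, _, c0 = heapq.heappop(heap)   # two least-frequent subtrees
        f1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c0.items()}
        merged.update({s: "1" + c for s, c in c1.items()})
        heapq.heappush(heap, [f0 + f1, i, merged])
        i += 1
    return heap[0][2]

text = "abracadabra"
code = huffman_code(text)
bits = "".join(code[s] for s in text)     # 23 bits vs. 88 bits of ASCII

# Prefix property: no codeword is a prefix of another, so the stream
# decodes unambiguously one symbol at a time.
inverse = {v: k for k, v in code.items()}
out, buf = [], ""
for b in bits:
    buf += b
    if buf in inverse:
        out.append(inverse[buf])
        buf = ""
assert "".join(out) == text
```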
