Audio signal processing in the context of "Feedback suppressor"

⭐ Core Definition: Audio signal processing

Audio signal processing is a subfield of signal processing concerned with the electronic manipulation of audio signals. Audio signals are electronic representations of sound waves: longitudinal waves travelling through air, consisting of compressions and rarefactions. The energy contained in audio signals, or sound power level, is typically measured in decibels. As audio signals may be represented in either digital or analog format, processing may occur in either domain. Analog processors operate directly on the electrical signal, while digital processors operate mathematically on its digital representation.
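
As a small illustration of the digital domain described above, the sketch below applies a gain specified in decibels directly to the numeric samples of a signal. The 440 Hz tone, 44.1 kHz sample rate and -6 dB figure are illustrative assumptions, not values from the text.

```python
import numpy as np

def apply_gain_db(samples: np.ndarray, gain_db: float) -> np.ndarray:
    """Scale a block of audio samples by a gain given in decibels."""
    return samples * 10.0 ** (gain_db / 20.0)

# Example: a 440 Hz tone sampled at 44.1 kHz, attenuated by 6 dB (assumed values).
sample_rate = 44100
t = np.arange(sample_rate) / sample_rate   # one second of sample times
tone = np.sin(2 * np.pi * 440 * t)         # digital representation: an array of numbers
quieter = apply_gain_db(tone, -6.0)        # the processing acts on those numbers
```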

In this Dossier

Audio signal processing in the context of Signal processing

Signal processing is an electrical engineering subfield that focuses on analyzing, modifying and synthesizing signals, such as sound, images, potential fields, seismic signals, altimetry processing, and scientific measurements. Signal processing techniques are used to optimize transmissions, improve digital storage efficiency, correct distorted signals, improve subjective video quality, and detect or pinpoint components of interest in a measured signal.
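
One of the uses listed above, detecting a component of interest in a measured signal, can be sketched in a few lines by inspecting the magnitude spectrum. The 1234 Hz tone, noise level and 8 kHz sample rate are hypothetical choices for the example.

```python
import numpy as np

sample_rate = 8000
t = np.arange(sample_rate) / sample_rate
# Hypothetical measured signal: a 1234 Hz tone buried in a little noise.
measured = np.sin(2 * np.pi * 1234 * t) + 0.1 * np.random.randn(t.size)

# Inspect the magnitude spectrum and report the strongest component.
spectrum = np.abs(np.fft.rfft(measured))
freqs = np.fft.rfftfreq(measured.size, d=1 / sample_rate)
print(f"Strongest component near {freqs[np.argmax(spectrum)]:.0f} Hz")  # ~1234 Hz
```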

Audio signal processing in the context of Digital signal processing

Digital signal processing (DSP) is the use of digital processing, such as by computers or more specialized digital signal processors, to perform a wide variety of signal processing operations. The digital signals processed in this manner are a sequence of numbers that represent samples of a continuous variable in a domain such as time, space, or frequency. In digital electronics, a digital signal is represented as a pulse train, which is typically generated by the switching of a transistor.
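
A minimal sketch of the point above, that a digital signal is simply a sequence of numbers sampled from a continuous variable; the 1 kHz tone and 8 kHz sampling rate are assumed for illustration.

```python
import numpy as np

sample_rate = 8000                                      # samples per second (assumed)
n = np.arange(16)                                       # sample indices
samples = np.sin(2 * np.pi * 1000 * n / sample_rate)    # a 1 kHz tone, sampled

print(samples.round(3))   # the processor sees only this sequence of numbers
```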

Digital signal processing and analog signal processing are subfields of signal processing. DSP applications include audio and speech processing, sonar, radar and other sensor array processing, spectral density estimation, statistical signal processing, digital image processing, data compression, video coding, audio coding, image compression, signal processing for telecommunications, control systems, biomedical engineering, and seismology, among others.

Audio signal processing in the context of Musique concrète

Musique concrète (French pronunciation: [myzik kɔ̃kʁɛt]; lit. 'concrete music') is a type of music composition that utilizes recorded sounds as raw material. Sounds are often modified through the application of audio signal processing and tape music techniques, and may be assembled into a form of sound collage. It can feature sounds derived from recordings of musical instruments, the human voice, and the natural environment, as well as those created using sound synthesis and computer-based digital signal processing. Compositions in this idiom are not restricted to the normal musical rules of melody, harmony, rhythm, and metre. The technique exploits acousmatic sound, such that sound identities can often be intentionally obscured or appear unconnected to their source cause.

The theoretical basis of musique concrète as a compositional practice was developed by French composer Pierre Schaeffer, beginning in the early 1940s. It was largely an attempt to differentiate between music based on the abstract medium of notation and that created using so-called sound objects (French: l'objet sonore). By the early 1950s, musique concrète was contrasted with "pure" elektronische Musik as then developed in West Germany – based solely on the use of electronically produced sounds rather than recorded sounds – but the distinction has since been blurred such that the term "electronic music" covers both meanings. Schaeffer's work resulted in the establishment of France's Groupe de Recherches de Musique Concrète (GRMC), which attracted important figures including Pierre Henry, Luc Ferrari, Pierre Boulez, Karlheinz Stockhausen, Edgard Varèse, and Iannis Xenakis. From the late 1960s onward, and particularly in France, the term acousmatic music (French: musique acousmatique) was used in reference to fixed media compositions that utilized both musique concrète-based techniques and live sound spatialisation.

Audio signal processing in the context of Distortion (music)

Distortion and overdrive are forms of audio signal processing used to alter the sound of amplified electric musical instruments, usually by increasing their gain, producing a "fuzzy", "growling", or "gritty" tone. Distortion is most commonly used with the electric guitar, but may be used with other instruments, such as electric bass, electric piano, synthesizer, and Hammond organ. Guitarists playing electric blues originally obtained an overdriven sound by turning up their vacuum tube-powered guitar amplifiers to high volumes, which caused the signal to distort. Other ways to produce distortion have been developed since the 1960s, such as distortion effect pedals. The growling tone of a distorted electric guitar is a key part of many genres, including blues and many rock music genres, notably hard rock, punk rock, hardcore punk, acid rock, grunge and heavy metal music, while the use of distorted bass has been essential in a genre of hip hop music and alternative hip hop known as "SoundCloud rap".

The effects alter the instrument sound by clipping the signal (pushing it past its maximum, which shears off the peaks and troughs of the signal waves), adding sustain and harmonic and inharmonic overtones, and leading to a compressed sound that is often described as "warm" and "dirty", depending on the type and intensity of distortion used. The terms distortion and overdrive are often used interchangeably; where a distinction is made, distortion is a more extreme version of the effect than overdrive. Fuzz is a particular form of extreme distortion originally created by guitarists using faulty equipment (such as a misaligned valve or tube), which has been emulated since the 1960s by a number of "fuzzbox" effects pedals.
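
The clipping described above can be sketched briefly: hard clipping stands in for distortion, shearing off anything beyond a threshold, while a tanh curve stands in for the gentler saturation of overdrive. The threshold, drive amount and 110 Hz test tone are illustrative assumptions.

```python
import numpy as np

def hard_clip(x: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    """Distortion: anything beyond +/- threshold is sheared off."""
    return np.clip(x, -threshold, threshold)

def soft_clip(x: np.ndarray, drive: float = 4.0) -> np.ndarray:
    """Overdrive-style saturation: peaks are compressed gradually."""
    return np.tanh(drive * x)

sample_rate = 44100
t = np.arange(sample_rate) / sample_rate
clean = np.sin(2 * np.pi * 110 * t)     # a clean 110 Hz tone (assumed input)
distorted = hard_clip(clean)            # flattened peaks, strong added overtones
overdriven = soft_clip(clean)           # gentler saturation
```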

Audio signal processing in the context of Flanging

Flanging /ˈflændʒɪŋ/ is an audio effect produced by mixing two identical signals together, one of them delayed by a small and (usually) gradually changing period, typically shorter than 20 milliseconds. This produces a swept comb filter effect: peaks and notches are produced in the resulting frequency spectrum, related to each other in a linear harmonic series. Varying the time delay causes these to sweep up and down the frequency spectrum. A flanger is an effects unit that creates this effect.

Part of the output signal is usually fed back to the input (a re-circulating delay line), producing a resonance effect that further enhances the intensity of the peaks and troughs. The phase of the fed-back signal is sometimes inverted, producing another variation on the flanger sound.
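
A minimal flanger sketch based on the two paragraphs above: the input is mixed with a copy of itself delayed by a slowly swept amount well under 20 milliseconds, and part of the delayed signal is fed back into the delay line. The LFO rate, maximum delay and feedback amount are assumptions for the example, not values from the text.

```python
import numpy as np

def flanger(x, sample_rate, max_delay_ms=5.0, lfo_hz=0.25, feedback=0.5):
    """Mix the input with a slowly swept short delay, with feedback."""
    max_delay = int(sample_rate * max_delay_ms / 1000)
    buf = np.zeros(max_delay + 1)                 # re-circulating delay line
    y = np.zeros_like(x)
    for n in range(len(x)):
        # Sweep the delay between 1 sample and max_delay with a sine LFO.
        lfo = 0.5 * (1 + np.sin(2 * np.pi * lfo_hz * n / sample_rate))
        d = 1 + int(lfo * (max_delay - 1))
        delayed = buf[-d]                         # signal from d samples ago
        y[n] = x[n] + delayed                     # dry + delayed mix
        buf = np.roll(buf, -1)
        buf[-1] = x[n] + feedback * delayed       # feed part of the delayed signal back in
    return y

wet = flanger(np.random.randn(44100), 44100)      # one second of test audio
```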

Audio signal processing in the context of Echo (phenomenon)

In audio signal processing and acoustics, an echo is a reflection of sound that arrives at the listener with a delay after the direct sound. The delay is directly proportional to the distance of the reflecting surface from the source and the listener. Typical examples are the echo produced by the bottom of a well, a building, or the walls of enclosed and empty rooms.
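
A short worked example of the proportionality above, together with a simple digital echo. The 343 m/s speed of sound, 170 m distance and 0.4 attenuation are assumed values for illustration.

```python
import numpy as np

speed_of_sound = 343.0                    # m/s in air at about 20 °C
distance = 170.0                          # metres to the reflecting surface (assumed)
delay_s = 2 * distance / speed_of_sound   # sound travels there and back: ~0.99 s
print(f"Echo arrives {delay_s:.2f} s after the direct sound")

# A simple digital echo: the delayed, attenuated copy added to the original.
sample_rate = 8000
dry = np.random.randn(sample_rate)        # stand-in for one second of audio
delay_n = int(delay_s * sample_rate)
wet = np.pad(dry, (0, delay_n))           # room for the echo tail
wet[delay_n:] += 0.4 * dry                # 0.4 reflection attenuation is assumed
```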

Audio signal processing in the context of Digital signal processor

A digital signal processor (DSP) is a specialized microprocessor chip, with its architecture optimized for the operational needs of digital signal processing. DSPs are fabricated on metal–oxide–semiconductor (MOS) integrated circuit chips. They are widely used in audio signal processing, telecommunications, digital image processing, radar, sonar and speech recognition systems, and in common consumer electronic devices such as mobile phones, disk drives and high-definition television (HDTV) products.

The goal of a DSP is usually to measure, filter or compress continuous real-world analog signals. Most general-purpose microprocessors can also execute digital signal processing algorithms successfully, but may not be able to keep up with such processing continuously in real time. Dedicated DSPs also tend to have better power efficiency, making them more suitable for portable devices such as mobile phones, where power consumption is constrained. DSPs often use special memory architectures that can fetch multiple data items or instructions at the same time.
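
A hedged sketch of the kind of workload a DSP chip is optimized for: a finite impulse response (FIR) filter, in which each output sample is a run of multiply-accumulate operations. The 5-tap moving-average coefficients below are an illustrative choice.

```python
import numpy as np

def fir_filter(x: np.ndarray, coeffs: np.ndarray) -> np.ndarray:
    """Direct-form FIR: each output sample is a sum of products (MACs)."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                y[n] += c * x[n - k]
    return y

smoothing = np.ones(5) / 5                # a simple 5-tap moving average
noisy = np.random.randn(1000)
smoothed = fir_filter(noisy, smoothing)   # same as np.convolve(noisy, smoothing)[:1000]
```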

Audio signal processing in the context of Vibrato

Vibrato (Italian, from the past participle of vibrare, 'to vibrate') is a musical effect consisting of a regular, pulsating change of pitch. It is used to add expression to vocal and instrumental music. Vibrato is typically characterized in terms of two factors: the amount of pitch variation ("extent of vibrato") and the speed with which the pitch is varied ("rate of vibrato").

In singing, it can occur spontaneously through variations in the larynx. The vibrato of string and wind instruments is an imitation of that vocal function. Vibrato can also be reproduced mechanically (Leslie speaker) or electronically as an audio effect closely related to chorus.
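
A minimal sketch of vibrato in terms of the two factors named above, rate and extent, applied to a synthesized tone. The 440 Hz pitch, 5.5 Hz rate and 50-cent extent are illustrative assumptions.

```python
import numpy as np

sample_rate = 44100
t = np.arange(int(sample_rate * 2.0)) / sample_rate   # two seconds (assumed)

base_hz = 440.0          # nominal pitch (assumed)
rate_hz = 5.5            # rate of vibrato: how fast the pitch oscillates
extent_cents = 50.0      # extent of vibrato: half a semitone each way

# Instantaneous frequency oscillates around the base pitch.
deviation = 2 ** (extent_cents / 1200) - 1
inst_freq = base_hz * (1 + deviation * np.sin(2 * np.pi * rate_hz * t))

# Integrate the frequency to obtain phase, then synthesise the tone.
phase = 2 * np.pi * np.cumsum(inst_freq) / sample_rate
vibrato_tone = np.sin(phase)
```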
