Audio signal in the context of Sound chips
⭐ Core Definition: Audio signal

An audio signal is a representation of sound, typically using either a changing level of electrical voltage for analog signals or a series of binary numbers for digital signals. Audio signals have frequencies in the audio frequency range of roughly 20 to 20,000 Hz, which corresponds to the lower and upper limits of human hearing. Audio signals may be synthesized directly, or may originate at a transducer such as a microphone, musical instrument pickup, phonograph cartridge, or tape head. Loudspeakers or headphones convert an electrical audio signal back into sound.

Digital audio systems represent audio signals in a variety of digital formats.
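
To make the analog/digital distinction concrete, here is a minimal sketch (not from the source) of how a digital system might represent a one-second tone, assuming Python with NumPy; the 44.1 kHz rate and 16-bit depth are illustrative choices, not requirements:

    import numpy as np

    sample_rate = 44100              # samples per second (CD-quality; an assumption)
    duration = 1.0                   # seconds
    frequency = 440.0                # Hz: well inside the 20 Hz - 20 kHz audio band

    # The analog signal would be continuous; digitally we sample it at discrete instants.
    t = np.arange(int(sample_rate * duration)) / sample_rate
    signal = 0.5 * np.sin(2 * np.pi * frequency * t)     # amplitude in [-0.5, 0.5]

    # Quantize to 16-bit integers, the series-of-binary-numbers form mentioned above.
    pcm = np.round(signal * 32767).astype(np.int16)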

In this Dossier

Audio signal in the context of Soundtrack

A soundtrack is a recorded audio signal accompanying and synchronised to the images of a book, drama, motion picture, radio program, television program, or video game; colloquially, a commercially released soundtrack album of music as featured in the soundtrack of a film, video, or television presentation; or the physical area of a film that contains the synchronised recorded sound.

In movie industry terminology, a sound track is an audio recording created or used in film production or post-production. Initially, the dialogue, sound effects, and music in a film each have their own separate track, and these are mixed together to make what is called the composite track, which is heard in the film. A dubbing track is often created later, when films are dubbed into another language; this is also known as an M&E (music and effects) track. M&E tracks contain all sound elements minus dialogue, which is then supplied by the foreign distributor in the native language of its territory.

View the full Wikipedia page for Soundtrack

Audio signal in the context of Radio broadcasting

Radio broadcasting is the transmission of electromagnetic radiation (radio waves) to receivers over a wide area. Most broadcasts are audio (sound), sometimes with embedded metadata. Listeners require a broadcast radio receiver to receive these signals. "Terrestrial" broadcasts, including AM, FM, and DAB stations, originate from land-based transmitters, whereas "satellite radio" signals originate from a satellite in Earth orbit.

Stations may produce their own programming or be affiliated with a radio network that provides content either through broadcast syndication or by simulcasting, or both. The most common transmission technologies are analog and digital. Analog radio uses one of two modulation methods: amplitude modulation (AM) or frequency modulation (FM). Digital radio stations transmit using one of several digital audio standards, such as DAB (Digital Audio Broadcasting), HD Radio, or DRM (Digital Radio Mondiale).

View the full Wikipedia page for Radio broadcasting

Audio signal in the context of Digital recording

In digital recording, an audio or video signal is converted into a stream of discrete numbers representing the changes over time in air pressure for audio, or chroma and luminance values for video. This number stream is saved to a storage device. To play back a digital recording, the numbers are retrieved and converted back into their original analog audio or video forms so that they can be heard or seen.

In a properly matched analog-to-digital converter (ADC) and digital-to-analog converter (DAC) pair, the analog signal is accurately reconstructed within the constraints of the Nyquist–Shannon sampling theorem, which dictates the minimum sampling rate, and subject to quantization error that depends on the audio or video bit depth. Because the signal is stored digitally, and assuming proper error detection and correction, the recording is not degraded by copying, storage, or interference.
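
As an illustration only, the following Python sketch (assuming NumPy) quantizes a tone at two illustrative bit depths and reports the resulting quantization error; the `quantize` helper is a hypothetical idealization, not a model of any real ADC:

    import numpy as np

    def quantize(x, bits):
        # Hypothetical ideal ADC/DAC round trip: round x (in [-1, 1]) to 2**bits levels.
        levels = 2 ** (bits - 1) - 1
        return np.round(x * levels) / levels

    fs = 8000                                 # sampling rate, comfortably above 2 x 1 kHz (Nyquist)
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 1000 * t)          # a 1 kHz tone, below fs/2 = 4 kHz

    for bits in (8, 16):
        err = x - quantize(x, bits)
        print(bits, "bits -> RMS quantization error:", np.sqrt(np.mean(err ** 2)))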

View the full Wikipedia page for Digital recording

Audio signal in the context of Frequency

Frequency is the number of occurrences of a repeating event per unit of time. Frequency is an important parameter used in science and engineering to specify the rate of oscillatory and vibratory phenomena, such as mechanical vibrations, audio signals (sound), radio waves, and light.

The interval of time between events is called the period. It is the reciprocal of the frequency. For example, if a heart beats at a frequency of 120 times per minute (2 hertz), its period is one half of a second.
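
The heartbeat example translates directly into code; this trivial Python snippet just restates the reciprocal relationship:

    beats_per_minute = 120
    frequency_hz = beats_per_minute / 60      # 120 beats per minute = 2 Hz
    period_s = 1 / frequency_hz               # reciprocal: 0.5 s, as in the text
    print(frequency_hz, "Hz ->", period_s, "s")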

View the full Wikipedia page for Frequency

Audio signal in the context of Signal (electrical engineering)

A signal is both the process and the result of transmitting data over some medium, accomplished by embedding some variation in it. Signals are important in multiple subject fields, including signal processing, information theory, and biology.

View the full Wikipedia page for Signal (electrical engineering)

Audio signal in the context of Distortion

In signal processing, distortion is the alteration of the original shape (or other characteristic) of a signal. In communications and electronics it means the alteration of the waveform of an information-bearing signal, such as an audio signal representing sound or a video signal representing images, in an electronic device or communication channel.

Distortion is usually unwanted, and so engineers strive to eliminate or minimize it. In some situations, however, distortion may be desirable. For example, in noise reduction systems like the Dolby system, an audio signal is deliberately distorted in ways that emphasize aspects of the signal that are subject to electrical noise, then it is symmetrically "undistorted" after passing through a noisy communication channel, reducing the noise in the received signal. Distortion is also used as a musical effect, particularly with electric guitars.
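
As a hedged illustration of deliberate musical distortion (not a method from the source), this Python sketch applies a tanh waveshaper, one common way guitar-style distortion is approximated in software; `soft_clip` and its `drive` parameter are made-up names:

    import numpy as np

    def soft_clip(x, drive=5.0):
        # Nonlinear waveshaper: the larger `drive`, the flatter the peaks become.
        return np.tanh(drive * x) / np.tanh(drive)

    t = np.arange(44100) / 44100
    clean = 0.8 * np.sin(2 * np.pi * 220 * t)    # an undistorted 220 Hz tone
    distorted = soft_clip(clean)                 # the altered waveform gains extra harmonics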

View the full Wikipedia page for Distortion

Audio signal in the context of Loudspeaker

A loudspeaker (commonly referred to as a speaker or, more fully, a speaker system) is a combination of one or more speaker drivers, an enclosure, and electrical connections (possibly including a crossover network). The speaker driver is an electroacoustic transducer that converts an electrical audio signal into a corresponding sound.

The driver is a linear motor connected to a diaphragm, which transmits the motor's movement to produce sound by moving air. An audio signal, typically originating from a microphone, recording, or radio broadcast, is electronically amplified to a power level sufficient to drive the motor, reproducing the sound corresponding to the original unamplified signal. This process functions as the inverse of a microphone. In fact, the dynamic speaker driver—the most common type—shares the same basic configuration as a dynamic microphone, which operates in reverse as a generator.

View the full Wikipedia page for Loudspeaker

Audio signal in the context of Spectrogram

A spectrogram is a visual representation of the spectrum of frequencies of a signal as it varies with time. When applied to an audio signal, spectrograms are sometimes called sonographs, voiceprints, or voicegrams. When the data are represented in a 3D plot they may be called waterfall displays.

Spectrograms are used extensively in the fields of music, linguistics, sonar, radar, speech processing, seismology, ornithology, and others. Spectrograms of audio can be used to identify spoken words phonetically, and to analyse the various calls of animals.
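
For a concrete example, the sketch below computes a spectrogram of a frequency sweep with SciPy's scipy.signal.spectrogram; the chirp signal and parameter choices are illustrative assumptions:

    import numpy as np
    from scipy.signal import spectrogram

    fs = 8000
    t = np.arange(2 * fs) / fs                   # two seconds of samples
    # A chirp whose frequency sweeps from 100 Hz up to 2000 Hz over the two seconds.
    x = np.sin(2 * np.pi * (100 * t + 475 * t ** 2))

    f, times, Sxx = spectrogram(x, fs=fs, nperseg=256)
    # Sxx[i, j] is the power near frequency f[i] at time times[j];
    # plotting 10 * log10(Sxx) over (times, f) gives the familiar picture.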

View the full Wikipedia page for Spectrogram

Audio signal in the context of Analog signals

An analog signal (American English) or analogue signal (British and Commonwealth English) is any signal, typically a continuous-time signal, representing some other quantity, i.e., analogous to another quantity. For example, in an analog audio signal, the instantaneous signal voltage varies in a manner analogous to the pressure of the sound waves.

In contrast, a digital signal represents the original time-varying quantity as a sampled sequence of quantized numeric values, typically but not necessarily in the form of a binary value. Digital sampling imposes some bandwidth and dynamic range constraints on the representation and adds quantization noise.
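
One way to see the dynamic range constraint is the standard rule of thumb for an ideal N-bit quantizer driven by a full-scale sine wave, SNR of roughly 6.02*N + 1.76 dB; the snippet below (not from the source) simply evaluates it for common bit depths:

    # Best-case SNR of an ideal N-bit quantizer with a full-scale sine input:
    # SNR ~= 6.02 * N + 1.76 dB (a standard rule of thumb).
    for bits in (8, 16, 24):
        print(bits, "bits ->", round(6.02 * bits + 1.76, 1), "dB")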

View the full Wikipedia page for Analog signals

Audio signal in the context of Videoconferencing

Videotelephony (also known as videoconferencing, video calling, or telepresence) is the use of audio and video for simultaneous two-way communication. Today, videotelephony is widespread, and it goes by many names. Videophones are standalone devices for video calling, but present-day devices such as smartphones and computers are themselves capable of video calling, reducing the demand for separate videophones. Videoconferencing implies group communication. Videoconferencing is used in telepresence, whose goal is to create the illusion that remote participants are in the same room.

The concept of videotelephony was conceived in the late 19th century, and versions were demonstrated to the public starting in the 1930s. In April 1930, reporters gathered at AT&T corporate headquarters on Broadway in New York City for the first public demonstration of two-way video telephony. The event linked the headquarters building with a Bell Laboratories building on West Street. Early demonstrations were installed at booths in post offices and shown at various world expositions. AT&T demonstrated Picturephone at the 1964 World's Fair in New York City. In 1970, AT&T launched Picturephone as the first commercial personal videotelephone system. In addition to videophones, there existed image phones, which exchanged still images between units every few seconds over conventional telephone lines. The development of advanced video codecs, more powerful CPUs, and high-bandwidth Internet service in the late 1990s allowed digital videophones to provide high-quality, low-cost color service between users almost anywhere in the world.

View the full Wikipedia page for Videoconferencing

Audio signal in the context of Synthesizer

A synthesizer (also synthesiser or synth) is an electronic musical instrument that generates audio signals. Synthesizers typically create sounds by generating waveforms through methods including subtractive synthesis, additive synthesis, and frequency modulation synthesis. These sounds may be altered by components such as filters, which cut or boost frequencies; envelopes, which control articulation, or how notes begin and end; and low-frequency oscillators, which modulate parameters such as pitch, volume, or filter characteristics affecting timbre. Synthesizers are typically played with keyboards or controlled by sequencers, software or other instruments, and can be synchronized to other equipment via MIDI.
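
As a rough sketch of the subtractive signal chain described above (oscillator, then filter, then envelope), assuming Python with NumPy; the one-pole filter and envelope shapes are simplifications for illustration, not how any particular synthesizer works:

    import numpy as np

    fs = 44100
    t = np.arange(fs) / fs                        # one second of samples

    # Oscillator: a harmonically rich sawtooth at 110 Hz.
    saw = 2.0 * ((110 * t) % 1.0) - 1.0

    # Filter: a crude one-pole low-pass that cuts the upper frequencies.
    alpha = 0.05
    filtered = np.zeros_like(saw)
    for i in range(1, len(saw)):
        filtered[i] = filtered[i - 1] + alpha * (saw[i] - filtered[i - 1])

    # Envelope: a 10 ms linear attack followed by an exponential decay.
    env = np.minimum(t / 0.01, 1.0) * np.exp(-3.0 * t)
    voice = filtered * env                        # the finished audio signal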

Synthesizer-like instruments emerged in the United States in the mid-20th century with instruments such as the RCA Mark II, which was controlled with punch cards and used hundreds of vacuum tubes. The Moog synthesizer, developed by Robert Moog and first sold in 1964, is credited for pioneering concepts such as voltage-controlled oscillators, envelopes, noise generators, filters, and sequencers. In 1970, the smaller, cheaper Minimoog standardized synthesizers as self-contained instruments with built-in keyboards, unlike the larger modular synthesizers before it.

View the full Wikipedia page for Synthesizer

Audio signal in the context of Modulation

Signal modulation is the process of varying one or more properties of a periodic waveform in electronics and telecommunication for the purpose of transmitting information.

The process encodes information, in the form of the modulation or message signal, onto a carrier signal to be transmitted. For example, the message signal might be an audio signal representing sound from a microphone, a video signal representing moving images from a video camera, or a digital signal representing a sequence of binary digits, i.e., a bitstream from a computer.
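
As one concrete and purely illustrative case, this Python sketch varies the frequency of a carrier in step with a message signal, i.e., frequency modulation; the carrier frequency and deviation constant are arbitrary assumptions:

    import numpy as np

    fs = 48000
    t = np.arange(fs) / fs
    message = np.sin(2 * np.pi * 5 * t)          # a slow test message standing in for audio

    fc = 1000.0                                  # carrier frequency, Hz (arbitrary)
    kf = 200.0                                   # Hz of frequency deviation per unit message
    # Frequency modulation: integrate the instantaneous frequency to get the phase.
    phase = 2 * np.pi * np.cumsum(fc + kf * message) / fs
    fm = np.cos(phase)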

View the full Wikipedia page for Modulation

Audio signal in the context of Electronic musical instrument

An electronic musical instrument or electrophone is a musical instrument that produces sound using electronic circuitry. Such an instrument makes sound by outputting an electrical, electronic, or digital audio signal that is ultimately fed into a power amplifier, which drives a loudspeaker, creating the sound heard by the performer and listener.

An electronic instrument might include a user interface for controlling its sound, often by adjusting the pitch, frequency, or duration of each note. A common user interface is the musical keyboard. On an acoustic piano, each key is mechanically linked to a hammer that strikes the strings; on an electronic keyboard, the keyboard interface is instead linked to a synth module, computer, or other electronic or digital sound generator, which then creates the sound. However, it is increasingly common to separate the user interface and sound-generating functions into a music controller (input device) and a music synthesizer, respectively, with the two devices communicating through a musical performance description language such as MIDI or Open Sound Control. The solid-state nature of electronic keyboards also gives them a different "feel" and "response", offering a novel playing experience relative to a mechanically linked piano keyboard.
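
As a small illustration of the MIDI side of this, the bytes of a note-on/note-off message pair can be built by hand; this is a sketch of the MIDI 1.0 wire format, not of any particular instrument's API:

    # MIDI 1.0 note-on: status byte 0x90 (note-on, channel 1), note number, velocity.
    note_on = bytes([0x90, 60, 100])     # middle C (note 60) at velocity 100
    note_off = bytes([0x80, 60, 0])      # matching note-off releases the key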

View the full Wikipedia page for Electronic musical instrument

Audio signal in the context of Amplitude modulation

Amplitude modulation (AM) is a signal modulation technique used in electronic communication, most commonly for transmitting messages with a radio wave. In amplitude modulation, the instantaneous amplitude of the wave is varied in proportion to that of the message signal, such as an audio signal. This technique contrasts with angle modulation, in which either the frequency of the carrier wave is varied, as in frequency modulation, or its phase, as in phase modulation.
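
To make this concrete, here is a minimal Python sketch (assuming NumPy) of double-sideband AM: the carrier's amplitude tracks the message, and the chosen frequencies and modulation index are illustrative assumptions:

    import numpy as np

    fs = 200000                                  # sample fast enough to represent the carrier
    t = np.arange(fs // 10) / fs                 # 0.1 s of samples
    message = np.sin(2 * np.pi * 440 * t)        # an audio-band message tone

    fc = 20000.0                                 # carrier frequency, Hz (arbitrary)
    m = 0.8                                      # modulation index, kept below 1
    # DSBAM: the carrier's instantaneous amplitude follows (1 + m * message),
    # producing sidebands at fc - 440 Hz and fc + 440 Hz.
    am = (1 + m * message) * np.cos(2 * np.pi * fc * t)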

AM was the earliest modulation method used for transmitting audio in radio broadcasting. It was developed during the first quarter of the 20th century beginning with Roberto Landell de Moura and Reginald Fessenden's radiotelephone experiments in 1900. This original form of AM is sometimes called double-sideband amplitude modulation (DSBAM), because the standard method produces sidebands on either side of the carrier frequency. Single-sideband modulation uses bandpass filters to eliminate one of the sidebands and possibly the carrier signal, which improves the ratio of message power to total transmission power, reduces power handling requirements of line repeaters, and permits better bandwidth utilization of the transmission medium.

View the full Wikipedia page for Amplitude modulation