Waveform in the context of Carrier frequency




⭐ Core Definition: Waveform

In electronics, acoustics, and related fields, the waveform of a signal is the shape of its graph as a function of time, independent of its time and magnitude scales and of any displacement in time. Periodic waveforms repeat regularly at a constant period. The term can also be used for non-periodic or aperiodic signals, like chirps and pulses.

In electronics, the term is usually applied to time-varying voltages, currents, or electromagnetic fields. In acoustics, it is usually applied to steady periodic sounds — variations of pressure in air or other media. In these cases, the waveform is an attribute that is independent of the frequency, amplitude, or phase shift of the signal.
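
To make "waveform as shape" concrete, here is a minimal Python sketch (not from the source; the frequency, amplitude, and sample count are arbitrary). It samples one period of a sine, square, and triangle wave that share the same frequency, amplitude, and phase, so that only the shape of each graph differs.

    import math

    def sample_waveforms(frequency_hz=1.0, amplitude=1.0, samples_per_period=8):
        """Sample one period of three common periodic waveforms.

        The three signals share frequency, amplitude, and phase; they differ
        only in waveform (the shape of the graph over one period).
        """
        period = 1.0 / frequency_hz
        for n in range(samples_per_period):
            t = n * period / samples_per_period
            phase = t / period                      # fraction of the period, 0..1
            sine = amplitude * math.sin(2 * math.pi * phase)
            square = amplitude if phase < 0.5 else -amplitude
            # Triangle: rises from -A to +A over the first half period, then falls back.
            triangle = amplitude * (4 * phase - 1) if phase < 0.5 else amplitude * (3 - 4 * phase)
            print(f"t={t:.3f}s  sine={sine:+.2f}  square={square:+.2f}  triangle={triangle:+.2f}")

    sample_waveforms()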


In this Dossier

Waveform in the context of Alternating current

Alternating current (AC) is an electric current that periodically reverses direction and changes its magnitude continuously with time, in contrast to direct current (DC), which flows only in one direction. Alternating current is the form in which electric power is delivered to businesses and residences, and it is the form of electrical energy that consumers typically use when they plug kitchen appliances, televisions, fans and electric lamps into a wall socket. The abbreviations AC and DC are often used to mean simply alternating and direct, respectively, as when they modify current or voltage.

The usual waveform of alternating current in most electric power circuits is a sine wave, whose positive half-period corresponds with the positive direction of the current and vice versa (the full period is called a cycle). "Alternating current" most commonly refers to power distribution, but a wide range of other applications are technically alternating current, even though it is less common to describe them by that term. In many applications, such as guitar amplifiers, different waveforms are used, including triangular waves and square waves. Audio and radio signals carried on electrical wires are also examples of alternating current; they carry information such as sound (audio) or images (video), sometimes by modulation of an AC carrier signal, and they typically alternate at higher frequencies than those used in power transmission.
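
For illustration only (the frequency and voltage values below are assumed, not taken from the text), this sketch samples a 50 Hz mains-style sine voltage over one cycle and labels each sample as belonging to the positive or negative half-period.

    import math

    MAINS_FREQUENCY_HZ = 50.0          # assumed European-style mains frequency
    PEAK_VOLTAGE = 325.0               # roughly the peak of a 230 V RMS supply

    period = 1.0 / MAINS_FREQUENCY_HZ  # one full cycle: 20 ms
    for n in range(10):
        t = n * period / 10
        v = PEAK_VOLTAGE * math.sin(2 * math.pi * MAINS_FREQUENCY_HZ * t)
        half = "positive half-period" if v >= 0 else "negative half-period"
        print(f"t={t*1000:5.1f} ms  v={v:+8.1f} V  ({half})")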

View the full Wikipedia page for Alternating current

Waveform in the context of Phonograph

A phonograph, later called a gramophone, and since the 1940s a record player, or more recently a turntable, is a device for the mechanical and analogue reproduction of sound.

The sound vibration waveforms are recorded as corresponding physical deviations of a helical or spiral groove engraved, etched, incised, or impressed into the surface of a rotating cylinder or disc, called a record. To recreate the sound, the surface is similarly rotated while a playback stylus traces the groove and is therefore vibrated by it, faintly reproducing the recorded sound. In early acoustic phonographs, the stylus vibrated a diaphragm that produced sound waves coupled to the open air through a flaring horn, or directly to the listener's ears through stethoscope-type earphones.

View the full Wikipedia page for Phonograph

Waveform in the context of Cathode-ray tube

A cathode-ray tube (CRT) is a vacuum tube containing one or more electron guns, which emit electron beams that are manipulated to display images on a phosphorescent screen. The images may represent electrical waveforms on an oscilloscope, a frame of video on an analog television set (TV), digital raster graphics on a computer monitor, or other phenomena like radar targets. A CRT in a TV is commonly called a picture tube. CRTs have also been used as memory devices, in which case the screen is not intended to be visible to an observer. The term cathode ray was used to describe electron beams when they were first discovered, before it was understood that what was emitted from the cathode was a beam of electrons.

In CRT TVs and computer monitors, the entire front area of the tube is scanned repeatedly and systematically in a fixed pattern called a raster. In color devices, an image is produced by controlling the intensity of each of three electron beams, one for each additive primary color (red, green, and blue) with a video signal as a reference. In modern CRT monitors and TVs the beams are bent by magnetic deflection, using a deflection yoke. Electrostatic deflection is commonly used in oscilloscopes.

View the full Wikipedia page for Cathode-ray tube

Waveform in the context of Signal processing

Signal processing is an electrical engineering subfield that focuses on analyzing, modifying and synthesizing signals, such as sound, images, potential fields, seismic signals, altimetry processing, and scientific measurements. Signal processing techniques are used to optimize transmissions, improve digital storage efficiency, correct distorted signals, improve subjective video quality, and detect or pinpoint components of interest in a measured signal.
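
As a minimal, generic example of the kind of processing described above (not an algorithm named in the source; the noisy data are invented), the sketch below applies a simple moving-average filter to suppress noise on a sampled waveform.

    def moving_average(samples, window=5):
        """Smooth a list of samples with a boxcar (moving-average) filter.

        Each output value is the mean of the current sample and the previous
        window-1 samples; the first few outputs use a shorter window.
        """
        out = []
        for i in range(len(samples)):
            start = max(0, i - window + 1)
            chunk = samples[start:i + 1]
            out.append(sum(chunk) / len(chunk))
        return out

    # Example: a noisy ramp (hypothetical data) becomes visibly smoother.
    noisy = [0.0, 1.3, 1.8, 3.4, 3.9, 5.6, 5.8, 7.2, 8.1, 9.0]
    print(moving_average(noisy))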

View the full Wikipedia page for Signal processing

Waveform in the context of Wave

In physics, mathematics, engineering, and related fields, a wave is a propagating dynamic disturbance (change from equilibrium) of one or more quantities. Periodic waves oscillate repeatedly about an equilibrium (resting) value at some frequency. When the entire waveform moves in one direction, it is said to be a travelling wave; by contrast, a pair of superimposed periodic waves traveling in opposite directions makes a standing wave. In a standing wave, the amplitude of vibration has nulls at some positions where the wave amplitude appears smaller or even zero.
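
A short numerical sketch of the standing-wave statement above (the wavelength, period, and amplitude are illustrative values, not from the source): superimposing two equal sine waves travelling in opposite directions gives a standing wave, whose nulls sit at fixed positions where sin(kx) = 0.

    import math

    A = 1.0                    # amplitude of each travelling wave
    k = 2 * math.pi            # wavenumber for a 1 m wavelength
    omega = 2 * math.pi        # angular frequency for a 1 s period

    def standing_wave(x, t):
        # Sum of two travelling waves moving in opposite directions.
        return A * math.sin(k * x - omega * t) + A * math.sin(k * x + omega * t)

    # The node at x = 0.5 m (where sin(kx) = 0) stays at zero for all times,
    # while the antinode at x = 0.25 m oscillates with amplitude 2A.
    for t in (0.0, 0.1, 0.2, 0.3):
        print(f"t={t:.1f}s  node y(0.5, t)={standing_wave(0.5, t):+.3f}  "
              f"antinode y(0.25, t)={standing_wave(0.25, t):+.3f}")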

There are two types of waves that are most commonly studied in classical physics: mechanical waves and electromagnetic waves. In a mechanical wave, stress and strain fields oscillate about a mechanical equilibrium. A mechanical wave is a local deformation (strain) in some physical medium that propagates from particle to particle by creating local stresses that cause strain in neighboring particles too. For example, sound waves are variations of the local pressure and particle motion that propagate through the medium. Other examples of mechanical waves are seismic waves, gravity waves, surface waves and string vibrations. In an electromagnetic wave (such as light), coupling between the electric and magnetic fields sustains propagation of waves involving these fields according to Maxwell's equations. Electromagnetic waves can travel through a vacuum and through some dielectric media (at wavelengths where they are considered transparent). Electromagnetic waves, as determined by their frequencies (or wavelengths), have more specific designations including radio waves, infrared radiation, terahertz waves, visible light, ultraviolet radiation, X-rays and gamma rays.

View the full Wikipedia page for Wave

Waveform in the context of Distortion

In signal processing, distortion is the alteration of the original shape (or other characteristic) of a signal. In communications and electronics it means the alteration of the waveform of an information-bearing signal, such as an audio signal representing sound or a video signal representing images, in an electronic device or communication channel.

Distortion is usually unwanted, and so engineers strive to eliminate or minimize it. In some situations, however, distortion may be desirable. For example, in noise reduction systems like the Dolby system, an audio signal is deliberately distorted in ways that emphasize aspects of the signal that are subject to electrical noise, then it is symmetrically "undistorted" after passing through a noisy communication channel, reducing the noise in the received signal. Distortion is also used as a musical effect, particularly with electric guitars.
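
As a toy illustration of deliberately distorting a waveform, in the spirit of a guitar distortion effect (the threshold and signal are assumed, not from the source), the sketch below hard-clips a sine wave so that its peaks are flattened.

    import math

    def hard_clip(sample, threshold=0.4):
        """Clamp a sample to [-threshold, +threshold], flattening the peaks."""
        return max(-threshold, min(threshold, sample))

    # One cycle of a sine wave before and after clipping.
    for n in range(8):
        x = math.sin(2 * math.pi * n / 8)
        print(f"clean={x:+.2f}  clipped={hard_clip(x):+.2f}")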

View the full Wikipedia page for Distortion

Waveform in the context of Musical tone

Traditionally in Western music, a musical tone is a steady periodic sound. A musical tone is characterized by its duration, pitch, intensity (or loudness), and timbre (or quality). The notes used in music can be more complex than musical tones, as they may include aperiodic aspects, such as attack transients, vibrato, and envelope modulation.

A simple tone, or pure tone, has a sinusoidal waveform. A complex tone is a combination of two or more pure tones that together form a periodic pattern of repetition.
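
A minimal sketch of the distinction (the fundamental frequency and harmonic amplitudes are made up for illustration): a pure tone is a single sinusoid, while the complex tone here is built by summing a fundamental and two harmonics, and it still repeats with the fundamental period.

    import math

    FUNDAMENTAL_HZ = 220.0                       # assumed fundamental frequency
    HARMONICS = [(1, 1.0), (2, 0.5), (3, 0.25)]  # (harmonic number, relative amplitude)

    def pure_tone(t):
        return math.sin(2 * math.pi * FUNDAMENTAL_HZ * t)

    def complex_tone(t):
        # Sum of harmonically related pure tones; the result repeats every 1/220 s.
        return sum(a * math.sin(2 * math.pi * n * FUNDAMENTAL_HZ * t) for n, a in HARMONICS)

    period = 1.0 / FUNDAMENTAL_HZ
    for t in (0.0, 0.25 * period, 0.5 * period, period):
        print(f"t={t*1000:.3f} ms  pure={pure_tone(t):+.3f}  complex={complex_tone(t):+.3f}")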

View the full Wikipedia page for Musical tone

Waveform in the context of Data communication

Data communication is the transfer of data over a point-to-point or point-to-multipoint communication channel. It comprises data transmission and data reception and can be classified as either analog or digital communication.

Analog communication conveys voice, data, image, signal or video information using a continuous signal that varies in amplitude, phase, or some other property. In digital communication, messages are represented either by a sequence of pulses by means of a line code (baseband transmission) or by a limited set of continuously varying waveforms (passband transmission), using a digital modulation method. Passband modulation and demodulation are carried out by modem equipment.
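
A hedged sketch of the two ideas just mentioned, with made-up parameters: the bits are mapped to a baseband pulse sequence by a simple NRZ line code, and to a passband waveform by binary phase-shift keying of a carrier.

    import math

    BITS = [1, 0, 1, 1, 0]          # example bitstream
    CARRIER_HZ = 1000.0             # assumed carrier frequency for the passband case
    SAMPLES_PER_BIT = 4
    BIT_DURATION = 0.001            # 1 ms per bit (assumed)

    # Baseband line code (NRZ): bit 1 -> +1 V level, bit 0 -> -1 V level.
    nrz = [1.0 if b else -1.0 for b in BITS]

    # Passband modulation (BPSK): bit 0 leaves the carrier phase alone,
    # bit 1 shifts it by pi radians.
    bpsk = []
    for i, b in enumerate(BITS):
        for n in range(SAMPLES_PER_BIT):
            t = (i + n / SAMPLES_PER_BIT) * BIT_DURATION
            phase = math.pi if b else 0.0
            bpsk.append(math.cos(2 * math.pi * CARRIER_HZ * t + phase))

    print("NRZ levels:", nrz)
    print("First BPSK samples:", [round(s, 2) for s in bpsk[:8]])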

View the full Wikipedia page for Data communication

Waveform in the context of Acoustic phonetics

Acoustic phonetics is a subfield of phonetics, which deals with acoustic aspects of speech sounds. Acoustic phonetics investigates features of waveforms as they pertain to the time domain (e.g. duration, amplitude, fundamental frequency), frequency domain (e.g. frequency spectrum), or combined spectrotemporal domains. Acoustic phonetics is also concerned with how these properties relate to other branches of phonetics (e.g. articulatory or auditory phonetics), as well as abstract linguistic concepts such as phonemes, phrases, or utterances.

The study of acoustic phonetics was greatly enhanced in the late 19th century by the invention of the Edison phonograph. The phonograph allowed the speech signal to be recorded and then later processed and analyzed. By replaying the same speech signal from the phonograph several times, filtering it each time with a different band-pass filter, a spectrogram of the speech utterance could be built up. A series of papers by Ludimar Hermann published in Pflügers Archiv in the last two decades of the 19th century investigated the spectral properties of vowels and consonants using the Edison phonograph, and it was in these papers that the term formant was first introduced. Hermann also played back vowel recordings made with the Edison phonograph at different speeds to distinguish between Willis' and Wheatstone's theories of vowel production.
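
As a rough, purely illustrative sketch of the time-domain measurements listed above (the test signal, sampling rate, and fundamental frequency are invented), the code below estimates the duration, peak amplitude, and fundamental frequency of a synthetic tone from its samples.

    import math

    SAMPLE_RATE = 8000                       # assumed sampling rate in Hz
    F0 = 120.0                               # assumed fundamental frequency of the test tone
    samples = [math.sin(2 * math.pi * F0 * n / SAMPLE_RATE) for n in range(1600)]

    duration = len(samples) / SAMPLE_RATE            # seconds
    peak_amplitude = max(abs(s) for s in samples)

    # Crude F0 estimate: count positive-going zero crossings per second.
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if a < 0 <= b
    )
    estimated_f0 = crossings / duration

    print(f"duration={duration:.2f} s  peak={peak_amplitude:.2f}  F0~{estimated_f0:.0f} Hz")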

View the full Wikipedia page for Acoustic phonetics

Waveform in the context of GW150914

The first direct observation of gravitational waves was made on 14 September 2015 and was announced by the LIGO and Virgo collaborations on 11 February 2016. Previously, gravitational waves had been inferred only indirectly, via their effect on the timing of pulsars in binary star systems. The waveform, detected by both LIGO observatories, matched the predictions of general relativity for a gravitational wave emanating from the inward spiral and merger of two black holes (of 36 M☉ and 29 M☉) and the subsequent ringdown of a single, 62 M☉ black hole remnant. The signal was named GW150914 (from gravitational wave and the date of observation 2015-09-14). It was also the first observation of a binary black hole merger, demonstrating both the existence of binary stellar-mass black hole systems and the fact that such mergers could occur within the current age of the universe.

This first direct observation was reported around the world as a remarkable accomplishment for many reasons. Efforts to directly prove the existence of such waves had been ongoing for over fifty years, and the waves are so minuscule that Albert Einstein himself doubted that they could ever be detected. The waves given off by the cataclysmic merger of GW150914 reached Earth as a ripple in spacetime that changed the length of a 1,120 km LIGO effective span by a thousandth of the width of a proton, proportionally equivalent to changing the distance to the nearest star outside the Solar System by one hair's width. The energy released by the binary as it spiralled together and merged was immense, with the energy of 3.0 (+0.5/−0.5) M☉c² (5.3 (+0.9/−0.8) ×10⁴⁷ joules, or 5300 (+900/−800) foes) in total radiated as gravitational waves, reaching a peak emission rate in its final few milliseconds of about 3.6 (+0.5/−0.4) ×10⁴⁹ watts – a level greater than the combined power of all light radiated by all the stars in the observable universe.
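
A back-of-the-envelope check of the quoted energy figure (constants rounded; this is only an order-of-magnitude sketch, not part of the source): converting 3.0 solar masses to energy via E = mc² gives roughly 5×10⁴⁷ joules, or a few thousand foes.

    SOLAR_MASS_KG = 1.989e30      # mass of the Sun
    SPEED_OF_LIGHT = 2.998e8      # m/s
    FOE_IN_JOULES = 1e44          # 1 foe = 10^44 J

    radiated_mass = 3.0 * SOLAR_MASS_KG                  # ~3 solar masses radiated
    energy_joules = radiated_mass * SPEED_OF_LIGHT ** 2  # E = m c^2
    print(f"{energy_joules:.2e} J  ~= {energy_joules / FOE_IN_JOULES:.0f} foes")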

View the full Wikipedia page for GW150914

Waveform in the context of Synthesizer

A synthesizer (also synthesiser or synth) is an electronic musical instrument that generates audio signals. Synthesizers typically create sounds by generating waveforms through methods including subtractive synthesis, additive synthesis, and frequency modulation synthesis. These sounds may be altered by components such as filters, which cut or boost frequencies; envelopes, which control articulation, or how notes begin and end; and low-frequency oscillators, which modulate parameters such as pitch, volume, or filter characteristics affecting timbre. Synthesizers are typically played with keyboards or controlled by sequencers, software or other instruments, and can be synchronized to other equipment via MIDI.
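
A very small subtractive-synthesis sketch (all parameters are invented for illustration, not taken from the source): a bright sawtooth oscillator is passed through a one-pole low-pass filter and shaped by a linear decay envelope.

    SAMPLE_RATE = 8000          # assumed sample rate
    FREQ_HZ = 110.0             # oscillator pitch (assumed)
    CUTOFF = 0.1                # one-pole low-pass coefficient, 0..1 (smaller = darker)
    DURATION_S = 0.05

    num_samples = int(SAMPLE_RATE * DURATION_S)
    output = []
    filtered = 0.0
    for n in range(num_samples):
        t = n / SAMPLE_RATE
        # Sawtooth oscillator: ramps from -1 to +1 once per period.
        phase = (t * FREQ_HZ) % 1.0
        saw = 2.0 * phase - 1.0
        # One-pole low-pass filter cuts the harsh upper harmonics.
        filtered += CUTOFF * (saw - filtered)
        # Linear decay envelope controls how the note dies away.
        envelope = 1.0 - n / num_samples
        output.append(filtered * envelope)

    print("first samples:", [round(s, 3) for s in output[:6]])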

Synthesizer-like instruments emerged in the United States in the mid-20th century with instruments such as the RCA Mark II, which was controlled with punch cards and used hundreds of vacuum tubes. The Moog synthesizer, developed by Robert Moog and first sold in 1964, is credited with pioneering concepts such as voltage-controlled oscillators, envelopes, noise generators, filters, and sequencers. In 1970, the smaller, cheaper Minimoog standardized synthesizers as self-contained instruments with built-in keyboards, unlike the larger modular synthesizers before it.

View the full Wikipedia page for Synthesizer

Waveform in the context of Modulation

Signal modulation is the process of varying one or more properties of a periodic waveform in electronics and telecommunication for the purpose of transmitting information.

The process encodes information, in the form of a modulation or message signal, onto a carrier signal to be transmitted. For example, the message signal might be an audio signal representing sound from a microphone, a video signal representing moving images from a video camera, or a digital signal representing a sequence of binary digits (a bitstream) from a computer.
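
A minimal sketch of the idea (the carrier and message frequencies are assumed, not from the source): in amplitude modulation, the amplitude of a high-frequency carrier is varied in step with a low-frequency message signal, which is one classic way of modulating a periodic waveform.

    import math

    CARRIER_HZ = 10000.0     # assumed carrier frequency
    MESSAGE_HZ = 100.0       # assumed message (audio-rate) frequency
    MOD_INDEX = 0.5          # modulation depth

    def am_sample(t):
        """Amplitude modulation: the carrier's envelope follows the message."""
        message = math.sin(2 * math.pi * MESSAGE_HZ * t)
        carrier = math.cos(2 * math.pi * CARRIER_HZ * t)
        return (1.0 + MOD_INDEX * message) * carrier

    for n in range(5):
        t = n / 50000.0                       # sample at 50 kHz
        print(f"t={t*1000:.2f} ms  s(t)={am_sample(t):+.3f}")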

View the full Wikipedia page for Modulation

Waveform in the context of Sine wave

A sine wave, sinusoidal wave, or sinusoid (symbol: ∿) is a periodic wave whose waveform (shape) is the trigonometric sine function. In mechanics, as a linear motion over time, this is simple harmonic motion; as rotation, it corresponds to uniform circular motion. Sine waves occur often in physics, including wind waves, sound waves, and light waves, such as monochromatic radiation. In engineering, signal processing, and mathematics, Fourier analysis decomposes general functions into a sum of sine waves of various frequencies, relative phases, and magnitudes.

When any two sine waves of the same frequency (but arbitrary phase) are linearly combined, the result is another sine wave of the same frequency; this property is unique among periodic waves. Conversely, if some phase is chosen as a zero reference, a sine wave of arbitrary phase can be written as the linear combination of two sine waves with phases of zero and a quarter cycle, the sine and cosine components, respectively.
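
A quick numerical check of this closure property (the coefficients and frequency below are chosen arbitrarily): the sum a·sin(ωt) + b·cos(ωt) equals a single sine wave of the same frequency with amplitude √(a² + b²) and phase atan2(b, a).

    import math

    a, b = 1.5, -0.8                 # arbitrary sine and cosine coefficients
    omega = 2 * math.pi * 3.0        # arbitrary angular frequency (3 Hz)

    amplitude = math.hypot(a, b)     # sqrt(a^2 + b^2)
    phase = math.atan2(b, a)

    for t in (0.0, 0.07, 0.21, 0.33):
        combined = a * math.sin(omega * t) + b * math.cos(omega * t)
        single = amplitude * math.sin(omega * t + phase)
        print(f"t={t:.2f}  sum={combined:+.6f}  single sinusoid={single:+.6f}")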

View the full Wikipedia page for Sine wave

Waveform in the context of Oscilloscope

An oscilloscope (formerly known as an oscillograph, informally scope or O-scope) is a type of electronic test instrument that graphically displays varying voltages of one or more signals as a function of time. Its main purpose is to capture information on electrical signals for debugging, analysis, or characterization. The displayed waveform can then be analyzed for properties such as amplitude, frequency, rise time, time interval, distortion, and others. Originally, calculation of these values required manually measuring the waveform against the scales built into the screen of the instrument. Modern digital instruments may calculate and display these properties directly.
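
For illustration only (the captured waveform and all numbers are synthetic assumptions), the sketch below performs two of the measurements a digital oscilloscope automates: peak-to-peak amplitude read off the trace, and frequency estimated from the spacing of positive-going zero crossings.

    import math

    SAMPLE_RATE = 100000.0       # assumed sample rate of the capture, in Hz
    SIGNAL_HZ = 1250.0           # assumed frequency of the test signal
    trace = [2.5 * math.sin(2 * math.pi * SIGNAL_HZ * n / SAMPLE_RATE) for n in range(400)]

    # Peak-to-peak amplitude, as read off the vertical scale.
    peak_to_peak = max(trace) - min(trace)

    # Frequency estimate: average spacing of positive-going zero crossings.
    crossings = [i for i in range(1, len(trace)) if trace[i - 1] < 0 <= trace[i]]
    spans = [b - a for a, b in zip(crossings, crossings[1:])]
    frequency = SAMPLE_RATE / (sum(spans) / len(spans))

    print(f"peak-to-peak = {peak_to_peak:.2f} V, frequency ~ {frequency:.0f} Hz")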

Oscilloscopes are used in the sciences, engineering, and the biomedical, automotive, and telecommunications industries. General-purpose instruments are used for maintenance of electronic equipment and laboratory work. Special-purpose oscilloscopes may be used to analyze an automotive ignition system or to display the waveform of the heartbeat as an electrocardiogram, for instance.

View the full Wikipedia page for Oscilloscope

Waveform in the context of Phonautograph

The phonautograph is the earliest known device for recording sound. Previously, tracings had been obtained of the sound-producing vibratory motions of tuning forks and other objects by physical contact with them, but not of actual sound waves as they propagated through air or other media. Invented by the Frenchman Édouard-Léon Scott de Martinville, it was patented on March 25, 1857. It transcribed sound waves as undulations or other deviations in a line traced on smoke-blackened paper or glass. Scott believed that future technology would allow the traces to be deciphered as a kind of "natural stenography". Intended as a laboratory instrument for the study of acoustics, it was used to visually study and measure the amplitude envelopes and waveforms of speech and other sounds, or to determine the frequency of a given musical pitch by comparison with a simultaneously recorded reference frequency.

It did not occur to anyone before the 1870s that the recordings, called phonautograms, contained enough information about the sound that they could, in theory, be used to recreate it. Because the phonautogram tracing was an insubstantial two-dimensional line, direct physical playback was impossible in any case. However, several phonautograms recorded before 1861 were successfully converted and played as sound in 2008 by optically scanning them and using a computer to process the scans into digital audio files.

View the full Wikipedia page for Phonautograph

Waveform in the context of Zero crossing

A zero-crossing is a point where the sign of a mathematical function changes (e.g. from positive to negative), represented by an intercept of the axis (zero value) in the graph of the function. It is a commonly used term in electronics, mathematics, acoustics, and image processing.
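
A minimal sketch of detecting zero-crossings in a sampled signal (the sample values below are invented): a crossing is reported wherever the sign changes between consecutive samples.

    def zero_crossings(samples):
        """Return the indices where the sign of the signal changes."""
        crossings = []
        for i in range(1, len(samples)):
            if samples[i - 1] * samples[i] < 0:   # strict sign change between neighbours
                crossings.append(i)
        return crossings

    print(zero_crossings([3.0, 1.2, -0.4, -2.0, -0.1, 0.8, 2.2, -1.5]))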

View the full Wikipedia page for Zero crossing