Autocorrelation in the context of Cross-correlation


⭐ Core Definition: Autocorrelation

Autocorrelation, sometimes known as serial correlation in the discrete-time case, measures the correlation of a signal with a delayed copy of itself. Essentially, it quantifies the similarity between observations of a random variable at different points in time. Autocorrelation analysis is a mathematical tool for identifying repeating patterns or hidden periodicities within a signal obscured by noise. It is widely used in signal processing and time series analysis to understand the behavior of data over time.
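As a concrete illustration of the idea above, here is a minimal sketch (in Python, with illustrative helper names, not from any particular library) of the sample autocorrelation at a given lag. Applied to a sine wave, it recovers the hidden period as a strong secondary peak:

```python
import math

def autocorrelation(x, lag):
    """Sample autocorrelation of x at the given lag:
    mean-removed products, normalized by the lag-0 sum of squares."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x)
    ck = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(n - lag))
    return ck / c0

# A pure sine wave repeats every `period` samples, so its autocorrelation
# rises again at lags near that period, revealing the periodicity.
period = 20
signal = [math.sin(2 * math.pi * t / period) for t in range(200)]

print(round(autocorrelation(signal, 0), 3))            # exactly 1 at lag 0
print(round(autocorrelation(signal, period), 3))       # strongly positive: the hidden period
print(round(autocorrelation(signal, period // 2), 3))  # strongly negative at half the period
```

Adding noise to the signal lowers the off-zero peaks but leaves them detectable, which is the sense in which autocorrelation finds periodicities "obscured by noise".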

Different fields of study define autocorrelation differently, and not all of these definitions are equivalent. In some fields, the term is used interchangeably with autocovariance.

Autocorrelation in the context of Cross-correlation

In signal processing, cross-correlation is a measure of similarity of two series as a function of the displacement of one relative to the other. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomography, averaging, cryptanalysis, and neurophysiology. The cross-correlation is similar in nature to the convolution of two functions. In an autocorrelation, which is the cross-correlation of a signal with itself, there will always be a peak at a lag of zero, and its size will be the signal energy.
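The "sliding dot product" view can be sketched directly; the function names and toy signal below are illustrative, not from any particular library. The example locates a known feature inside a longer signal, then checks the claim that the zero-lag autocorrelation equals the signal energy:

```python
def cross_correlate(x, y):
    """Sliding dot product of x with a shorter template y;
    returns one similarity score per offset of y within x."""
    n, m = len(x), len(y)
    return [sum(x[i + j] * y[j] for j in range(m)) for i in range(n - m + 1)]

# Hide a known feature inside a longer signal, then locate it by the peak score.
feature = [1.0, -2.0, 3.0, -1.0]
signal = [0.1, -0.2, 0.05] + feature + [0.0, 0.3, -0.1, 0.2]
scores = cross_correlate(signal, feature)
print(scores.index(max(scores)))  # offset where the feature was embedded -> 3

# Autocorrelation as cross-correlation of a signal with itself:
# the lag-0 value equals the signal energy, sum(v**2).
energy = sum(v * v for v in feature)
assert cross_correlate(feature, feature)[0] == energy
```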

In probability and statistics, the term cross-correlation refers to the correlations between the entries of two random vectors X and Y, while the correlations of a random vector X are the correlations between the entries of X itself, those forming the correlation matrix of X. If each of X and Y is a scalar random variable which is realized repeatedly in a time series, then the correlations of the various temporal instances of X are known as autocorrelations of X, and the cross-correlations of X with Y across time are temporal cross-correlations. In probability and statistics, the definition of correlation always includes a standardising factor in such a way that correlations have values between −1 and +1.
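The standardising factor is what pins correlations to the [−1, +1] range: the covariance is divided by the product of both standard deviations. A minimal sketch, assuming the usual Pearson definition:

```python
import math

def pearson(x, y):
    """Correlation with the standardising factor:
    covariance divided by the product of both standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

x = [1.0, 2.0, 3.0, 4.0]
print(pearson(x, [2.0, 4.0, 6.0, 8.0]))  # perfectly linear: ~ +1.0
print(pearson(x, [8.0, 6.0, 4.0, 2.0]))  # perfectly anti-linear: ~ -1.0
```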

In this Dossier

Autocorrelation in the context of Geary's C

Geary's C is a measure of spatial autocorrelation that attempts to determine if observations of the same variable are spatially autocorrelated globally (rather than at the neighborhood level). Spatial autocorrelation is more complex than autocorrelation because the correlation is multi-dimensional and bi-directional.
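A rough sketch of the global statistic described above, assuming the standard formula C = (N−1) Σᵢⱼ wᵢⱼ (xᵢ − xⱼ)² / (2W Σᵢ (xᵢ − x̄)²), where W is the sum of all spatial weights. The four-cell grid is a hypothetical toy example:

```python
def gearys_c(values, weights):
    """Geary's C for values x_i and a spatial weight matrix w_ij (with w_ii = 0).
    C near 1: no spatial autocorrelation; C < 1: positive; C > 1: negative."""
    n = len(values)
    mean = sum(values) / n
    w_total = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * (values[i] - values[j]) ** 2
              for i in range(n) for j in range(n))
    den = sum((v - mean) ** 2 for v in values)
    return (n - 1) * num / (2 * w_total * den)

# Four cells on a line, each weighted 1 to its immediate neighbours.
weights = [[0, 1, 0, 0],
           [1, 0, 1, 0],
           [0, 1, 0, 1],
           [0, 0, 1, 0]]
smooth = [1.0, 2.0, 3.0, 4.0]  # similar neighbours -> C below 1
rough = [1.0, 4.0, 1.0, 4.0]   # alternating neighbours -> C above 1
print(gearys_c(smooth, weights))
print(gearys_c(rough, weights))
```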

View the full Wikipedia page for Geary's C

Autocorrelation in the context of Convolution

In mathematics (in particular, functional analysis), convolution is a mathematical operation on two functions f and g that produces a third function f∗g, defined as the integral of the product of the two functions after one is reflected about the y-axis and shifted. The term convolution refers both to the resulting function and to the process of computing it. The integral is evaluated for all values of the shift, producing the convolution function. The choice of which function is reflected and shifted before the integral does not change the result (see commutativity). Graphically, convolution expresses how the "shape" of one function is modified by the other.

Some features of convolution are similar to cross-correlation: for real-valued functions of a continuous or discrete variable, convolution differs from cross-correlation only in that either f(t) or g(t) is reflected about the y-axis in convolution; thus it is a cross-correlation of g(−t) and f(t), or of f(−t) and g(t). For complex-valued functions, the cross-correlation operator is the adjoint of the convolution operator.
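For discrete real-valued sequences the relationship can be checked directly. This sketch (illustrative helper names, no external libraries) defines cross-correlation as convolution with the reversed (reflected) second sequence, which is one standard discrete reading of the statement above:

```python
def convolve(f, g):
    """Full discrete convolution: (f * g)[n] = sum over k of f[k] * g[n - k]."""
    out = [0.0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def correlate(f, g):
    """Full discrete cross-correlation of f with g: for real sequences,
    the same as convolving f with g reflected (reversed)."""
    return convolve(f, list(reversed(g)))

f = [1.0, 2.0, 3.0]
g = [0.0, 1.0, 0.5]
# With an asymmetric g, the reflection matters and the two results differ.
print(convolve(f, g))
print(correlate(f, g))
```

Note that convolution is commutative (`convolve(f, g) == convolve(g, f)`), while cross-correlation in general is not, which is one practical consequence of the reflection.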

View the full Wikipedia page for Convolution

Autocorrelation in the context of Correlogram

In the analysis of data, a correlogram is a chart of correlation statistics. For example, in time series analysis, a plot of the sample autocorrelations r_h versus h (the time lags) is an autocorrelogram. If cross-correlation is plotted, the result is called a cross-correlogram.

The correlogram is a commonly used tool for checking randomness in a data set. If random, autocorrelations should be near zero for any and all time-lag separations. If non-random, then one or more of the autocorrelations will be significantly non-zero.
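The randomness check can be sketched numerically. The ±1.96/√n band used here is the conventional approximate 95% limit for white noise; the seeded data and a deliberately trending series are purely illustrative:

```python
import random

def sample_autocorrelations(x, max_lag):
    """Sample autocorrelations r_h for lags h = 1..max_lag
    (mean-removed, normalized by the lag-0 sum of squares)."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x)
    return [sum((x[t] - mean) * (x[t + h] - mean) for t in range(n - h)) / c0
            for h in range(1, max_lag + 1)]

n = 500
bound = 1.96 / n ** 0.5  # approximate 95% band for white noise

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(n)]
trend = [t / n for t in range(n)]  # clearly non-random: a rising line

noise_outside = sum(1 for r in sample_autocorrelations(noise, 20) if abs(r) > bound)
trend_outside = sum(1 for r in sample_autocorrelations(trend, 20) if abs(r) > bound)
print(noise_outside, trend_outside)  # few (if any) lags stray for noise; all 20 do for the trend
```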

View the full Wikipedia page for Correlogram

Autocorrelation in the context of Hurst exponent

The Hurst exponent is used as a measure of long-term memory of time series. It relates to the autocorrelations of the time series, and the rate at which these decrease as the lag between pairs of values increases. Studies involving the Hurst exponent were originally developed in hydrology for the practical matter of determining optimum dam sizing for the Nile river's volatile rain and drought conditions that had been observed over a long period of time. The name "Hurst exponent", or "Hurst coefficient", derives from Harold Edwin Hurst (1880–1978), who was the lead researcher in these studies; the use of the standard notation H for the coefficient also relates to his name.

In fractal geometry, the generalized Hurst exponent has been denoted by H or Hq in honor of both Harold Edwin Hurst and Ludwig Otto Hölder (1859–1937) by Benoît Mandelbrot (1924–2010). H is directly related to fractal dimension, D, and is a measure of a data series' "mild" or "wild" randomness.
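One common way to estimate H is classical rescaled-range (R/S) analysis: compute the range of cumulative mean-adjusted sums over the standard deviation at several window sizes, then regress log(R/S) on the log of the window size. The sketch below assumes that method (it is not the only estimator, and R/S is known to be biased slightly upward at small sample sizes); for memoryless white noise the slope should come out near 0.5:

```python
import math
import random

def rescaled_range(x):
    """R/S statistic: range of cumulative mean-adjusted sums over the std deviation."""
    n = len(x)
    mean = sum(x) / n
    devs = [v - mean for v in x]
    z, cum = [], 0.0
    for d in devs:
        cum += d
        z.append(cum)
    r = max(z) - min(z)
    s = math.sqrt(sum(d * d for d in devs) / n)
    return r / s

def hurst_estimate(x, sizes):
    """Least-squares slope of log(mean R/S) against log(window size)."""
    logs = []
    for m in sizes:
        rs = [rescaled_range(x[i:i + m]) for i in range(0, len(x) - m + 1, m)]
        logs.append((math.log(m), math.log(sum(rs) / len(rs))))
    k = len(logs)
    mx = sum(a for a, _ in logs) / k
    my = sum(b for _, b in logs) / k
    return sum((a - mx) * (b - my) for a, b in logs) / sum((a - mx) ** 2 for a, _ in logs)

random.seed(1)
white = [random.gauss(0, 1) for _ in range(4096)]
h = hurst_estimate(white, [16, 32, 64, 128, 256])
print(round(h, 2))  # near 0.5 for white noise (expect a small upward bias)
```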

View the full Wikipedia page for Hurst exponent

Autocorrelation in the context of Optical autocorrelation

In optics, various autocorrelation functions can be experimentally realized. The field autocorrelation may be used to calculate the spectrum of a source of light, while the intensity autocorrelation and the interferometric autocorrelation are commonly used to estimate the duration of ultrashort pulses produced by modelocked lasers. The laser pulse duration cannot easily be measured by optoelectronic methods, since the response time of photodiodes and oscilloscopes is at best of the order of 200 femtoseconds, yet laser pulses can be made as short as a few femtoseconds.
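Writing E(t) for the complex electric field and I(t) for the intensity (a standard convention, not stated above), the two autocorrelations mentioned here are usually defined as:

```latex
A_{\mathrm{field}}(\tau) = \int_{-\infty}^{+\infty} E(t)\, E^{*}(t-\tau)\, \mathrm{d}t
\qquad
A_{\mathrm{int}}(\tau) = \int_{-\infty}^{+\infty} I(t)\, I(t-\tau)\, \mathrm{d}t
```

By the Wiener–Khinchin theorem, the Fourier transform of the field autocorrelation gives the power spectrum of the source, which is the sense in which the spectrum is "calculated" from it.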

In the following examples, the autocorrelation signal is generated by the nonlinear process of second-harmonic generation (SHG). Other techniques based on two-photon absorption may also be used in autocorrelation measurements, as well as higher-order nonlinear optical processes such as third-harmonic generation, in which case the mathematical expressions of the signal will be slightly modified, but the basic interpretation of an autocorrelation trace remains the same. A detailed discussion on interferometric autocorrelation is given in several well-known textbooks.

View the full Wikipedia page for Optical autocorrelation

Autocorrelation in the context of Wiener–Khinchin theorem

In applied mathematics and statistics, the Wiener–Khinchin theorem or Wiener–Khintchine theorem, also known as the Wiener–Khinchin–Einstein theorem or the Khinchin–Kolmogorov theorem, states that the power spectral density of a wide-sense-stationary random process is equal to the Fourier transform of that process's autocorrelation function.
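The theorem has a discrete, circular analogue that is easy to verify numerically. This sketch (a naive DFT and circular autocorrelation with illustrative helper names, no external libraries) checks that the power spectrum of a signal matches the Fourier transform of its autocorrelation:

```python
import cmath
import random

def dft(x):
    """Naive discrete Fourier transform (O(n^2), fine for tiny examples)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def circular_autocorrelation(x):
    """Circular (wrap-around) autocorrelation of a real sequence."""
    n = len(x)
    return [sum(x[t] * x[(t + h) % n] for t in range(n)) for h in range(n)]

random.seed(2)
x = [random.uniform(-1, 1) for _ in range(16)]

psd = [abs(c) ** 2 for c in dft(x)]              # power spectrum of the signal
acf_spectrum = dft(circular_autocorrelation(x))  # Fourier transform of the autocorrelation

match = all(abs(p - c.real) < 1e-6 and abs(c.imag) < 1e-6
            for p, c in zip(psd, acf_spectrum))
print(match)  # True: both sides of the discrete, circular theorem agree
```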

View the full Wikipedia page for Wiener–Khinchin theorem