Standard deviation in the context of Sample (statistics)




⭐ Core Definition: Standard deviation

In statistics, the standard deviation is a measure of the amount of variation of the values of a variable about its mean. A low standard deviation indicates that the values tend to be close to the mean (also called the expected value) of the set, while a high standard deviation indicates that the values are spread out over a wider range. The standard deviation is commonly used in the determination of what constitutes an outlier and what does not. Standard deviation may be abbreviated SD or std dev, and is most commonly represented in mathematical texts and equations by the lowercase Greek letter σ (sigma), for the population standard deviation, or the Latin letter s, for the sample standard deviation.

The standard deviation of a random variable, sample, statistical population, data set, or probability distribution is the square root of its variance. (For a finite population, variance is the average of the squared deviations from the mean.) A useful property of the standard deviation is that, unlike the variance, it is expressed in the same unit as the data. Standard deviation can also be used to calculate standard error for a finite sample, and to determine statistical significance.
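As a sketch of the definitions above (in Python, with a small made-up data set): the population standard deviation σ divides the squared deviations by n, while the sample standard deviation s divides by n − 1.

```python
import math

# Hypothetical data set, chosen only for illustration.
data = [2, 4, 4, 4, 5, 5, 7, 9]
n = len(data)
mean = sum(data) / n  # 5.0

# Population standard deviation (sigma): average squared deviation, then root.
pop_var = sum((x - mean) ** 2 for x in data) / n
pop_sd = math.sqrt(pop_var)

# Sample standard deviation (s): divide by n - 1 (Bessel's correction).
sample_var = sum((x - mean) ** 2 for x in data) / (n - 1)
sample_sd = math.sqrt(sample_var)

print(pop_sd)     # 2.0
print(sample_sd)  # ~2.14
```

Both values are square roots of a variance, so both carry the same unit as the data, as noted above.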


In this Dossier

Standard deviation in the context of Stunted growth

Stunted growth, also known as stunting or linear growth failure, is defined as impaired growth and development manifested by low height-for-age. Stunted growth is often caused by malnutrition, and can also be caused by endogenous factors such as chronic food insecurity or exogenous factors such as parasitic infection. Stunting is largely irreversible if occurring in the first 1000 days from conception to two years of age. The international definition of childhood stunting is a child whose height-for-age value is at least two standard deviations below the median of the World Health Organization's (WHO) Child Growth Standards. Stunted growth is associated with poverty, maternal undernutrition, poor health, frequent illness, or inappropriate feeding practices and care during the early years of life.
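The two-standard-deviation rule above can be sketched as a height-for-age z-score check (Python; the reference median and SD below are hypothetical placeholders for the age- and sex-specific WHO values):

```python
# Height-for-age z-score: z = (height - median) / SD, where the median and SD
# come from the WHO Child Growth Standards for the child's age and sex.
# The reference values used below are hypothetical placeholders.
def height_for_age_z(height_cm, ref_median_cm, ref_sd_cm):
    return (height_cm - ref_median_cm) / ref_sd_cm

def is_stunted(height_cm, ref_median_cm, ref_sd_cm):
    # Stunting: height-for-age at least two SD below the reference median.
    return height_for_age_z(height_cm, ref_median_cm, ref_sd_cm) <= -2

print(is_stunted(78.0, 87.0, 3.5))  # True  (z is about -2.57)
print(is_stunted(85.0, 87.0, 3.5))  # False (z is about -0.57)
```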

Among children under five years of age, the global stunting prevalence declined from 26.3% in 2012 to 22.3% in 2022. It is projected that 19.5% of all children under five will be stunted in 2030. More than 85% of the world's stunted children live in Asia and Africa. Once stunting occurs, its effects are often long-lasting. Stunted children generally do not recover lost height, and they may experience long-term impacts on body composition and overall health.

View the full Wikipedia page for Stunted growth

Standard deviation in the context of Statistical parameter

In statistics, as opposed to its general use in mathematics, a parameter is any quantity of a statistical population that summarizes or describes an aspect of the population, such as a mean or a standard deviation. If a population exactly follows a known and defined distribution, for example the normal distribution, then a small set of parameters can be measured which provide a comprehensive description of the population and can be considered to define a probability distribution for the purposes of extracting samples from this population.

A "parameter" is to a population as a "statistic" is to a sample; that is to say, a parameter describes the true value calculated from the full population (such as the population mean), whereas a statistic is an estimated measurement of the parameter based on a sample (such as the sample mean, the mean of the data gathered in one sampling). Thus a "statistical parameter" can be more specifically referred to as a population parameter.
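A minimal sketch of the parameter/statistic distinction, using a made-up finite population in Python:

```python
import random
import statistics

random.seed(0)  # reproducible illustration

# A small finite population; its mean is a parameter (a fixed true value).
population = list(range(1, 101))   # the values 1..100
mu = statistics.mean(population)   # parameter: exactly 50.5

# A statistic is computed from a sample and estimates the parameter.
sample = random.sample(population, 20)
x_bar = statistics.mean(sample)    # statistic: varies from sample to sample

print(mu)     # 50.5
print(x_bar)  # an estimate of mu; a different sample gives a different value
```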

View the full Wikipedia page for Statistical parameter

Standard deviation in the context of Intelligence quotient

An intelligence quotient (IQ) is a total score derived from a set of standardized tests or subtests designed to assess human intelligence. Originally, IQ was a score obtained by dividing a person's estimated mental age, obtained by administering an intelligence test, by the person's chronological age. The resulting fraction (quotient) was multiplied by 100 to obtain the IQ score. For modern IQ tests, the raw score is transformed to a normal distribution with mean 100 and standard deviation 15. This results in approximately two-thirds of the population scoring between IQ 85 and IQ 115 and about 2 percent each above 130 and below 70.
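The stated proportions follow directly from the normal model with mean 100 and standard deviation 15; a quick check with Python's statistics.NormalDist:

```python
from statistics import NormalDist

# IQ scores are scaled to a normal distribution with mean 100, SD 15.
iq = NormalDist(mu=100, sigma=15)

within_one_sd = iq.cdf(115) - iq.cdf(85)  # fraction between 85 and 115
above_130 = 1 - iq.cdf(130)               # fraction above +2 SD
below_70 = iq.cdf(70)                     # fraction below -2 SD

print(round(within_one_sd, 4))  # 0.6827 -> "approximately two-thirds"
print(round(above_130, 4))      # 0.0228 -> "about 2 percent"
print(round(below_70, 4))       # 0.0228
```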

Scores from intelligence tests are estimates of intelligence. Unlike quantities such as distance and mass, a concrete measure of intelligence cannot be achieved given the abstract nature of the concept of "intelligence". IQ scores have been shown to be associated with factors such as nutrition, parental socioeconomic status, morbidity and mortality, parental social status, and perinatal environment. While the heritability of IQ has been studied for nearly a century, there is still debate over the significance of heritability estimates and the mechanisms of inheritance. The best estimates for heritability range from 40 to 60% of the variance between individuals in IQ being explained by genetics.

View the full Wikipedia page for Intelligence quotient

Standard deviation in the context of Statistical dispersion

In statistics, dispersion (also called variability, scatter, or spread) is the extent to which a distribution is stretched or squeezed. Common examples of measures of statistical dispersion are the variance, standard deviation, and interquartile range. For instance, when the variance of data in a set is large, the data is widely scattered. On the other hand, when the variance is small, the data in the set is clustered.
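These three measures can be computed with the standard library (Python; the data set is made up for illustration; note that statistics.quantiles uses the "exclusive" method by default, so other quartile conventions can give a slightly different IQR):

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]

var = statistics.pvariance(data)             # population variance
sd = statistics.pstdev(data)                 # population standard deviation
q1, _, q3 = statistics.quantiles(data, n=4)  # quartiles (exclusive method)
iqr = q3 - q1                                # interquartile range

print(var, sd, iqr)  # 4.0 2.0 2.5
```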

Dispersion is contrasted with location or central tendency, and together they are the most used properties of distributions.

View the full Wikipedia page for Statistical dispersion

Standard deviation in the context of Robust statistics

Robust statistics are statistics that maintain their properties even if the underlying distributional assumptions are incorrect. Robust statistical methods have been developed for many common problems, such as estimating location, scale, and regression parameters. One motivation is to produce statistical methods that are not unduly affected by outliers. Another motivation is to provide methods with good performance when there are small departures from a parametric distribution. For example, robust methods work well for mixtures of two normal distributions with different standard deviations; under this model, non-robust methods like a t-test work poorly.
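A quick illustration of the outlier motivation (Python; the numbers are made up): the mean and standard deviation are pulled far off by a single bad value, while the median and the median absolute deviation (MAD), two common robust alternatives, barely move.

```python
import statistics

# Hypothetical measurements with one gross outlier appended.
clean = [9.8, 10.1, 10.0, 9.9, 10.2]
contaminated = clean + [100.0]

# Non-robust: mean and SD are dragged far from the bulk of the data.
print(statistics.mean(contaminated))   # ~25.0
print(statistics.stdev(contaminated))  # ~36.7

# Robust: the median barely moves, and the MAD stays on the scale
# of the clean data.
med = statistics.median(contaminated)
mad = statistics.median(abs(x - med) for x in contaminated)
print(med)  # ~10.05
print(mad)  # ~0.15
```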

View the full Wikipedia page for Robust statistics

Standard deviation in the context of Variance

In probability theory and statistics, variance is the expected value of the squared deviation from the mean of a random variable. The standard deviation (SD) is obtained as the square root of the variance. Variance is a measure of dispersion, meaning it is a measure of how far a set of numbers is spread out from their average value. It is the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by σ², s², Var(X), V(X), or 𝕍(X).

An advantage of variance as a measure of dispersion is that it is more amenable to algebraic manipulation than other measures of dispersion such as the expected absolute deviation; for example, the variance of a sum of uncorrelated random variables is equal to the sum of their variances. A disadvantage of the variance for practical applications is that, unlike the standard deviation, its units differ from the random variable, which is why the standard deviation is more commonly reported as a measure of dispersion once the calculation is finished. Another disadvantage is that the variance is not finite for many distributions.
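The additivity property can be checked exactly by enumeration (Python): the variance of the sum of two independent dice equals twice the variance of one die.

```python
from itertools import product
from statistics import pvariance

# One fair six-sided die: all outcomes equally likely.
die = [1, 2, 3, 4, 5, 6]
var_die = pvariance(die)  # 35/12

# Two independent dice: enumerate all 36 equally likely sums.
sums = [a + b for a, b in product(die, die)]
var_sum = pvariance(sums)

print(var_die)  # ~2.9167
print(var_sum)  # ~5.8333, equal to 2 * var_die
```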

View the full Wikipedia page for Variance

Standard deviation in the context of Normal distribution

In probability theory and statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is

f(x) = (1 / (σ√(2π))) exp(−(x − μ)² / (2σ²))

The parameter μ is the mean or expectation of the distribution (and also its median and mode), while the parameter σ² is the variance. The standard deviation of the distribution is σ (sigma). A random variable with a Gaussian distribution is said to be normally distributed and is called a normal deviate.
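As a sanity check, the density f(x) = (1/(σ√(2π))) exp(−(x − μ)²/(2σ²)) can be implemented directly and compared with Python's statistics.NormalDist:

```python
import math
from statistics import NormalDist

def normal_pdf(x, mu, sigma):
    # f(x) = 1 / (sigma * sqrt(2*pi)) * exp(-(x - mu)**2 / (2 * sigma**2))
    coef = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coef * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

mu, sigma = 100.0, 15.0
for x in (70.0, 100.0, 115.0):
    assert math.isclose(normal_pdf(x, mu, sigma), NormalDist(mu, sigma).pdf(x))

print(normal_pdf(100.0, 100.0, 15.0))  # ~0.0266, the peak at the mean
```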

View the full Wikipedia page for Normal distribution

Standard deviation in the context of Descriptive statistics

A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features from a collection of information, while descriptive statistics (in the mass noun sense) is the process of using and analysing those statistics. Descriptive statistics is distinguished from inferential statistics (or inductive statistics) by its aim to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent. This generally means that descriptive statistics, unlike inferential statistics, is not developed on the basis of probability theory and frequently consists of nonparametric statistics. Even when a data analysis draws its main conclusions using inferential statistics, descriptive statistics are generally also presented. For example, in papers reporting on human subjects, typically a table is included giving the overall sample size, sample sizes in important subgroups (e.g., for each treatment or exposure group), and demographic or clinical characteristics such as the average age, the proportion of subjects of each sex, the proportion of subjects with related co-morbidities, etc.

Some measures that are commonly used to describe a data set are measures of central tendency and measures of variability or dispersion. Measures of central tendency include the mean, median and mode, while measures of variability include the standard deviation (or variance), the minimum and maximum values of the variables, kurtosis and skewness.
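A minimal descriptive-summary sketch in Python (the ages are invented for illustration):

```python
import statistics

# Hypothetical ages from a small study sample.
ages = [23, 25, 25, 27, 29, 31, 31, 31, 34, 40]

summary = {
    "n": len(ages),
    "mean": statistics.mean(ages),
    "median": statistics.median(ages),
    "mode": statistics.mode(ages),
    "stdev": statistics.stdev(ages),  # sample standard deviation
    "min": min(ages),
    "max": max(ages),
}
print(summary)  # mean 29.6, median 30.0, mode 31, range 23..40
```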

View the full Wikipedia page for Descriptive statistics

Standard deviation in the context of Financial risk

Financial risk is any of various types of risk associated with financing, including financial transactions such as company loans at risk of default. Often it is understood to include only downside risk, meaning the potential for financial loss and uncertainty about its extent.

Modern portfolio theory, initiated by Harry Markowitz in his 1952 paper "Portfolio Selection", is the discipline and study of managing market and financial risk. In modern portfolio theory, the variance (or standard deviation) of a portfolio is used as the definition of risk.

View the full Wikipedia page for Financial risk

Standard deviation in the context of Short stature

Short stature refers to a human height that is below typical. Whether a person is considered short depends on the context. Because of the lack of preciseness, there is often disagreement about the degree of shortness that should be called short. Dwarfism is the condition of being very short, often caused by a medical condition. In a medical context, short stature is typically defined as an adult height that is more than two standard deviations below a population's mean for age and sex, which corresponds to the shortest 2.3% of individuals in that population.
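The 2.3% figure follows from the normal model: it is the probability of falling more than two standard deviations below the mean. A one-line check in Python:

```python
from statistics import NormalDist

# Fraction of a normal population more than two SD below the mean.
p = NormalDist().cdf(-2.0)
print(round(p * 100, 1))  # 2.3 (percent): "the shortest 2.3% of individuals"
```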

Shortness in children and young adults nearly always results from below-average growth in childhood, while shortness in older adults usually results from loss of height due to kyphosis of the spine or collapsed vertebrae from osteoporosis. The most common causes of short stature in childhood are constitutional growth delay or familial short stature.

View the full Wikipedia page for Short stature

Standard deviation in the context of IQ classification

IQ classification is the practice of categorizing human intelligence, as measured by intelligence quotient (IQ) tests, into categories such as "superior" and "average".

With the usual IQ scoring methods, an IQ score of 100 means that the test-taker's performance on the test is at the average level for the sample of test-takers of about the same age used to norm the test. An IQ score of 115 means performance one standard deviation above the mean, while a score of 85 means performance one standard deviation below the mean, and so on. This "deviation IQ" method is used for the standard scoring of all IQ tests in large part because it allows a consistent definition of IQ for both children and adults. Under this definition, about two-thirds of all test-takers obtain scores from 85 to 115, and about 5 percent of the population scores above 125, as expected under a normal distribution.

View the full Wikipedia page for IQ classification

Standard deviation in the context of Limiting magnitude

In astronomy, limiting magnitude is the faintest apparent magnitude of a celestial body that is detectable or detected by a given instrument.

In some cases, limiting magnitude refers to the upper threshold of detection. In more formal uses, limiting magnitude is specified along with the strength of the signal (e.g., "10th magnitude at 20 sigma"). Sometimes limiting magnitude is qualified by the purpose of the instrument (e.g., "10th magnitude for photometry"). This statement recognizes that a photometric detector can detect light far fainter than it can reliably measure.

View the full Wikipedia page for Limiting magnitude

Standard deviation in the context of Plus–minus sign

The plus–minus sign or plus-or-minus sign (±) and the complementary minus-or-plus sign (∓) are symbols with broadly similar multiple meanings.

  • In mathematics, the ± sign generally indicates a choice of exactly two possible values, one of which is obtained through addition and the other through subtraction. The ∓ sign is typically used only in tandem with the ± sign and indicates that where the ± is a +, the ∓ is a − (and vice versa).
  • In statistics and experimental sciences, the ± sign commonly indicates the confidence interval or uncertainty bounding a range of possible errors in a measurement, often the standard deviation or standard error. The sign may also represent an inclusive range of values that a reading might have.
  • In chess, the ± sign indicates a clear advantage for the white player; the complementary minus-plus sign (∓) indicates a clear advantage for the black player.

Other meanings occur in other fields, including medicine, engineering, chemistry, electronics, linguistics, and philosophy.
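A common use of the ± convention is reporting a measurement as mean ± standard error; a small Python sketch with made-up measurements:

```python
import math
import statistics

# Hypothetical repeated measurements of the same quantity.
measurements = [9.9, 10.3, 10.1, 9.8, 10.4, 10.1]

mean = statistics.mean(measurements)
sd = statistics.stdev(measurements)      # sample standard deviation
sem = sd / math.sqrt(len(measurements))  # standard error of the mean

print(f"{mean:.2f} ± {sem:.2f}")  # reported as "mean ± standard error"
```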

View the full Wikipedia page for Plus–minus sign

Standard deviation in the context of Pearson correlation coefficient

In statistics, the Pearson correlation coefficient (PCC) is a correlation coefficient that measures linear correlation between two sets of data. It is the ratio between the covariance of two variables and the product of their standard deviations; thus, it is essentially a normalized measurement of the covariance, such that the result always has a value between −1 and 1. A key difference is that unlike covariance, this correlation coefficient does not have units, allowing comparison of the strength of the joint association between different pairs of random variables that do not necessarily have the same units. As with covariance itself, the measure can only reflect a linear correlation of variables, and ignores many other types of relationships or correlations. As a simple example, one would expect the age and height of a sample of children from a school to have a Pearson correlation coefficient significantly greater than 0, but less than 1 (as 1 would represent an unrealistically perfect correlation).
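The ratio definition can be computed directly (Python; the age/height pairs are invented to mimic the schoolchildren example):

```python
import statistics

# Hypothetical paired observations: age (years) and height (cm) of children.
x = [6, 7, 8, 9, 10, 11]
y = [112, 118, 123, 131, 137, 144]

n = len(x)
mx, my = statistics.mean(x), statistics.mean(y)

# r = cov(x, y) / (sd(x) * sd(y))
cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
r = cov / (statistics.stdev(x) * statistics.stdev(y))

print(round(r, 4))  # close to 1: age and height rise together almost linearly
```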

View the full Wikipedia page for Pearson correlation coefficient

Standard deviation in the context of Precision (statistics)

In statistics, the precision matrix or concentration matrix is the matrix inverse of the covariance matrix or dispersion matrix, P = Σ⁻¹. For univariate distributions, the precision matrix degenerates into a scalar precision, defined as the reciprocal of the variance, p = 1/σ².

Other summary statistics of statistical dispersion also called precision (or imprecision) include the reciprocal of the standard deviation, 1/σ; the standard deviation itself; the relative standard deviation; as well as the standard error and the confidence interval (or its half-width, the margin of error).
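A dependency-free sketch of both senses (Python; the covariance entries are made up): scalar precision as 1/σ², and a 2×2 precision matrix via the closed-form inverse.

```python
# Scalar precision: the reciprocal of the variance, p = 1 / sigma**2.
variance = 4.0
precision = 1.0 / variance
print(precision)  # 0.25

# Precision matrix: the inverse of a (2x2) covariance matrix, P = Sigma^{-1},
# inverted here with the closed-form 2x2 formula to stay dependency-free.
sxx, sxy, syy = 4.0, 1.0, 2.0  # hypothetical covariance entries
det = sxx * syy - sxy * sxy    # determinant of Sigma
P = [[syy / det, -sxy / det],
     [-sxy / det, sxx / det]]
print(P)  # multiplying Sigma by P gives the identity matrix
```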

View the full Wikipedia page for Precision (statistics)

Standard deviation in the context of Financial volatility

In finance, volatility (usually denoted by "σ") is the degree of variation of a trading price series over time, usually measured by the standard deviation of logarithmic returns.

Historic volatility measures a time series of past market prices. Implied volatility looks forward in time, being derived from the market price of a market-traded derivative (in particular, an option).
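A minimal sketch of historic volatility from log returns (Python; the prices are invented; the √252 annualization assumes daily data and 252 trading days per year):

```python
import math
import statistics

# Hypothetical daily closing prices.
prices = [100.0, 101.5, 100.8, 102.3, 103.0, 102.1]

# Logarithmic returns: r_t = ln(P_t / P_{t-1}).
log_returns = [math.log(b / a) for a, b in zip(prices, prices[1:])]

# Volatility as the (sample) standard deviation of log returns,
# conventionally annualized by sqrt(252) for daily data.
daily_vol = statistics.stdev(log_returns)
annualized_vol = daily_vol * math.sqrt(252)

print(daily_vol, annualized_vol)
```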

View the full Wikipedia page for Financial volatility

Standard deviation in the context of Modern portfolio theory

Modern portfolio theory (MPT), or mean-variance analysis, is a mathematical framework for assembling a portfolio of assets such that the expected return is maximized for a given level of risk. It is a formalization and extension of diversification in investing, the idea that owning different kinds of financial assets is less risky than owning only one type. Its key insight is that an asset's risk and return should not be assessed by itself, but by how it contributes to a portfolio's overall risk and return. The variance of return (or its transformation, the standard deviation) is used as a measure of risk, because it is tractable when assets are combined into portfolios. Often, the historical variance and covariance of returns are used as proxies for the forward-looking versions of these quantities, but other, more sophisticated methods are available.
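The tractability of variance under combination is visible in the two-asset case, where portfolio variance is σ_p² = w₁²σ₁² + w₂²σ₂² + 2w₁w₂ρσ₁σ₂. A sketch with invented numbers (Python):

```python
import math

# Two-asset portfolio variance; all values below are hypothetical.
w1, w2 = 0.6, 0.4    # portfolio weights (sum to 1)
s1, s2 = 0.20, 0.10  # asset return standard deviations (risk)
rho = 0.3            # correlation between the two assets' returns

port_var = w1**2 * s1**2 + w2**2 * s2**2 + 2 * w1 * w2 * rho * s1 * s2
port_sd = math.sqrt(port_var)

# With rho < 1, portfolio risk is below the weighted average of the
# individual risks: this is diversification in one line.
print(round(port_sd, 4))  # 0.1374, vs 0.16 for the weighted average
```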

Economist Harry Markowitz introduced MPT in a 1952 paper, for which he was later awarded a Nobel Memorial Prize in Economic Sciences; see Markowitz model.

View the full Wikipedia page for Modern portfolio theory