Standard deviation in the context of "Statistical parameter"

⭐ In the context of statistical parameters, what fundamental distinction exists between a parameter and a statistic?

⭐ Core Definition: Standard deviation

In statistics, the standard deviation is a measure of how much the values of a variable vary about the variable's mean. A low standard deviation indicates that the values tend to be close to the mean (also called the expected value) of the set, while a high standard deviation indicates that the values are spread out over a wider range. The standard deviation is commonly used to determine what constitutes an outlier. It may be abbreviated SD or std dev, and is most commonly represented in mathematical texts and equations by the lowercase Greek letter σ (sigma) for the population standard deviation, or the Latin letter s for the sample standard deviation.

The standard deviation of a random variable, sample, statistical population, data set, or probability distribution is the square root of its variance. (For a finite population, variance is the average of the squared deviations from the mean.) A useful property of the standard deviation is that, unlike the variance, it is expressed in the same unit as the data. Standard deviation can also be used to calculate standard error for a finite sample, and to determine statistical significance.
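The definitions above can be sketched directly in Python. The snippet below is a minimal illustration (not a substitute for a statistics library): the population standard deviation σ divides the sum of squared deviations by n, while the sample standard deviation s divides by n − 1 (Bessel's correction), and both are the square root of the corresponding variance.

```python
import math

def population_sd(values):
    """sigma: square root of the average squared deviation from the mean."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((x - mean) ** 2 for x in values) / n
    return math.sqrt(variance)

def sample_sd(values):
    """s: like sigma, but divides by n - 1 (Bessel's correction)."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((x - mean) ** 2 for x in values) / (n - 1)
    return math.sqrt(variance)

data = [2, 4, 4, 4, 5, 5, 7, 9]   # mean is 5
print(population_sd(data))  # 2.0
print(sample_sd(data))      # ~2.138
```

These agree with `statistics.pstdev` and `statistics.stdev` from the standard library, which are preferable in real code.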

👉 Standard deviation in the context of Statistical parameter

In statistics, as opposed to its general use in mathematics, a parameter is any quantity of a statistical population that summarizes or describes an aspect of the population, such as a mean or a standard deviation. If a population exactly follows a known and defined distribution, for example the normal distribution, then a small set of parameters can be measured which provide a comprehensive description of the population and can be considered to define a probability distribution for the purposes of extracting samples from this population.

A "parameter" is to a population as a "statistic" is to a sample: a parameter describes a true value computed from the full population (such as the population mean), whereas a statistic is an estimate of that parameter computed from a sample (such as the sample mean, the mean of the data gathered in one sample). Thus a "statistical parameter" can be more specifically referred to as a population parameter.
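A tiny worked example of this distinction, using made-up numbers: the population mean μ is a parameter because it is computed from every member of the population, while the sample mean x̄ is a statistic that estimates it from a subset.

```python
import statistics

# Hypothetical population: heights (cm) of all 10 members of a small club.
population = [160, 162, 165, 167, 170, 171, 174, 176, 180, 185]

# Parameter: the true population mean, computed from everyone.
mu = statistics.mean(population)   # 171.0

# Statistic: an estimate of mu computed from a sample of 4 members.
sample = [162, 170, 174, 185]
x_bar = statistics.mean(sample)    # 172.75

print(mu, x_bar)
```

Different samples would yield different values of x̄; the parameter μ is fixed.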

Standard deviation in the context of Stunted growth

Stunted growth, also known as stunting or linear growth failure, is defined as impaired growth and development manifested by low height-for-age. Stunted growth is often caused by malnutrition, and can also be caused by endogenous factors such as chronic food insecurity or exogenous factors such as parasitic infection. Stunting is largely irreversible if occurring in the first 1000 days from conception to two years of age. The international definition of childhood stunting is a child whose height-for-age value is at least two standard deviations below the median of the World Health Organization's (WHO) Child Growth Standards. Stunted growth is associated with poverty, maternal undernutrition, poor health, frequent illness, or inappropriate feeding practices and care during the early years of life.
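The two-standard-deviation threshold can be expressed as a z-score check. The sketch below is deliberately simplified: the actual WHO Child Growth Standards are age- and sex-specific reference tables, and the median and SD used here are illustrative placeholders, not real WHO values.

```python
# Simplified height-for-age z-score: (observed - reference median) / reference SD.
# ref_median_cm and ref_sd_cm below are illustrative placeholders, NOT actual
# WHO Child Growth Standards values.
def height_for_age_z(height_cm, ref_median_cm, ref_sd_cm):
    return (height_cm - ref_median_cm) / ref_sd_cm

z = height_for_age_z(height_cm=82.0, ref_median_cm=87.0, ref_sd_cm=3.0)
print(round(z, 3))   # -1.667
print(z <= -2)       # False: above the two-SD stunting cutoff in this example
```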

Among children under five years of age, the global stunting prevalence declined from 26.3% in 2012 to 22.3% in 2022. It is projected that 19.5% of all children under five will be stunted in 2030. More than 85% of the world's stunted children live in Asia and Africa. Once stunting occurs, its effects are often long-lasting. Stunted children generally do not recover lost height, and they may experience long-term impacts on body composition and overall health.

Standard deviation in the context of Intelligence quotient

An intelligence quotient (IQ) is a total score derived from a set of standardized tests or subtests designed to assess human intelligence. Originally, IQ was a score obtained by dividing a person's estimated mental age, obtained by administering an intelligence test, by the person's chronological age. The resulting fraction (quotient) was multiplied by 100 to obtain the IQ score. For modern IQ tests, the raw score is transformed to a normal distribution with mean 100 and standard deviation 15. This results in approximately two-thirds of the population scoring between IQ 85 and IQ 115 and about 2 percent each above 130 and below 70.
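The "two-thirds between 85 and 115" and "about 2 percent beyond 130 or 70" figures follow directly from the normal CDF, which can be computed from the error function in the standard library:

```python
import math

def normal_cdf(x, mean=100.0, sd=15.0):
    """P(X <= x) for a normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

between_85_and_115 = normal_cdf(115) - normal_cdf(85)   # ~0.683
above_130 = 1.0 - normal_cdf(130)                        # ~0.0228
below_70 = normal_cdf(70)                                # ~0.0228

print(round(between_85_and_115, 4), round(above_130, 4))
```

IQ 85 and 115 are one standard deviation from the mean, and 70 and 130 are two, so these fractions are just the familiar 68% and 2.3% tails of any normal distribution.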

Scores from intelligence tests are estimates of intelligence. Unlike quantities such as distance and mass, a concrete measure of intelligence cannot be achieved given the abstract nature of the concept of "intelligence". IQ scores have been shown to be associated with factors such as nutrition, parental socioeconomic status, morbidity and mortality, and perinatal environment. While the heritability of IQ has been studied for nearly a century, there is still debate over the significance of heritability estimates and the mechanisms of inheritance. The best estimates suggest that genetics explains 40 to 60% of the variance in IQ between individuals.

Standard deviation in the context of Statistical dispersion

In statistics, dispersion (also called variability, scatter, or spread) is the extent to which a distribution is stretched or squeezed. Common examples of measures of statistical dispersion are the variance, standard deviation, and interquartile range. For instance, when the variance of data in a set is large, the data is widely scattered. On the other hand, when the variance is small, the data in the set is clustered.
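A small illustration of these measures on made-up data: two data sets with the same mean but very different dispersion, compared by variance, standard deviation, and interquartile range (here computed with the `statistics` module's default "exclusive" quantile method).

```python
import statistics

tight = [48, 49, 50, 51, 52]
spread = [10, 30, 50, 70, 90]

# Same central tendency...
print(statistics.mean(tight), statistics.mean(spread))          # 50 50
# ...very different dispersion.
print(statistics.pvariance(tight), statistics.pvariance(spread))  # 2 vs 800
print(statistics.pstdev(tight), statistics.pstdev(spread))

# Interquartile range of the spread-out set, from its quartiles.
q1, q2, q3 = statistics.quantiles(spread, n=4)
iqr = q3 - q1
print(iqr)
```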

Dispersion is contrasted with location or central tendency; together they are the most commonly used properties of distributions.

Standard deviation in the context of Robust statistics

Robust statistics are statistics that maintain their properties even if the underlying distributional assumptions are incorrect. Robust statistical methods have been developed for many common problems, such as estimating location, scale, and regression parameters. One motivation is to produce statistical methods that are not unduly affected by outliers. Another motivation is to provide methods with good performance when there are small departures from a parametric distribution. For example, robust methods work well for mixtures of two normal distributions with different standard deviations; under this model, non-robust methods like a t-test work poorly.
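A quick sketch of robustness to outliers, using made-up data: the median (a robust location estimate) barely moves when a single gross outlier is added, while the mean is dragged far from the bulk of the data.

```python
import statistics

data = [9.8, 10.1, 10.0, 9.9, 10.2]
contaminated = data + [100.0]   # one gross outlier

# The mean moves from ~10 to 25; the median shifts only slightly.
print(statistics.mean(data), statistics.mean(contaminated))
print(statistics.median(data), statistics.median(contaminated))
```

Robust scale estimates (such as the median absolute deviation) behave analogously where the standard deviation does not.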

Standard deviation in the context of Variance

In probability theory and statistics, variance is the expected value of the squared deviation from the mean of a random variable. The standard deviation (SD) is obtained as the square root of the variance. Variance is a measure of dispersion, meaning it is a measure of how far a set of numbers is spread out from its average value. It is the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by σ², s², Var(X), V(X), or 𝕍(X).

An advantage of variance as a measure of dispersion is that it is more amenable to algebraic manipulation than other measures of dispersion such as the expected absolute deviation; for example, the variance of a sum of uncorrelated random variables is equal to the sum of their variances. A disadvantage of the variance for practical applications is that, unlike the standard deviation, its units differ from the random variable, which is why the standard deviation is more commonly reported as a measure of dispersion once the calculation is finished. Another disadvantage is that the variance is not finite for many distributions.
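The additivity property can be checked exactly for a small discrete example. Two fair dice are independent (hence uncorrelated), so the variance of their sum should equal twice the variance of one die; enumerating all 36 equally likely outcomes verifies this with no simulation.

```python
from itertools import product

def pvariance(values):
    """Population variance: mean squared deviation from the mean."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

die = list(range(1, 7))
var_one = pvariance(die)                      # 35/12 ≈ 2.917

# Enumerate the full joint distribution of two independent dice.
sums = [a + b for a, b in product(die, die)]
var_sum = pvariance(sums)                     # 35/6 ≈ 5.833

print(var_sum == 2 * var_one)  # True: Var(X + Y) = Var(X) + Var(Y)
```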

Standard deviation in the context of Normal distribution

In probability theory and statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is

f(x) = (1 / (σ√(2π))) · exp(−(x − μ)² / (2σ²))

The parameter μ is the mean or expectation of the distribution (and also its median and mode), while the parameter σ² is the variance. The standard deviation of the distribution is σ (sigma). A random variable with a Gaussian distribution is said to be normally distributed and is called a normal deviate.
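The density function is straightforward to evaluate; the sketch below shows that it peaks at the mean with height 1/(σ√(2π)), so a larger standard deviation gives a lower, wider curve.

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the normal distribution with mean mu and standard deviation sigma."""
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

print(normal_pdf(0.0))                           # ~0.3989, the standard normal peak
print(normal_pdf(100.0, mu=100.0, sigma=15.0))   # lower peak: sigma is 15, not 1
```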

Standard deviation in the context of Descriptive statistics

A descriptive statistic (in the count-noun sense) is a summary statistic that quantitatively describes or summarizes features of a collection of information, while descriptive statistics (in the mass-noun sense) is the process of using and analysing those statistics. Descriptive statistics is distinguished from inferential statistics (or inductive statistics) by its aim: to summarize a sample, rather than to use the data to learn about the population that the sample is thought to represent. This generally means that descriptive statistics, unlike inferential statistics, are not developed on the basis of probability theory and are frequently nonparametric. Even when a data analysis draws its main conclusions using inferential statistics, descriptive statistics are generally also presented. For example, papers reporting on human subjects typically include a table giving the overall sample size, sample sizes in important subgroups (e.g., for each treatment or exposure group), and demographic or clinical characteristics such as the average age, the proportion of subjects of each sex, and the proportion of subjects with related comorbidities.

Some measures that are commonly used to describe a data set are measures of central tendency and measures of variability or dispersion. Measures of central tendency include the mean, median and mode, while measures of variability include the standard deviation (or variance), the minimum and maximum values of the variables, kurtosis and skewness.
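Such a descriptive summary might be assembled like this (the ages are made up; skewness and kurtosis are omitted since the standard library does not provide them):

```python
import statistics

# Hypothetical ages of ten study participants.
ages = [23, 25, 25, 29, 31, 34, 35, 35, 35, 41]

summary = {
    "n": len(ages),
    "mean": statistics.mean(ages),        # central tendency
    "median": statistics.median(ages),
    "mode": statistics.mode(ages),
    "sample_sd": statistics.stdev(ages),  # variability
    "min": min(ages),
    "max": max(ages),
}
print(summary)
```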

Standard deviation in the context of Financial risk

Financial risk is any of various types of risk associated with financing, including the risk of default on financial transactions such as company loans. Often it is understood to include only downside risk, meaning the potential for financial loss and uncertainty about its extent.

Modern portfolio theory, initiated by Harry Markowitz in his 1952 paper "Portfolio Selection", is the discipline concerned with managing market and financial risk. In modern portfolio theory, the variance (or standard deviation) of a portfolio's return is used as the definition of its risk.
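For a two-asset portfolio, the standard mean-variance formula is σ_p² = w₁²σ₁² + w₂²σ₂² + 2w₁w₂σ₁σ₂ρ, where the w are weights, the σ are each asset's return standard deviation (volatility), and ρ is the correlation between the returns. The figures below are made up for illustration.

```python
import math

def portfolio_sd(w1, s1, s2, rho):
    """Standard deviation of a two-asset portfolio with weights w1 and 1 - w1."""
    w2 = 1.0 - w1
    var_p = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2.0 * w1 * w2 * s1 * s2 * rho
    return math.sqrt(var_p)

# Illustrative figures: 60/40 split, volatilities 20% and 10%, correlation 0.3.
sd = portfolio_sd(0.6, 0.20, 0.10, rho=0.3)
print(round(sd, 4))   # ~0.1374
```

Because ρ < 1, this is below the weighted average of the two volatilities (0.16): the diversification benefit that mean-variance analysis quantifies.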
