Variance in the context of "Sampling fraction"


⭐ Core Definition: Variance

In probability theory and statistics, variance is the expected value of the squared deviation from the mean of a random variable. The standard deviation (SD) is obtained as the square root of the variance. Variance is a measure of dispersion, meaning it is a measure of how far a set of numbers is spread out from its average value. It is the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by σ², s², Var(X), V(X), or 𝕍(X).
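As a quick illustration, here is a minimal Python sketch (the function name `variance` and the sample data are ours) that computes a population variance directly from this definition, as the mean of squared deviations from the mean:

```python
def variance(xs):
    """Population variance: the mean of squared deviations from the mean."""
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** 2 for x in xs) / len(xs)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(variance(data))         # 4.0
print(variance(data) ** 0.5)  # 2.0 -- the standard deviation
```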

An advantage of variance as a measure of dispersion is that it is more amenable to algebraic manipulation than other measures of dispersion such as the expected absolute deviation; for example, the variance of a sum of uncorrelated random variables is equal to the sum of their variances. A disadvantage of variance for practical applications is that, unlike the standard deviation, its units differ from those of the random variable, which is why the standard deviation is more commonly reported once the calculation is finished. Another disadvantage is that the variance is not finite for many distributions (the Cauchy distribution, for example, has no finite variance).
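The additivity property is easy to check numerically; below is a small NumPy sketch (the choice of distributions is ours) comparing Var(X) + Var(Y) with Var(X + Y) for independent samples:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 2.0, size=1_000_000)    # Var(X) = 2^2 = 4
y = rng.uniform(-3.0, 3.0, size=1_000_000)  # Var(Y) = (3 - (-3))^2 / 12 = 3

# For uncorrelated (here: independent) variables, variances add.
print(np.var(x) + np.var(y))  # ~= 7
print(np.var(x + y))          # ~= 7
```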


👉 Variance in the context of Sampling fraction

In sampling theory, the sampling fraction is the ratio of sample size to population size or, in the context of stratified sampling, the ratio of the sample size to the size of the stratum. The formula for the sampling fraction is

$$f = \frac{n}{N},$$

where n is the sample size and N is the population size. A sampling fraction value close to 1 will occur if the sample size is relatively close to the population size. When sampling from a finite population without replacement, this may cause dependence between individual samples. To correct for this dependence when calculating the sample variance, a finite population correction (or finite population multiplier) of $\frac{N-n}{N-1}$ may be used. If the sampling fraction is small (less than about 0.05), the sample variance is not appreciably affected by dependence, and the finite population correction may be ignored.
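As a sketch of how the correction is applied in practice (the helper name `se_mean_fpc` is ours; the square root appears because the correction here is applied to the standard error rather than to the variance):

```python
import math

def se_mean_fpc(sample_sd, n, N):
    """Standard error of the sample mean with finite population correction.

    When sampling without replacement from a finite population of size N,
    the variance of the mean shrinks by (N - n) / (N - 1), so the standard
    error shrinks by the square root of that factor.
    """
    fpc = math.sqrt((N - n) / (N - 1))
    return (sample_sd / math.sqrt(n)) * fpc

# Sampling fraction f = n/N = 0.4, well above 0.05, so the correction matters.
print(se_mean_fpc(sample_sd=10.0, n=40, N=100))  # ~= 1.23 vs ~= 1.58 uncorrected
```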


Variance in the context of Mayor

In many countries, a mayor is the highest-ranking official in a municipal government such as that of a city or a town. Worldwide, there is wide variance in local laws and customs regarding the powers and responsibilities of a mayor, as well as the means by which a mayor is elected or otherwise mandated. Depending on the system chosen, a mayor may be the chief executive officer of the municipal government, may simply chair a multi-member governing body with little or no independent power, or may play a solely ceremonial role. A mayor's duties and responsibilities may be to appoint and oversee municipal managers and employees, provide basic governmental services to constituents, and execute the laws and ordinances passed by a municipal governing body (or mandated by a state, territorial or national governing body). Options for selection of a mayor include direct election by the public, or selection by an elected governing council or board.

The term mayor shares a linguistic origin with the military rank of major, both ultimately derived from French majeur, which in turn derives from Latin maior, the comparative form of the adjective magnus.

Variance in the context of Statistical dispersion

In statistics, dispersion (also called variability, scatter, or spread) is the extent to which a distribution is stretched or squeezed. Common examples of measures of statistical dispersion are the variance, standard deviation, and interquartile range. For instance, when the variance of data in a set is large, the data is widely scattered. On the other hand, when the variance is small, the data in the set is clustered.

Dispersion is contrasted with location or central tendency, and together they are the most commonly used properties of distributions.
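For a concrete sense of these measures, here is a short NumPy sketch (the data values are ours) computing the variance, standard deviation, and interquartile range of one small data set:

```python
import numpy as np

data = np.array([1.0, 2.0, 2.0, 3.0, 4.0, 7.0, 9.0])

print(np.var(data))  # variance (population convention)
print(np.std(data))  # standard deviation
iqr = np.percentile(data, 75) - np.percentile(data, 25)
print(iqr)           # interquartile range
```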

Variance in the context of Normal distribution

In probability theory and statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is

$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x - \mu)^2}{2\sigma^2}}.$$

The parameter μ is the mean or expectation of the distribution (and also its median and mode), while the parameter σ² is the variance. The standard deviation of the distribution is σ (sigma). A random variable with a Gaussian distribution is said to be normally distributed and is called a normal deviate.
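The density is straightforward to evaluate directly; a minimal Python sketch (the function name `normal_pdf` is ours):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the normal distribution with mean mu and standard deviation sigma."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

print(normal_pdf(0.0))          # ~= 0.3989, the peak of the standard normal
print(normal_pdf(2.0, mu=2.0))  # same peak value at the shifted mean
```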

Variance in the context of Descriptive statistics

A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features from a collection of information, while descriptive statistics (in the mass noun sense) is the process of using and analysing those statistics. Descriptive statistics is distinguished from inferential statistics (or inductive statistics) by its aim to summarize a sample, rather than use the data to learn about the population that the sample is thought to represent. This generally means that descriptive statistics, unlike inferential statistics, are not developed on the basis of probability theory and are frequently nonparametric statistics. Even when a data analysis draws its main conclusions using inferential statistics, descriptive statistics are generally also presented. For example, in papers reporting on human subjects, typically a table is included giving the overall sample size, sample sizes in important subgroups (e.g., for each treatment or exposure group), and demographic or clinical characteristics such as the average age, the proportion of subjects of each sex, the proportion of subjects with related co-morbidities, etc.

Some measures that are commonly used to describe a data set are measures of central tendency and measures of variability or dispersion. Measures of central tendency include the mean, median and mode, while measures of variability include the standard deviation (or variance), the minimum and maximum values of the variables, kurtosis and skewness.
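A compact NumPy/SciPy sketch (the sample values are ours) computing the usual descriptive summary of one variable:

```python
import numpy as np
from scipy import stats

sample = np.array([160.0, 165.0, 170.0, 170.0, 172.0, 180.0, 195.0])

print(np.mean(sample), np.median(sample))              # central tendency
print(np.var(sample, ddof=1), np.std(sample, ddof=1))  # variability
print(sample.min(), sample.max())                      # range of the variable
print(stats.skew(sample), stats.kurtosis(sample))      # shape of the distribution
```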

Variance in the context of Standard deviation

In statistics, the standard deviation is a measure of the amount of variation of the values of a variable about its mean. A low standard deviation indicates that the values tend to be close to the mean (also called the expected value) of the set, while a high standard deviation indicates that the values are spread out over a wider range. The standard deviation is commonly used to determine what does and does not constitute an outlier. Standard deviation may be abbreviated SD or std dev, and is most commonly represented in mathematical texts and equations by the lowercase Greek letter σ (sigma) for the population standard deviation, or the Latin letter s for the sample standard deviation.

The standard deviation of a random variable, sample, statistical population, data set, or probability distribution is the square root of its variance. (For a finite population, variance is the average of the squared deviations from the mean.) A useful property of the standard deviation is that, unlike the variance, it is expressed in the same unit as the data. Standard deviation can also be used to calculate standard error for a finite sample, and to determine statistical significance.
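The population/sample distinction shows up directly in NumPy's `ddof` argument; a short sketch (the data values are ours):

```python
import numpy as np

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

# Population convention divides by n; sample convention divides by n - 1.
print(np.std(data, ddof=0))   # sigma = 2.0 (population SD)
print(np.std(data, ddof=1))   # s ~= 2.14 (sample SD)
print(np.sqrt(np.var(data)))  # the SD is the square root of the variance

# Standard error of the mean for this finite sample:
print(np.std(data, ddof=1) / np.sqrt(len(data)))  # ~= 0.76
```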

Variance in the context of Pairwise independence

In probability theory, a pairwise independent collection of random variables is a set of random variables any two of which are independent. Any collection of mutually independent random variables is pairwise independent, but some pairwise independent collections are not mutually independent. Pairwise independent random variables with finite variance are uncorrelated.

A pair of random variables X and Y are independent if and only if the random vector (X, Y) with joint cumulative distribution function (CDF) $F_{X,Y}(x, y)$ satisfies

$$F_{X,Y}(x, y) = F_X(x)\, F_Y(y) \quad \text{for all } x, y,$$

that is, if and only if the joint CDF factors into the product of the marginal CDFs.
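A classic way to see that pairwise independence is weaker than mutual independence is the XOR construction sketched below (our example, not from the text): X and Y are independent fair bits and Z = X XOR Y. Any two of the three are independent, yet Z is fully determined by the other two.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.integers(0, 2, size=500_000)
y = rng.integers(0, 2, size=500_000)
z = x ^ y  # determined by x and y, yet pairwise independent of each

# Pairwise independence: every pair is uncorrelated (off-diagonals ~= 0).
print(np.corrcoef([x, y, z]).round(3))

# But not mutually independent: P(X=1, Y=1, Z=1) = 0, not (1/2)^3 = 1/8.
print(np.mean((x == 1) & (y == 1) & (z == 1)))
```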

Variance in the context of Financial risk

Financial risk is any of various types of risk associated with financing, including financial transactions that include company loans in risk of default. Often it is understood to include only downside risk, meaning the potential for financial loss and uncertainty about its extent.

Modern portfolio theory, initiated by Harry Markowitz in his 1952 paper "Portfolio Selection", is the discipline and study which pertains to managing market and financial risk. In modern portfolio theory, the variance (or standard deviation) of a portfolio is used as the definition of risk.
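Under this definition, the risk of a portfolio with weight vector w and covariance matrix Σ of asset returns is the quadratic form wᵀΣw; a minimal NumPy sketch (the weights and covariance values are hypothetical):

```python
import numpy as np

# Hypothetical two-asset portfolio: weights and covariance matrix of returns.
w = np.array([0.6, 0.4])
cov = np.array([[0.040, 0.006],
                [0.006, 0.010]])

port_var = w @ cov @ w       # portfolio variance: w^T Sigma w
port_sd = np.sqrt(port_var)  # risk reported as standard deviation (volatility)
print(port_var, port_sd)     # 0.01888, ~= 0.137
```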

Variance in the context of Sample mean

The sample mean (sample average) or empirical mean (empirical average), and the sample covariance or empirical covariance are statistics computed from a sample of data on one or more random variables.

The sample mean is the average value (or mean value) of a sample of numbers taken from a larger population of numbers, where "population" indicates not number of people but the entirety of relevant data, whether collected or not. A sample of 40 companies' sales from the Fortune 500 might be used for convenience instead of looking at the population, all 500 companies' sales. The sample mean is used as an estimator for the population mean, the average value in the entire population, where the estimate is more likely to be close to the population mean if the sample is large and representative. The reliability of the sample mean is estimated using the standard error, which in turn is calculated using the variance of the sample. If the sample is random, the standard error falls with the size of the sample and the sample mean's distribution approaches the normal distribution as the sample size increases.
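Putting the pieces together, a short NumPy sketch (the population parameters are hypothetical) drawing one sample without replacement and computing its mean and standard error:

```python
import numpy as np

rng = np.random.default_rng(42)
population = rng.normal(100.0, 15.0, size=10_000)  # hypothetical population

sample = rng.choice(population, size=40, replace=False)
mean = sample.mean()
se = sample.std(ddof=1) / np.sqrt(len(sample))  # standard error of the mean
print(mean, se)  # the standard error falls as the sample size grows
```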
