Independence (probability theory)

In the context of probability theory, independence between multiple events can be defined in different ways. What distinguishes 'mutual independence' from 'pairwise independence'?


Core Definition: Independence (probability theory)

Independence is a fundamental notion in probability theory, as in statistics and the theory of stochastic processes. Two events are independent, statistically independent, or stochastically independent if, informally speaking, the occurrence of one does not affect the probability of occurrence of the other or, equivalently, does not affect the odds. Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other.
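In symbols (standard notation, stated here for concreteness), two events A and B are independent precisely when

```latex
\[
  P(A \cap B) \;=\; P(A)\,P(B),
  \qquad\text{equivalently (when } P(B) > 0\text{)}\qquad
  P(A \mid B) \;=\; P(A).
\]
```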

When dealing with collections of more than two events, two notions of independence need to be distinguished. The events are called pairwise independent if any two events in the collection are independent of each other, while mutual independence (or collective independence) of events means, informally speaking, that each event is independent of any combination of other events in the collection. A similar notion exists for collections of random variables. Mutual independence implies pairwise independence, but not the other way around. In the standard literature of probability theory, statistics, and stochastic processes, independence without further qualification usually refers to mutual independence.
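A classic way to see that pairwise independence does not imply mutual independence is Bernstein's two-coin example. The short Python sketch below (the event names A, B, C are ours, chosen for illustration) enumerates the four equally likely outcomes and checks both conditions:

```python
from itertools import product

# Two fair coins: four equally likely outcomes HH, HT, TH, TT.
outcomes = list(product("HT", repeat=2))
p = 1 / len(outcomes)

def prob(event):
    return len(event) * p

A = {o for o in outcomes if o[0] == "H"}   # first coin shows heads
B = {o for o in outcomes if o[1] == "H"}   # second coin shows heads
C = {o for o in outcomes if o[0] == o[1]}  # the two coins agree

# Pairwise independence holds: P(X ∩ Y) = P(X) P(Y) for every pair.
for X, Y in [(A, B), (A, C), (B, C)]:
    assert prob(X & Y) == prob(X) * prob(Y)

# Mutual independence fails: P(A ∩ B ∩ C) = 1/4, not P(A)P(B)P(C) = 1/8.
print(prob(A & B & C), prob(A) * prob(B) * prob(C))  # 0.25 0.125
```

Each of A, B, C has probability 1/2 and every pair intersects with probability 1/4, yet knowing any two of the events determines the third, so the collection is not mutually independent.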

Independence (probability theory) in the context of Number theory

Number theory is a branch of pure mathematics devoted primarily to the study of the integers and arithmetic functions. Number theorists study prime numbers as well as the properties of mathematical objects constructed from integers (for example, rational numbers), or defined as generalizations of the integers (for example, algebraic integers).

Integers can be considered either in themselves or as solutions to equations (Diophantine geometry). Questions in number theory can often be understood through the study of analytical objects, such as the Riemann zeta function, that encode properties of the integers, primes or other number-theoretic objects in some fashion (analytic number theory). One may also study real numbers in relation to rational numbers, as for instance how irrational numbers can be approximated by fractions (Diophantine approximation).

Independence (probability theory) in the context of Fair coin

In probability theory and statistics, a sequence of independent Bernoulli trials with probability 1/2 of success on each trial is metaphorically called a fair coin. One for which the probability is not 1/2 is called a biased or unfair coin. In theoretical studies, the assumption that a coin is fair is often made by referring to an ideal coin.
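A minimal simulation of this idealized model (a sketch using Python's standard random module; the function name is ours):

```python
import random

def fair_coin_flips(n, seed=None):
    """n independent Bernoulli(1/2) trials, reported as 'H' or 'T'."""
    rng = random.Random(seed)
    return [rng.choice("HT") for _ in range(n)]

flips = fair_coin_flips(10_000, seed=0)
print(flips[:8], flips.count("H") / len(flips))  # proportion of heads ≈ 0.5
```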

John Edmund Kerrich performed experiments in coin flipping and found that a coin made from a wooden disk about the size of a crown and coated on one side with lead landed heads (wooden side up) 679 times out of 1000. In this experiment the coin was tossed by balancing it on the forefinger, flipping it with the thumb so that it spun through the air for about a foot before landing on a flat cloth spread over a table. Edwin Thompson Jaynes claimed that when a coin is caught in the hand, instead of being allowed to bounce, any physical bias in the coin is insignificant compared to the method of the toss: with sufficient practice, a coin can be made to land heads 100% of the time. Checking whether a coin is fair is a well-established pedagogical problem in teaching statistics.
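As a quick illustration of such a check (a sketch assuming SciPy is available; scipy.stats.binomtest is one standard choice, not something prescribed by the text above), we can ask whether Kerrich's 679 heads in 1000 tosses is consistent with a fair coin:

```python
from scipy.stats import binomtest

# Kerrich's data: 679 heads in 1000 tosses, tested against p = 1/2.
result = binomtest(k=679, n=1000, p=0.5)

print(result.pvalue)               # far below any usual threshold: reject fairness
print(result.proportion_ci(0.95))  # 95% confidence interval for P(heads)
```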

Independence (probability theory) in the context of Odds ratio

An odds ratio (OR) is a statistic that quantifies the strength of the association between two events, A and B. The odds ratio is defined as the ratio of the odds of event A taking place in the presence of B to the odds of A in the absence of B. By symmetry, the odds ratio equally gives the ratio of the odds of B occurring in the presence of A to the odds of B in the absence of A. Two events are independent if and only if the OR equals 1, i.e., the odds of one event are the same whether the other event is present or absent. If the OR is greater than 1, then A and B are associated (correlated) in the sense that, compared to the absence of B, the presence of B raises the odds of A, and symmetrically the presence of A raises the odds of B. Conversely, if the OR is less than 1, then A and B are negatively correlated: the presence of one event reduces the odds of the other event occurring.

Note that the odds ratio is symmetric in the two events, and no causal direction is implied (correlation does not imply causation): an OR greater than 1 does not establish that B causes A, or that A causes B.
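A minimal sketch of the computation (the function names below are ours, for illustration), together with the independence criterion OR = 1:

```python
def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

def odds_ratio(p_a_given_b, p_a_given_not_b):
    """Odds of A when B is present, divided by odds of A when B is absent."""
    return odds(p_a_given_b) / odds(p_a_given_not_b)

# Independence: P(A | B) = P(A | not B), so the OR is exactly 1.
print(odds_ratio(0.3, 0.3))  # 1.0

# Positive association: the presence of B raises the odds of A (OR > 1).
print(odds_ratio(0.6, 0.3))  # 3.5
```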

Independence (probability theory) in the context of Set estimation

In statistics, a random vector x is classically represented by a probability density function. In a set-membership approach, or set estimation, x is instead represented by a set X to which x is assumed to belong; that is, the support of the probability density function of x is contained in X. On the one hand, representing random vectors by sets requires fewer assumptions on the random variables (such as independence), and nonlinearities are easier to handle. On the other hand, a probability density function provides more accurate information than a set merely enclosing its support.
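The following toy sketch conveys the flavor (intervals represented as Python tuples; the helper names are ours, not a standard library): an enclosure is propagated through a nonlinearity with no independence assumption at all, at the cost of some pessimism.

```python
def interval_sqr(a):
    """Guaranteed enclosure of {x**2 : x in [a]} (handles a sign change)."""
    lo, hi = a
    if lo <= 0.0 <= hi:
        return (0.0, max(lo * lo, hi * hi))
    return tuple(sorted((lo * lo, hi * hi)))

def interval_add(a, b):
    """Guaranteed enclosure of {x + y : x in [a], y in [b]}."""
    return (a[0] + b[0], a[1] + b[1])

x = (-1.0, 2.0)            # the only assumption: x lies in [-1, 2]
y = interval_sqr(x)        # y = x**2 is guaranteed to lie in [0, 4]
print(interval_add(x, y))  # x + y lies in (-1.0, 6.0): valid with no independence
                           # assumption, though conservative (true range is [-0.25, 6])
```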
