In statistics, an effect size is a value measuring the strength of the relationship between two variables in a population, or a sample-based estimate of that quantity. It can refer to the value of a statistic calculated from a sample of data, the value of a parameter for a hypothetical population, or the equation that operationalizes how statistics or parameters lead to the effect size value. Examples of effect sizes include the correlation between two variables, a regression coefficient, the mean difference between groups, and the risk of a particular event (such as a heart attack). Effect sizes complement statistical hypothesis testing and play an important role in statistical power analyses used to determine the sample size required for new experiments. Effect size calculations are fundamental to meta-analysis, which aims to provide a combined effect size based on data from multiple studies. The group of data-analysis methods concerned with effect sizes is referred to as estimation statistics.
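As a concrete illustration of sample-based effect sizes, the following Python sketch computes two of the quantities mentioned above, the Pearson correlation and the standardized mean difference (Cohen's d with a pooled standard deviation). The data and variable names are invented for illustration only.

```python
import numpy as np

def pearson_r(x, y):
    """Sample Pearson correlation between two paired variables."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.corrcoef(x, y)[0, 1]

def cohens_d(group1, group2):
    """Standardized mean difference (Cohen's d) using a pooled SD."""
    g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
    n1, n2 = len(g1), len(g2)
    # Pooled standard deviation from the unbiased (n - 1) group variances
    pooled_sd = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1))
                        / (n1 + n2 - 2))
    return (g1.mean() - g2.mean()) / pooled_sd

# Toy data (illustrative only)
treatment = [5.1, 6.2, 5.8, 6.5, 5.9]
control = [4.8, 5.0, 5.3, 4.9, 5.2]
print(cohens_d(treatment, control))
```

Both functions return a single number summarizing the strength of a relationship, independent of any significance test.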
Effect size is an essential component in evaluating the strength of a statistical claim, and it is the first item (magnitude) in the MAGIC criteria. The standard error of an effect size estimate (the standard deviation of its sampling distribution) is of critical importance, since it indicates how much uncertainty is attached to the observed measurement; an estimate with a very large standard error is nearly uninformative. In meta-analysis, which aims to summarize multiple effect sizes into a single estimate, this uncertainty is used to weight the contribution of each study, so that larger, more precise studies count for more than smaller ones. The uncertainty is calculated differently for each type of effect size, but generally requires only the study's sample size (N) or the number of observations (n) in each group.
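A common way of carrying out this weighting is inverse-variance weighting: each study's effect size is weighted by the reciprocal of its estimated variance, so more precise studies dominate the pooled result. The Python sketch below illustrates a fixed-effect version of this for standardized mean differences, using the usual large-sample approximation for the variance of Cohen's d, which depends only on the per-group sample sizes; the study values are invented for illustration.

```python
import numpy as np

def var_cohens_d(d, n1, n2):
    """Approximate large-sample variance of Cohen's d,
    requiring only the per-group sample sizes."""
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

def fixed_effect_pool(ds, n1s, n2s):
    """Inverse-variance weighted (fixed-effect) pooled effect size."""
    ds = np.asarray(ds, float)
    variances = np.array([var_cohens_d(d, n1, n2)
                          for d, n1, n2 in zip(ds, n1s, n2s)])
    weights = 1.0 / variances            # larger, more precise studies get more weight
    pooled = np.sum(weights * ds) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    return pooled, pooled_se

# Invented study results: effect size d and the two group sizes per study
ds = [0.45, 0.30, 0.60]
n1s = [20, 120, 35]
n2s = [22, 118, 40]
print(fixed_effect_pool(ds, n1s, n2s))
```

In this sketch the second study, with the largest samples and hence the smallest variance, receives the greatest weight in the pooled estimate.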