Outcome (probability) in the context of "Relative frequency"

⭐ Core Definition: Outcome (probability)

In probability theory, an outcome is a possible result of an experiment or trial. Each possible outcome of a particular experiment is unique, and different outcomes are mutually exclusive (only one outcome will occur on each trial of the experiment). All of the possible outcomes of an experiment form the elements of a sample space.
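
As a rough illustration of these definitions, here is a minimal Python sketch (a die-roll experiment is assumed purely for illustration):

```python
import random

# A minimal sketch (die-roll example assumed): the sample space collects every
# possible outcome, and each trial of the experiment produces exactly one of them.
sample_space = {1, 2, 3, 4, 5, 6}

outcome = random.choice(sorted(sample_space))    # one trial, one outcome
print(outcome, outcome in sample_space)          # the outcome is always an element of the sample space
```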

👉 Outcome (probability) in the context of Relative frequency

In probability theory and statistics, the empirical probability, relative frequency, or experimental probability of an event is the ratio of the number of outcomes in which a specified event occurs to the total number of trials; that is, it is obtained not from a theoretical sample space but from an actual experiment. More generally, empirical probability estimates probabilities from experience and observation.

Given an event A in a sample space, the relative frequency of A is the ratio m/n, m being the number of outcomes in which the event A occurs, and n being the total number of trials of the experiment.
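
The ratio can be estimated by actually running the trials. The following minimal Python sketch (a fair coin is assumed purely for illustration) counts how often the event "heads" occurs in n simulated trials:

```python
import random

# A minimal sketch (fair-coin example assumed): estimate the relative frequency m/n
# of the event "heads" from an actual run of n trials rather than from theory.
n = 10_000
m = sum(1 for _ in range(n) if random.choice(["heads", "tails"]) == "heads")

print(m / n)   # empirical probability of heads; settles near the theoretical 0.5
```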

Outcome (probability) in the context of Propensity probability

The propensity theory of probability is a probability interpretation in which the probability is thought of as a physical propensity, disposition, or tendency of a given type of situation to yield an outcome of a certain kind, or to yield a long-run relative frequency of such an outcome.

Propensities are not relative frequencies, but purported causes of the observed stable relative frequencies. Propensities are invoked to explain why repeating a certain kind of experiment will generate a given outcome type at a persistent rate. Stable long-run frequencies are a manifestation of invariant single-case probabilities. Frequentists are unable to take this approach, since relative frequencies do not exist for single tosses of a coin, but only for large ensembles or collectives. These single-case probabilities are known as propensities or chances.

Outcome (probability) in the context of Event (probability theory)

Typically, when the sample space is finite, any subset of the sample space is an event (that is, all elements of the power set of the sample space are defined as events). However, this approach does not work well in cases where the sample space is uncountably infinite. So, when defining a probability space it is possible, and often necessary, to exclude certain subsets of the sample space from being events.
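
For a finite sample space the "every subset is an event" convention is easy to make concrete. The following minimal Python sketch (a coin flip is assumed purely for illustration) enumerates the power set:

```python
from itertools import combinations

# A minimal sketch (coin-flip example assumed): for a finite sample space,
# every element of the power set is a valid event.
def power_set(sample_space):
    s = sorted(sample_space)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

events = power_set({"heads", "tails"})
print(events)   # [set(), {'heads'}, {'tails'}, {'heads', 'tails'}] -> 2**2 = 4 events
```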

Outcome (probability) in the context of Random variable

A random variable (also called random quantity, aleatory variable, or stochastic variable) is a mathematical formalization of a quantity or object which depends on random events. The term 'random variable' in its mathematical definition refers to neither randomness nor variability but instead is a mathematical function in which

  • the domain is the set of possible outcomes in a sample space (e.g. the set {heads, tails}, the possible upper sides of a flipped coin); and
  • the range is a measurable space (e.g. corresponding to the domain above, the range might be the set {−1, 1} if, say, heads is mapped to −1 and tails to 1; see the sketch below). Typically, the range of a random variable is a subset of the real numbers.
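
To make the two points above concrete, here is a minimal Python sketch (the names and the coin example are illustrative) of a random variable as a plain mapping from outcomes to real numbers:

```python
import random

# A minimal sketch (names are illustrative): the random variable X as a function
# from the sample space {"heads", "tails"} to the measurable space {-1, 1}.
X = {"heads": -1, "tails": 1}

outcome = random.choice(["heads", "tails"])   # one trial yields exactly one outcome
print(outcome, "->", X[outcome])              # the realized real-valued result of X
```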

Informally, randomness typically represents some fundamental element of chance, such as in the roll of a die; it may also represent uncertainty, such as measurement error. However, the interpretation of probability is philosophically complicated, and even in specific cases is not always straightforward. The purely mathematical analysis of random variables is independent of such interpretational difficulties, and can be based upon a rigorous axiomatic setup.

Outcome (probability) in the context of Jointly exhaustive

In probability theory and logic, a set of events is jointly or collectively exhaustive if at least one of the events must occur. For example, when rolling a six-sided die, the events 1, 2, 3, 4, 5, and 6 are collectively exhaustive, because they encompass the entire range of possible outcomes.

Another way to describe collectively exhaustive events is that their union must cover all the events within the entire sample space. For example, events A and B are said to be collectively exhaustive if A ∪ B = S, where S is the sample space.
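
The union condition is straightforward to check for finite sample spaces. The following minimal Python sketch (a die roll is assumed purely for illustration) tests whether two events are collectively exhaustive:

```python
# A minimal sketch (die-roll example assumed): events are collectively exhaustive
# exactly when their union equals the sample space S.
S = {1, 2, 3, 4, 5, 6}
A = {1, 2, 3}
B = {3, 4, 5, 6}

print((A | B) == S)   # True: together A and B cover every possible outcome
```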

Outcome (probability) in the context of Experiment (probability theory)

In probability theory, an experiment or trial is the mathematical model of any procedure that can be infinitely repeated and has a well-defined set of possible outcomes, known as the sample space. An experiment is said to be random if it has more than one possible outcome, and deterministic if it has only one. A random experiment that has exactly two (mutually exclusive) possible outcomes is known as a Bernoulli trial.

When an experiment is conducted, one (and only one) outcome results, although this outcome may be included in any number of events, all of which would be said to have occurred on that trial. After conducting many trials of the same experiment and pooling the results, an experimenter can begin to assess the empirical probabilities of the various outcomes and events that can occur in the experiment and apply the methods of statistical analysis.
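
The following minimal Python sketch (a fair coin standing in for a Bernoulli trial, purely for illustration) pools many trials and reports the empirical probability of each outcome:

```python
import random
from collections import Counter

# A minimal sketch of a Bernoulli trial (fair coin assumed): exactly two mutually
# exclusive outcomes, repeated many times and then pooled.
def trial():
    return random.choice(["success", "failure"])

n = 10_000
counts = Counter(trial() for _ in range(n))

# Pooling the results gives an empirical probability for each outcome.
for outcome, m in counts.items():
    print(outcome, m / n)   # each ratio settles near 0.5 for a fair coin
```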

Outcome (probability) in the context of Sample space

In probability theory, the sample space (also called sample description space, possibility space, or outcome space) of an experiment or random trial is the set of all possible outcomes or results of that experiment. A sample space is usually denoted using set notation, and the possible ordered outcomes, or sample points, are listed as elements in the set. It is common to refer to a sample space by the labels S, Ω, or U (for "universal set"). The elements of a sample space may be numbers, words, letters, or symbols. A sample space can be finite, countably infinite, or uncountably infinite.

A subset of the sample space is an event, denoted by E. If the outcome of an experiment is included in E, then the event E is said to have occurred.
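
Here is a minimal Python sketch (a die roll is assumed purely for illustration) of a sample space, an event as a subset, and the check that the event occurred on a given trial:

```python
import random

# A minimal sketch (die-roll example assumed): an event E is a subset of the
# sample space S, and E occurs whenever the trial's outcome lands in E.
S = {1, 2, 3, 4, 5, 6}
E = {2, 4, 6}                 # the event "an even number is rolled"

outcome = random.choice(sorted(S))
print(outcome, "E occurred" if outcome in E else "E did not occur")
```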

Outcome (probability) in the context of Generative model

In statistical classification, two main approaches are called the generative approach and the discriminative approach. These compute classifiers by different approaches, differing in the degree of statistical modelling. Terminology is inconsistent, but three major types can be distinguished:

  1. A generative model is a statistical model of the joint probability distribution on a given observable variable X and target variable Y; a generative model can be used to "generate" random instances (outcomes) of an observation x.
  2. A discriminative model is a model of the conditional probability of the target Y, given an observation x. It can be used to "discriminate" the value of the target variable Y, given an observation x.
  3. Classifiers computed without using a probability model are also referred to loosely as "discriminative".

The distinction between these last two classes is not consistently made; Jebara (2004) refers to these three classes as generative learning, conditional learning, and discriminative learning, but Ng & Jordan (2002) only distinguish two classes, calling them generative classifiers (joint distribution) and discriminative classifiers (conditional distribution or no distribution), not distinguishing between the latter two classes. Analogously, a classifier based on a generative model is a generative classifier, while a classifier based on a discriminative model is a discriminative classifier, though this term also refers to classifiers that are not based on a model.
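
The following minimal Python sketch (the toy data and function names are illustrative, not drawn from any particular library) contrasts the two probabilistic approaches on a tiny discrete dataset; for brevity the conditional model here is derived from the same counts, whereas a real discriminative learner would fit P(Y | X) directly:

```python
import random
from collections import Counter

# Toy labelled data (purely illustrative): pairs of (observation x, target y).
data = [("a", 0), ("a", 0), ("a", 1), ("b", 1), ("b", 1), ("b", 0)]
n = len(data)

# Generative view: estimate the joint distribution P(X, Y) from counts.
joint = {pair: count / n for pair, count in Counter(data).items()}

# The joint model can "generate" new random instances of (x, y).
pairs, weights = zip(*joint.items())
print(random.choices(pairs, weights=weights, k=3))

# Discriminative view: only the conditional P(Y | X) is needed to
# discriminate the value of the target given an observation x.
def p_y_given_x(y, x):
    p_x = sum(p for (xi, _), p in joint.items() if xi == x)
    return joint.get((x, y), 0.0) / p_x

print(p_y_given_x(1, "b"))   # = 2/3 for the toy data above
```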
