Inverse probability
In probability theory, inverse probability is an obsolete term for the probability distribution of an unobserved variable.

Today, the problem of determining an unobserved variable (by whatever method) is called inferential statistics; the method of inverse probability (assigning a probability distribution to an unobserved variable) is called Bayesian probability; the "distribution" of an unobserved variable given data is instead the likelihood function (which is not a probability distribution); and the distribution of an unobserved variable given both data and a prior distribution is the posterior distribution. The development of the field and terminology from "inverse probability" to "Bayesian probability" is described by Fienberg (2006). The term "Bayesian", which displaced "inverse probability", was in fact introduced by R. A. Fisher as a derogatory term.
The term "inverse probability" appears in an 1837 paper of De Morgan
, in reference to Laplace's method of probability (developed in a 1774 paper, which independently discovered and popularized Bayesian methods, and 1812 book), though the term "inverse probability" does not occur in these.
Inverse probability, variously interpreted, was the dominant approach to statistics until the development of frequentism in the early 20th century by R. A. Fisher, Jerzy Neyman, and Egon Pearson. Following the development of frequentism, the terms frequentist and Bayesian developed to contrast these approaches, and became common in the 1950s.
The inverse probability problem (in the 18th and 19th centuries) was the problem of estimating a parameter from experimental data in the experimental sciences, especially astronomy and biology. A simple example would be the problem of estimating the position of a star in the sky (at a certain time on a certain date) for purposes of navigation. Given the data, one must estimate the true position (probably by averaging). This problem would now be considered one of inferential statistics.
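The classical estimate-by-averaging approach can be sketched in a few lines. This is a hypothetical illustration, not a historical procedure: the measurement values are invented, and the mean is used as the point estimate of the true quantity.

```python
import statistics

# Hypothetical repeated noisy measurements of a star's altitude, in degrees.
measurements = [41.2, 40.8, 41.0, 41.3, 40.7]

# The classical estimate of the true altitude is the arithmetic mean,
# which is also the maximum-likelihood estimate under Gaussian measurement error.
estimate = statistics.mean(measurements)
```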
The terms "direct probability" and "inverse probability" were in use until the middle part of the 20th century, when the terms "likelihood function
" and "posterior distribution" became prevalent.
Details
In modern terms, given a probability distribution p(x|θ) for an observable quantity x conditional on an unobserved variable θ, the "inverse probability" is the posterior distribution p(θ|x), which depends both on the likelihood function (the inversion of the probability distribution) and a prior distribution. The distribution p(x|θ) itself is called the direct probability.
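The relationship between the direct probability p(x|θ) and the "inverse probability" p(θ|x) can be sketched with a toy discrete example. This is a minimal illustration assuming a binomial direct probability (x heads in n coin tosses with unknown bias θ) and a uniform prior over a grid of candidate θ values; the function names are illustrative, not standard.

```python
from math import comb

def direct_probability(x, n, theta):
    # Direct probability p(x | theta): binomial probability of x heads in n tosses.
    return comb(n, x) * theta**x * (1 - theta)**(n - x)

def inverse_probability(x, n, thetas, prior):
    # "Inverse probability" p(theta | x): posterior over the grid, computed
    # via Bayes' theorem as likelihood * prior, then normalized to sum to 1.
    unnormalized = [direct_probability(x, n, t) * p for t, p in zip(thetas, prior)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

thetas = [i / 100 for i in range(1, 100)]   # candidate values of the bias theta
prior = [1 / len(thetas)] * len(thetas)     # uniform prior distribution
post = inverse_probability(7, 10, thetas, prior)  # observed 7 heads in 10 tosses
best = thetas[post.index(max(post))]        # posterior mode
```

With a uniform prior the posterior is proportional to the likelihood function, so the posterior mode here coincides with the maximum-likelihood value 7/10 = 0.7.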