Pivotal quantity
In statistics, a pivotal quantity or pivot is a function of observations and unobservable parameters whose probability distribution does not depend on the unknown parameters.

Note that a pivotal quantity need not be a statistic—the function and its value can depend on the parameters of the model, but its distribution must not. If it is a statistic, then it is known as an ancillary statistic.
More formally, given an independent and identically distributed sample $X = (X_1, X_2, \ldots, X_n)$ from a distribution with parameter $\theta$, a function $g(X, \theta)$ is a pivotal quantity if the distribution of $g(X, \theta)$ is independent of $\theta$. Pivotal quantities are commonly used for normalization to allow data from different data sets to be compared.
It is relatively easy to construct pivots for location and scale parameters: for the former we form differences so that location cancels, for the latter ratios so that scale cancels.
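For instance, taking an i.i.d. $N(\mu, \sigma^2)$ sample as a convenient illustration (a standard special case, not drawn from the original text), the difference $\bar{X} - \mu$ and the ratio $(n-1)s^2/\sigma^2$ are the familiar constructions:

$$\bar{X} - \mu \sim N\!\left(0, \tfrac{\sigma^2}{n}\right) \quad\text{(pivotal when $\sigma$ is known)}, \qquad \frac{(n-1)s^2}{\sigma^2} \sim \chi^2_{n-1}, \qquad \frac{\bar{X} - \mu}{s/\sqrt{n}} \sim t_{n-1},$$

and the last expression combines the two so that neither parameter appears in the resulting distribution.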
Pivotal quantities are fundamental to the construction of test statistics, as they allow the statistic not to depend on parameters – for example, Student's t-statistic is for a normal distribution with unknown variance (and mean). They also provide one method of constructing confidence intervals, and the use of pivotal quantities improves performance of the bootstrap. In the form of ancillary statistics, they can be used to construct frequentist prediction intervals (predictive confidence intervals).
Normal distribution

One of the simplest pivotal quantities is the z-score; given a normal distribution with mean $\mu$ and variance $\sigma^2$, and an observation $x$, the z-score

$$z = \frac{x - \mu}{\sigma}$$

has distribution $N(0, 1)$ – a normal distribution with mean 0 and variance 1. Similarly, since the $n$-sample sample mean has sampling distribution $N(\mu, \sigma^2/n)$, the z-score of the mean

$$z = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}}$$

also has distribution $N(0, 1)$. Note that while these functions depend on the parameters – and thus one can only compute them if the parameters are known (they are not statistics) – the distribution is independent of the parameters.
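This pivotal property can be checked by simulation. The sketch below (using NumPy, with arbitrary parameter values chosen only for illustration) shows that the z-score of the mean has essentially the same standard normal distribution no matter which $\mu$ and $\sigma$ generated the data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 25  # sample size (arbitrary choice)

def z_score_of_mean(mu, sigma, n_sim=100_000):
    """Simulate the z-score of the sample mean for given (known) mu and sigma."""
    samples = rng.normal(mu, sigma, size=(n_sim, n))
    return (samples.mean(axis=1) - mu) / (sigma / np.sqrt(n))

# Two very different parameter settings give statistically identical pivots.
for mu, sigma in [(0.0, 1.0), (50.0, 7.5)]:
    z = z_score_of_mean(mu, sigma)
    print(f"mu={mu}, sigma={sigma}: mean={z.mean():+.3f}, var={z.var():.3f}")
# Both lines report a mean near 0 and a variance near 1, as expected for N(0, 1).
```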
Given $n$ independent, identically distributed (i.i.d.) observations $X = (X_1, X_2, \ldots, X_n)$ from the normal distribution with unknown mean $\mu$ and variance $\sigma^2$, a pivotal quantity can be obtained from the function:

$$g(x, X) = \sqrt{n}\,\frac{x - \bar{X}}{s},$$

where

$$\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$$

and

$$s^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(X_i - \bar{X}\right)^2$$

are unbiased estimates of $\mu$ and $\sigma^2$, respectively. The function $g(x, X)$ is the Student's t-statistic for a new value $x$, to be drawn from the same population as the already observed set of values $X$.
Using $x = \mu$, the function $g(\mu, X)$ becomes a pivotal quantity, which is also distributed by the Student's t-distribution with $\nu = n - 1$ degrees of freedom. As required, even though $\mu$ appears as an argument to the function $g$, the distribution of $g(\mu, X)$ does not depend on the parameters $\mu$ or $\sigma$ of the normal probability distribution that governs the observations $X_1, \ldots, X_n$.

This can be used to compute a prediction interval for the next observation $X_{n+1}$; see Prediction interval: Normal distribution.
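A minimal sketch of that use, assuming SciPy is available and using made-up data: since $(X_{n+1} - \bar{X})/(s\sqrt{1 + 1/n})$ follows a $t_{n-1}$ distribution, quantiles of the t-distribution give the prediction interval $\bar{X} \pm t_{n-1,\,1-\alpha/2}\, s\sqrt{1 + 1/n}$.

```python
import numpy as np
from scipy import stats

# Hypothetical observed sample (values chosen only for illustration).
x = np.array([4.1, 5.3, 4.8, 5.0, 4.6, 5.2, 4.9, 4.4])
n = len(x)
xbar = x.mean()
s = x.std(ddof=1)  # unbiased estimate of the variance uses ddof=1

alpha = 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)

# 95% prediction interval for the next observation X_{n+1}:
# xbar +/- t_{n-1, 1-alpha/2} * s * sqrt(1 + 1/n)
half_width = t_crit * s * np.sqrt(1 + 1 / n)
print(f"prediction interval: ({xbar - half_width:.2f}, {xbar + half_width:.2f})")
```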
Bivariate normal distribution

In more complicated cases, it is impossible to construct exact pivots. However, having approximate pivots improves convergence to asymptotic normality.

Suppose a sample of size $n$ of vectors $(X_i, Y_i)'$ is taken from a bivariate normal distribution with unknown correlation $\rho$.
An estimator of $\rho$ is the sample (Pearson, moment) correlation

$$r = \frac{\frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})}{s_X\, s_Y},$$

where $s_X^2$ and $s_Y^2$ are the sample variances of $X$ and $Y$. The sample statistic $r$ has an asymptotically normal distribution:

$$\sqrt{n}\,\frac{r - \rho}{1 - \rho^2} \;\Rightarrow\; N(0, 1).$$
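This limiting statement can be checked numerically. The sketch below (sample size, $\rho$, and seed are arbitrary illustrative choices) simulates bivariate normal data and examines the studentized correlation $\sqrt{n}\,(r - \rho)/(1 - \rho^2)$; for large $n$ its sample mean and variance come out close to 0 and 1.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, n, n_sim = 0.6, 500, 10_000  # arbitrary illustrative choices
cov = np.array([[1.0, rho], [rho, 1.0]])

pivots = np.empty(n_sim)
for i in range(n_sim):
    xy = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
    r = np.corrcoef(xy[:, 0], xy[:, 1])[0, 1]           # sample correlation
    pivots[i] = np.sqrt(n) * (r - rho) / (1 - rho**2)   # approximately N(0, 1)

print(f"mean ~ {pivots.mean():+.3f}, variance ~ {pivots.var():.3f}")
```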
However, a variance-stabilizing transformation

$$z = \tanh^{-1} r = \frac{1}{2}\ln\frac{1 + r}{1 - r},$$

known as Fisher's z transformation of the correlation coefficient, makes the distribution of $z$ asymptotically independent of unknown parameters:

$$\sqrt{n}\,(z - \zeta) \;\Rightarrow\; N(0, 1),$$

where $\zeta = \tanh^{-1}\rho$ is the corresponding population parameter. For finite sample sizes $n$, the random variable $z$ will have a distribution closer to normal than that of $r$. An even closer approximation to the standard normal distribution is obtained by using a better approximation for the exact variance; the usual form is

$$\operatorname{Var}(z) \approx \frac{1}{n - 3}.$$
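A sketch of how this approximate pivot is typically used, assuming SciPy and synthetic data generated only for illustration: transform $r$ to $z$, form a normal-theory interval on the $z$ scale using the approximate variance $1/(n-3)$, and map the endpoints back to the correlation scale with $\tanh$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic bivariate data with true correlation 0.6 (illustrative only).
n, rho = 80, 0.6
xy = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
r = np.corrcoef(xy[:, 0], xy[:, 1])[0, 1]

z = np.arctanh(r)              # Fisher's z transformation of r
se = 1.0 / np.sqrt(n - 3)      # approximate standard deviation of z
z_crit = stats.norm.ppf(0.975)

# Approximate 95% confidence interval for rho, back-transformed with tanh.
lo, hi = np.tanh(z - z_crit * se), np.tanh(z + z_crit * se)
print(f"r = {r:.3f}, 95% CI for rho: ({lo:.3f}, {hi:.3f})")
```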
Robustness

From the point of view of robust statistics, pivotal quantities are robust to changes in the parameters – indeed, independent of the parameters – but not in general robust to changes in the model, such as violations of the assumption of normality. This is fundamental to the robust critique of non-robust statistics, often derived from pivotal quantities: such statistics may be robust within the family, but are not robust outside it.