In Bayesian statistics, a credible interval is an interval within which an unobserved parameter value falls with a particular probability. It is an interval in the domain of a posterior probability distribution or a predictive distribution.[1] The generalisation to multivariate problems is the credible region.

Credible intervals are analogous to confidence intervals and confidence regions in frequentist statistics,[2] although they differ on a philosophical basis:[3] Bayesian intervals treat their bounds as fixed and the estimated parameter as a random variable, whereas frequentist confidence intervals treat their bounds as random variables and the parameter as a fixed value. Also, Bayesian credible intervals use (and indeed, require) knowledge of the situation-specific prior distribution, while frequentist confidence intervals do not.

For example, in an experiment that determines the distribution of possible values of the parameter θ, if the subjective probability that θ lies between 35 and 45 is 0.95, then the interval from 35 to 45 is a 95% credible interval for θ.

Choosing a credible interval

Credible intervals are not unique for a given posterior distribution. Methods for defining a suitable credible interval include:

  - Choosing the narrowest interval, which for a unimodal distribution involves choosing the values of highest probability density, including the mode (the maximum a posteriori estimate). This is sometimes called the highest posterior density interval (HPDI).
  - Choosing the interval for which the probability of being below the interval is as likely as being above it; this interval will contain the median. It is sometimes called the equal-tailed interval.
  - Assuming that the mean exists, choosing the interval for which the mean is the central point.

It is possible to frame the choice of a credible interval within decision theory and, in that context, an optimal interval will always be a highest probability density set.[4]
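A highest-density interval can be estimated directly from posterior draws. The sketch below, with illustrative synthetic draws standing in for real posterior samples, uses the sample-based approach of scanning all intervals that span the required fraction of the sorted draws and keeping the narrowest; it assumes a unimodal posterior.

```python
import numpy as np

def hpd_interval(samples, prob=0.95):
    """Estimate the highest posterior density interval from posterior draws.

    Among all intervals covering a fraction `prob` of the sorted draws,
    return the narrowest one. Assumes a unimodal posterior.
    """
    sorted_s = np.sort(samples)
    n = len(sorted_s)
    k = int(np.floor(prob * n))            # draws the interval must cover
    widths = sorted_s[k:] - sorted_s[:n - k]
    i = np.argmin(widths)                  # index of the narrowest interval
    return sorted_s[i], sorted_s[i + k]

# Illustrative skewed "posterior": a Gamma(2, 1) distribution.
rng = np.random.default_rng(0)
draws = rng.gamma(shape=2.0, scale=1.0, size=100_000)
lo, hi = hpd_interval(draws, 0.95)
```

For a skewed posterior such as this one, the HPD interval is shifted toward the mode relative to the equal-tailed interval, which is exactly why the two definitions can disagree.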

Credible intervals can also be estimated through the use of simulation techniques such as Markov chain Monte Carlo.[5]
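For the equal-tailed variant, estimation from sampler output reduces to taking percentiles of the draws. A minimal sketch, using synthetic Beta-distributed draws as a stand-in for actual MCMC output:

```python
import numpy as np

# Equal-tailed 95% credible interval from posterior draws: cut 2.5% of the
# probability mass from each tail. The draws below are synthetic (a Beta(8, 4)
# "posterior"), standing in for real MCMC sampler output.
rng = np.random.default_rng(1)
posterior_draws = rng.beta(a=8, b=4, size=50_000)
lo, hi = np.percentile(posterior_draws, [2.5, 97.5])
```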

Contrasts with confidence interval

See also: Confidence interval § Credible interval

A frequentist 95% confidence interval means that with a large number of repeated samples, 95% of such calculated confidence intervals would include the true value of the parameter. In frequentist terms, the parameter is fixed (cannot be considered to have a distribution of possible values) and the confidence interval is random (as it depends on the random sample).
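The repeated-sampling guarantee can be checked by simulation. A hypothetical sketch, assuming normal data with known standard deviation so that each confidence interval is mean ± 1.96·σ/√n:

```python
import numpy as np

# Simulate the frequentist guarantee: the parameter is fixed, the interval is
# random, and about 95% of intervals computed over repeated samples cover the
# true value. All numbers here are illustrative.
rng = np.random.default_rng(2)
true_mu, sigma, n, reps = 40.0, 5.0, 25, 20_000
half_width = 1.96 * sigma / np.sqrt(n)
samples = rng.normal(true_mu, sigma, size=(reps, n))
means = samples.mean(axis=1)
covered = np.abs(means - true_mu) <= half_width
coverage = covered.mean()   # close to 0.95
```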

Bayesian credible intervals can be quite different from frequentist confidence intervals for two reasons:

  - credible intervals incorporate problem-specific contextual information from the prior distribution, whereas confidence intervals are based only on the data;
  - credible intervals and confidence intervals treat nuisance parameters in radically different ways.

For the case of a single parameter and data that can be summarised in a single sufficient statistic, it can be shown that the credible interval and the confidence interval coincide if the unknown parameter is a location parameter (i.e. the forward probability function has the form f(x | μ) = f(x − μ)), with a prior that is a uniform flat distribution;[6] and also if the unknown parameter is a scale parameter (i.e. the forward probability function has the form f(x | s) = f(x/s)/s), with a Jeffreys' prior p(s) ∝ 1/s,[6] the latter following because taking the logarithm of such a scale parameter turns it into a location parameter with a uniform distribution. But these are distinctly special (albeit important) cases; in general no such equivalence can be made.

References

  1. ^ Edwards, Ward; Lindman, Harold; Savage, Leonard J. (1963). "Bayesian statistical inference in psychological research". Psychological Review, 70, 193–242
  2. ^ Lee, P.M. (1997) Bayesian Statistics: An Introduction, Arnold. ISBN 0-340-67785-6
  3. ^ VanderPlas, Jake. "Frequentism and Bayesianism III: Confidence, Credibility, and why Frequentism and Science do not Mix | Pythonic Perambulations". jakevdp.github.io.
  4. ^ O'Hagan, A. (1994) Kendall's Advanced Theory of Statistics, Vol 2B, Bayesian Inference, Section 2.51. Arnold, ISBN 0-340-52922-9
  5. ^ Chen, Ming-Hui; Shao, Qi-Man (1 March 1999). "Monte Carlo Estimation of Bayesian Credible and HPD Intervals". Journal of Computational and Graphical Statistics. 8 (1): 69–92. doi:10.1080/10618600.1999.10474802.
  6. ^ a b Jaynes, E. T. (1976). "Confidence Intervals vs Bayesian Intervals", in Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science, (W. L. Harper and C. A. Hooker, eds.), Dordrecht: D. Reidel, pp. 175 et seq

Further reading