Probabilistic programming (PP) is a programming paradigm in which probabilistic models are specified and inference for these models is performed automatically.^{[1]} It represents an attempt to unify probabilistic modeling and traditional general-purpose programming in order to make the former easier and more widely applicable.^{[2]}^{[3]} It can be used to create systems that help make decisions in the face of uncertainty.
Programming languages used for probabilistic programming are referred to as "probabilistic programming languages" (PPLs).
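The division of labour that a PPL provides can be illustrated with a minimal sketch in plain Python (a hypothetical example, not tied to any particular PPL): the user writes only the generative model, and a generic inference routine — here naive rejection sampling — approximates the posterior automatically.

```python
import random

random.seed(0)

def model():
    # Generative model: a coin's unknown bias, then ten flips.
    bias = random.random()                      # prior: Uniform(0, 1)
    flips = [random.random() < bias for _ in range(10)]
    return bias, flips

def infer(observed_heads, samples=100_000):
    # Generic rejection sampling: keep the prior draws whose simulated
    # data match the observation; the kept biases approximate the
    # posterior, with no model-specific inference code.
    kept = []
    for _ in range(samples):
        bias, flips = model()
        if sum(flips) == observed_heads:
            kept.append(bias)
    return sum(kept) / len(kept)

# Posterior mean of the bias after observing 8 heads in 10 flips;
# under a uniform prior the exact value is (8 + 1) / (10 + 2) = 0.75.
estimate = infer(8)
```

A real PPL replaces the brute-force `infer` with far more efficient algorithms (MCMC, variational inference, and others), but the separation between model specification and automatic inference is the same.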
Probabilistic reasoning has been used for a wide variety of tasks, such as predicting stock prices, recommending movies, diagnosing computers, detecting cyber intrusions, and detecting objects in images.^{[4]} However, until recently (partially due to limited computing power), probabilistic programming was limited in scope, and most inference algorithms had to be written manually for each task.
Nevertheless, in 2015, a 50-line probabilistic computer vision program was used to generate 3D models of human faces based on 2D images of those faces. The program used inverse graphics as the basis of its inference method, and was built using the Picture package in Julia.^{[4]} This made possible "in 50 lines of code what used to take thousands".^{[5]}^{[6]}
The Gen probabilistic programming library (also written in Julia) has been applied to vision and robotics tasks.^{[7]}
More recently, the probabilistic programming system Turing.jl has been applied in various pharmaceutical^{[8]} and economics applications.^{[9]}
Probabilistic programming in Julia has also been combined with differentiable programming by using the Julia package Zygote.jl together with Turing.jl.^{[10]}
Probabilistic programming languages are also commonly used in Bayesian cognitive science to develop and evaluate models of cognition.^{[11]}
PPLs are often built by extending an existing host language. For instance, Turing.jl^{[12]} is based on Julia, Infer.NET is based on .NET Framework,^{[13]} while PRISM extends from Prolog.^{[14]} However, some PPLs, such as WinBUGS, offer a self-contained language that maps closely to the mathematical representation of the statistical models, with no obvious origin in another programming language.^{[15]}^{[16]}
The language for WinBUGS was implemented to perform Bayesian computation using Gibbs sampling and related algorithms. Although implemented in a relatively little-known programming language (Component Pascal), this language permits Bayesian inference for a wide variety of statistical models using a flexible computational approach. The same BUGS language may be used to specify Bayesian models for inference via different computational choices ("samplers") and conventions or defaults, using the standalone program WinBUGS (or related R packages, rbugs and R2WinBUGS) or JAGS (Just Another Gibbs Sampler, another standalone program with related R packages including rjags, R2jags, and runjags). More recently, other languages supporting Bayesian model specification and inference allow different or more efficient choices for the underlying Bayesian computation, and are accessible from the R data analysis and programming environment, e.g. Stan (with its No-U-Turn Sampler, NUTS) and NIMBLE. The influence of the BUGS language is evident in these later languages, which even use the same syntax for some aspects of model specification.
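The kind of computation that BUGS and JAGS automate can be sketched by hand for a model in which every full conditional distribution is available in closed form. The following plain-Python example (an illustrative sketch, not BUGS code) Gibbs-samples a bivariate normal with correlation 0.8 by alternating exact draws from the two conditional distributions.

```python
import math
import random

random.seed(1)

# Gibbs sampling for a standard bivariate normal with correlation rho:
# the full conditionals x | y ~ N(rho*y, 1 - rho^2) and
# y | x ~ N(rho*x, 1 - rho^2) are themselves normal, so the sampler
# simply alternates exact draws from the two conditionals.
rho = 0.8
cond_sd = math.sqrt(1 - rho ** 2)
x, y = 0.0, 0.0
xs = []
for _ in range(50_000):
    x = random.gauss(rho * y, cond_sd)   # draw x given current y
    y = random.gauss(rho * x, cond_sd)   # draw y given new x
    xs.append(x)

# The chain's draws of x should approach the marginal: mean 0, variance 1.
mean_x = sum(xs) / len(xs)
var_x = sum((v - mean_x) ** 2 for v in xs) / len(xs)
```

A BUGS-family system derives such conditionals (or falls back on generic samplers) from the model specification itself, rather than requiring the user to write the update loop.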
Several PPLs are in active development, including some in beta test. Two popular tools are Stan and PyMC.^{[17]}
A probabilistic relational programming language (PRPL) is a PPL specially designed to describe and infer with probabilistic relational models (PRMs).
A PRM is usually developed together with a set of algorithms for reduction of, inference about, and discovery of the distributions of concern, which are embedded into the corresponding PRPL.
Most approaches to probabilistic logic programming are based on the distribution semantics, which splits a program into a set of probabilistic facts and a logic program. The distribution semantics then defines a probability distribution on interpretations of the Herbrand universe of the program.^{[18]}
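A minimal illustration of the distribution semantics (with hypothetical facts and rules, written as plain Python rather than an actual probabilistic logic language): each subset of the probabilistic facts is a possible world, each world determines what the logic program can derive, and the probability of a query is the total weight of the worlds in which it holds.

```python
from itertools import product

# Probabilistic facts, each true independently with the given probability,
# as in a ProbLog-style program:
#   0.6::edge(a,b).  0.5::edge(b,c).  0.4::edge(a,c).
facts = {"edge(a,b)": 0.6, "edge(b,c)": 0.5, "edge(a,c)": 0.4}

def path_a_c(world):
    # Logic program:
    #   path(X,Y) :- edge(X,Y).
    #   path(X,Z) :- edge(X,Y), path(Y,Z).
    # Specialized to the query path(a,c) over these three facts.
    return world["edge(a,c)"] or (world["edge(a,b)"] and world["edge(b,c)"])

names = list(facts)
total = 0.0
for bits in product([True, False], repeat=len(names)):
    world = dict(zip(names, bits))
    # Weight of this possible world: product over facts of p or (1 - p).
    weight = 1.0
    for name, present in world.items():
        weight *= facts[name] if present else 1 - facts[name]
    if path_a_c(world):
        total += weight

print(round(total, 2))  # exact: 0.4 + 0.6 * (0.6 * 0.5) = 0.58
```

Real systems avoid this exponential enumeration with knowledge compilation and other techniques, but the quantity being computed is the same.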
Reasoning about variables as probability distributions causes difficulties for novice programmers, but these difficulties can be addressed through use of Bayesian network visualisations and graphs of variable distributions embedded within the source code editor.^{[51]}
^Innes, Mike; Edelman, Alan; Fischer, Keno; Rackauckas, Chris; Saba, Elliot; Shah, Viral B.; Tebbutt, Will (2019). "∂P: A Differentiable Programming System to Bridge Machine Learning and Scientific Computing". arXiv:1907.07587 [cs.PL].
^Goodman, Noah D; Tenenbaum, Joshua B; Buchsbaum, Daphna; Hartshorne, Joshua; Hawkins, Robert; O'Donnell, Timothy J; Tessler, Michael Henry. "Probabilistic Models of Cognition". Probabilistic Models of Cognition - 2nd Edition. Retrieved May 27, 2023.
^Dey, Debabrata; Sarkar, Sumit (1998). "PSQL: A query language for probabilistic relational data". Data & Knowledge Engineering. 28: 107–120. doi:10.1016/S0169-023X(98)00015-9.
^"Dyna". www.dyna.org. Archived from the original on January 17, 2016. Retrieved January 12, 2011.
^Perov, Yura; Graham, Logan; Gourgoulias, Kostis; Richens, Jonathan G.; Lee, Ciarán M.; Baker, Adam; Johri, Saurabh (January 28, 2020), MultiVerse: Causal Reasoning using Importance Sampling in Probabilistic Programming, arXiv:1910.08091
^Gorinova, Maria I.; Sarkar, Advait; Blackwell, Alan F.; Syme, Don (January 1, 2016). "A Live, Multiple-Representation Probabilistic Programming Environment for Novices". Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. CHI '16. New York, NY, USA: ACM. pp. 2533–2537. doi:10.1145/2858036.2858221. ISBN 9781450333627. S2CID 3201542.