**Probabilistic logic** (also **probability logic** and **probabilistic reasoning**) involves the use of probability and logic to deal with uncertain situations. Probabilistic logics attempt to find a natural extension of traditional logic truth tables: the results they define are derived through probabilistic expressions instead, yielding formalisms that are more expressive than classical logic but also harder to compute with. Indeed, a central difficulty of probabilistic logics is their tendency to multiply the computational complexities of their probabilistic and logical components. Other difficulties include the possibility of counter-intuitive results, such as in the case of belief fusion in Dempster–Shafer theory. Source trust and epistemic uncertainty about the probabilities sources provide, as modelled for example in subjective logic, are additional elements to consider. The need to deal with a broad variety of contexts and issues has led to many different proposals.


There are numerous proposals for probabilistic logics. Very roughly, they can be categorized into two different classes: those logics that attempt to make a probabilistic extension to logical entailment, such as Markov logic networks, and those that attempt to address the problems of uncertainty and lack of evidence (evidentiary logics).

That the concept of probability can have different meanings may be understood by noting that, despite the mathematization of probability in the Enlightenment, mathematical probability theory remains, to this very day, entirely unused in criminal courtrooms, when evaluating the "probability" of the guilt of a suspected criminal.^{[1]}

More precisely, evidentiary logic needs to distinguish the objective truth of a statement from our decision about the truth of that statement, which in turn must be distinguished from our confidence in its truth: thus, a suspect's real guilt is not necessarily the same as the judge's decision on guilt, which in turn is not the same as assigning a numerical probability to the commission of the crime and deciding whether it lies above a numerical threshold of guilt.

The verdict on a single suspect may be guilty or not guilty with some uncertainty, just as the flip of a coin may be predicted as heads or tails with some uncertainty. Given a large collection of suspects, a certain percentage may be guilty, just as the probability of flipping heads is one-half. However, it is incorrect to apply this law of averages to a single criminal (or a single coin flip): the criminal is no more "a little bit guilty" than a single coin flip is "a little bit heads and a little bit tails"; we are merely uncertain which it is. Expressing uncertainty as a numerical probability may be acceptable when making scientific measurements of physical quantities, but it is merely a mathematical model of the uncertainty we perceive in the context of "common sense" reasoning and logic. Just as in courtroom reasoning, the goal of employing uncertain inference is to gather evidence to strengthen the confidence in a proposition, as opposed to performing some sort of probabilistic entailment.

Historically, attempts to quantify probabilistic reasoning date back to antiquity. There was a particularly strong interest starting in the 12th century, with the work of the Scholastics: the invention of the half-proof (so that two half-proofs are sufficient to prove guilt), the elucidation of moral certainty (sufficient certainty to act upon, but short of absolute certainty), the development of Catholic probabilism (the idea that it is always safe to follow the established rules of doctrine or the opinion of experts, even when they are less probable), the case-based reasoning of casuistry, and the scandal of Laxism (whereby probabilism was used to support almost any statement at all, it being possible to find an expert opinion in favour of almost any proposition).^{[1]}

Below is a list of proposals for probabilistic and evidentiary extensions to classical and predicate logic.

- The term *probabilistic logic* was first used in a paper by Nils Nilsson published in 1986, in which the truth values of sentences are probabilities.^{[2]} The proposed semantical generalization induces a probabilistic logical entailment, which reduces to ordinary logical entailment when the probabilities of all sentences are either 0 or 1. This generalization applies to any logical system for which the consistency of a finite set of sentences can be established.
- The central concept in the theory of subjective logic^{[3]} is *opinions* about some of the propositional variables involved in the given logical sentences. A binomial opinion applies to a single proposition and is represented as a three-dimensional extension of a single probability value, expressing both probabilistic and epistemic uncertainty about the truth of the proposition. For the computation of derived opinions based on a structure of argument opinions, the theory proposes operators for the various logical connectives, such as multiplication (AND), comultiplication (OR), division (UN-AND) and co-division (UN-OR) of opinions,^{[4]} conditional deduction (MP) and abduction (MT),^{[5]} as well as Bayes' theorem.^{[6]}
- The approximate reasoning formalism proposed by fuzzy logic can be used to obtain a logic in which the models are the probability distributions and the theories are the lower envelopes.^{[7]} In such a logic the question of the consistency of the available information is strictly related to that of the coherence of a partial probabilistic assignment, and therefore to Dutch book phenomena.
- Markov logic networks implement a form of uncertain inference based on the maximum-entropy principle: the idea that probabilities should be assigned in such a way as to maximize entropy, in analogy with the way that Markov chains assign probabilities to finite-state-machine transitions.
- Systems such as Pei Wang's Non-Axiomatic Reasoning System (NARS) and Ben Goertzel's Probabilistic Logic Networks (PLN) attach an explicit confidence ranking, as well as a probability, to atoms and sentences. The rules of deduction and induction incorporate this uncertainty, thus side-stepping difficulties in purely Bayesian approaches to logic (including Markov logic), while also avoiding the paradoxes of Dempster–Shafer theory. The implementation of PLN attempts to use and generalize algorithms from logic programming, subject to these extensions.
- In the field of probabilistic argumentation, various formal frameworks have been put forward. The framework of "probabilistic labellings",^{[8]} for example, refers to probability spaces where the sample space is a set of labellings of argumentation graphs. In the framework of "probabilistic argumentation systems",^{[9]}^{[10]} probabilities are not directly attached to arguments or logical sentences. Instead, it is assumed that a particular subset of the variables involved in the sentences defines a probability space over the corresponding sub-σ-algebra. This induces two distinct probability measures, called the *degree of support* and the *degree of possibility*, respectively. Degrees of support can be regarded as non-additive *probabilities of provability*, which generalize the concepts of ordinary logical entailment and classical posterior probabilities. Mathematically, this view is compatible with Dempster–Shafer theory.
- The theory of evidential reasoning^{[11]} also defines non-additive *probabilities of provability* (or *epistemic probabilities*) as a general notion covering both logical entailment (provability) and probability. The idea is to augment standard propositional logic by considering an epistemic operator **K** that represents the state of knowledge that a rational agent has about the world. Probabilities are then defined over the resulting *epistemic universe* **K***p* of all propositional sentences *p*, and it is argued that this is the best information available to an analyst. On this view, Dempster–Shafer theory appears to be a generalized form of probabilistic reasoning.
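Nilsson's probabilistic entailment, mentioned in the first item above, can be computed as a linear program over possible worlds: premise probabilities constrain the distribution over truth assignments, and the entailed sentence receives an interval rather than a point value. A minimal sketch (the premise probabilities 0.7 and 0.9 are illustrative, and SciPy is assumed as the LP solver):

```python
# Nilsson-style probabilistic entailment for two premises,
# P(A) = 0.7 and P(A -> B) = 0.9, as a linear program over the
# four possible worlds of propositions A and B.
from scipy.optimize import linprog

# Worlds ordered as (A, B): (T,T), (T,F), (F,T), (F,F)
worlds = [(1, 1), (1, 0), (0, 1), (0, 0)]

def indicator(sentence):
    """Row of 0/1 flags marking the worlds in which a sentence holds."""
    return [1.0 if sentence(a, b) else 0.0 for a, b in worlds]

A_eq = [
    [1.0, 1.0, 1.0, 1.0],                  # world probabilities sum to 1
    indicator(lambda a, b: a),             # P(A) = 0.7
    indicator(lambda a, b: (not a) or b),  # P(A -> B) = 0.9
]
b_eq = [1.0, 0.7, 0.9]

target = indicator(lambda a, b: b)  # we want bounds on P(B)

# Minimize and maximize P(B) subject to the premise constraints.
lo = linprog(c=target, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
hi = linprog(c=[-t for t in target], A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print(f"P(B) lies in [{lo.fun:.2f}, {-hi.fun:.2f}]")  # [0.60, 0.90]
```

When every premise probability is 0 or 1 the interval collapses to a point, recovering ordinary logical entailment, as the text notes.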
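The binomial opinion of subjective logic described above can be sketched as a small data structure. The class name and validation below are our own illustrative choices, not a standard API; the invariant b + d + u = 1 and the projected probability P = b + a·u follow the theory:

```python
# A binomial opinion in subjective logic: belief b, disbelief d,
# uncertainty u (with b + d + u = 1) and base rate a.  The projected
# probability P = b + a*u collapses the opinion to a single probability,
# discarding the epistemic-uncertainty dimension.
from dataclasses import dataclass

@dataclass(frozen=True)
class BinomialOpinion:
    belief: float       # evidence for the proposition
    disbelief: float    # evidence against the proposition
    uncertainty: float  # lack of evidence either way
    base_rate: float    # prior probability in the absence of evidence

    def __post_init__(self):
        if abs(self.belief + self.disbelief + self.uncertainty - 1.0) > 1e-9:
            raise ValueError("belief + disbelief + uncertainty must equal 1")

    def projected_probability(self) -> float:
        return self.belief + self.base_rate * self.uncertainty

# A dogmatic opinion (u = 0) is an ordinary probability; a vacuous
# opinion (u = 1) falls back entirely on the base rate.
weak = BinomialOpinion(belief=0.2, disbelief=0.1, uncertainty=0.7, base_rate=0.5)
print(weak.projected_probability())  # 0.55
```

The operators listed in the text (multiplication, comultiplication, deduction, and so on) combine such opinions into derived opinions; their formulas are given in the cited references.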
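The Markov logic networks mentioned above realize the maximum-entropy principle through a log-linear distribution: each possible world is weighted by the exponentiated sum of the weights of the ground formulas it satisfies. A tiny sketch over two ground atoms, with hypothetical formulas and weights (real MLN engines ground first-order formulas over a whole domain rather than enumerating worlds):

```python
# A Markov logic network assigns each possible world a probability
# proportional to exp(sum of weights of satisfied ground formulas).
# Toy example: atoms Smokes(anna) and Cancer(anna), one weighted rule
# Smokes(anna) -> Cancer(anna), and one weighted unit clause.
from itertools import product
from math import exp

W_RULE = 1.5    # weight of Smokes(anna) -> Cancer(anna) (hypothetical)
W_SMOKES = 0.8  # weight of the unit clause Smokes(anna) (hypothetical)

def world_score(smokes: bool, cancer: bool) -> float:
    total = 0.0
    if (not smokes) or cancer:  # the implication is satisfied
        total += W_RULE
    if smokes:                  # the unit clause is satisfied
        total += W_SMOKES
    return exp(total)

worlds = list(product([False, True], repeat=2))
z = sum(world_score(s, c) for s, c in worlds)  # partition function
p_cancer = sum(world_score(s, c) for s, c in worlds if c) / z
print(f"P(Cancer(anna)) = {p_cancer:.3f}")
```

Among all distributions matching the expected formula counts implied by the weights, this log-linear form is the one of maximum entropy, which is the sense in which MLNs embody the principle.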