In the context of hardware and software systems, **formal verification** is the act of proving or disproving the correctness of a system with respect to a certain formal specification or property, using formal methods of mathematics.^{[1]}
Formal verification is a key incentive for formal specification of systems, and is at the core of formal methods.
It represents an important dimension of analysis and verification in electronic design automation and is one approach to software verification. The use of formal verification enables the highest Evaluation Assurance Level (EAL7) in the framework of Common Criteria for computer security certification.

Formal verification can be helpful in proving the correctness of systems such as: cryptographic protocols, combinational circuits, digital circuits with internal memory, and software expressed as source code in a programming language. Prominent examples of verified software systems include the CompCert verified C compiler and the seL4 high-assurance operating system kernel.

The verification of these systems is done by ensuring the existence of a formal proof of a mathematical model of the system.^{[2]} Examples of mathematical objects used to model systems are: finite-state machines, labelled transition systems, Horn clauses, Petri nets, vector addition systems, timed automata, hybrid automata, process algebra, formal semantics of programming languages such as operational semantics, denotational semantics, axiomatic semantics and Hoare logic.^{[3]}
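As a minimal illustration of such modelling (the turnstile example below is a standard textbook system, not drawn from the cited sources), a finite-state machine can be written down as a transition table in a few lines of Python:

```python
# A minimal labelled transition system: the classic turnstile.
# States are "locked"/"unlocked"; transition labels are the events "coin"/"push".
TRANSITIONS = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
    ("unlocked", "push"): "locked",
}

def run(state, events):
    """Replay a sequence of events and return the final state."""
    for e in events:
        state = TRANSITIONS[(state, e)]
    return state

print(run("locked", ["coin", "push", "push"]))  # -> locked
```

A formal proof about the system is then a proof about this mathematical object, for example that a given state is unreachable under all event sequences.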

One approach is model checking, which consists of a systematically exhaustive exploration of the mathematical model (this is possible for finite models, but also for some infinite models where infinite sets of states can be effectively represented finitely by using abstraction or taking advantage of symmetry). Usually, this consists of exploring all states and transitions in the model, using smart and domain-specific abstraction techniques to consider whole groups of states in a single operation and reduce computing time. Implementation techniques include state space enumeration, symbolic state space enumeration, abstract interpretation, symbolic simulation, and abstraction refinement.^{[citation needed]} The properties to be verified are often described in temporal logics, such as linear temporal logic (LTL), Property Specification Language (PSL), SystemVerilog Assertions (SVA),^{[4]} or computation tree logic (CTL). The great advantage of model checking is that it is often fully automatic; its primary disadvantage is that it does not in general scale to large systems: symbolic models are typically limited to a few hundred bits of state, while explicit state enumeration requires the state space being explored to be relatively small.
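The exhaustive-exploration idea can be sketched as a tiny explicit-state checker. The two-process mutual-exclusion model below is a hypothetical illustration; production model checkers handle vastly larger state spaces with the symbolic and abstraction techniques described above:

```python
from collections import deque

def _set(state, i, pc, lock):
    """Return a copy of `state` with process i at program counter `pc`."""
    procs = list(state[0])
    procs[i] = pc
    return (tuple(procs), lock)

def successors(state):
    """Interleaving semantics: exactly one process moves per step."""
    procs, lock = state
    for i, pc in enumerate(procs):
        if pc == "idle":
            yield _set(state, i, "trying", lock)
        elif pc == "trying" and not lock:
            yield _set(state, i, "crit", True)   # acquire the lock
        elif pc == "crit":
            yield _set(state, i, "idle", False)  # release the lock

def check(init, safe):
    """Breadth-first exploration of all reachable states.

    Returns a violating state if the safety property `safe` fails
    somewhere reachable, or None if the property holds everywhere."""
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if not safe(s):
            return s
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return None

init = (("idle", "idle"), False)
mutual_exclusion = lambda s: s[0].count("crit") <= 1
print(check(init, mutual_exclusion))  # None: no reachable violation
```

Because every reachable state is visited, a `None` result is a proof (relative to the model) that the property holds; a returned state is a concrete counterexample.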

Another approach is deductive verification. It consists of generating from the system and its specifications (and possibly other annotations) a collection of mathematical *proof obligations*, the truth of which implies conformance of the system to its specification, and discharging these obligations using either interactive theorem provers (proof assistants) such as HOL, ACL2, Isabelle, Coq or PVS, or automatic theorem provers, including in particular satisfiability modulo theories (SMT) solvers. This approach has the disadvantage that it may require the user to understand in detail why the system works correctly, and to convey this information to the verification system, either in the form of a sequence of theorems to be proved or in the form of specifications (invariants, preconditions, postconditions) of system components (e.g. functions or procedures) and perhaps subcomponents (such as loops or data structures).
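A proof obligation for the Hoare triple {x ≥ 0} x := x + 1 {x ≥ 1} can be written out concretely. The sketch below forms the obligation by substituting the assigned expression into the postcondition (the weakest-precondition rule for assignment); a real deductive tool would discharge it with an SMT solver over all integers, whereas here we merely sample a bounded range as an illustration:

```python
# Hoare triple {P} x := e {Q}: the obligation is P(x) implies Q(e(x)).
pre    = lambda x: x >= 0      # precondition P
post   = lambda x: x >= 1      # postcondition Q
assign = lambda x: x + 1       # the statement x := x + 1

# wp(x := e, Q) is Q with e substituted for x; obligation: P => wp.
obligation = lambda x: (not pre(x)) or post(assign(x))

# Illustration only: sampling a finite range is not a proof over all
# integers; an SMT solver decides the implication for the whole domain.
print(all(obligation(x) for x in range(-1000, 1000)))  # True
```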

Formal verification of software programs involves proving that a program satisfies a formal specification of its behavior. Subareas of formal verification include deductive verification (see above), abstract interpretation, automated theorem proving, type systems, and lightweight formal methods. A promising type-based verification approach is dependently typed programming, in which the types of functions include (at least part of) those functions' specifications, and type-checking the code establishes its correctness against those specifications. Fully featured dependently typed languages support deductive verification as a special case.
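The idea that a type can carry part of a specification can be seen in a small Lean sketch (the names below are illustrative, not taken from any particular verified system): a length-indexed vector makes "head of an empty list" a type error rather than a runtime failure.

```lean
-- A length-indexed list: the type records the length, so it carries
-- part of the specification of every function that uses it.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- `head` accepts only vectors whose type proves they are non-empty;
-- the `nil` case is impossible, so no runtime check is needed.
def Vec.head : Vec α (n + 1) → α
  | .cons x _ => x

-- `Vec.head Vec.nil` would be rejected by the type checker:
-- `Vec α 0` does not match `Vec α (n + 1)`.
```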

Another complementary approach is program derivation, in which efficient code is produced from functional specifications by a series of correctness-preserving steps. An example of this approach is the Bird–Meertens formalism, and this approach can be seen as another form of correctness by construction.
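The classic worked example in the Bird–Meertens tradition is the maximum segment sum problem, in which a quadratic specification is transformed by fold fusion into a linear-time program. The sketch below shows only the two endpoints of such a derivation, checked against each other (the derivation steps themselves are omitted):

```python
def mss_spec(xs):
    """Specification: maximum sum over all contiguous (possibly empty) segments."""
    n = len(xs)
    return max(sum(xs[i:j]) for i in range(n + 1) for j in range(i, n + 1))

def mss_derived(xs):
    """Linear-time program obtained by the derivation (Kadane's scheme)."""
    best = ending_here = 0
    for x in xs:
        ending_here = max(0, ending_here + x)  # best segment ending at x
        best = max(best, ending_here)
    return best

xs = [3, -4, 2, 2, -1, 6, -9]
print(mss_spec(xs), mss_derived(xs))  # 9 9
```

Each derivation step is correctness-preserving, so the efficient program is correct by construction with respect to the specification.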

These techniques can be *sound*, meaning that the verified properties can be logically deduced from the semantics, or *unsound*, meaning that there is no such guarantee. A sound technique yields a result only once it has covered the entire space of possibilities. An example of an unsound technique is one that covers only a subset of the possibilities, for instance only integers up to a certain number, and gives a "good-enough" result. Techniques can also be *decidable*, meaning that their algorithmic implementations are guaranteed to terminate with an answer, or undecidable, meaning that they may never terminate. By bounding the scope of possibilities, unsound but decidable techniques can sometimes be constructed when no sound and decidable technique is available.
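A concrete illustration of an unsound but decidable check is bounded verification, which always terminates but can "verify" a false claim. The example below uses Euler's polynomial n² + n + 41, which yields primes for every n below 40 but fails at n = 40:

```python
def is_prime(n):
    """Trial-division primality test (adequate for small n)."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

claim = lambda n: is_prime(n * n + n + 41)

# Decidable but unsound: checking a bounded subset always terminates,
# yet a "verified" verdict says nothing outside the bound.
print(all(claim(n) for n in range(40)))  # True  -> claim "verified" up to 40
print(claim(40))                         # False -> 40*40 + 40 + 41 = 41*41
```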

Main article: Verification and validation

Verification is one aspect of testing a product's fitness for purpose. Validation is the complementary aspect. Often one refers to the overall checking process as V & V.

- **Validation**: "Are we trying to make the right thing?", i.e., is the product specified to the user's actual needs?
- **Verification**: "Have we made what we were trying to make?", i.e., does the product conform to the specifications?

The verification process consists of static/structural and dynamic/behavioral aspects. E.g., for a software product one can inspect the source code (static) and run against specific test cases (dynamic). Validation usually can be done only dynamically, i.e., the product is tested by putting it through typical and atypical usages ("Does it satisfactorily meet all use cases?").

Main article: Automatic bug fixing

Program repair is performed with respect to an oracle, encompassing the desired functionality of the program, which is used for validation of the generated fix. A simple example is a test suite: the input/output pairs specify the functionality of the program. A variety of techniques are employed, most notably using satisfiability modulo theories (SMT) solvers, and genetic programming,^{[5]} using evolutionary computing to generate and evaluate possible candidates for fixes. The former method is deterministic, while the latter is randomized.
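The test-suite-as-oracle idea can be sketched as a toy enumerative repair loop. The example below (entirely hypothetical; real repair systems search far larger mutation spaces) tries replacing the comparison operator in a candidate program until the test suite passes:

```python
import operator

# Oracle: input/output pairs specifying "max of two numbers".
TESTS = [((3, 5), 5), ((7, 2), 7), ((4, 4), 4)]

def make_candidate(cmp):
    """Candidate program: 'a if a <cmp> b else b', with cmp to be synthesized."""
    return lambda a, b: a if cmp(a, b) else b

def repair():
    """Enumerate operator mutations; return the first one passing every test."""
    for cmp in (operator.lt, operator.le, operator.gt, operator.ge, operator.eq):
        candidate = make_candidate(cmp)
        if all(candidate(*args) == expected for args, expected in TESTS):
            return cmp.__name__
    return None

print(repair())  # 'gt': the fix for a buggy 'a if a < b else b'
```

Because the oracle is only a finite test suite, a "repaired" program is validated, not proved correct, against the desired behaviour.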

Program repair combines techniques from formal verification and program synthesis. Fault-localization techniques in formal verification are used to compute program points which might be possible bug-locations, which can be targeted by the synthesis modules. Repair systems often focus on a small pre-defined class of bugs in order to reduce the search space. Industrial use is limited owing to the computational cost of existing techniques.


The growth in complexity of designs increases the importance of formal verification techniques in the hardware industry.^{[6]}^{[7]} At present, formal verification is used by most or all leading hardware companies,^{[8]} but its use in the software industry is still languishing.^{[citation needed]} This could be attributed to the greater need in the hardware industry, where errors have greater commercial significance.^{[citation needed]} Because of the potential subtle interactions between components, it is increasingly difficult to exercise a realistic set of possibilities by simulation. Important aspects of hardware design are amenable to automated proof methods, making formal verification easier to introduce and more productive.^{[9]}

As of 2011, several operating systems have been formally verified:
NICTA's Secure Embedded L4 microkernel, sold commercially as seL4 by OK Labs;^{[10]} OSEK/VDX based real-time operating system ORIENTAIS by East China Normal University;^{[citation needed]} Green Hills Software's Integrity operating system;^{[citation needed]} and SYSGO's PikeOS.^{[11]}^{[12]}
In 2016, a team led by Zhong Shao at Yale developed a formally verified operating system kernel called CertiKOS.^{[13]}^{[14]}

As of 2017, formal verification has been applied to the design of large computer networks through a mathematical model of the network,^{[15]} and as part of a new network technology category, intent-based networking.^{[16]} Network software vendors that offer formal verification solutions include Cisco,^{[17]} Forward Networks^{[18]}^{[19]} and Veriflow Systems.^{[20]}

The SPARK programming language provides a toolset which enables software development with formal verification and is used in several high-integrity systems.^{[citation needed]}

The CompCert C compiler is a formally verified C compiler implementing the majority of ISO C.^{[21]}^{[22]}