In computer science, **concurrency** is the ability of different parts or units of a program, algorithm, or problem to be executed out of order or in partial order, without affecting the outcome. This allows for parallel execution of the concurrent units, which can significantly improve the overall speed of execution in multi-processor and multi-core systems. In more technical terms, concurrency refers to the decomposability of a program, algorithm, or problem into order-independent or partially ordered components or units of computation.^{[1]}

According to Rob Pike, concurrency is the composition of independently executing computations,^{[2]} and concurrency is not parallelism: concurrency is about dealing with lots of things at once, whereas parallelism is about doing lots of things at once. Concurrency is about structure; parallelism is about execution. Concurrency provides a way to structure a solution to a problem that may (but need not) be parallelizable.^{[3]}
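
Pike's distinction can be illustrated with a minimal Go sketch (the function and channel names are illustrative): the program is *structured* as two independently executing computations, while whether they actually run in parallel is left to the runtime and hardware.

```go
package main

import "fmt"

// fetch stands in for an independently executing computation that
// reports its result on a channel when done.
func fetch(name string, out chan<- string) {
	out <- fmt.Sprintf("result from %s", name)
}

func main() {
	out := make(chan string)

	// Structure the solution as two concurrent units. Whether they
	// execute in parallel depends on the scheduler and the hardware;
	// the program's structure is the same either way.
	go fetch("service A", out)
	go fetch("service B", out)

	// Collect both results, in whichever order they arrive.
	for i := 0; i < 2; i++ {
		fmt.Println(<-out)
	}
}
```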

A number of mathematical models have been developed for general concurrent computation, including Petri nets, process calculi, the parallel random-access machine model, the actor model, and the Reo Coordination Language.

As Leslie Lamport (2015) notes, "While concurrent program execution had been considered for years, the computer science of concurrency began with Edsger Dijkstra's seminal 1965 paper that introduced the mutual exclusion problem. ... The ensuing decades have seen a huge growth of interest in concurrency—particularly in distributed systems. Looking back at the origins of the field, what stands out is the fundamental role played by Edsger Dijkstra".^{[4]}

Because computations in a concurrent system can interact with each other while being executed, the number of possible execution paths in the system can be extremely large, and the resulting outcome can be indeterminate. Concurrent use of shared resources can be a source of indeterminacy, leading to issues such as deadlock and resource starvation.^{[5]}
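
The classic deadlock pattern can be made concrete with a minimal Go sketch (the lock names are hypothetical): two threads of control acquire the same two locks in opposite orders, so each may end up waiting forever for the lock the other holds. The usual remedy is to acquire locks in one consistent order.

```go
package main

import (
	"sync"
	"time"
)

var muA, muB sync.Mutex // two shared resources, each guarded by a lock

func main() {
	var wg sync.WaitGroup
	wg.Add(2)

	// First goroutine: locks A, then B.
	go func() {
		defer wg.Done()
		muA.Lock()
		time.Sleep(10 * time.Millisecond) // widen the window for the hazard
		muB.Lock()                        // blocks forever if the other goroutine holds B
		muB.Unlock()
		muA.Unlock()
	}()

	// Second goroutine: locks B, then A -- the opposite order.
	go func() {
		defer wg.Done()
		muB.Lock()
		time.Sleep(10 * time.Millisecond)
		muA.Lock() // blocks forever if the other goroutine holds A
		muA.Unlock()
		muB.Unlock()
	}()

	wg.Wait() // deadlocks; Go's runtime reports that all goroutines are asleep
}
```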

The design of concurrent systems often entails finding reliable techniques for coordinating their execution, data exchange, memory allocation, and execution scheduling to minimize response time and maximize throughput.^{[6]}

Concurrency theory has been an active field of research in theoretical computer science. One of the first proposals was Carl Adam Petri's seminal work on Petri nets in the early 1960s. In the years since, a wide variety of formalisms have been developed for modeling and reasoning about concurrency.

A number of formalisms for modeling and understanding concurrent systems have been developed, including:^{[7]}

- The parallel random-access machine
- The actor model^{[8]}
- Computational bridging models such as the bulk synchronous parallel (BSP) model
- Petri nets
- Process calculi
  - Calculus of communicating systems (CCS)
  - Communicating sequential processes (CSP) model
  - π-calculus
- Tuple spaces, e.g., Linda
- Simple Concurrent Object-Oriented Programming (SCOOP)
- Reo Coordination Language
- Trace monoids

Some of these models of concurrency are primarily intended to support reasoning and specification, while others can be used through the entire development cycle, including design, implementation, proof, testing and simulation of concurrent systems. Some of these are based on message passing, while others have different mechanisms for concurrency.
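
For instance, the synchronous channel communication of the CSP model is reflected fairly directly in message-passing languages such as Go; a minimal sketch:

```go
package main

import "fmt"

func main() {
	ch := make(chan int) // unbuffered: send and receive must synchronize

	go func() {
		ch <- 42 // blocks until the receiver is ready (a CSP-style rendezvous)
	}()

	fmt.Println(<-ch) // receives 42; both sides proceed only after meeting
}
```

Because the channel is unbuffered, a send and its matching receive act as a rendezvous: neither side proceeds until both are ready, as in CSP's synchronous communication.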

The proliferation of different models of concurrency has motivated some researchers to develop ways to unify these different theoretical models. For example, Lee and Sangiovanni-Vincentelli have demonstrated that a so-called "tagged-signal" model can be used to provide a common framework for defining the denotational semantics of a variety of different models of concurrency,^{[9]} while Nielsen, Sassone, and Winskel have demonstrated that category theory can be used to provide a similar unified understanding of different models.^{[10]}

The Concurrency Representation Theorem in the actor model provides a fairly general way to represent concurrent systems that are closed in the sense that they do not receive communications from outside. (Other concurrency systems, e.g., process calculi, can be modeled in the actor model using a two-phase commit protocol.^{[11]}) The mathematical denotation of a closed system S is constructed from increasingly better approximations, starting from an initial behavior called ⊥_{S} and applying a behavior-approximating function **progression**_{S}, to construct a denotation (meaning) for S as follows:^{[12]}

**Denote**_{S} ≡ ⊔_{i∈ω} **progression**_{S}^{i}(⊥_{S})
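
Unrolled, this is the least upper bound of an ascending chain of approximations (the unrolling shown here is illustrative of the standard least-fixed-point construction):

⊥_{S} ⊑ **progression**_{S}(⊥_{S}) ⊑ **progression**_{S}^{2}(⊥_{S}) ⊑ ⋯

Each application of **progression**_{S} refines the previous approximation of the system's behavior, and **Denote**_{S} is the limit of this chain.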

In this way, S can be mathematically characterized in terms of all its possible behaviors.

Various types of temporal logic^{[13]} can be used to help reason about concurrent systems. Some of these logics, such as linear temporal logic and computation tree logic, allow assertions to be made about the sequences of states that a concurrent system can pass through. Others, such as action computational tree logic, Hennessy–Milner logic, and Lamport's temporal logic of actions, build their assertions from sequences of *actions* (changes in state). The principal application of these logics is in writing specifications for concurrent systems.^{[5]}
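
For example, the linear temporal logic formula

□(*request* → ◇(*grant*))

read "it is always the case that a request is eventually followed by a grant", is a typical specification of this kind; the proposition names *request* and *grant* are illustrative.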

Concurrent programming encompasses programming languages and algorithms used to implement concurrent systems. Concurrent programming is usually considered to be more general than parallel programming because it can involve arbitrary and dynamic patterns of communication and interaction, whereas parallel systems generally have a predefined and well-structured pattern of communication. The base goals of concurrent programming include *correctness*, *performance*, and *robustness*. Concurrent systems such as operating systems and database management systems are generally designed to operate indefinitely, including automatic recovery from failure, rather than terminating unexpectedly (see Concurrency control). Some concurrent systems implement a form of transparent concurrency, in which concurrent computational entities may compete for and share a single resource, but the complexities of this competition and sharing are shielded from the programmer.

Because they use shared resources, concurrent systems in general require the inclusion of some kind of arbiter somewhere in their implementation (often in the underlying hardware) to control access to those resources. The use of arbiters introduces the possibility of indeterminacy in concurrent computation, which has major implications for practice, including correctness and performance. For example, arbitration introduces unbounded nondeterminism, which raises issues with model checking because it causes a state-space explosion and can even cause models to have an infinite number of states.
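
In software, the arbiter is often a lock. A minimal Go sketch, assuming a shared counter as the contended resource: the mutex serializes access, so the final value is deterministic even though the order in which goroutines win the lock is not.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu      sync.Mutex
		counter int
		wg      sync.WaitGroup
	)

	// 100 goroutines compete for the shared counter; the mutex acts as
	// the arbiter that serializes access. Which goroutine wins the lock
	// at each step is nondeterministic.
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			counter++ // safe: only one goroutine mutates the counter at a time
			mu.Unlock()
		}()
	}

	wg.Wait()
	fmt.Println(counter) // always prints 100, despite the nondeterministic schedule
}
```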

Some concurrent programming models include coprocesses and deterministic concurrency. In these models, threads of control explicitly yield their timeslices, either to the system or to another process.
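
As a rough illustration in Go (whose scheduler is normally preemptive, so this is a sketch of the cooperative style rather than a faithful coprocess implementation), `runtime.Gosched()` makes the yield points explicit:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	done := make(chan struct{})

	go func() {
		for i := 0; i < 3; i++ {
			fmt.Println("worker:", i)
			runtime.Gosched() // explicitly yield the processor to other goroutines
		}
		close(done)
	}()

	for i := 0; i < 3; i++ {
		fmt.Println("main:", i)
		runtime.Gosched() // cooperative hand-off, as in coroutine-style models
	}
	<-done // wait for the worker to finish
}
```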