Numerical methods for ordinary differential equations are methods used to find numerical approximations to the solutions of ordinary differential equations (ODEs). Their use is also known as "numerical integration", although this term can also refer to the computation of integrals.
Many differential equations cannot be solved exactly. For practical purposes, however – such as in engineering – a numerical approximation to the solution is often sufficient. The algorithms studied here can be used to compute such an approximation. An alternative method is to use techniques from calculus to obtain a series expansion of the solution.
Ordinary differential equations occur in many scientific disciplines, including physics, chemistry, biology, and economics.^{[1]} In addition, some methods in numerical partial differential equations convert the partial differential equation into an ordinary differential equation, which must then be solved.
A first-order differential equation is an initial value problem (IVP) of the form^{[2]}

y′(t) = f(t, y(t)),  y(t_0) = y_0,  (1)

where f is a function f : [t_0, ∞) × R^d → R^d, and the initial condition y_0 ∈ R^d is a given vector. First-order means that only the first derivative of y appears in the equation, and higher derivatives are absent.
Without loss of generality to higher-order systems, we restrict ourselves to first-order differential equations, because a higher-order ODE can be converted into a larger system of first-order equations by introducing extra variables. For example, the second-order equation y′′ = −y can be rewritten as two first-order equations: y′ = z and z′ = −y.
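As a minimal Python sketch of this reduction (the initial condition below is an illustrative assumption), one packs y and the auxiliary variable z = y′ into a single state vector and writes one vector-valued right-hand side:

```python
# State vector u = (y, z) with z = y'; then y'' = -y becomes u' = f(t, u).
def f(t, u):
    y, z = u
    return (z, -y)   # (y', z') = (z, -y)

u0 = (1.0, 0.0)      # e.g. y(0) = 1, y'(0) = 0, so the exact solution is cos(t)
```

Any solver for first-order systems can now advance the state vector u directly.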
In this section, we describe numerical methods for IVPs, and remark that boundary value problems (BVPs) require a different set of tools. In a BVP, one prescribes values or components of the solution y at more than one point. Because of this, different methods need to be used to solve BVPs. For example, the shooting method (and its variants) or global methods like finite differences,^{[3]} Galerkin methods,^{[4]} or collocation methods are appropriate for that class of problems.
The Picard–Lindelöf theorem states that there is a unique solution, provided f is Lipschitz continuous.
Numerical methods for solving first-order IVPs often fall into one of two large categories:^{[5]} linear multistep methods, or Runge–Kutta methods. A further division can be realized by dividing methods into those that are explicit and those that are implicit. For example, implicit linear multistep methods include Adams–Moulton methods and backward differentiation formulas (BDF), whereas implicit Runge–Kutta methods^{[6]} include diagonally implicit Runge–Kutta (DIRK),^{[7]}^{[8]} singly diagonally implicit Runge–Kutta (SDIRK),^{[9]} and Gauss–Radau^{[10]} (based on Gaussian quadrature^{[11]}) numerical methods. Explicit examples from the linear multistep family include the Adams–Bashforth methods, and any Runge–Kutta method with a strictly lower triangular Butcher tableau is explicit. A loose rule of thumb dictates that stiff differential equations require the use of implicit schemes, whereas non-stiff problems can be solved more efficiently with explicit schemes.
The socalled general linear methods (GLMs) are a generalization of the above two large classes of methods.^{[12]}
Further information: Euler method 
From any point on a curve, you can find an approximation of a nearby point on the curve by moving a short distance along a line tangent to the curve.
Starting with the differential equation (1), we replace the derivative y′ by the finite difference approximation

y′(t) ≈ (y(t + h) − y(t)) / h,  (2)

which when rearranged yields the following formula

y(t + h) ≈ y(t) + h y′(t),

and using (1) gives:

y(t + h) ≈ y(t) + h f(t, y(t)).  (3)

This formula is usually applied in the following way. We choose a step size h, and we construct the sequence t_0, t_1 = t_0 + h, t_2 = t_0 + 2h, … We denote by y_n a numerical estimate of the exact solution y(t_n). Motivated by (3), we compute these estimates by the following recursive scheme

y_{n+1} = y_n + h f(t_n, y_n).  (4)
This is the Euler method (or forward Euler method, in contrast with the backward Euler method, to be described below). The method is named after Leonhard Euler who described it in 1768.
The Euler method is an example of an explicit method. This means that the new value y_{n+1} is defined in terms of things that are already known, like y_{n}.
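A minimal Python sketch of scheme (4); the test problem y′ = y, y(0) = 1 (exact solution e^t) is an assumed example, not from the text:

```python
def euler(f, t0, y0, h, n):
    """Advance n forward-Euler steps: y_{k+1} = y_k + h*f(t_k, y_k)."""
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)   # new value depends only on known quantities
        t = t + h
    return y

# Assumed test problem: y' = y, y(0) = 1; 100 steps of size 0.01 estimate y(1) = e.
approx = euler(lambda t, y: y, 0.0, 1.0, 0.01, 100)
```

Each update uses only already-computed values, which is exactly what makes the method explicit.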
Further information: Backward Euler method 
If, instead of (2), we use the approximation

y′(t + h) ≈ (y(t + h) − y(t)) / h,  (5)

we get the backward Euler method:

y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}).  (6)
The backward Euler method is an implicit method, meaning that we have to solve an equation to find y_{n+1}. One often uses fixed-point iteration or (some modification of) the Newton–Raphson method to achieve this.

Solving this equation costs more time than an explicit update, and this cost must be taken into consideration when one selects the method to use. The advantage of implicit methods such as (6) is that they are usually more stable for solving a stiff equation, meaning that a larger step size h can be used.
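A minimal sketch of one backward Euler step, using fixed-point iteration (one of the options mentioned above) to solve equation (6); the iteration count and the test problem in the assertion are illustrative assumptions:

```python
def backward_euler_step(f, t, y, h, iterations=50):
    """One backward-Euler step: solve y_new = y + h*f(t+h, y_new)
    by fixed-point iteration, starting from a forward-Euler guess.
    Converges when h times the Lipschitz constant of f is below 1."""
    y_new = y + h * f(t, y)              # predictor (forward Euler)
    for _ in range(iterations):
        y_new = y + h * f(t + h, y_new)  # corrector iterations
    return y_new
```

For the linear test problem y′ = −y this converges to the exact implicit update y/(1 + h).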
Further information: Exponential integrator 
Exponential integrators describe a large class of integrators that have recently seen a lot of development.^{[13]} They date back to at least the 1960s.
In place of (1), we assume the differential equation is either of the form

y′(t) = A y(t) + N(y(t)),  (7)

where A is a linear operator, or it has been locally linearized about a background state to produce a linear term A y and a nonlinear term N(y).

Exponential integrators are constructed by multiplying (7) by e^{−At} and exactly integrating the result over a time interval [t_n, t_{n+1} = t_n + h]:

y(t_{n+1}) = e^{hA} y(t_n) + ∫_0^h e^{(h−τ)A} N(y(t_n + τ)) dτ.

This integral equation is exact, but in general the integral cannot be evaluated in closed form.

The first-order exponential integrator can be realized by holding N(y(t_n + τ)) constant over the full interval:

y_{n+1} = e^{hA} y_n + A^{−1} (e^{hA} − I) N(y_n).  (8)
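For a scalar equation y′ = a·y + N(y), formula (8) reduces to ordinary exponentials. A minimal Python sketch (the function name and the test problem in the assertions are illustrative assumptions):

```python
import math

def exponential_euler_step(a, N, t, y, h):
    """One step of the first-order exponential integrator (8) for the
    scalar equation y' = a*y + N(y): the linear part is integrated
    exactly, while the nonlinearity is frozen at its value N(y_n)."""
    e = math.exp(a * h)
    return e * y + (e - 1.0) / a * N(y)
```

For a constant nonlinearity (e.g. y′ = −y + 1) the method reproduces the exact solution, since freezing N introduces no error in that case.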
The Euler method is often not accurate enough. In more precise terms, it only has order one (the concept of order is explained below). This led mathematicians to look for higher-order methods.

One possibility is to use not only the previously computed value y_n to determine y_{n+1}, but to make the solution depend on more past values. This yields a so-called multistep method. Perhaps the simplest is the leapfrog method, which is second order and (roughly speaking) relies on two time values.

Almost all practical multistep methods fall within the family of linear multistep methods, which have the form

a_s y_{n+s} + a_{s−1} y_{n+s−1} + ⋯ + a_0 y_n = h [ b_s f(t_{n+s}, y_{n+s}) + b_{s−1} f(t_{n+s−1}, y_{n+s−1}) + ⋯ + b_0 f(t_n, y_n) ].
Another possibility is to use more points in the interval [t_n, t_{n+1}]. This leads to the family of Runge–Kutta methods, named after Carl Runge and Martin Kutta. One of their fourth-order methods is especially popular.
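A sketch of the classical fourth-order Runge–Kutta method, showing how f is sampled at several points inside [t, t + h]; the helper name rk4_step is an assumed convention:

```python
def rk4_step(f, t, y, h):
    """One step of the classical fourth-order Runge-Kutta method:
    four evaluations of f inside [t, t+h], combined with weights 1,2,2,1."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

On the test problem y′ = y a single step already matches the Taylor series of e^h through the h⁴ term.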
A good implementation of one of these methods for solving an ODE entails more than the timestepping formula.
It is often inefficient to use the same step size all the time, so variable stepsize methods have been developed. Usually, the step size is chosen such that the (local) error per step is below some tolerance level. This means that the methods must also compute an error indicator, an estimate of the local error.
An extension of this idea is to choose dynamically between different methods of different orders (this is called a variable order method). Methods based on Richardson extrapolation,^{[14]} such as the Bulirsch–Stoer algorithm,^{[15]}^{[16]} are often used to construct various methods of different orders.
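A simplified sketch of step-size control by step doubling, in the spirit of Richardson extrapolation: one Euler step of size h is compared with two steps of size h/2, their difference serves as the local error indicator, and the step is halved until the indicator meets the tolerance. All names and the halve-and-retry strategy here are illustrative assumptions:

```python
def adaptive_euler_step(f, t, y, h, tol):
    """Take one accepted Euler step, shrinking h until the step-doubling
    error indicator falls below tol. Returns (new t, new y, accepted h)."""
    big = y + h * f(t, y)                    # one step of size h
    half = y + h / 2 * f(t, y)               # two steps of size h/2
    small = half + h / 2 * f(t + h / 2, half)
    err = abs(small - big)                   # local error indicator
    if err <= tol:
        return t + h, small, h               # accept the more accurate value
    return adaptive_euler_step(f, t, y, h / 2, tol)  # reject: halve h, retry
```

In production codes the new step size is usually predicted from the error estimate rather than simply halved, but the accept/reject logic is the same.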
Other desirable features include: dense output (cheap numerical approximations over the whole integration interval, not only at the grid points); event location (finding the times where, say, a particular component of the solution vanishes); support for parallel computing; and, when used for integrating in time, time reversibility.
Many methods do not fall within the framework discussed here. Some classes of alternative methods are: multiderivative methods, which use not only the function f but also its derivatives; methods for second-order ODEs, such as Nyström methods, which integrate the equation directly rather than through reduction to a first-order system; and geometric integration methods, such as symplectic integrators, which are designed to preserve qualitative features of the flow.
For applications that require parallel computing on supercomputers, the degree of concurrency offered by a numerical method becomes relevant. In view of the challenges from exascale computing systems, numerical methods for initial value problems which can provide concurrency in the temporal direction are being studied.^{[20]} Parareal is a relatively well-known example of such a parallel-in-time integration method, but early ideas go back to the 1960s.^{[21]} With the advent of exascale computing, time-parallel integration methods are again receiving increased attention. Algorithms for exponential integrators can leverage, e.g., the standardized Batched BLAS functions, which allow an easy and efficient implementation of parallelized integrators.^{[22]}
Numerical analysis is not only the design of numerical methods, but also their analysis. Three central concepts in this analysis are: convergence (whether the method approximates the solution), order (how well it approximates the solution), and stability (whether errors are damped out).
Main articles: Sequence, Limit (mathematics), and Limit of a sequence 
A numerical method is said to be convergent if the numerical solution approaches the exact solution as the step size h goes to 0. More precisely, we require that for every ODE (1) with a Lipschitz function f and every t* > 0,

lim_{h→0⁺} max_{n=0,1,…,⌊t*/h⌋} ‖y_{n,h} − y(t_n)‖ = 0,

where y_{n,h} denotes the approximation at t_n computed with step size h.
All the methods mentioned above are convergent.
Further information: Truncation error (numerical integration) 
Suppose the numerical method is

y_{n+k} = Ψ(t_{n+k}; y_n, y_{n+1}, …, y_{n+k−1}; h).

The local (truncation) error of the method is the error committed by one step of the method. That is, it is the difference between the result given by the method, assuming that no error was made in earlier steps, and the exact solution:

δ^h_{n+k} = Ψ(t_{n+k}; y(t_n), y(t_{n+1}), …, y(t_{n+k−1}); h) − y(t_{n+k}).

The method is said to be consistent if

δ^h_{n+k} / h → 0 as h → 0.

The method has order p if

δ^h_{n+k} = O(h^{p+1}) as h → 0.
Hence a method is consistent if it has an order greater than 0. The (forward) Euler method (4) and the backward Euler method (6) introduced above both have order 1, so they are consistent. Most methods being used in practice attain higher order. Consistency is a necessary condition for convergence, but not sufficient; for a method to be convergent, it must be both consistent and zero-stable.
A related concept is the global (truncation) error, the error sustained in all the steps one needs to reach a fixed time t. Explicitly, the global error at time t is y_N − y(t), where N = (t − t_0)/h. The global error of a p-th order one-step method is O(h^p); in particular, such a method is convergent. This statement is not necessarily true for multistep methods.
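The global-error statement can be checked numerically. For the forward Euler method (order 1), halving the step size should roughly halve the global error at a fixed time; the test problem y′ = y, y(0) = 1 is an assumed example:

```python
import math

def euler_error_at_1(h):
    """Global error of forward Euler for y' = y, y(0) = 1, measured at t = 1."""
    y, n = 1.0, round(1.0 / h)
    for _ in range(n):
        y += h * y
    return abs(y - math.e)

# For a first-order method the error ratio under halving h should be close to 2.
ratio = euler_error_at_1(0.01) / euler_error_at_1(0.005)
```

Repeating the experiment with a p-th order method would give ratios close to 2^p.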
Further information: Stiff equation 
For some differential equations, standard methods—such as the Euler method, explicit Runge–Kutta methods, or multistep methods (for example, Adams–Bashforth methods)—exhibit instability in the solutions, though other methods may produce stable solutions. This "difficult behaviour" in the equation (which may not necessarily be complex itself) is described as stiffness, and is often caused by the presence of different time scales in the underlying problem.^{[24]} For example, a collision in a mechanical system like an impact oscillator typically occurs on a much smaller time scale than the time for the motion of the objects; this discrepancy makes for very "sharp turns" in the curves of the state parameters.
Stiff problems are ubiquitous in chemical kinetics, control theory, solid mechanics, weather forecasting, biology, plasma physics, and electronics. One way to overcome stiffness is to extend the notion of differential equation to that of differential inclusion, which allows for and models nonsmoothness.^{[25]}^{[26]}
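A small illustration of why implicit schemes help with stiffness: for the assumed model problem y′ = λy with λ = −50 and step size h = 0.1, forward Euler amplifies the solution by |1 + λh| = 4 per step (unstable), while backward Euler damps it by |1/(1 − λh)| = 1/6 (stable):

```python
def forward_euler_factor(lam, h):
    """Per-step amplification of forward Euler applied to y' = lam*y."""
    return 1 + lam * h

def backward_euler_factor(lam, h):
    """Per-step amplification of backward Euler applied to y' = lam*y."""
    return 1 / (1 - lam * h)

explicit = abs(forward_euler_factor(-50.0, 0.1))   # 4.0: the iterates blow up
implicit = abs(backward_euler_factor(-50.0, 0.1))  # 1/6: the iterates decay
```

The exact solution decays, so any amplification factor larger than 1 signals numerical instability rather than genuine growth.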
Below is a timeline of some important developments in this field.^{[27]}^{[28]}
Boundary value problems (BVPs) are usually solved numerically by solving an approximately equivalent matrix problem obtained by discretizing the original BVP.^{[29]} The most commonly used method for numerically solving BVPs in one dimension is called the finite difference method.^{[3]} This method takes advantage of linear combinations of point values to construct finite difference coefficients that describe derivatives of the function. For example, the second-order central difference approximation to the first derivative is given by:

u′(x) ≈ (u(x + h) − u(x − h)) / (2h),

and the second-order central difference for the second derivative is given by:

u″(x) ≈ (u(x + h) − 2u(x) + u(x − h)) / h².
In both of these formulae, h is the distance between neighbouring x values on the discretized domain. One then constructs a linear system that can then be solved by standard matrix methods. For example, suppose the equation to be solved is:

u″ − u = 0,  u(0) = 0,  u(1) = 1.

The next step would be to discretize the problem and use linear derivative approximations such as

u″_i = (u_{i+1} − 2u_i + u_{i−1}) / h²

and solve the resulting system of linear equations. This would lead to equations such as:

(u_{i+1} − 2u_i + u_{i−1}) / h² − u_i = 0,  for i = 1, 2, …, n − 1.
On first viewing, this system of equations appears to be homogeneous, since no term is free of the unknowns, but in fact this is false. At i = 1 and i = n − 1 the equations involve the boundary values u_0 = u(0) and u_n = u(1), and since these two values are known, one can simply substitute them into the equations and as a result obtain a nonhomogeneous linear system of equations that has nontrivial solutions.
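A sketch of the whole procedure for the model problem u″ − u = 0, u(0) = 0, u(1) = 1 (an assumed example whose exact solution is sinh(x)/sinh(1)), using central differences and the Thomas algorithm for the resulting tridiagonal system:

```python
import math

def solve_bvp(n):
    """Solve u'' - u = 0, u(0) = 0, u(1) = 1 on n subintervals by central
    differences; the tridiagonal system for the n-1 interior unknowns is
    solved with the Thomas algorithm (forward elimination, back substitution)."""
    h = 1.0 / n
    sub = 1.0 / h**2              # sub- and super-diagonal entries
    diag = -2.0 / h**2 - 1.0      # diagonal entries (the -1 comes from the -u term)
    m = n - 1                     # number of interior unknowns
    rhs = [0.0] * m
    rhs[-1] -= sub * 1.0          # known boundary value u(1) = 1 moved to the RHS
    c = [0.0] * m                 # modified super-diagonal
    d = [0.0] * m                 # modified right-hand side
    c[0] = sub / diag
    d[0] = rhs[0] / diag
    for i in range(1, m):
        denom = diag - sub * c[i - 1]
        c[i] = sub / denom
        d[i] = (rhs[i] - sub * d[i - 1]) / denom
    u = [0.0] * m
    u[-1] = d[-1]
    for i in range(m - 2, -1, -1):
        u[i] = d[i] - c[i] * u[i + 1]
    return u                      # interior values u(h), ..., u(1 - h)

u = solve_bvp(100)
exact_mid = math.sinh(0.5) / math.sinh(1.0)   # exact solution at x = 0.5
```

Because the scheme is second order, the computed interior values agree with the exact solution to roughly h² accuracy.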