In calculus, integration by substitution, also known as u-substitution, reverse chain rule or change of variables,[1] is a method for evaluating integrals and antiderivatives. It is the counterpart to the chain rule for differentiation, and can loosely be thought of as using the chain rule "backwards."

## Substitution for a single variable

### Introduction (indefinite integrals)

Before stating the result rigorously, consider a simple case using indefinite integrals.

Compute ${\textstyle \int (2x^{3}+1)^{7}(x^{2})\,dx.}$[2]

Set ${\displaystyle u=2x^{3}+1.}$ This means ${\textstyle {\frac {du}{dx}}=6x^{2},}$ or as a differential form, ${\textstyle du=6x^{2}\,dx.}$ Now: ${\displaystyle {\begin{aligned}\int (2x^{3}+1)^{7}(x^{2})\,dx&={\frac {1}{6}}\int \underbrace {(2x^{3}+1)^{7}} _{u^{7}}\underbrace {(6x^{2})\,dx} _{du}\\&={\frac {1}{6}}\int u^{7}\,du\\&={\frac {1}{6}}\left({\frac {1}{8}}u^{8}\right)+C\\&={\frac {1}{48}}(2x^{3}+1)^{8}+C,\end{aligned}}}$ where ${\displaystyle C}$ is an arbitrary constant of integration.

This procedure is frequently used, but not all integrals are of a form that permits its use. In any event, the result should be verified by differentiating and comparing to the original integrand. ${\displaystyle {\frac {d}{dx}}\left[{\frac {1}{48}}(2x^{3}+1)^{8}+C\right]={\frac {1}{6}}(2x^{3}+1)^{7}(6x^{2})=(2x^{3}+1)^{7}(x^{2}).}$ For definite integrals, the limits of integration must also be adjusted, but the procedure is mostly the same.
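The verification above can also be sanity-checked numerically. The following sketch (plain Python; the `simpson` helper and the interval [0, 1] are my own choices for illustration, not from the text) compares a composite-Simpson estimate of the integral against the value predicted by the antiderivative found above:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# The integrand and the antiderivative found above (taking C = 0).
integrand = lambda x: (2 * x**3 + 1)**7 * x**2
F = lambda x: (2 * x**3 + 1)**8 / 48

# By the fundamental theorem of calculus the two values should agree.
numeric = simpson(integrand, 0.0, 1.0)
exact = F(1.0) - F(0.0)
print(numeric, exact)
```

If the antiderivative were wrong (say, missing the factor 1/6), the two printed values would disagree visibly.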

### Statement for definite integrals

Let ${\displaystyle g:[a,b]\to I}$ be a differentiable function with a continuous derivative, where ${\displaystyle I\subset \mathbb {R} }$ is an interval. Suppose that ${\displaystyle f:I\to \mathbb {R} }$ is a continuous function. Then:[3] ${\displaystyle \int _{a}^{b}f(g(x))\cdot g'(x)\,dx=\int _{g(a)}^{g(b)}f(u)\ du.}$

In Leibniz notation, the substitution ${\displaystyle u=g(x)}$ yields: ${\displaystyle {\frac {du}{dx))=g'(x).}$ Working heuristically with infinitesimals yields the equation ${\displaystyle du=g'(x)\,dx,}$ which suggests the substitution formula above. (This equation may be put on a rigorous foundation by interpreting it as a statement about differential forms.) One may view the method of integration by substitution as a partial justification of Leibniz's notation for integrals and derivatives.

The formula is used to transform one integral into another integral that is easier to compute. Thus, the formula can be read from left to right or from right to left in order to simplify a given integral. When used in the former manner, it is sometimes known as u-substitution or w-substitution in which a new variable is defined to be a function of the original variable found inside the composite function multiplied by the derivative of the inner function. The latter manner is commonly used in trigonometric substitution, replacing the original variable with a trigonometric function of a new variable and the original differential with the differential of the trigonometric function.

### Proof

Integration by substitution can be derived from the fundamental theorem of calculus as follows. Let ${\displaystyle f}$ and ${\displaystyle g}$ be two functions satisfying the above hypothesis that ${\displaystyle f}$ is continuous on ${\displaystyle I}$ and ${\displaystyle g'}$ is integrable on the closed interval ${\displaystyle [a,b]}$. Then the function ${\displaystyle f(g(x))\cdot g'(x)}$ is also integrable on ${\displaystyle [a,b]}$. Hence the integrals ${\displaystyle \int _{a}^{b}f(g(x))\cdot g'(x)\ dx}$ and ${\displaystyle \int _{g(a)}^{g(b)}f(u)\ du}$ in fact exist, and it remains to show that they are equal.

Since ${\displaystyle f}$ is continuous, it has an antiderivative ${\displaystyle F}$. The composite function ${\displaystyle F\circ g}$ is then defined. Since ${\displaystyle g}$ is differentiable, combining the chain rule and the definition of an antiderivative gives: ${\displaystyle (F\circ g)'(x)=F'(g(x))\cdot g'(x)=f(g(x))\cdot g'(x).}$

Applying the fundamental theorem of calculus twice gives: ${\displaystyle {\begin{aligned}\int _{a}^{b}f(g(x))\cdot g'(x)\ dx&=\int _{a}^{b}(F\circ g)'(x)\ dx\\&=(F\circ g)(b)-(F\circ g)(a)\\&=F(g(b))-F(g(a))\\&=\int _{g(a)}^{g(b)}f(u)\,du,\end{aligned}}}$ which is the substitution rule.

### Examples: Antiderivatives (indefinite integrals)

Substitution can be used to determine antiderivatives. One chooses a relation between ${\displaystyle x}$ and ${\displaystyle u,}$ determines the corresponding relation between ${\displaystyle dx}$ and ${\displaystyle du}$ by differentiating, and performs the substitutions. An antiderivative for the substituted function can hopefully be determined; the original substitution between ${\displaystyle x}$ and ${\displaystyle u}$ is then undone.

#### Example 1

Consider the integral: ${\displaystyle \int x\cos(x^{2}+1)\ dx.}$ Make the substitution ${\textstyle u=x^{2}+1}$ to obtain ${\displaystyle du=2x\ dx,}$ meaning ${\textstyle x\ dx={\frac {1}{2}}\ du.}$ Therefore: ${\displaystyle {\begin{aligned}\int x\cos(x^{2}+1)\,dx&={\frac {1}{2}}\int 2x\cos(x^{2}+1)\,dx\\[6pt]&={\frac {1}{2}}\int \cos u\,du\\[6pt]&={\frac {1}{2}}\sin u+C\\[6pt]&={\frac {1}{2}}\sin(x^{2}+1)+C,\end{aligned}}}$ where ${\displaystyle C}$ is an arbitrary constant of integration.
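As a quick check of this example, the sketch below (the `simpson` helper and the interval [0, 1] are my own choices) compares a numerical value of the integral with the difference of the antiderivative ½ sin(x² + 1) at the endpoints:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

integrand = lambda x: x * math.cos(x**2 + 1)
F = lambda x: 0.5 * math.sin(x**2 + 1)   # antiderivative from the example

numeric = simpson(integrand, 0.0, 1.0)
exact = F(1.0) - F(0.0)                   # = (sin 2 - sin 1) / 2
print(numeric, exact)
```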

#### Example 2: Antiderivatives of tangent and cotangent

The tangent function can be integrated using substitution by expressing it in terms of the sine and cosine: ${\displaystyle \tan x={\tfrac {\sin x}{\cos x}}}$.

Using the substitution ${\displaystyle u=\cos x}$ gives ${\displaystyle du=-\sin x\,dx}$ and ${\displaystyle {\begin{aligned}\int \tan x\,dx&=\int {\frac {\sin x}{\cos x}}\,dx\\&=\int -{\frac {du}{u}}\\&=-\ln \left|u\right|+C\\&=-\ln \left|\cos x\right|+C\\&=\ln \left|\sec x\right|+C.\end{aligned}}}$

The cotangent function can be integrated similarly by expressing it as ${\displaystyle \cot x={\tfrac {\cos x}{\sin x}}}$ and using the substitution ${\displaystyle u=\sin x,\ du=\cos x\,dx}$: ${\displaystyle {\begin{aligned}\int \cot x\,dx&=\int {\frac {\cos x}{\sin x}}\,dx\\&=\int {\frac {du}{u}}\\&=\ln \left|u\right|+C\\&=\ln \left|\sin x\right|+C.\end{aligned}}}$
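Both antiderivatives can be checked numerically on intervals where the denominators stay positive. In the sketch below the intervals [0, 1] and [0.5, 1] are my own choices (cos x > 0 and sin x > 0 there, so the absolute values drop out):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Tangent on [0, 1]: antiderivative ln|sec x| = -ln(cos x) here.
tan_num = simpson(math.tan, 0.0, 1.0)
tan_exact = -math.log(math.cos(1.0))

# Cotangent on [0.5, 1]: antiderivative ln|sin x| = ln(sin x) here.
cot_num = simpson(lambda x: math.cos(x) / math.sin(x), 0.5, 1.0)
cot_exact = math.log(math.sin(1.0)) - math.log(math.sin(0.5))

print(tan_num, tan_exact)
print(cot_num, cot_exact)
```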

### Examples: Definite integrals

When evaluating definite integrals by substitution, one may transform the limits of integration along with the variable, so that the transformed integral is evaluated entirely in the new variable. Alternatively, one may fully evaluate the indefinite integral first (see above) and then apply the original boundary conditions; in that case, there is no need to transform the limits. The latter approach becomes especially handy when multiple substitutions are used.

#### Example 1

Consider the integral: ${\displaystyle \int _{0}^{2}{\frac {x}{\sqrt {x^{2}+1}}}\,dx.}$ Make the substitution ${\textstyle u=x^{2}+1}$ to obtain ${\displaystyle du=2x\ dx,}$ meaning ${\textstyle x\ dx={\frac {1}{2}}\ du.}$ Therefore: ${\displaystyle {\begin{aligned}\int _{x=0}^{x=2}{\frac {x}{\sqrt {x^{2}+1}}}\ dx&={\frac {1}{2}}\int _{u=1}^{u=5}{\frac {du}{\sqrt {u}}}\\[6pt]&={\frac {1}{2}}\left(2{\sqrt {5}}-2{\sqrt {1}}\right)\\[6pt]&={\sqrt {5}}-1.\end{aligned}}}$ Since the lower limit ${\displaystyle x=0}$ was replaced with ${\displaystyle u=1,}$ and the upper limit ${\displaystyle x=2}$ with ${\displaystyle 2^{2}+1=5,}$ a transformation back into terms of ${\displaystyle x}$ was unnecessary.
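The limit transformation can be illustrated numerically: integrating the original integrand over [0, 2] and the substituted integrand ½u^(-1/2) over [1, 5] should give the same value, √5 − 1. (The `simpson` helper below is my own scaffolding.)

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Left-hand side: x-integral over [x=0, x=2].
lhs = simpson(lambda x: x / math.sqrt(x**2 + 1), 0.0, 2.0)

# Right-hand side: u-integral over the transformed limits [u=1, u=5].
rhs = simpson(lambda u: 0.5 / math.sqrt(u), 1.0, 5.0)

exact = math.sqrt(5) - 1
print(lhs, rhs, exact)
```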

#### Example 2: Trigonometric substitution

For the integral ${\displaystyle \int _{0}^{1}{\sqrt {1-x^{2}}}\,dx,}$ a variation of the above procedure is needed. The substitution ${\displaystyle x=\sin u}$ implying ${\displaystyle dx=\cos u\,du}$ is useful because ${\textstyle {\sqrt {1-\sin ^{2}u}}=\cos u}$ for ${\textstyle 0\leq u\leq \pi /2,}$ where the cosine is nonnegative. We thus have: ${\displaystyle {\begin{aligned}\int _{0}^{1}{\sqrt {1-x^{2}}}\ dx&=\int _{0}^{\pi /2}{\sqrt {1-\sin ^{2}u}}\cos u\ du\\[6pt]&=\int _{0}^{\pi /2}\cos ^{2}u\ du\\[6pt]&=\left[{\frac {u}{2}}+{\frac {\sin(2u)}{4}}\right]_{0}^{\pi /2}\\[6pt]&={\frac {\pi }{4}}+0\\[6pt]&={\frac {\pi }{4}}.\end{aligned}}}$

The resulting integral can be computed using integration by parts or a double angle formula, ${\textstyle 2\cos ^{2}u=1+\cos(2u),}$ followed by one more substitution. One can also note that the function being integrated is the upper right quarter of a circle with a radius of one, and hence integrating the upper right quarter from zero to one is the geometric equivalent to the area of one quarter of the unit circle, or ${\displaystyle {\tfrac {\pi }{4}}.}$
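Both sides of this trigonometric substitution can be checked numerically; each should come out to π/4 ≈ 0.7854. (The `simpson` helper and subinterval counts are my own choices; the x-integral gets a finer grid because √(1 − x²) has a vertical tangent at x = 1, which slows the quadrature's convergence.)

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Original integral: quarter-circle area on [0, 1].
lhs = simpson(lambda x: math.sqrt(1.0 - x * x), 0.0, 1.0, n=20000)

# Substituted integral: cos^2(u) on [0, pi/2].
rhs = simpson(lambda u: math.cos(u)**2, 0.0, math.pi / 2)

print(lhs, rhs, math.pi / 4)
```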

## Substitution for multiple variables

One may also use substitution when integrating functions of several variables.

Here, the substitution function ${\displaystyle (v_{1},\ldots ,v_{n})=\varphi (u_{1},\ldots ,u_{n})}$ needs to be injective and continuously differentiable, and the differentials transform as: ${\displaystyle dv_{1}\cdots dv_{n}=\left|\det(D\varphi )(u_{1},\ldots ,u_{n})\right|\,du_{1}\cdots du_{n},}$ where ${\displaystyle \det(D\varphi )(u_{1},\ldots ,u_{n})}$ denotes the determinant of the Jacobian matrix of partial derivatives of φ at the point ${\displaystyle (u_{1},\ldots ,u_{n})}$. This formula expresses the fact that the absolute value of the determinant of a matrix equals the volume of the parallelotope spanned by its columns or rows.

More precisely, the change of variables formula is stated in the next theorem:

Theorem — Let U be an open set in Rn and φ : U → Rn an injective differentiable function with continuous partial derivatives, the Jacobian of which is nonzero for every x in U. Then for any real-valued, compactly supported, continuous function f, with support contained in φ(U): ${\displaystyle \int _{\varphi (U)}f(\mathbf {v} )\,d\mathbf {v} =\int _{U}f(\varphi (\mathbf {u} ))\left|\det(D\varphi )(\mathbf {u} )\right|\,d\mathbf {u} .}$
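A minimal numeric illustration of the theorem, using the familiar polar-coordinate map φ(r, θ) = (r cos θ, r sin θ), whose Jacobian determinant is r (the grid size n = 400 and midpoint rule are my own choices): integrating f = 1 over the unit disk through the (r, θ) rectangle [0, 1] × [0, 2π] recovers the disk's area, π.

```python
import math

# Change of variables to polar coordinates: |det Dφ(r, θ)| = r.
# Midpoint rule over a uniform (r, θ) grid on [0, 1] x [0, 2π].
n = 400
dr, dth = 1.0 / n, 2.0 * math.pi / n

area = 0.0
for i in range(n):
    r = (i + 0.5) * dr                 # midpoint in r
    for j in range(n):
        # f(φ(r, θ)) * |det Dφ| * dr dθ, with f = 1 on the disk.
        area += r * dr * dth

print(area, math.pi)
```

Without the Jacobian factor r, the sum would instead give 2π, the area of the parameter rectangle rather than of the disk, which is exactly the distortion the determinant corrects for.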

The conditions on the theorem can be weakened in various ways. First, the requirement that φ be continuously differentiable can be replaced by the weaker assumption that φ be merely differentiable and have a continuous inverse.[4] This is guaranteed to hold if φ is continuously differentiable by the inverse function theorem. Alternatively, the requirement that det(Dφ) ≠ 0 can be eliminated by applying Sard's theorem.[5]

For Lebesgue measurable functions, the theorem can be stated in the following form:[6]

Theorem — Let U be a measurable subset of Rn and φ : U → Rn an injective function, and suppose for every x in U there exists φ′(x) in Rn×n such that φ(y) = φ(x) + φ′(x)(y − x) + o(‖y − x‖) as y → x (here o is little-o notation). Then φ(U) is measurable, and for any real-valued function f defined on φ(U): ${\displaystyle \int _{\varphi (U)}f(v)\,dv=\int _{U}f(\varphi (u))\left|\det \varphi '(u)\right|\,du}$ in the sense that if either integral exists (including the possibility of being properly infinite), then so does the other one, and they have the same value.

Another very general version in measure theory is the following:[7]

Theorem — Let X be a locally compact Hausdorff space equipped with a finite Radon measure μ, and let Y be a σ-compact Hausdorff space with a σ-finite Radon measure ρ. Let φ : X → Y be an absolutely continuous function (where the latter means that ρ(φ(E)) = 0 whenever μ(E) = 0). Then there exists a real-valued Borel measurable function w on X such that for every Lebesgue integrable function f : Y → R, the function (f ∘ φ) ⋅ w is Lebesgue integrable on X, and ${\displaystyle \int _{Y}f(y)\,d\rho (y)=\int _{X}(f\circ \varphi )(x)\,w(x)\,d\mu (x).}$ Furthermore, it is possible to write ${\displaystyle w(x)=(g\circ \varphi )(x)}$ for some Borel measurable function g on Y.

In geometric measure theory, integration by substitution is used with Lipschitz functions. A bi-Lipschitz function is a Lipschitz function φ : U → Rn which is injective and whose inverse function φ−1 : φ(U) → U is also Lipschitz. By Rademacher's theorem, a bi-Lipschitz mapping is differentiable almost everywhere. In particular, the Jacobian determinant det Dφ of a bi-Lipschitz mapping is well-defined almost everywhere. The following result then holds:

Theorem — Let U be an open subset of Rn and φ : U → Rn be a bi-Lipschitz mapping. Let f : φ(U) → R be measurable. Then ${\displaystyle \int _{\varphi (U)}f(x)\,dx=\int _{U}(f\circ \varphi )(x)|\det D\varphi (x)|\,dx}$ in the sense that if either integral exists (or is properly infinite), then so does the other one, and they have the same value.

The above theorem was first proposed by Euler when he developed the notion of double integrals in 1769. Although generalized to triple integrals by Lagrange in 1773, and used by Legendre, Laplace, and Gauss, and first generalized to n variables by Mikhail Ostrogradsky in 1836, it resisted a fully rigorous formal proof for a surprisingly long time, and was first satisfactorily resolved 125 years later, by Élie Cartan in a series of papers beginning in the mid-1890s.[8][9]

## Application in probability

Substitution can be used to answer the following important question in probability: given a random variable X with probability density pX and another random variable Y such that Y = ϕ(X) for injective (one-to-one) ϕ, what is the probability density for Y?

It is easiest to answer this question by first answering a slightly different question: what is the probability that Y takes a value in some particular subset S? Denote this probability P(YS). Of course, if Y has probability density pY, then the answer is: ${\displaystyle P(Y\in S)=\int _{S}p_{Y}(y)\,dy,}$ but this is not really useful because we do not know pY; it is what we are trying to find. We can make progress by considering the problem in the variable X. Y takes a value in S whenever X takes a value in ${\textstyle \phi ^{-1}(S),}$ so: ${\displaystyle P(Y\in S)=P(X\in \phi ^{-1}(S))=\int _{\phi ^{-1}(S)}p_{X}(x)\,dx.}$

Changing from variable x to y gives: ${\displaystyle P(Y\in S)=\int _{\phi ^{-1}(S)}p_{X}(x)\,dx=\int _{S}p_{X}(\phi ^{-1}(y))\left|{\frac {d\phi ^{-1}}{dy}}\right|\,dy.}$ Combining this with our first equation gives: ${\displaystyle \int _{S}p_{Y}(y)\,dy=\int _{S}p_{X}(\phi ^{-1}(y))\left|{\frac {d\phi ^{-1}}{dy}}\right|\,dy,}$ so: ${\displaystyle p_{Y}(y)=p_{X}(\phi ^{-1}(y))\left|{\frac {d\phi ^{-1}}{dy}}\right|.}$
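This density formula can be checked on a toy example of my own choosing: take X uniform on (0, 1), so pX = 1 there, and ϕ(x) = x², which is injective on that interval. Then ϕ⁻¹(y) = √y, |dϕ⁻¹/dy| = 1/(2√y), and the formula gives pY(y) = 1/(2√y). The probability of an interval computed by integrating pY should match the direct computation P(X ∈ (√a, √b)) = √b − √a.

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Density of Y = X^2 from the change-of-variables formula:
# p_Y(y) = p_X(sqrt(y)) * |d/dy sqrt(y)| = 1 / (2 sqrt(y)) on (0, 1).
p_Y = lambda y: 1.0 / (2.0 * math.sqrt(y))

# P(Y in (0.04, 0.64)) should equal P(X in (0.2, 0.8)) = 0.6.
prob = simpson(p_Y, 0.04, 0.64)
print(prob)
```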

In the case where X and Y depend on several uncorrelated variables (i.e., ${\textstyle p_{X}=p_{X}(x_{1},\ldots ,x_{n})}$ and ${\displaystyle y=\phi (x)}$), ${\displaystyle p_{Y}}$ can be found by substitution in several variables discussed above. The result is: ${\displaystyle p_{Y}(y)=p_{X}(\phi ^{-1}(y))\left|\det D\phi ^{-1}(y)\right|.}$

## Notes

1. ^ Swokowski 1983, p. 257
2. ^ Swokowski 1983, p. 258
3. ^ Briggs & Cochran 2011, p. 361
4. ^ Rudin 1987, Theorem 7.26
5. ^ Spivak 1965, p. 72
6. ^ Fremlin 2010, Theorem 263D
7. ^ Hewitt & Stromberg 1965, Theorem 20.3
8. ^ Katz 1982
9. ^ Ferzola 1994

## References

• Briggs, William; Cochran, Lyle (2011), Calculus: Early Transcendentals (Single Variable ed.), Addison-Wesley, ISBN 978-0-321-66414-3
• Ferzola, Anthony P. (1994), "Euler and differentials", The College Mathematics Journal, 25 (2): 102–111, doi:10.2307/2687130, JSTOR 2687130, archived from the original on 2012-11-07, retrieved 2008-12-24
• Fremlin, D.H. (2010), Measure Theory, Volume 2, Torres Fremlin, ISBN 978-0-9538129-7-4.
• Hewitt, Edwin; Stromberg, Karl (1965), Real and Abstract Analysis, Springer-Verlag, ISBN 978-0-387-04559-7.
• Katz, V. (1982), "Change of variables in multiple integrals: Euler to Cartan", Mathematics Magazine, 55 (1): 3–11, doi:10.2307/2689856, JSTOR 2689856
• Rudin, Walter (1987), Real and Complex Analysis, McGraw-Hill, ISBN 978-0-07-054234-1.
• Swokowski, Earl W. (1983), Calculus with analytic geometry (alternate ed.), Prindle, Weber & Schmidt, ISBN 0-87150-341-7
• Spivak, Michael (1965), Calculus on Manifolds, Westview Press, ISBN 978-0-8053-9021-6.