
In algebra, the partial fraction decomposition or partial fraction expansion of a rational fraction (that is, a fraction such that the numerator and the denominator are both polynomials) is an operation that consists of expressing the fraction as a sum of a polynomial (possibly zero) and one or several fractions with a simpler denominator.[1]

The importance of the partial fraction decomposition lies in the fact that it provides algorithms for various computations with rational functions, including the explicit computation of antiderivatives,[2] Taylor series expansions, inverse Z-transforms, and inverse Laplace transforms. The concept was discovered independently in 1702 by both Johann Bernoulli and Gottfried Leibniz.[3]

In symbols, the partial fraction decomposition of a rational fraction of the form ${\textstyle {\frac {f(x)}{g(x)}},}$ where f and g are polynomials, is its expression as

${\displaystyle {\frac {f(x)}{g(x)}}=p(x)+\sum _{j}{\frac {f_{j}(x)}{g_{j}(x)}}}$

where p(x) is a polynomial, and, for each j, the denominator gj(x) is a power of an irreducible polynomial (one that is not factorable into polynomials of positive degree), and the numerator fj(x) is a polynomial of degree smaller than the degree of this irreducible polynomial.

When explicit computation is involved, a coarser decomposition is often preferred, which consists of replacing "irreducible polynomial" by "square-free polynomial" in the description of the outcome. This allows replacing polynomial factorization by the much easier-to-compute square-free factorization. This is sufficient for most applications, and avoids introducing irrational coefficients when the coefficients of the input polynomials are integers or rational numbers.

Basic principles

Let ${\displaystyle R(x)={\frac {F}{G}}}$ be a rational fraction, where F and G are univariate polynomials in the indeterminate x over a field. The existence of the partial fraction decomposition can be proved by applying inductively the following reduction steps.

Polynomial part

There exist two polynomials E and F1 such that ${\displaystyle {\frac {F}{G}}=E+{\frac {F_{1}}{G}},}$ and ${\displaystyle \deg F_{1}<\deg G,}$ where ${\displaystyle \deg P}$ denotes the degree of the polynomial P.

This results immediately from the Euclidean division of F by G, which asserts the existence of E and F1 such that ${\displaystyle F=EG+F_{1}}$ and ${\displaystyle \deg F_{1}<\deg G.}$
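The Euclidean division step can be sketched in code. This is a minimal implementation over the rationals; the function name `poly_divmod` and the list-of-coefficients representation are illustrative choices, not from the article:

```python
from fractions import Fraction

def poly_divmod(F, G):
    """Euclidean division F = E*G + F1 with deg F1 < deg G.
    Polynomials are lists of coefficients, highest degree first."""
    F = [Fraction(c) for c in F]
    G = [Fraction(c) for c in G]
    E = []
    while len(F) >= len(G):
        q = F[0] / G[0]
        E.append(q)
        for i in range(len(G)):   # subtract q * x^(deg F - deg G) * G
            F[i] -= q * G[i]
        F.pop(0)                  # the leading coefficient is now zero
    return E, F                   # quotient E, remainder F1

# x^3 + 16 divided by x^3 - 4x^2 + 8x: quotient 1, remainder 4x^2 - 8x + 16
E, F1 = poly_divmod([1, 0, 0, 16], [1, -4, 8, 0])
```

This is the same division that produces the polynomial part in Example 2 further down.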

This allows supposing in the next steps that ${\displaystyle \deg F<\deg G.}$

Factors of the denominator

If ${\displaystyle \deg F<\deg G,}$ and ${\displaystyle G=G_{1}G_{2},}$ where G1 and G2 are coprime polynomials, then there exist polynomials ${\displaystyle F_{1}}$ and ${\displaystyle F_{2}}$ such that ${\displaystyle {\frac {F}{G}}={\frac {F_{1}}{G_{1}}}+{\frac {F_{2}}{G_{2}}},}$ and ${\displaystyle \deg F_{1}<\deg G_{1}\quad {\text{and}}\quad \deg F_{2}<\deg G_{2}.}$

This can be proved as follows. Bézout's identity asserts the existence of polynomials C and D such that ${\displaystyle CG_{1}+DG_{2}=1}$ (by hypothesis, 1 is a greatest common divisor of G1 and G2).

Let ${\displaystyle DF=G_{1}Q+F_{1}}$ with ${\displaystyle \deg F_{1}<\deg G_{1}}$ be the Euclidean division of DF by ${\displaystyle G_{1}.}$ Setting ${\displaystyle F_{2}=CF+QG_{2},}$ one gets ${\displaystyle {\begin{aligned}{\frac {F}{G}}&={\frac {F(CG_{1}+DG_{2})}{G_{1}G_{2}}}={\frac {DF}{G_{1}}}+{\frac {CF}{G_{2}}}\\&={\frac {F_{1}+G_{1}Q}{G_{1}}}+{\frac {F_{2}-G_{2}Q}{G_{2}}}\\&={\frac {F_{1}}{G_{1}}}+{\frac {F_{2}}{G_{2}}}.\end{aligned}}}$ It remains to show that ${\displaystyle \deg F_{2}<\deg G_{2}.}$ By reducing the last sum of fractions to a common denominator, one gets ${\displaystyle F=F_{2}G_{1}+F_{1}G_{2},}$ and thus ${\displaystyle {\begin{aligned}\deg F_{2}&=\deg(F-F_{1}G_{2})-\deg G_{1}\leq \max(\deg F,\deg(F_{1}G_{2}))-\deg G_{1}\\&<\max(\deg G,\deg(G_{1}G_{2}))-\deg G_{1}=\deg G_{2}\end{aligned}}}$
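A concrete instance of this splitting step can be checked with exact arithmetic. The data below is an assumed example (F = 1, G1 = x + 3, G2 = x − 1, with Bézout cofactors C = 1/4 and D = −1/4 found by hand), not taken from the article:

```python
from fractions import Fraction

# Bézout identity for the coprime factors G1 = x + 3 and G2 = x - 1:
#   C*(x + 3) + D*(x - 1) = 1  with  C = 1/4, D = -1/4.
C, D = Fraction(1, 4), Fraction(-1, 4)

for x in (Fraction(2), Fraction(5), Fraction(-7, 3)):
    assert C * (x + 3) + D * (x - 1) == 1
    # With F = 1, the construction gives F1 = D*F and F2 = C*F (here Q = 0,
    # since deg(D*F) < deg G1 already), so F/G = F1/G1 + F2/G2:
    lhs = Fraction(1) / ((x + 3) * (x - 1))
    rhs = D / (x + 3) + C / (x - 1)
    assert lhs == rhs
```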

Powers in the denominator

Using the preceding decomposition inductively one gets fractions of the form ${\displaystyle {\frac {F}{G^{k}}},}$ with ${\displaystyle \deg F<\deg G^{k}=k\deg G,}$ where G is an irreducible polynomial. If k > 1, one can decompose further, by using that an irreducible polynomial is a square-free polynomial, that is, ${\displaystyle 1}$ is a greatest common divisor of the polynomial and its derivative. If ${\displaystyle G'}$ is the derivative of G, Bézout's identity provides polynomials C and D such that ${\displaystyle CG+DG'=1}$ and thus ${\displaystyle F=FCG+FDG'.}$ Euclidean division of ${\displaystyle FDG'}$ by ${\displaystyle G}$ gives polynomials ${\displaystyle H_{k}}$ and ${\displaystyle Q}$ such that ${\displaystyle FDG'=QG+H_{k}}$ and ${\displaystyle \deg H_{k}<\deg G.}$ Setting ${\displaystyle F_{k-1}=FC+Q,}$ one gets ${\displaystyle {\frac {F}{G^{k}}}={\frac {H_{k}}{G^{k}}}+{\frac {F_{k-1}}{G^{k-1}}},}$ with ${\displaystyle \deg H_{k}<\deg G.}$

Iterating this process with ${\displaystyle {\frac {F_{k-1}}{G^{k-1}}}}$ in place of ${\displaystyle {\frac {F}{G^{k}}}}$ leads eventually to the following theorem.

Statement

Theorem — Let f and g be nonzero polynomials over a field K. Write g as a product of powers of distinct irreducible polynomials: ${\displaystyle g=\prod _{i=1}^{k}p_{i}^{n_{i}}.}$

There are (unique) polynomials b and aij with deg aij < deg pi such that ${\displaystyle {\frac {f}{g}}=b+\sum _{i=1}^{k}\sum _{j=1}^{n_{i}}{\frac {a_{ij}}{p_{i}^{j}}}.}$

If deg f < deg g, then b = 0.

The uniqueness can be proved as follows. Let d = max(1 + deg f, deg g). Altogether, b and the aij have d coefficients. The shape of the decomposition defines a linear map from coefficient vectors to polynomials f of degree less than d. The existence proof means that this map is surjective. As the two vector spaces have the same dimension, the map is also injective, which means uniqueness of the decomposition. Incidentally, this proof induces an algorithm for computing the decomposition through linear algebra.
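The linear-algebra algorithm can be sketched with exact rational arithmetic. The input fraction below, (x² + 1)/((x + 2)(x − 1)(x² + x + 1)), is the one used as an example later in the article; the helper names are illustrative. Each row of the system equates coefficients of one power of x:

```python
from fractions import Fraction

def polymul(a, b):
    """Product of polynomials given as coefficient lists, highest degree first."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += Fraction(x) * Fraction(y)
    return out

def solve(M, v):
    """Solve M x = v exactly by Gauss-Jordan elimination over the rationals."""
    n = len(M)
    A = [[Fraction(c) for c in row] + [Fraction(v[i])] for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

# Decompose (x^2 + 1)/((x + 2)(x - 1)(x^2 + x + 1)) as
#   a/(x + 2) + b/(x - 1) + (c x + d)/(x^2 + x + 1).
# Multiplying through by the denominator, each unknown multiplies a cofactor:
cof_a = polymul([1, -1], [1, 1, 1])                 # (x - 1)(x^2 + x + 1)
cof_b = polymul([1, 2], [1, 1, 1])                  # (x + 2)(x^2 + x + 1)
cof_c = polymul([1, 0], polymul([1, 2], [1, -1]))   # x (x + 2)(x - 1)
cof_d = [Fraction(0)] + polymul([1, 2], [1, -1])    # (x + 2)(x - 1), padded
cols = [cof_a, cof_b, cof_c, cof_d]

# Equate the coefficients of x^3, x^2, x, 1 with those of the numerator x^2 + 1:
M = [[cols[j][i] for j in range(4)] for i in range(4)]
a, b, c, d = solve(M, [0, 1, 0, 1])
```

The system always has a unique solution, as the uniqueness argument above guarantees.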

If K is the field of complex numbers, the fundamental theorem of algebra implies that all pi have degree one, and all numerators ${\displaystyle a_{ij}}$ are constants. When K is the field of real numbers, some of the pi may be quadratic, so, in the partial fraction decomposition, quotients of linear polynomials by powers of quadratic polynomials may also occur.

In the preceding theorem, one may replace "distinct irreducible polynomials" by "pairwise coprime polynomials that are coprime with their derivative". For example, the pi may be the factors of the square-free factorization of g. When K is the field of rational numbers, as is typically the case in computer algebra, this makes it possible to replace factorization with greatest common divisor computation when computing a partial fraction decomposition.

Application to symbolic integration

For the purpose of symbolic integration, the preceding result may be refined into

Theorem — Let f and g be nonzero polynomials over a field K. Write g as a product of powers of pairwise coprime polynomials which have no multiple root in an algebraically closed field:

${\displaystyle g=\prod _{i=1}^{k}p_{i}^{n_{i}}.}$

There are (unique) polynomials b and cij with deg cij < deg pi such that ${\displaystyle {\frac {f}{g}}=b+\sum _{i=1}^{k}\sum _{j=2}^{n_{i}}\left({\frac {c_{ij}}{p_{i}^{j-1}}}\right)'+\sum _{i=1}^{k}{\frac {c_{i1}}{p_{i}}},}$ where ${\displaystyle X'}$ denotes the derivative of ${\displaystyle X.}$

This reduces the computation of the antiderivative of a rational function to the integration of the last sum, which is called the logarithmic part, because its antiderivative is a linear combination of logarithms.

There are various methods to compute the decomposition in the Theorem. One simple way is called Hermite's method. First, b is immediately computed by Euclidean division of f by g, reducing to the case where deg(f) < deg(g). Next, one knows deg(cij) < deg(pi), so one may write each cij as a polynomial with unknown coefficients. Reducing the sum of fractions in the Theorem to a common denominator, and equating the coefficients of each power of x in the two numerators, one gets a system of linear equations which can be solved to obtain the desired (unique) values for the unknown coefficients.

Procedure

Given two polynomials ${\displaystyle P(x)}$ and ${\displaystyle Q(x)=(x-\alpha _{1})(x-\alpha _{2})\cdots (x-\alpha _{n})}$, where the αi are distinct constants and deg P < n, explicit expressions for partial fractions can be obtained by supposing that ${\displaystyle {\frac {P(x)}{Q(x)}}={\frac {c_{1}}{x-\alpha _{1}}}+{\frac {c_{2}}{x-\alpha _{2}}}+\cdots +{\frac {c_{n}}{x-\alpha _{n}}}}$ and solving for the ci constants, by substitution, by equating the coefficients of terms involving the powers of x, or otherwise. (This is a variant of the method of undetermined coefficients. After both sides of the equation are multiplied by Q(x), one side of the equation is a specific polynomial, and the other side is a polynomial with undetermined coefficients. The equality is possible only when the coefficients of like powers of x are equal. This yields n equations in n unknowns, the ck.)

A more direct computation, which is strongly related to Lagrange interpolation, consists of writing ${\displaystyle {\frac {P(x)}{Q(x)}}=\sum _{i=1}^{n}{\frac {P(\alpha _{i})}{Q'(\alpha _{i})}}{\frac {1}{(x-\alpha _{i})}}}$ where ${\displaystyle Q'}$ is the derivative of the polynomial ${\displaystyle Q}$. The coefficients of ${\displaystyle {\tfrac {1}{x-\alpha _{j}}}}$ are called the residues of P/Q.
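This residue formula is easy to mechanize with exact arithmetic. The helper names `peval` and `dpoly` are illustrative; the example fraction 1/((x + 3)(x − 1)) is the one solved as Example 1 below:

```python
from fractions import Fraction

def peval(p, x):
    """Evaluate a polynomial (coefficients highest degree first) by Horner's rule."""
    v = Fraction(0)
    for c in p:
        v = v * x + Fraction(c)
    return v

def dpoly(p):
    """Derivative of a polynomial, coefficients highest degree first."""
    n = len(p) - 1
    return [Fraction(c) * (n - i) for i, c in enumerate(p[:-1])] or [Fraction(0)]

# Residues of 1/((x + 3)(x - 1)) = 1/(x^2 + 2x - 3) at its simple roots -3 and 1:
P, Q = [1], [1, 2, -3]
residues = {r: peval(P, r) / peval(dpoly(Q), r) for r in (Fraction(-3), Fraction(1))}
```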

This approach does not account for several other cases, but can be modified accordingly:

• If ${\displaystyle \deg P\geq \deg Q,}$ then it is necessary to perform the Euclidean division of P by Q, using polynomial long division, giving P(x) = E(x) Q(x) + R(x) with deg R < n. Dividing by Q(x) this gives ${\displaystyle {\frac {P(x)}{Q(x)}}=E(x)+{\frac {R(x)}{Q(x)}},}$ and then seek partial fractions for the remainder fraction (which by definition satisfies deg R < deg Q).
• If Q(x) contains factors which are irreducible over the given field, then the numerator N(x) of each partial fraction with such a factor F(x) in the denominator must be sought as a polynomial with deg N < deg F, rather than as a constant. For example, take the following decomposition over R: ${\displaystyle {\frac {x^{2}+1}{(x+2)(x-1)(x^{2}+x+1)}}={\frac {a}{x+2}}+{\frac {b}{x-1}}+{\frac {cx+d}{x^{2}+x+1}}.}$
• Suppose Q(x) = (x − α)^r S(x) with S(α) ≠ 0, that is, α is a root of Q(x) of multiplicity r. In the partial fraction decomposition, the first r powers of (x − α) will occur as denominators of the partial fractions (possibly with a zero numerator). For example, if S(x) = 1 the partial fraction decomposition has the form ${\displaystyle {\frac {P(x)}{Q(x)}}={\frac {P(x)}{(x-\alpha )^{r}}}={\frac {c_{1}}{x-\alpha }}+{\frac {c_{2}}{(x-\alpha )^{2}}}+\cdots +{\frac {c_{r}}{(x-\alpha )^{r}}}.}$

Illustration

In an example application of this procedure, (3x + 5)/(1 − 2x)^2 can be decomposed in the form

${\displaystyle {\frac {3x+5}{(1-2x)^{2}}}={\frac {A}{(1-2x)^{2}}}+{\frac {B}{(1-2x)}}.}$

Clearing denominators shows that 3x + 5 = A + B(1 − 2x). Expanding and equating the coefficients of powers of x gives

5 = A + B and 3x = −2Bx

Solving this system of linear equations for A and B yields A = 13/2 and B = −3/2. Hence,

${\displaystyle {\frac {3x+5}{(1-2x)^{2}}}={\frac {13/2}{(1-2x)^{2}}}+{\frac {-3/2}{(1-2x)}}.}$
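The result can be checked by recombining the two fractions with exact arithmetic at a few sample points (a verification sketch, not part of the original derivation):

```python
from fractions import Fraction

A, B = Fraction(13, 2), Fraction(-3, 2)
for x in (Fraction(0), Fraction(2), Fraction(-1, 3)):
    lhs = (3 * x + 5) / (1 - 2 * x) ** 2
    rhs = A / (1 - 2 * x) ** 2 + B / (1 - 2 * x)
    assert lhs == rhs
```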

Residue method

Over the complex numbers, suppose f(x) is a proper rational fraction that can be decomposed into

${\displaystyle f(x)=\sum _{i}\left({\frac {a_{i1}}{x-x_{i}}}+{\frac {a_{i2}}{(x-x_{i})^{2}}}+\cdots +{\frac {a_{ik_{i}}}{(x-x_{i})^{k_{i}}}}\right).}$

Let ${\displaystyle g_{ij}(x)=(x-x_{i})^{j-1}f(x),}$ then according to the uniqueness of Laurent series, aij is the coefficient of the term (xxi)−1 in the Laurent expansion of gij(x) about the point xi, i.e., its residue ${\displaystyle a_{ij}=\operatorname {Res} (g_{ij},x_{i}).}$

This is given directly by the formula ${\displaystyle a_{ij}={\frac {1}{(k_{i}-j)!}}\lim _{x\to x_{i}}{\frac {d^{k_{i}-j}}{dx^{k_{i}-j}}}\left((x-x_{i})^{k_{i}}f(x)\right),}$ or in the special case when xi is a simple root, ${\displaystyle a_{i1}={\frac {P(x_{i})}{Q'(x_{i})}},}$ when ${\displaystyle f(x)={\frac {P(x)}{Q(x)}}.}$
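The derivative formula can be sketched with exact rational arithmetic. As an assumed example (not from the article), take f(x) = 1/(x²(x − 1)), with a double pole at 0: writing g(x) = x² f(x) = 1/(x − 1), the coefficients over 1/x² and 1/x are g(0) and g′(0). The helper functions below (illustrative names) differentiate a rational function via the quotient rule:

```python
from fractions import Fraction
from math import factorial

def polymul(a, b):
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += Fraction(x) * Fraction(y)
    return out

def polysub(a, b):
    n = max(len(a), len(b))
    a = [Fraction(0)] * (n - len(a)) + [Fraction(c) for c in a]
    b = [Fraction(0)] * (n - len(b)) + [Fraction(c) for c in b]
    return [x - y for x, y in zip(a, b)]

def dpoly(p):
    n = len(p) - 1
    return [Fraction(c) * (n - i) for i, c in enumerate(p[:-1])] or [Fraction(0)]

def peval(p, x):
    v = Fraction(0)
    for c in p:
        v = v * x + Fraction(c)
    return v

def pole_coeffs(P, S, r, k):
    """Coefficients a_j (j = 1..k) of P/((x - r)^k S) over (x - r)^j,
    via a_j = g^(k-j)(r)/(k-j)!  with g = P/S."""
    num, den = [Fraction(c) for c in P], [Fraction(c) for c in S]
    out = {}
    for m in range(k):                       # m-th derivative of g
        out[k - m] = peval(num, r) / peval(den, r) / factorial(m)
        # quotient rule: (num/den)' = (num'*den - num*den') / den^2
        num = polysub(polymul(dpoly(num), den), polymul(num, dpoly(den)))
        den = polymul(den, den)
    return out

# f(x) = 1/(x^2 (x - 1)):  P = 1, S = x - 1, double pole at r = 0
coeffs = pole_coeffs([1], [1, -1], Fraction(0), 2)

# The simple-root formula gives coefficient 1 at the pole x = 1, so
#   1/(x^2 (x - 1)) = coeffs[1]/x + coeffs[2]/x^2 + 1/(x - 1):
for x in (Fraction(2), Fraction(3)):
    assert 1 / (x**2 * (x - 1)) == coeffs[1] / x + coeffs[2] / x**2 + 1 / (x - 1)
```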

Over the reals

Partial fractions are used in real-variable integral calculus to find real-valued antiderivatives of rational functions. Partial fraction decomposition of real rational functions is also used to find their inverse Laplace transforms.

General result

Let ${\displaystyle f(x)}$ be any rational function over the real numbers. In other words, suppose there exist real polynomial functions ${\displaystyle p(x)}$ and ${\displaystyle q(x)\neq 0}$, such that ${\displaystyle f(x)={\frac {p(x)}{q(x)}}}$

By dividing both the numerator and the denominator by the leading coefficient of ${\displaystyle q(x)}$, we may assume without loss of generality that ${\displaystyle q(x)}$ is monic. By the fundamental theorem of algebra, we can write

${\displaystyle q(x)=(x-a_{1})^{j_{1}}\cdots (x-a_{m})^{j_{m}}(x^{2}+b_{1}x+c_{1})^{k_{1}}\cdots (x^{2}+b_{n}x+c_{n})^{k_{n}}}$

where ${\displaystyle a_{1},\dots ,a_{m}}$, ${\displaystyle b_{1},\dots ,b_{n}}$, ${\displaystyle c_{1},\dots ,c_{n}}$ are real numbers with ${\displaystyle b_{i}^{2}-4c_{i}<0}$, and ${\displaystyle j_{1},\dots ,j_{m}}$, ${\displaystyle k_{1},\dots ,k_{n}}$ are positive integers. The terms ${\displaystyle (x-a_{i})}$ are the linear factors of ${\displaystyle q(x)}$ which correspond to real roots of ${\displaystyle q(x)}$, and the terms ${\displaystyle (x^{2}+b_{i}x+c_{i})}$ are the irreducible quadratic factors of ${\displaystyle q(x)}$ which correspond to pairs of complex conjugate roots of ${\displaystyle q(x)}$.

Then the partial fraction decomposition of ${\displaystyle f(x)}$ is the following:

${\displaystyle f(x)={\frac {p(x)}{q(x)}}=P(x)+\sum _{i=1}^{m}\sum _{r=1}^{j_{i}}{\frac {A_{ir}}{(x-a_{i})^{r}}}+\sum _{i=1}^{n}\sum _{r=1}^{k_{i}}{\frac {B_{ir}x+C_{ir}}{(x^{2}+b_{i}x+c_{i})^{r}}}}$

Here, P(x) is a (possibly zero) polynomial, and the Air, Bir, and Cir are real constants. There are a number of ways the constants can be found.

The most straightforward method is to multiply through by the common denominator q(x). We then obtain an equation of polynomials whose left-hand side is simply p(x) and whose right-hand side has coefficients which are linear expressions of the constants Air, Bir, and Cir. Since two polynomials are equal if and only if their corresponding coefficients are equal, we can equate the coefficients of like terms. In this way, a system of linear equations is obtained which always has a unique solution. This solution can be found using any of the standard methods of linear algebra. It can also be found with limits (see Example 5).

Examples

Example 1

${\displaystyle f(x)={\frac {1}{x^{2}+2x-3}}}$

Here, the denominator splits into two distinct linear factors:

${\displaystyle q(x)=x^{2}+2x-3=(x+3)(x-1)}$

so we have the partial fraction decomposition

${\displaystyle f(x)={\frac {1}{x^{2}+2x-3}}={\frac {A}{x+3}}+{\frac {B}{x-1}}}$

Multiplying through by the denominator on the left-hand side gives us the polynomial identity

${\displaystyle 1=A(x-1)+B(x+3)}$

Substituting x = −3 into this equation gives A = −1/4, and substituting x = 1 gives B = 1/4, so that

${\displaystyle f(x)={\frac {1}{x^{2}+2x-3}}={\frac {1}{4}}\left({\frac {-1}{x+3}}+{\frac {1}{x-1}}\right)}$

Example 2

${\displaystyle f(x)={\frac {x^{3}+16}{x^{3}-4x^{2}+8x}}}$

After long division, we have

${\displaystyle f(x)=1+{\frac {4x^{2}-8x+16}{x^{3}-4x^{2}+8x}}=1+{\frac {4x^{2}-8x+16}{x(x^{2}-4x+8)}}}$

The factor x2 − 4x + 8 is irreducible over the reals, as its discriminant (−4)2 − 4×8 = −16 is negative. Thus the partial fraction decomposition over the reals has the shape

${\displaystyle {\frac {4x^{2}-8x+16}{x(x^{2}-4x+8)}}={\frac {A}{x}}+{\frac {Bx+C}{x^{2}-4x+8}}}$

Multiplying through by x3 − 4x2 + 8x, we have the polynomial identity

${\displaystyle 4x^{2}-8x+16=A\left(x^{2}-4x+8\right)+\left(Bx+C\right)x}$

Taking x = 0, we see that 16 = 8A, so A = 2. Comparing the x2 coefficients, we see that 4 = A + B = 2 + B, so B = 2. Comparing linear coefficients, we see that −8 = −4A + C = −8 + C, so C = 0. Altogether,

${\displaystyle f(x)=1+2\left({\frac {1}{x}}+{\frac {x}{x^{2}-4x+8}}\right)}$

The fraction can be completely decomposed using complex numbers. According to the fundamental theorem of algebra every complex polynomial of degree n has n (complex) roots (some of which can be repeated). The second fraction can be decomposed to:

${\displaystyle {\frac {x}{x^{2}-4x+8}}={\frac {D}{x-(2+2i)}}+{\frac {E}{x-(2-2i)}}}$

Multiplying through by the denominator gives:

${\displaystyle x=D(x-(2-2i))+E(x-(2+2i))}$

Equating the coefficients of x and the constant (with respect to x) coefficients of both sides of this equation, one gets a system of two linear equations in D and E, whose solution is

${\displaystyle D={\frac {1+i}{2i}}={\frac {1-i}{2}},\qquad E={\frac {1-i}{-2i}}={\frac {1+i}{2}}.}$

Thus we have a complete decomposition:

${\displaystyle f(x)={\frac {x^{3}+16}{x^{3}-4x^{2}+8x}}=1+{\frac {2}{x}}+{\frac {1-i}{x-(2+2i)}}+{\frac {1+i}{x-(2-2i)}}}$
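A quick floating-point spot check of this complete decomposition at a few sample points (a verification sketch):

```python
def f(x):
    return (x**3 + 16) / (x**3 - 4 * x**2 + 8 * x)

def decomposed(x):
    return (1 + 2 / x
            + (1 - 1j) / (x - (2 + 2j))
            + (1 + 1j) / (x - (2 - 2j)))

# The two expressions agree (up to rounding) at real and complex points:
for x in (1.0, -2.5, 3 + 1j):
    assert abs(f(x) - decomposed(x)) < 1e-12
```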

One may also compute A, D and E directly with the residue method (see also Example 4 below).

Example 3

This example illustrates almost all the "tricks" we might need to use, short of consulting a computer algebra system.

${\displaystyle f(x)={\frac {x^{9}-2x^{6}+2x^{5}-7x^{4}+13x^{3}-11x^{2}+12x-4}{x^{7}-3x^{6}+5x^{5}-7x^{4}+7x^{3}-5x^{2}+3x-1}}}$

After long division and factoring the denominator, we have

${\displaystyle f(x)=x^{2}+3x+4+{\frac {2x^{6}-4x^{5}+5x^{4}-3x^{3}+x^{2}+3x}{(x-1)^{3}(x^{2}+1)^{2}}}}$

The partial fraction decomposition takes the form

${\displaystyle {\frac {2x^{6}-4x^{5}+5x^{4}-3x^{3}+x^{2}+3x}{(x-1)^{3}(x^{2}+1)^{2}}}={\frac {A}{x-1}}+{\frac {B}{(x-1)^{2}}}+{\frac {C}{(x-1)^{3}}}+{\frac {Dx+E}{x^{2}+1}}+{\frac {Fx+G}{(x^{2}+1)^{2}}}.}$

Multiplying through by the denominator on the left-hand side we have the polynomial identity

${\displaystyle {\begin{aligned}&2x^{6}-4x^{5}+5x^{4}-3x^{3}+x^{2}+3x\\[4pt]={}&A\left(x-1\right)^{2}\left(x^{2}+1\right)^{2}+B\left(x-1\right)\left(x^{2}+1\right)^{2}+C\left(x^{2}+1\right)^{2}+\left(Dx+E\right)\left(x-1\right)^{3}\left(x^{2}+1\right)+\left(Fx+G\right)\left(x-1\right)^{3}\end{aligned}}}$

Now we use different values of x to compute the coefficients:

${\displaystyle {\begin{cases}4=4C&x=1\\2+2i=(Fi+G)(2+2i)&x=i\\0=A-B+C-E-G&x=0\end{cases}}}$

Solving this we have:

${\displaystyle {\begin{cases}C=1\\F=0,G=1\\E=A-B\end{cases}}}$

Using these values we can write:

${\displaystyle {\begin{aligned}&2x^{6}-4x^{5}+5x^{4}-3x^{3}+x^{2}+3x\\[4pt]={}&A\left(x-1\right)^{2}\left(x^{2}+1\right)^{2}+B\left(x-1\right)\left(x^{2}+1\right)^{2}+\left(x^{2}+1\right)^{2}+\left(Dx+\left(A-B\right)\right)\left(x-1\right)^{3}\left(x^{2}+1\right)+\left(x-1\right)^{3}\\[4pt]={}&\left(A+D\right)x^{6}+\left(-A-3D\right)x^{5}+\left(2B+4D+1\right)x^{4}+\left(-2B-4D+1\right)x^{3}+\left(-A+2B+3D-1\right)x^{2}+\left(A-2B-D+3\right)x\end{aligned}}}$

We compare the coefficients of x6 and x5 on both side and we have:

${\displaystyle {\begin{cases}A+D=2\\-A-3D=-4\end{cases}}\quad \Rightarrow \quad A=D=1.}$

Therefore:

${\displaystyle 2x^{6}-4x^{5}+5x^{4}-3x^{3}+x^{2}+3x=2x^{6}-4x^{5}+(2B+5)x^{4}+(-2B-3)x^{3}+(2B+1)x^{2}+(-2B+3)x}$

which gives us B = 0. Thus the partial fraction decomposition is given by:

${\displaystyle f(x)=x^{2}+3x+4+{\frac {1}{(x-1)}}+{\frac {1}{(x-1)^{3}}}+{\frac {x+1}{x^{2}+1}}+{\frac {1}{(x^{2}+1)^{2}}}.}$

Alternatively, instead of expanding, one can obtain other linear dependences on the coefficients by computing some derivatives at ${\displaystyle x=1,i}$ in the above polynomial identity. (To this end, recall that the derivative at x = a of (x − a)^m p(x) vanishes if m > 1 and is just p(a) for m = 1.) For instance the first derivative at x = 1 gives

${\displaystyle 2\cdot 6-4\cdot 5+5\cdot 4-3\cdot 3+2+3=A\cdot (0+0)+B\cdot (4+0)+8+D\cdot 0}$

that is 8 = 4B + 8 so B = 0.
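The full decomposition of Example 3 can be spot-checked with exact rational arithmetic (a verification sketch):

```python
from fractions import Fraction

def f(x):
    num = x**9 - 2*x**6 + 2*x**5 - 7*x**4 + 13*x**3 - 11*x**2 + 12*x - 4
    den = x**7 - 3*x**6 + 5*x**5 - 7*x**4 + 7*x**3 - 5*x**2 + 3*x - 1
    return num / den

def decomposed(x):
    return (x**2 + 3*x + 4
            + 1 / (x - 1)
            + 1 / (x - 1)**3
            + (x + 1) / (x**2 + 1)
            + 1 / (x**2 + 1)**2)

# The two rational functions agree exactly at sample points away from the poles:
for x in (Fraction(0), Fraction(2), Fraction(-2)):
    assert f(x) == decomposed(x)
```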

Example 4 (residue method)

${\displaystyle f(z)={\frac {z^{2}-5}{(z^{2}-1)(z^{2}+1)}}={\frac {z^{2}-5}{(z+1)(z-1)(z+i)(z-i)}}}$

Thus, f(z) can be decomposed into rational functions whose denominators are z+1, z−1, z+i, z−i. Since each term is of power one, −1, 1, −i and i are simple poles.

Hence, the residues associated with each pole, given by ${\displaystyle {\frac {P(z_{i})}{Q'(z_{i})}}={\frac {z_{i}^{2}-5}{4z_{i}^{3}}},}$ are ${\displaystyle 1,-1,{\tfrac {3i}{2}},-{\tfrac {3i}{2}},}$ respectively, and

${\displaystyle f(z)={\frac {1}{z+1}}-{\frac {1}{z-1}}+{\frac {3i}{2}}{\frac {1}{z+i}}-{\frac {3i}{2}}{\frac {1}{z-i}}.}$
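The four residues can be recomputed mechanically with complex arithmetic, a direct sketch of the formula P(z)/Q′(z):

```python
def P(z):
    return z * z - 5

def dQ(z):
    return 4 * z**3    # derivative of Q(z) = z^4 - 1 = (z^2 - 1)(z^2 + 1)

# Residues at the simple poles -1, 1, -i, i, in that order:
poles = (-1, 1, -1j, 1j)
residues = [P(z) / dQ(z) for z in poles]
```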

Example 5 (limit method)

Limits can be used to find a partial fraction decomposition.[4] Consider the following example:

${\displaystyle {\frac {1}{x^{3}-1}}}$

First, factor the denominator which determines the decomposition:

${\displaystyle {\frac {1}{x^{3}-1}}={\frac {1}{(x-1)(x^{2}+x+1)}}={\frac {A}{x-1}}+{\frac {Bx+C}{x^{2}+x+1}}.}$

Multiplying everything by ${\displaystyle x-1}$, and taking the limit when ${\displaystyle x\to 1}$, we get

${\displaystyle \lim _{x\to 1}\left((x-1)\left({\frac {A}{x-1}}+{\frac {Bx+C}{x^{2}+x+1}}\right)\right)=\lim _{x\to 1}A+\lim _{x\to 1}{\frac {(x-1)(Bx+C)}{x^{2}+x+1}}=A.}$

On the other hand,

${\displaystyle \lim _{x\to 1}{\frac {(x-1)}{(x-1)(x^{2}+x+1)}}=\lim _{x\to 1}{\frac {1}{x^{2}+x+1}}={\frac {1}{3}},}$

and thus:

${\displaystyle A={\frac {1}{3}}.}$

Multiplying by x and taking the limit when ${\displaystyle x\to \infty }$, we have

${\displaystyle \lim _{x\to \infty }x\left({\frac {A}{x-1}}+{\frac {Bx+C}{x^{2}+x+1}}\right)=\lim _{x\to \infty }{\frac {Ax}{x-1}}+\lim _{x\to \infty }{\frac {Bx^{2}+Cx}{x^{2}+x+1}}=A+B,}$

and

${\displaystyle \lim _{x\to \infty }{\frac {x}{(x-1)(x^{2}+x+1)}}=0.}$

This implies A + B = 0 and so ${\displaystyle B=-{\frac {1}{3}}}$.

For x = 0, we get ${\displaystyle -1=-A+C,}$ and thus ${\displaystyle C=-{\tfrac {2}{3}}}$.

Putting everything together, we get the decomposition

${\displaystyle {\frac {1}{x^{3}-1}}={\frac {1}{3}}\left({\frac {1}{x-1}}+{\frac {-x-2}{x^{2}+x+1}}\right).}$
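Again the result can be confirmed with exact arithmetic at sample points (a verification sketch):

```python
from fractions import Fraction

for x in (Fraction(2), Fraction(-1), Fraction(1, 2)):
    lhs = 1 / (x**3 - 1)
    rhs = Fraction(1, 3) * (1 / (x - 1) + (-x - 2) / (x**2 + x + 1))
    assert lhs == rhs
```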

Example 6 (integral)

Suppose we have the indefinite integral:

${\displaystyle \int {\frac {x^{4}+x^{3}+x^{2}+1}{x^{2}+x-2}}\,dx}$

Before performing the decomposition, we must perform polynomial long division and factor the denominator. This results in:

${\displaystyle \int \left(x^{2}+3+{\frac {-3x+7}{(x+2)(x-1)}}\right)dx}$

We may now perform the partial fraction decomposition.

${\displaystyle \int \left(x^{2}+3+{\frac {-3x+7}{(x+2)(x-1)}}\right)dx=\int \left(x^{2}+3+{\frac {A}{(x+2)}}+{\frac {B}{(x-1)}}\right)dx}$ so: ${\displaystyle A(x-1)+B(x+2)=-3x+7}$. Substituting x = 1 to solve for B and x = −2 to solve for A, we obtain:

${\displaystyle A={\frac {-13}{3}},\ B={\frac {4}{3}}}$

Plugging all of this back into our integral allows us to find the answer:

${\displaystyle \int \left(x^{2}+3+{\frac {-13/3}{(x+2)}}+{\frac {4/3}{(x-1)}}\right)\,dx={\frac {x^{3}}{3}}+3x-{\frac {13}{3}}\ln(|x+2|)+{\frac {4}{3}}\ln(|x-1|)+C}$
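The coefficients and the division step can be double-checked exactly (a verification sketch; the antiderivative itself can then be confirmed by differentiation):

```python
from fractions import Fraction

A, B = Fraction(-13, 3), Fraction(4, 3)
for x in (Fraction(0), Fraction(3), Fraction(-1)):   # points away from the poles
    # the identity A(x - 1) + B(x + 2) = -3x + 7 used to find A and B
    assert A * (x - 1) + B * (x + 2) == -3 * x + 7
    # and the original integrand equals the decomposed integrand
    lhs = (x**4 + x**3 + x**2 + 1) / (x**2 + x - 2)
    rhs = x**2 + 3 + A / (x + 2) + B / (x - 1)
    assert lhs == rhs
```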

The role of the Taylor polynomial

The partial fraction decomposition of a rational function can be related to Taylor's theorem as follows. Let

${\displaystyle P(x),Q(x),A_{1}(x),\ldots ,A_{r}(x)}$

be real or complex polynomials, and assume that

${\displaystyle Q=\prod _{j=1}^{r}(x-\lambda _{j})^{\nu _{j}}}$

and that ${\displaystyle \deg A_{1}<\nu _{1},\ldots ,\deg A_{r}<\nu _{r},\quad {\text{and}}\quad \deg(P)<\deg(Q)=\sum _{j=1}^{r}\nu _{j}.}$

Also define

${\displaystyle Q_{i}=\prod _{j\neq i}(x-\lambda _{j})^{\nu _{j}}={\frac {Q}{(x-\lambda _{i})^{\nu _{i}}}},\qquad 1\leqslant i\leqslant r.}$

Then we have

${\displaystyle {\frac {P}{Q}}=\sum _{j=1}^{r}{\frac {A_{j}}{(x-\lambda _{j})^{\nu _{j}}}}}$

if, and only if, each polynomial ${\displaystyle A_{i}(x)}$ is the Taylor polynomial of ${\displaystyle {\tfrac {P}{Q_{i}}}}$ of order ${\displaystyle \nu _{i}-1}$ at the point ${\displaystyle \lambda _{i}}$:

${\displaystyle A_{i}(x):=\sum _{k=0}^{\nu _{i}-1}{\frac {1}{k!}}\left({\frac {P}{Q_{i}}}\right)^{(k)}(\lambda _{i})\ (x-\lambda _{i})^{k}.}$

Taylor's theorem (in the real or complex case) then provides a proof of the existence and uniqueness of the partial fraction decomposition, and a characterization of the coefficients.
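As a small illustration (an assumed example, not from the article): take P = 1 and Q = x²(x − 1), so λ₁ = 0 with ν₁ = 2 and λ₂ = 1 with ν₂ = 1. Then Q₁ = x − 1 and Q₂ = x², and the theorem says A₁ is the order-1 Taylor polynomial of 1/(x − 1) at 0, namely −1 − x, while A₂ is the value of 1/x² at 1, namely 1:

```python
from fractions import Fraction

# Taylor data computed by hand: 1/(x - 1) has value -1 and derivative -1 at 0,
# so A1(x) = -1 - x; and 1/x^2 has value 1 at 1, so A2 = 1.
def A1(x):
    return -1 - x

A2 = 1

# These Taylor polynomials reconstruct P/Q = sum of A_j/(x - lambda_j)^nu_j:
for x in (Fraction(2), Fraction(3), Fraction(-1, 2)):
    lhs = 1 / (x**2 * (x - 1))
    rhs = A1(x) / x**2 + A2 / (x - 1)
    assert lhs == rhs
```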

Sketch of the proof

The above partial fraction decomposition implies, for each 1 ≤ i ≤ r, a polynomial expansion

${\displaystyle {\frac {P}{Q_{i}}}=A_{i}+O((x-\lambda _{i})^{\nu _{i}}),\qquad {\text{for }}x\to \lambda _{i},}$

so ${\displaystyle A_{i}}$ is the Taylor polynomial of ${\displaystyle {\tfrac {P}{Q_{i}}}}$, because of the uniqueness of the polynomial expansion of order ${\displaystyle \nu _{i}-1}$, and by assumption ${\displaystyle \deg A_{i}<\nu _{i}}$.

Conversely, if the ${\displaystyle A_{i}}$ are the Taylor polynomials, the above expansions at each ${\displaystyle \lambda _{i}}$ hold, therefore we also have

${\displaystyle P-Q_{i}A_{i}=O((x-\lambda _{i})^{\nu _{i}}),\qquad {\text{for }}x\to \lambda _{i},}$

which implies that the polynomial ${\displaystyle P-Q_{i}A_{i}}$ is divisible by ${\displaystyle (x-\lambda _{i})^{\nu _{i}}.}$

For ${\displaystyle j\neq i}$, ${\displaystyle Q_{j}A_{j}}$ is also divisible by ${\displaystyle (x-\lambda _{i})^{\nu _{i}}}$, so

${\displaystyle P-\sum _{j=1}^{r}Q_{j}A_{j}}$

is divisible by ${\displaystyle Q}$. Since

${\displaystyle \deg \left(P-\sum _{j=1}^{r}Q_{j}A_{j}\right)<\deg(Q)}$

we then have

${\displaystyle P-\sum _{j=1}^{r}Q_{j}A_{j}=0,}$

and we find the partial fraction decomposition dividing by ${\displaystyle Q}$.

Fractions of integers

The idea of partial fractions can be generalized to other integral domains, say the ring of integers where prime numbers take the role of irreducible denominators. For example:

${\displaystyle {\frac {1}{18}}={\frac {1}{2}}-{\frac {1}{3}}-{\frac {1}{3^{2}}}.}$
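The integer analogue can be computed with the extended Euclidean algorithm; the sketch below (illustrative helper name `xgcd`) mirrors the polynomial Bézout step:

```python
from fractions import Fraction

def xgcd(a, b):
    """Extended Euclid: returns (g, s, t) with s*a + t*b == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, s, t = xgcd(b, a % b)
    return g, t, s - (a // b) * t

# Split 1/18 over the coprime factors 2 and 9: from s*2 + t*9 == 1
# it follows that 1/18 = t/2 + s/9.
g, s, t = xgcd(2, 9)
assert g == 1 and s * 2 + t * 9 == 1
assert Fraction(t, 2) + Fraction(s, 9) == Fraction(1, 18)

# Reducing the 1/9 part further over powers of 3 recovers the article's form:
assert Fraction(1, 2) - Fraction(1, 3) - Fraction(1, 9) == Fraction(1, 18)
```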

Notes

1. ^ Larson, Ron (2016). Algebra & Trigonometry. Cengage Learning. ISBN 9781337271172.
2. ^ Horowitz, Ellis. "Algorithms for partial fraction decomposition and rational function integration." Proceedings of the second ACM symposium on Symbolic and algebraic manipulation. ACM, 1971.
3. ^ Grosholz, Emily (2000). The Growth of Mathematical Knowledge. Kluwer Academic Publishers. p. 179. ISBN 978-90-481-5391-6.
4. ^ Bluman, George W. (1984). Problem Book for First Year Calculus. New York: Springer-Verlag. pp. 250–251.