In mathematics, a geometric series is the sum of an infinite number of terms that have a constant ratio between successive terms. For example, the series

1/2 + 1/4 + 1/8 + 1/16 + ⋯

is geometric, because each successive term can be obtained by multiplying the previous term by 1/2. In general, a geometric series is written as a + ar + ar^{2} + ar^{3} + ⋯ , where a is the coefficient of each term and r is the common ratio between adjacent terms. The geometric series had an important role in the early development of calculus, is used throughout mathematics, and can serve as an introduction to frequently used mathematical tools such as the Taylor series, the Fourier series, and the matrix exponential.
The name geometric series indicates each term is the geometric mean of its two neighboring terms, similar to how the name arithmetic series indicates each term is the arithmetic mean of its two neighboring terms.
The geometric series a + ar + ar^{2} + ar^{3} + ... is written in expanded form.^{[1]} Every coefficient in the geometric series is the same. In contrast, the power series written as a_{0} + a_{1}r + a_{2}r^{2} + a_{3}r^{3} + ... in expanded form has coefficients a_{i} that can vary from term to term. In other words, the geometric series is a special case of the power series. The first term of a geometric series in expanded form is the coefficient a of that geometric series.
In addition to the expanded form of the geometric series, there is a generator form^{[1]} of the geometric series written as

∑_{k=0}^{∞} ar^{k}

and a closed form of the geometric series written as

a / (1 − r), for |r| < 1.
The derivation of the closed form from the expanded form is shown in this article's § Sum section. However, even without that derivation, the result can be confirmed with long division: a divided by (1 − r) results in a + ar + ar^{2} + ar^{3} + ⋯ , which is the expanded form of the geometric series.
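The agreement between the expanded form and the closed form a / (1 − r) can also be checked numerically. The following is an illustrative sketch (not part of the original article), using arbitrarily chosen values a = 3 and r = 1/2:

```python
# Compare a partial sum of a + a*r + a*r**2 + ... against the closed form a/(1-r).
a, r = 3.0, 0.5
partial_sum = sum(a * r**k for k in range(60))  # 60 terms is ample for r = 0.5
closed_form = a / (1 - r)
print(partial_sum, closed_form)  # both are approximately 6
```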
It is often a convenience in notation to set the series equal to the sum s and work with the geometric series

s = a + ar + ar^{2} + ar^{3} + ⋯ = a / (1 − r), for |r| < 1.
Changing even one of the coefficients to something other than coefficient a would change the resulting sum of functions to some function other than a / (1 − r) within the range |r| < 1. As an aside, a particularly useful change to the coefficients is defined by the Taylor series, which describes how to change the coefficients so that the sum of functions converges to any user-selected, sufficiently smooth function within a range.
The geometric series a + ar + ar^{2} + ar^{3} + ... is an infinite series defined by just two parameters: coefficient a and common ratio r. Common ratio r is the ratio of any term to the previous term in the series. Equivalently, common ratio r is the term multiplier used to calculate the next term in the series. The following table shows several geometric series:
a | r | Example series |
---|---|---|
4 | 10 | 4 + 40 + 400 + 4000 + 40,000 + ··· |
3 | 1 | 3 + 3 + 3 + 3 + 3 + ··· |
1 | 2/3 | 1 + 2/3 + 4/9 + 8/27 + 16/81 + ··· |
1/2 | 1/2 | 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + ··· |
9 | 1/3 | 9 + 3 + 1 + 1/3 + 1/9 + ··· |
7 | 1/10 | 7 + 0.7 + 0.07 + 0.007 + 0.0007 + ··· |
1 | −1/2 | 1 − 1/2 + 1/4 − 1/8 + 1/16 − 1/32 + ··· |
3 | −1 | 3 − 3 + 3 − 3 + 3 − ··· |
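Rows of the table above can be generated programmatically. The helper below is an illustrative sketch (its name and the choice of rows to print are assumptions, not part of the original article):

```python
def geometric_terms(a, r, n):
    """Return the first n terms a, a*r, a*r**2, ... of a geometric series."""
    return [a * r**k for k in range(n)]

print(geometric_terms(4, 10, 5))    # [4, 40, 400, 4000, 40000]
print(geometric_terms(1, -0.5, 3))  # [1.0, -0.5, 0.25]
```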
The convergence of the geometric series depends on the value of the common ratio r:

- If |r| < 1, the terms approach zero and the series converges to the sum a / (1 − r).
- If |r| = 1, the series does not converge: when r = 1 the partial sums grow without bound (for a ≠ 0), and when r = −1 the partial sums alternate between a and 0.
- If |r| > 1, the terms grow without bound and the series diverges.
The rate of convergence also depends on the value of the common ratio r. Specifically, the rate of convergence gets slower as r approaches 1 or −1. For example, the geometric series with a = 1 is 1 + r + r^{2} + r^{3} + ... and converges to 1 / (1 - r) when |r| < 1. However, the number of terms needed to converge approaches infinity as r approaches 1 because a / (1 - r) approaches infinity and each term of the series is less than or equal to one. In contrast, as r approaches −1 the sum of the first several terms of the geometric series starts to converge to 1/2 but slightly flips up or down depending on whether the most recently added term has a power of r that is even or odd. That flipping behavior near r = −1 is illustrated in the adjacent image showing the first 11 terms of the geometric series with a = 1 and |r| < 1.
The common ratio r and the coefficient a also define the geometric progression, which is a list of the terms of the geometric series but without the additions. Therefore the geometric series a + ar + ar^{2} + ar^{3} + ... has the geometric progression (also called the geometric sequence) a, ar, ar^{2}, ar^{3}, ... The geometric progression, as simple as it is, models a surprising number of natural phenomena.
As an aside, the common ratio r can be a complex number such as |r|e^{iθ} where |r| is the vector's magnitude (or length), θ is the vector's angle (or orientation) in the complex plane and i^{2} = -1. With a common ratio |r|e^{iθ}, the expanded form of the geometric series is a + a|r|e^{iθ} + a|r|^{2}e^{i2θ} + a|r|^{3}e^{i3θ} + ... Modeling the angle θ as linearly increasing over time at the rate of some angular frequency ω_{0} (in other words, making the substitution θ = ω_{0}t), the expanded form of the geometric series becomes a + a|r|e^{iω0t} + a|r|^{2}e^{i2ω0t} + a|r|^{3}e^{i3ω0t} + ... , where the first term is a vector of length a not rotating at all, and all the other terms are vectors of different lengths rotating at harmonics of the fundamental angular frequency ω_{0}. The constraint |r|<1 is enough to coordinate this infinite number of vectors of different lengths all rotating at different speeds into tracing a circle, as shown in the adjacent video. Similar to how the Taylor series describes how to change the coefficients so the series converges to a user selected sufficiently smooth function within a range, the Fourier series describes how to change the coefficients (which can also be complex numbers in order to specify the initial angles of vectors) so the series converges to a user selected periodic function.
The sum of the first n terms of a geometric series, up to and including the r ^{n-1} term, is given by the closed-form formula:

s_{n} = a + ar + ar^{2} + ⋯ + ar^{n-1} = a(1 − r^{n}) / (1 − r), for r ≠ 1,

where r is the common ratio. One can derive that closed-form formula for the partial sum, s_{n}, by subtracting out the many self-similar terms as follows:^{[3]}^{[4]}^{[5]}

s_{n} = a + ar + ar^{2} + ⋯ + ar^{n-1}
r s_{n} = ar + ar^{2} + ⋯ + ar^{n-1} + ar^{n}
s_{n} − r s_{n} = a − ar^{n}
s_{n} = a(1 − r^{n}) / (1 − r), for r ≠ 1.
As n approaches infinity, the absolute value of r must be less than one for the series to converge. The sum then becomes

s = a + ar + ar^{2} + ar^{3} + ⋯ = a / (1 − r), for |r| < 1.
The formula also holds for complex r, with the corresponding restriction that the modulus of r is strictly less than one.
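The complex case can be sanity-checked numerically. Below is an illustrative sketch (the particular modulus 0.5 and angle 0.7 are arbitrary assumptions): a complex ratio with modulus less than one still satisfies the closed form a / (1 − r).

```python
import cmath

# Complex common ratio with modulus 0.5 < 1 and an arbitrary angle.
a = 1.0
r = 0.5 * cmath.exp(0.7j)
partial = sum(a * r**k for k in range(200))  # partial sum of a + a*r + a*r**2 + ...
closed = a / (1 - r)
print(abs(partial - closed))  # negligibly small
```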
As an aside, the question of whether an infinite series converges is fundamentally a question about the distance between two values: given enough terms, does the value of the partial sum get arbitrarily close to the finite value it is approaching? In the above derivation of the closed form of the geometric series, the interpretation of the distance between two values is the distance between their locations on the number line. That is the most common interpretation of the distance between two values. However, the p-adic metric, which has become a critical notion in modern number theory, offers a definition of distance such that the geometric series 1 + 2 + 4 + 8 + ... with a = 1 and r = 2 actually does converge to a / (1 - r) = 1 / (1 - 2) = -1 even though r is outside the typical convergence range |r| < 1.
We can prove that the geometric series converges using the sum formula for a geometric progression:

s = lim_{n→∞} a(1 − r^{n}) / (1 − r) = a / (1 − r), for |r| < 1,

since r^{n} → 0 as n → ∞ when |r| < 1.
Alternatively, a geometric interpretation of the convergence is shown in the adjacent diagram. The area of the white triangle is the series remainder = s − s_{n} = ar^{n+1} / (1 − r). Each additional term in the partial series reduces the area of that white triangle remainder by the area of the trapezoid representing the added term. The trapezoid areas (i.e., the values of the terms) get progressively thinner and shorter and closer to the origin. In the limit, as the number of trapezoids approaches infinity, the white triangle remainder vanishes as it is filled by trapezoids and therefore s_{n} converges to s, provided |r|<1. In contrast, if |r|>1, the trapezoid areas representing the terms of the series instead get progressively wider and taller and farther from the origin, not converging to the origin and not converging as a series.
After knowing that a series converges, there are some applications in which it is also important to know how quickly the series converges. For the geometric series, one convenient measure of the convergence rate is how much the previous series remainder decreases due to the last term of the partial series. Given that the last term is ar^{n} and the previous series remainder is s - s_{n-1} = ar^{n} / (1 - r), this measure of the convergence rate of the geometric series is ar^{n} / (ar^{n} / (1 - r)) = 1 - r, if 0 ≤ r < 1.
If r < 0, adjacent terms in the geometric series alternate between being positive and negative. A geometric interpretation of a converging alternating geometric series is shown in the adjacent diagram in which the areas of the negative terms are shown below the x axis. Pairing and summing each positive area with its negative smaller area neighbor results in non-overlapped trapezoids separated by gaps. To remove the gaps, broaden each trapezoid to cover the rightmost 1 - r^{2} of the original triangle area instead of just the rightmost 1 - |r|. However, to maintain the same trapezoid areas during this broadening transformation, scaling is needed: scale*(1 - r^{2}) = (1 - |r|), or scale = (1 - |r|) / (1 - r^{2}) = (1 + r) / (1 - r^{2}) = (1 + r) / ((1 + r)(1 - r)) = 1 / (1 - r) where -1 < r ≤ 0. Note that because r < 0 this scale decreases the amplitude of the separated trapezoids in order to fill in the separation gaps. In contrast, for the case r > 0 the same scale 1 / (1 - r) increases the amplitude of the non-overlapped trapezoids in order to account for the loss of the overlapped areas.
With the gaps removed, pairs of terms in a converging alternating geometric series become a converging (non-alternating) geometric series with common ratio r^{2} to account for the pairing of terms, coefficient a = 1 / (1 - r) to account for the gap filling, and the degree (i.e., highest powered term) of the partial series called m instead of n to emphasize that terms have been paired. Similar to the r > 0 case, the r < 0 convergence rate = ar^{2m} / (s - s_{m-1}) = 1 - r^{2}, which is the same as the convergence rate of a non-alternating geometric series if its terms were similarly paired. Therefore, the convergence rate does not depend upon n or m and, perhaps more surprising, does not depend upon the sign of the common ratio. One perspective that helps explain the variable rate of convergence that is symmetric about r = 0 is that each added term of the partial series makes a finite contribution to the infinite sum at r = 1 and each added term of the partial series makes a finite contribution to the infinite slope at r = -1.
To derive this formula, first write a general geometric series as:

∑_{k=1}^{n} ar^{k-1} = ar^{0} + ar^{1} + ar^{2} + ⋯ + ar^{n-1}.
We can find a simpler formula for this sum by multiplying both sides of the above equation by 1 − r, and we'll see that

(1 − r) ∑_{k=1}^{n} ar^{k-1} = a − ar^{n},

since all the other terms cancel. If r ≠ 1, we can rearrange the above to get the convenient formula for a geometric series that computes the sum of n terms:

∑_{k=1}^{n} ar^{k-1} = a(1 − r^{n}) / (1 − r).
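The n-term formula a(1 − r^{n}) / (1 − r) can be verified with exact rational arithmetic; an illustrative sketch with arbitrarily chosen a = 2, r = 3, n = 10:

```python
from fractions import Fraction

# Compare a direct n-term sum against the closed formula a(1 - r**n)/(1 - r).
a, r, n = Fraction(2), Fraction(3), 10
direct = sum(a * r**k for k in range(n))
formula = a * (1 - r**n) / (1 - r)
print(direct, formula)  # both print 59048
```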
If one were to begin the sum not from k = 1 or 0 but from a different value, say m, then

∑_{k=m}^{n} ar^{k} = a(r^{m} − r^{n+1}) / (1 − r), for r ≠ 1.
Differentiating this formula with respect to r allows us to arrive at formulae for sums of the form

G_{s}(n, r) = ∑_{k=0}^{n} k^{s} r^{k}.
For example:

d/dr ∑_{k=0}^{n} r^{k} = ∑_{k=1}^{n} k r^{k-1} = (1 − r^{n+1}) / (1 − r)^{2} − (n + 1) r^{n} / (1 − r).
For a geometric series containing only even powers of r, multiply by 1 − r^{2}:

(1 − r^{2}) ∑_{k=0}^{n} ar^{2k} = a − ar^{2n+2},

so that

∑_{k=0}^{n} ar^{2k} = a(1 − r^{2n+2}) / (1 − r^{2}).
Equivalently, take r^{2} as the common ratio and use the standard formulation.
For a series with only odd powers of r,

∑_{k=0}^{n} ar^{2k+1} = ar(1 − r^{2n+2}) / (1 − r^{2}).
An exact formula for the generalized sum when s is a positive integer is given in terms of the Stirling numbers of the second kind.^{[6]}
An infinite geometric series is an infinite series whose successive terms have a common ratio. Such a series converges if and only if the absolute value of the common ratio is less than one (|r| < 1). Its value can then be computed from the finite sum formula

s = ∑_{k=0}^{∞} ar^{k} = lim_{n→∞} a(1 − r^{n+1}) / (1 − r).

Since

r^{n+1} → 0 as n → ∞ when |r| < 1,

it follows that

s = a / (1 − r), for |r| < 1.
For a series containing only even powers of r,

∑_{k=0}^{∞} ar^{2k} = a / (1 − r^{2}), for |r| < 1.
In cases where the sum does not start at k = 0,

∑_{k=m}^{∞} ar^{k} = ar^{m} / (1 − r), for |r| < 1.
This formula only works for |r| < 1 as well. From this, it follows that, for |r| < 1,

∑_{k=0}^{∞} k r^{k} = r / (1 − r)^{2} and ∑_{k=0}^{∞} k^{2} r^{k} = r(1 + r) / (1 − r)^{3}.
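One of the sums obtained by differentiating the geometric series, ∑_{k≥0} k r^{k} = r / (1 − r)^{2} for |r| < 1, can be checked numerically. An illustrative sketch with the arbitrary choice r = 0.3:

```python
# Compare a long partial sum of k*r**k against the closed form r/(1-r)**2.
r = 0.3
partial = sum(k * r**k for k in range(500))
closed = r / (1 - r)**2
print(partial, closed)  # both are approximately 0.61224
```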
Also, the infinite series 1/2 + 1/4 + 1/8 + 1/16 + ⋯ is an elementary example of a series that converges absolutely.
It is a geometric series whose first term is 1/2 and whose common ratio is 1/2, so its sum is

(1/2) / (1 − 1/2) = 1.
The alternating version of the above series, 1/2 − 1/4 + 1/8 − 1/16 + ⋯, is a simple example of an alternating series that converges absolutely.
It is a geometric series whose first term is 1/2 and whose common ratio is −1/2, so its sum is

(1/2) / (1 − (−1/2)) = 1/3.
The summation formula for geometric series remains valid even when the common ratio is a complex number. In this case the condition that the absolute value of r be less than 1 becomes that the modulus of r be less than 1. It is possible to calculate the sums of some non-obvious geometric series. For example, consider the proposition

∑_{k=0}^{∞} sin(kx) / r^{k} = r sin(x) / (1 + r^{2} − 2r cos(x)), for r > 1.
The proof of this comes from the fact that

sin(kx) = (e^{ikx} − e^{−ikx}) / (2i),

so that

∑_{k=0}^{∞} sin(kx) / r^{k} = (1 / (2i)) ( ∑_{k=0}^{∞} (e^{ix}/r)^{k} − ∑_{k=0}^{∞} (e^{−ix}/r)^{k} ).
This is the difference of two geometric series, and so it is a straightforward application of the formula for infinite geometric series that completes the proof.
2,500 years ago, Greek mathematicians had a problem when walking from one place to another: they thought^{[7]} that an infinitely long list of numbers greater than zero summed to infinity. Therefore, it was a paradox when Zeno of Elea pointed out that in order to walk from one place to another, you first have to walk half the distance, and then you have to walk half the remaining distance, and then you have to walk half of that remaining distance, and you continue halving the remaining distances an infinite number of times because no matter how small the remaining distance is you still have to walk the first half of it. Thus, Zeno of Elea transformed a short distance into an infinitely long list of halved remaining distances, all of which are greater than zero. And that was the problem: how can a distance be short when measured directly and also infinite when summed over its infinite list of halved remainders? The paradox revealed something was wrong with the assumption that an infinitely long list of numbers greater than zero summed to infinity.
Euclid's Elements of Geometry^{[8]} Book IX, Proposition 35, proof (of the proposition in adjacent diagram's caption):
Let AA', BC, DD', EF be any multitude whatsoever of continuously proportional numbers, beginning from the least AA'. And let BG and FH, each equal to AA', have been subtracted from BC and EF. I say that as GC is to AA', so EH is to AA', BC, DD'.
For let FK be made equal to BC, and FL to DD'. And since FK is equal to BC, of which FH is equal to BG, the remainder HK is thus equal to the remainder GC. And since as EF is to DD', so DD' to BC, and BC to AA' [Prop. 7.13], and DD' equal to FL, and BC to FK, and AA' to FH, thus as EF is to FL, so LF to FK, and FK to FH. By separation, as EL to LF, so LK to FK, and KH to FH [Props. 7.11, 7.13]. And thus as one of the leading is to one of the following, so (the sum of) all of the leading to (the sum of) all of the following [Prop. 7.12]. Thus, as KH is to FH, so EL, LK, KH to LF, FK, HF. And KH equal to CG, and FH to AA', and LF, FK, HF to DD', BC, AA'. Thus, as CG is to AA', so EH to DD', BC, AA'. Thus, as the excess of the second is to the first, so is the excess of the last is to all those before it. The very thing it was required to show.
The terseness of Euclid's propositions and proofs may have been a necessity. As is, the Elements of Geometry is over 500 pages of propositions and proofs. Making copies of this popular textbook was labor intensive given that the printing press was not invented until 1440. And the book's popularity lasted a long time: as stated in the cited introduction to an English translation, Elements of Geometry "has the distinction of being the world's oldest continuously used mathematical textbook." So being very terse was being very practical. The proof of Proposition 35 in Book IX could have been even more compact if Euclid could have somehow avoided explicitly equating lengths of specific line segments from different terms in the series. For example, the contemporary notation for geometric series (i.e., a + ar + ar^{2} + ar^{3} + ... + ar^{n}) does not label specific portions of terms that are equal to each other.
Also in the cited introduction the editor comments,
Most of the theorems appearing in the Elements were not discovered by Euclid himself, but were the work of earlier Greek mathematicians such as Pythagoras (and his school), Hippocrates of Chios, Theaetetus of Athens, and Eudoxus of Cnidos. However, Euclid is generally credited with arranging these theorems in a logical manner, so as to demonstrate (admittedly, not always with the rigour demanded by modern mathematics) that they necessarily follow from five simple axioms. Euclid is also credited with devising a number of particularly ingenious proofs of previously discovered theorems (e.g., Theorem 48 in Book 1).
To help translate the proposition and proof into a form that uses current notation, a couple modifications are in the diagram. First, the four horizontal line lengths representing the values of the first four terms of a geometric series are now labeled a, ar, ar^{2}, ar^{3} in the diagram's left margin. Second, new labels A' and D' are now on the first and third lines so that all the diagram's line segment names consistently specify the segment's starting point and ending point.
Here is a phrase by phrase interpretation of the proposition:
Proposition | in contemporary notation |
---|---|
"If there is any multitude whatsoever of continually proportional numbers" | Taking the first n+1 terms of a geometric series S_{n} = a + ar + ar^{2} + ar^{3} + ... + ar^{n} |
"and equal to the first is subtracted from the second and the last" | and subtracting a from ar and ar^{n} |
"then as the excess of the second to the first, so the excess of the last will be to all those before it." | then (ar-a) / a = (ar^{n}-a) / (a + ar + ar^{2} + ar^{3} + ... + ar^{n-1}) = (ar^{n}-a) / S_{n-1}, which can be rearranged to the more familiar form S_{n-1} = a(r^{n}-1) / (r-1). |
Similarly, here is a sentence by sentence interpretation of the proof:
Proof | in contemporary notation |
---|---|
"Let AA', BC, DD', EF be any multitude whatsoever of continuously proportional numbers, beginning from the least AA'." | Consider the first n+1 terms of a geometric series S_{n} = a + ar + ar^{2} + ar^{3} + ... + ar^{n} for the case r>1 and n=3. |
"And let BG and FH, each equal to AA', have been subtracted from BC and EF." | Subtract a from ar and ar^{3}. |
"I say that as GC is to AA', so EH is to AA', BC, DD'." | I say that (ar-a) / a = (ar^{3}-a) / (a + ar + ar^{2}). |
"For let FK be made equal to BC, and FL to DD'." | |
"And since FK is equal to BC, of which FH is equal to BG, the remainder HK is thus equal to the remainder GC." | |
"And since as EF is to DD', so DD' to BC, and BC to AA' [Prop. 7.13], and DD' equal to FL, and BC to FK, and AA' to FH, thus as EF is to FL, so LF to FK, and FK to FH." | |
"By separation, as EL to LF, so LK to FK, and KH to FH [Props. 7.11, 7.13]." | By separation, (ar^{3}-ar^{2}) / ar^{2} = (ar^{2}-ar) / ar = (ar-a) / a = r-1. |
"And thus as one of the leading is to one of the following, so (the sum of) all of the leading to (the sum of) all of the following [Prop. 7.12]." | The sum of those numerators and the sum of those denominators form the same proportion: ((ar^{3}-ar^{2}) + (ar^{2}-ar) + (ar-a)) / (ar^{2} + ar + a) = r-1. |
"And thus as one of the leading is to one of the following, so (the sum of) all of the leading to (the sum of) all of the following [Prop. 7.12]." | And this sum of equal proportions can be extended beyond (ar^{3}-ar^{2}) / ar^{2} to include all the proportions up to (ar^{n}-ar^{n-1}) / ar^{n-1}. |
"Thus, as KH is to FH, so EL, LK, KH to LF, FK, HF." | |
"And KH equal to CG, and FH to AA', and LF, FK, HF to DD', BC, AA'." | |
"Thus, as CG is to AA', so EH to DD', BC, AA'." | |
"Thus, as the excess of the second is to the first, so is the excess of the last is to all those before it." | Thus, (ar-a) / a = (ar^{3}-a) / S_{2}. Or more generally, (ar-a) / a = (ar^{n}-a) / S_{n-1}, which can be rearranged in the more common form S_{n-1} = a(r^{n}-1) / (r-1). |
"The very thing it was required to show." | Q.E.D. |
Main article: The Quadrature of the Parabola
Archimedes used the sum of a geometric series to compute the area enclosed by a parabola and a straight line. His method was to dissect the area into an infinite number of triangles.
Archimedes' Theorem states that the total area under the parabola is 4/3 of the area of the blue triangle.
Archimedes determined that each green triangle has 1/8 the area of the blue triangle, each yellow triangle has 1/8 the area of a green triangle, and so forth.
Assuming that the blue triangle has area 1, the total area is an infinite sum:

1 + 2(1/8) + 4(1/8)^{2} + 8(1/8)^{3} + ⋯

The first term represents the area of the blue triangle, the second term the areas of the two green triangles, the third term the areas of the four yellow triangles, and so on. Simplifying the fractions gives

1 + 1/4 + 1/16 + 1/64 + ⋯

This is a geometric series with common ratio 1/4 and the fractional part is equal to

1/4 + 1/16 + 1/64 + ⋯ = (1/4) / (1 − 1/4) = 1/3.

The sum is

1 / (1 − 1/4) = 4/3.
This computation uses the method of exhaustion, an early version of integration. Using calculus, the same area could be found by a definite integral.
Among his insights into infinite series, in addition to his elegantly simple proof of the divergence of the harmonic series, Nicole Oresme^{[9]} proved that the series 1/2 + 2/4 + 3/8 + 4/16 + 5/32 + 6/64 + 7/128 + ... converges to 2. His diagram for his geometric proof, similar to the adjacent diagram, shows a two dimensional geometric series. The first dimension is horizontal, in the bottom row showing the geometric series S = 1/2 + 1/4 + 1/8 + 1/16 + ... , which is the geometric series with coefficient a = 1/2 and common ratio r = 1/2 that converges to S = a / (1-r) = (1/2) / (1-1/2) = 1. The second dimension is vertical, where the bottom row is a new coefficient a_{T} equal to S and each subsequent row above it is scaled by the same common ratio r = 1/2, making another geometric series T = 1 + 1/2 + 1/4 + 1/8 + ..., which is the geometric series with coefficient a_{T} = S = 1 and common ratio r = 1/2 that converges to T = a_{T} / (1-r) = S / (1-r) = a / (1-r) / (1-r) = (1/2) / (1-1/2) / (1-1/2) = 2.
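Oresme's result can be confirmed numerically; the following is an illustrative sketch (not part of the original article) summing k/2^{k} for the first several dozen terms:

```python
# Oresme's series 1/2 + 2/4 + 3/8 + 4/16 + ... = sum of k/2**k, which approaches 2.
partial = sum(k / 2**k for k in range(1, 80))
print(partial)  # approximately 2
```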
Although difficult to visualize beyond three dimensions, Oresme's insight generalizes to any dimension d. Using the sum of the (d−1)-dimensional geometric series as the coefficient a in the d-dimensional geometric series results in a d-dimensional geometric series converging to S^{d} / a = 1 / (1-r)^{d} within the range |r|<1. Pascal's triangle and long division reveal the coefficients of these multi-dimensional geometric series, where the closed form is valid only within the range |r|<1.
(closed form) | (expanded form) |
---|---|
a / (1−r) | a(1 + r + r^{2} + r^{3} + ⋯) |
a / (1−r)^{2} | a(1 + 2r + 3r^{2} + 4r^{3} + ⋯) |
a / (1−r)^{3} | a(1 + 3r + 6r^{2} + 10r^{3} + ⋯) |
As an aside, instead of using long division, it is also possible to calculate the coefficients of the d-dimensional geometric series by integrating the coefficients of dimension d−1. This mapping from division by 1-r in the power series sum domain to integration in the power series coefficient domain is a discrete form of the mapping performed by the Laplace transform. MIT Professor Arthur Mattuck shows how to derive the Laplace transform from the power series in this lecture video,^{[10]} where the power series is a mapping between discrete coefficients and a sum and the Laplace transform is a mapping between continuous weights and an integral.
Main article: Time value of money
In economics, geometric series are used to represent the present value of an annuity (a sum of money to be paid in regular intervals).
For example, suppose that a payment of $100 will be made to the owner of the annuity once per year (at the end of the year) in perpetuity. Receiving $100 a year from now is worth less than an immediate $100, because one cannot invest the money until one receives it. In particular, the present value of $100 one year in the future is $100 / (1 + I), where I is the yearly interest rate.

Similarly, a payment of $100 two years in the future has a present value of $100 / (1 + I)^{2} (squared because two years' worth of interest is lost by not receiving the money right now). Therefore, the present value of receiving $100 per year in perpetuity is

∑_{n=1}^{∞} $100 / (1 + I)^{n},

which is the infinite series:

$100 / (1 + I) + $100 / (1 + I)^{2} + $100 / (1 + I)^{3} + $100 / (1 + I)^{4} + ⋯
This is a geometric series with common ratio 1 / (1 + I). The sum is the first term divided by (one minus the common ratio):

($100 / (1 + I)) / (1 − 1 / (1 + I)) = $100 / I.
For example, if the yearly interest rate is 10% (I = 0.10), then the entire annuity has a present value of $100 / 0.10 = $1000.
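The perpetuity value can be checked by summing many discounted payments directly; an illustrative sketch (the cutoff of 500 years is an arbitrary assumption, chosen so the truncated tail is negligible):

```python
# Present value of $100/year in perpetuity at 10% interest, first payment in one year.
I = 0.10
pv_series = sum(100 / (1 + I)**k for k in range(1, 500))  # direct discounted sum
pv_closed = 100 / I                                       # closed form: first term / (1 - ratio)
print(round(pv_series, 6), pv_closed)  # both effectively $1000
```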
This sort of calculation is used to compute the APR of a loan (such as a mortgage loan). It can also be used to estimate the present value of expected stock dividends, or the terminal value of a financial asset assuming a stable growth rate.
The area inside the Koch snowflake can be described as the union of infinitely many equilateral triangles (see figure). Each side of the green triangle is exactly 1/3 the size of a side of the large blue triangle, and therefore has exactly 1/9 the area. Similarly, each yellow triangle has 1/9 the area of a green triangle, and so forth. Taking the blue triangle as a unit of area, the total area of the snowflake is

1 + 3(1/9) + 12(1/9)^{2} + 48(1/9)^{3} + ⋯
The first term of this series represents the area of the blue triangle, the second term the total area of the three green triangles, the third term the total area of the twelve yellow triangles, and so forth. Excluding the initial 1, this series is geometric with constant ratio r = 4/9. The first term of the geometric series is a = 3(1/9) = 1/3, so the sum is

(1/3) / (1 − 4/9) = (1/3) / (5/9) = 3/5.
Thus the Koch snowflake has 8/5 of the area of the base triangle.
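This arithmetic can be confirmed with exact rational arithmetic; an illustrative sketch (not part of the original article):

```python
from fractions import Fraction

# Snowflake area = 1 (blue triangle) + a/(1-r) with a = 3*(1/9) = 1/3 and r = 4/9.
a = Fraction(1, 3)
r = Fraction(4, 9)
total = 1 + a / (1 - r)
print(total)  # 8/5
```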
The derivative of arctan(x) is 1 / (1 + x^{2}) because,^{[11]} letting r = −x^{2} in the geometric series,

1 / (1 + x^{2}) = 1 − x^{2} + x^{4} − x^{6} + ⋯ , for |x| < 1.

Therefore, integrating term by term from 0 to 1, π/4 = arctan(1) is the integral

π/4 = ∫_{0}^{1} dx / (1 + x^{2}) = 1 − 1/3 + 1/5 − 1/7 + ⋯ ,
which is called Gregory's series and is commonly attributed to Madhava of Sangamagrama (c. 1340 – c. 1425).
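Gregory's series converges very slowly, which can be seen numerically; an illustrative sketch (the number of terms is an arbitrary assumption):

```python
import math

# Partial sum of 1 - 1/3 + 1/5 - 1/7 + ... , which approaches pi/4 slowly.
partial = sum((-1)**k / (2 * k + 1) for k in range(200000))
print(4 * partial)  # close to pi; error shrinks only like 1/n
```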
The geometric series has two degrees of freedom: one for its coefficient a and another for its common ratio r. In the map of polynomials, the big red circle represents the set of all geometric series.
Only a subset of all geometric series converge. Specifically, a geometric series converges if and only if its common ratio satisfies |r| < 1. In the map of polynomials, the red triangle represents the set of converging geometric series; it is drawn inside the big red circle to indicate that the converging geometric series are a subset of all geometric series.
Main article: Repeating decimal
Only a subset of all converging geometric series converge to decimal fractions that have repeated patterns that continue forever (e.g., 0.7777... or 0.9999... or 0.123412341234...). In the map of polynomials, the little yellow triangle represents the set of geometric series that converge to infinitely repeated decimal patterns. It is drawn inside the red triangle to indicate that these series are a subset of the converging geometric series, which in turn are a subset of all geometric series.
Although fractions with infinitely repeated decimal patterns can only be approximated when encoded as floating point numbers, they can always be defined exactly as the ratio of two integers and those two integers can be calculated using the geometric series. For example, the repeated decimal fraction 0.7777... can be written as the geometric series

7/10 + (7/10)(1/10) + (7/10)(1/10)^{2} + (7/10)(1/10)^{3} + ⋯ ,

where coefficient a = 7/10 and common ratio r = 1/10. The geometric series closed form reveals the two integers that specify the repeated pattern:

0.7777... = a / (1 − r) = (7/10) / (1 − 1/10) = 7/9.
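The exact-ratio recovery can be reproduced with rational arithmetic; an illustrative sketch:

```python
from fractions import Fraction

# 0.7777... as a geometric series with a = 7/10 and r = 1/10; sum = a/(1-r).
a, r = Fraction(7, 10), Fraction(1, 10)
value = a / (1 - r)
print(value)         # 7/9
print(float(value))  # approximately 0.7778
```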
This approach extends beyond base-ten numbers. In fact, any fraction that has an infinitely repeated pattern in base-ten numbers also has an infinitely repeated pattern in numbers written in any other base. For example, looking at the floating point encoding for the number 0.7777...
julia> bitstring(Float32(0.77777777777777777777))
"00111111010001110001110001110010"
reveals the binary fraction 0.110001110001110001... where the binary pattern 0b110001 repeats indefinitely and can be written in mostly (except for the powers) binary numbers as

(0b110001 / 0b1000000) + (0b110001 / 0b1000000)(1 / 0b1000000) + (0b110001 / 0b1000000)(1 / 0b1000000)^{2} + ⋯ ,

where coefficient a = 0b110001 / 0b1000000 = 49 / 64 and common ratio r = 1 / 0b1000000 = 1 / 64. Using the geometric series closed form as before,

0.110001110001... = a / (1 − r) = (49/64) / (1 − 1/64) = 49/63 = 7/9.
You may have noticed that the floating point encoding does not capture the 0b110001 repeat pattern in the last couple (least significant) bits. This is because floating point encoding rounds the remainder instead of truncating it. Therefore, if the most significant bit of the remainder is 1, the least significant bit of the encoded fraction gets incremented and that will cause a carry if the least significant bit of the fraction is already 1, which can cause another carry if that bit of the fraction is already a 1, which can cause another carry, etc. This floating point rounding and the subsequent carry propagation explains why the floating point encoding for 0.99999... is exactly the same as the floating point encoding for 1.
julia> bitstring(Float32(0.99999999999999999999))
"00111111100000000000000000000000"
julia> bitstring(Float32(1.0))
"00111111100000000000000000000000"
As an example that has four digits in the repeated pattern, 0.123412341234... can be written as the geometric series

1234/10000 + (1234/10000)(1/10000) + (1234/10000)(1/10000)^{2} + ⋯ ,

where coefficient a = 1234/10000 and common ratio r = 1/10000. The geometric series closed form reveals the two integers that specify the repeated pattern:

0.123412341234... = a / (1 − r) = (1234/10000) / (1 − 1/10000) = 1234/9999.
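The same closed-form recovery works for the four-digit repeat; an illustrative sketch:

```python
from fractions import Fraction

# 0.123412341234... with a = 1234/10000 and r = 1/10000; sum = a/(1-r).
value = Fraction(1234, 10000) / (1 - Fraction(1, 10000))
print(value)  # 1234/9999
```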
Main article: Power series
Like the geometric series, the power series has one degree of freedom for its common ratio r (along the x-axis) but has n+1 degrees of freedom for its coefficients (along the y-axis), where n represents the power of the last term in the partial series. In the map of polynomials, the big blue circle represents the set of all power series.
In mathematics, the Taylor series or Taylor expansion of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. For most common functions, the function and the sum of its Taylor series are equal near this point. Taylor series are named after Brook Taylor, who introduced them in 1715. A Taylor series is also called a Maclaurin series when 0 is the point where the derivatives are considered, after Colin Maclaurin, who made extensive use of this special case of Taylor series in the mid-18th century.
The partial sum formed by the first n + 1 terms of a Taylor series is a polynomial of degree n that is called the nth Taylor polynomial of the function. Taylor polynomials are approximations of a function, which become generally more accurate as n increases. Taylor's theorem gives quantitative estimates on the error introduced by the use of such approximations. If the Taylor series of a function is convergent, its sum is the limit of the infinite sequence of the Taylor polynomials. A function may differ from the sum of its Taylor series, even if its Taylor series is convergent. A function is analytic at a point x if it is equal to the sum of its Taylor series in some open interval (or open disk in the complex plane) containing x. This implies that the function is analytic at every point of the interval (or disk).
Main article: Binary number
Main article: Single-precision floating-point format
Zeno of Elea's geometric series with coefficient a=1/2 and common ratio r=1/2 is the foundation of binary encoded approximations of fractions in digital computers. Concretely, the geometric series written in its normalized vector form is s/a = [1 1 1 1 1 …][1 r r^{2} r^{3} r^{4} …]^{T}. Keeping the column vector of basis functions [1 r r^{2} r^{3} r^{4} …]^{T} the same but generalizing the row vector [1 1 1 1 1 …] so that each entry can be either a 0 or a 1 allows for an approximate encoding of any fraction. For example, the value v = 0.34375 is encoded as v/a = [0 1 0 1 1 0 …][1 r r^{2} r^{3} r^{4} …]^{T} where coefficient a = 1/2 and common ratio r = 1/2. Typically, the row vector is written in the more compact binary form v = 0.010110 which is 0.34375 in decimal.
Similarly, the geometric series with coefficient a=1 and common ratio r=2 is the foundation for binary encoded integers in digital computers. Again, the geometric series written in its normalized vector form is s/a = [1 1 1 1 1 …][1 r r^{2} r^{3} r^{4} …]^{T}. Keeping the column vector of basis functions [1 r r^{2} r^{3} r^{4} …]^{T} the same but generalizing the row vector [1 1 1 1 1 …] so that each entry can be either a 0 or a 1 allows for an encoding of any integer. For example, the value v = 151 is encoded as v/a = [1 1 1 0 1 0 0 1 0 …][1 r r^{2} r^{3} r^{4} r^{5} r^{6} r^{7} r^{8} …]^{T} where coefficient a = 1 and common ratio r = 2. Typically, the row vector is written in reverse order (so that the most significant bit is first) in the more compact binary form v = …010010111 = 10010111 which is 151 in decimal.
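The integer case is the same dot product with a = 1 and r = 2. A minimal Python sketch (an illustration; the helper name `decode_integer` is hypothetical):

```python
# Decode a binary integer: a dot product with coefficient a = 1 and ratio r = 2.
# Bits are given least-significant first, matching the row vector order in the text.
def decode_integer(bits, a=1, r=2):
    return a * sum(b * r**k for k, b in enumerate(bits))

# The row vector [1 1 1 0 1 0 0 1] from the text encodes 151.
print(decode_integer([1, 1, 1, 0, 1, 0, 0, 1]))  # 151
```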
As shown in the adjacent figure, the standard binary encoding of a 32-bit floating point number is a combination of a binary encoded integer and a binary encoded fraction, beginning at the most significant bit with a 1-bit sign, followed by an 8-bit biased exponent (a binary encoded integer), and ending with a 23-bit fraction (a binary encoded fraction, the significand).
Building upon the previous example of 0.34375 having binary encoding of 0.010110, a floating point encoding (according to the IEEE 754 standard) of 0.34375 is sign = 0, biased exponent = −2 + 127 = 125 = 01111101 (since 0.010110 normalizes to 1.0110 × 2^{−2}), and fraction = 01100000000000000000000, giving the 32-bit pattern 00111110101100000000000000000000.
Although encoding floating point numbers by hand like this is possible, letting a computer do it is easier and less error prone. The following Julia code confirms the hand calculated floating point encoding of the number 0.34375:
julia> bitstring(Float32(0.34375))
"00111110101100000000000000000000"
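The same confirmation can be done in Python (a sketch added here for comparison, not part of the original Julia example), using the standard struct module to obtain the IEEE 754 bit pattern:

```python
import struct

def float32_bits(x):
    # Pack as a big-endian 32-bit float, then format the 4 bytes as 32 bits.
    (n,) = struct.unpack(">I", struct.pack(">f", x))
    return format(n, "032b")

print(float32_bits(0.34375))  # 00111110101100000000000000000000
```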
Main article: Fourier series
As an example of the ability of the complex Fourier series to trace any 2D closed figure, in the adjacent animation a complex Fourier series traces the letter 'e' (for exponential). Given the intricate coordination of motions shown in the animation, a definition of the complex Fourier series can be surprisingly compact in just two equations:

s(t) = Σ_{n=−∞}^{+∞} c_{n}e^{i2πnt}
c_{n} = ∫_{0}^{1} s(t)e^{−i2πnt} dt

where the parameterized function s(t) traces some 2D closed figure in the complex plane as the parameter t progresses through the period from 0 to 1.
To help make sense of these compact equations defining the complex Fourier series, note that the summation looks similar to the complex geometric series, with two differences: the complex Fourier series is essentially two complex geometric series (one set of terms rotating in the positive direction and another set rotating in the negative direction), and its coefficients are complex constants that can vary from term to term. By allowing terms to rotate in either direction, the series becomes capable of tracing any 2D closed figure. In contrast, the complex geometric series has all its terms rotating in the same direction, so it can trace only circles. Allowing the coefficients of the complex geometric series to vary from term to term would expand the range of shapes it can trace, but all the possible shapes would still be puffy and cloud-like; it could not trace the shape of a simple line segment, for example one going back and forth between 1 + i0 and −1 + i0. However, Euler's formula shows that adding just two terms rotating in opposite directions can trace that line segment between 1 + i0 and −1 + i0:

(1/2)e^{i2πt} + (1/2)e^{−i2πt} = cos(2πt)
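This cancellation of the imaginary parts can be checked numerically. Here is a small Python sketch (an illustration, not from the original text; the helper name `segment_point` is hypothetical):

```python
import cmath

# Two counter-rotating terms: (1/2)e^{i2πt} + (1/2)e^{-i2πt}.
# By Euler's formula this equals cos(2πt): purely real, sweeping [-1, 1].
def segment_point(t):
    return 0.5 * cmath.exp(2j * cmath.pi * t) + 0.5 * cmath.exp(-2j * cmath.pi * t)

for t in [0.0, 0.25, 0.5, 0.75]:
    z = segment_point(t)
    print(round(z.real, 6), round(z.imag, 6))  # imaginary part stays ~0
```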
Concerning the complex Fourier series second equation, which defines how to calculate the coefficients: the coefficient of the non-rotating term c_{0} can be calculated by integrating the complex Fourier series first equation over the range of one period, from 0 to 1. Over that range, all the rotating terms integrate to zero, leaving just c_{0}. Similarly, any of the terms in the complex Fourier series first equation can be made into a non-rotating term by multiplying both sides of the equation by e^{−i2πnt} before integrating to calculate c_{n}, and that is the complex Fourier series second equation.
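This coefficient extraction can be approximated with a Riemann sum. The following Python sketch (an illustration, not from the original text; the test curve and helper name `fourier_coefficient` are hypothetical) uses s(t) = e^{i2πt}, the unit circle, so only the n = 1 coefficient should survive the integration:

```python
import cmath

# Approximate c_n = integral over [0, 1] of s(t) e^{-i2πnt} dt by a Riemann sum.
def fourier_coefficient(s, n, samples=1000):
    return sum(
        s(k / samples) * cmath.exp(-2j * cmath.pi * n * k / samples)
        for k in range(samples)
    ) / samples

s = lambda t: cmath.exp(2j * cmath.pi * t)  # traces the unit circle
print(abs(fourier_coefficient(s, 1)))  # close to 1
print(abs(fourier_coefficient(s, 0)))  # close to 0
```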
In mathematics, a matrix polynomial is a polynomial with square matrices as variables. Given an ordinary, scalar-valued polynomial
P(x) = Σ_{i=0}^{n} a_{i}x^{i} = a_{0} + a_{1}x + a_{2}x^{2} + ... + a_{n}x^{n},
this polynomial evaluated at a matrix A is
P(A) = Σ_{i=0}^{n} a_{i}A^{i} = a_{0}I + a_{1}A + a_{2}A^{2} + ... + a_{n}A^{n},
where I is the identity matrix.^{[14]}
Note that P(A) has the same dimension as A.
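The evaluation of a polynomial at a matrix can be sketched in a few lines of pure Python (an illustration, not from the original text; the example polynomial, matrix, and helper names are hypothetical):

```python
# Multiply two square matrices given as nested lists.
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Evaluate P(A) = a0*I + a1*A + a2*A^2 + ... for coeffs = [a0, a1, a2, ...].
def mat_poly(coeffs, A):
    n = len(A)
    power = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # A^0 = I
    result = [[0] * n for _ in range(n)]
    for a in coeffs:
        result = [[result[i][j] + a * power[i][j] for j in range(n)] for i in range(n)]
        power = mat_mul(power, A)
    return result

# P(x) = 1 + 2x + x^2 at the nilpotent matrix A = [[0, 1], [0, 0]]: A^2 = 0,
# so P(A) = I + 2A.
print(mat_poly([1, 2, 1], [[0, 1], [0, 0]]))  # [[1, 2], [0, 1]]
```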
A matrix polynomial equation is an equality between two matrix polynomials, which holds for the specific matrices in question. A matrix polynomial identity is a matrix polynomial equation which holds for all matrices A in a specified matrix ring M_{n}(R).
Matrix polynomials are often demonstrated in undergraduate linear algebra classes due to their relevance in showcasing properties of linear transformations represented as matrices, most notably the Cayley-Hamilton theorem.
In mathematics, the matrix exponential is a matrix function on square matrices analogous to the ordinary exponential function. It is used to solve systems of linear differential equations. In the theory of Lie groups, the matrix exponential gives the exponential map between a matrix Lie algebra and the corresponding Lie group.
Let X be an n×n real or complex matrix. The exponential of X, denoted by e^{X} or exp(X), is the n×n matrix given by the power series
e^{X} = Σ_{k=0}^{∞} (1/k!)X^{k}
where X^{0} is defined to be the identity matrix I with the same dimensions as X.^{[15]} The series always converges, so the exponential of X is well-defined.
Equivalently,
e^{X} = lim_{k→∞} (I + X/k)^{k}
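The power series definition can be checked directly by truncating the sum. Here is a pure-Python sketch for 2×2 matrices (an illustration, not from the original text; the example matrix and helper names are hypothetical):

```python
import math

# Multiply two 2x2 matrices given as nested lists.
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Truncated power series for exp(A): sum of A^k / k! for k = 0 .. terms-1.
def mat_exp(A, terms=20):
    power = [[1.0, 0.0], [0.0, 1.0]]  # A^0 = I
    result = [[0.0, 0.0], [0.0, 0.0]]
    for k in range(terms):
        result = [[result[i][j] + power[i][j] / math.factorial(k)
                   for j in range(2)] for i in range(2)]
        power = mat_mul(power, A)
    return result

# Nilpotent example: A^2 = 0, so exp(A) = I + A = [[1, 1], [0, 1]] exactly.
print(mat_exp([[0.0, 1.0], [0.0, 0.0]]))
```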
In the geometric series, the summands depend exponentially on the summation index: the exponent in each summand is a linear function of the index. If the exponent is instead a quadratic function of the summation index, then the terms themselves follow the course of a Gaussian bell curve, and the value of the resulting infinite series can no longer be expressed in an elementary way. Sums whose exponents are quadratic in the summation index take values given by the elliptic theta functions, and these values can be represented both with the Jacobi theta functions and with the Neville theta functions.
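The contrast can be illustrated numerically (a sketch added here, not from the original text): with common ratio q = 1/2, the geometric series Σ q^{n} has the elementary closed form 1/(1 − q) = 2, while the quadratic-exponent series Σ q^{n²} converges to a value with no elementary closed form, a theta function value.

```python
# Linear exponent in the index: the geometric series, elementary closed form.
q = 0.5
geometric = sum(q**n for n in range(60))        # converges to 1/(1 - q) = 2
# Quadratic exponent in the index: a theta function value, not elementary.
theta_like = sum(q**(n * n) for n in range(60))
print(geometric)
print(theta_like)  # ~1.5644684
```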
The theta functions mentioned are, following Edmund Taylor Whittaker and George Neville Watson,^{[16]}^{[17]}^{[18]} defined as follows:
The function q(k) is called the elliptic nome, and the function K(k) stands for the complete elliptic integral of the first kind:
The designation with the Greek letter ϑ represents the so-called theta zero-value function (Theta-Nullwert function), whereas the other designation stands for the Hermite elliptic psi function:
The following interconnection holds:
Continuation of the identity by using the Neville theta function:
Continuation of the identity by using the Poisson summation formula:
The Neville theta functions themselves were investigated by the English mathematician Eric Harold Neville. In general, stepping the argument forward by integer amounts must return the same value again and again; this phenomenon is called the periodicity of a function, and the theta formulas mentioned above satisfy this criterion. If R is set to zero, the result must be the primary standardized theta function value, with the base value entered as the nome of the theta zero-value function. If R is not a whole number but a fraction, then the resulting value of the sum series is related to the value of the sum series with R equal to zero by a factor given by the associated incomplete theta function of the complementary nome. The formula in the bottom line of the table shown follows directly from the laws described in the last two sentences, and the formula with Neville's theta function results directly from it: the Pythagorean complementary module must be entered as the module, and this is produced exactly by the fourth power of the Hermite elliptic psi function. By using the Poisson summation formula, the formula in the bottom line of the table can be simplified.
The following formula for the Jacobi theta function is valid:
Here, the following shall hold:
For the sum series just mentioned, two calculation examples shall now be worked out in detail: