In mathematics, certain kinds of mistaken proof are often exhibited, and sometimes collected, as illustrations of a concept called **mathematical fallacy**. There is a distinction between a simple *mistake* and a *mathematical fallacy* in a proof, in that a mistake in a proof leads to an invalid proof while in the best-known examples of mathematical fallacies there is some element of concealment or deception in the presentation of the proof.

For example, the reason why validity fails may be attributed to a division by zero that is hidden by algebraic notation. There is a certain quality of the mathematical fallacy: as typically presented, it leads not only to an absurd result, but does so in a crafty or clever way.^{[1]} Therefore, these fallacies, for pedagogic reasons, usually take the form of spurious proofs of obvious contradictions. Although the proofs are flawed, the errors, usually by design, are comparatively subtle, or designed to show that certain steps are conditional, and are not applicable in the cases that are the exceptions to the rules.

The traditional way of presenting a mathematical fallacy is to give an invalid step of deduction mixed in with valid steps, so that the meaning of fallacy is here slightly different from the logical fallacy. The latter usually applies to a form of argument that does not comply with the valid inference rules of logic, whereas the problematic mathematical step is typically a correct rule applied with a tacit wrong assumption. Beyond pedagogy, the resolution of a fallacy can lead to deeper insights into a subject (e.g., the introduction of Pasch's axiom of Euclidean geometry,^{[2]} the five colour theorem of graph theory). *Pseudaria*, an ancient lost book of false proofs, is attributed to Euclid.^{[3]}

Mathematical fallacies exist in many branches of mathematics. In elementary algebra, typical examples may involve a step where division by zero is performed, where a root is incorrectly extracted or, more generally, where different values of a multiple valued function are equated. Well-known fallacies also exist in elementary Euclidean geometry and calculus.^{[4]}^{[5]}

Examples exist of mathematically correct results derived by incorrect lines of reasoning. Such an argument, however true the conclusion appears to be, is mathematically invalid and is commonly known as a *howler*. The following is an example of a howler involving anomalous cancellation:

$$\frac{16}{64} = \frac{1\cancel{6}}{\cancel{6}4} = \frac{1}{4}$$
Here, although the conclusion 16/64 = 1/4 is correct, there is a fallacious, invalid cancellation in the middle step.^{[note 1]} Another classical example of a howler is "proving" the Cayley–Hamilton theorem by simply substituting the matrix for the scalar variable of the characteristic polynomial.
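Such anomalous cancellations are rare. A brute-force search (a Python sketch, not part of the original text) confirms that 16/64 is one of only four proper two-digit fractions that survive this bogus digit-cancellation:

```python
from fractions import Fraction

def anomalous_cancellations():
    """Find proper fractions num/den < 1, with two-digit num and den,
    whose value is unchanged when the numerator's ones digit is
    'cancelled' against the denominator's tens digit (as in 16/64 -> 1/4)."""
    results = []
    for num in range(10, 100):
        for den in range(num + 1, 100):
            a, b = divmod(num, 10)      # num = 10a + b
            c, d = divmod(den, 10)      # den = 10c + d
            if b == c and d != 0 and Fraction(num, den) == Fraction(a, d):
                results.append((num, den))
    return results

print(anomalous_cancellations())
# → [(16, 64), (19, 95), (26, 65), (49, 98)]
```

The other three survivors, 19/95 = 1/5, 26/65 = 2/5, and 49/98 = 4/8, are equally classic howlers.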

Bogus proofs, calculations, or derivations constructed to produce a correct result in spite of incorrect logic or operations were termed "howlers" by Edwin Maxwell.^{[2]} Outside the field of mathematics the term *howler* has various meanings, generally less specific.

The division-by-zero fallacy has many variants. The following example uses a disguised division by zero to "prove" that 2 = 1, but can be modified to prove that any number equals any other number.

1. Let *a* and *b* be equal, nonzero quantities: *a* = *b*
2. Multiply by *a*: *a*^{2} = *ab*
3. Subtract *b*^{2}: *a*^{2} − *b*^{2} = *ab* − *b*^{2}
4. Factor both sides: the left factors as a difference of squares, the right by extracting *b* from both terms: (*a* − *b*)(*a* + *b*) = *b*(*a* − *b*)
5. Divide out (*a* − *b*): *a* + *b* = *b*
6. Use the fact that *a* = *b*: *b* + *b* = *b*
7. Combine like terms on the left: 2*b* = *b*
8. Divide by the non-zero *b*: 2 = 1

*Q.E.D.*^{[6]}

The fallacy is in line 5: the progression from line 4 to line 5 involves division by *a* − *b*, which is zero since *a* = *b*. Since division by zero is undefined, the argument is invalid.
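One way to make the hidden division visible is to evaluate both sides of every line numerically. In this Python sketch (the step labels and grouping are mine), equality holds as 0 = 0 through the factored line and first breaks exactly where (*a* − *b*) is divided out:

```python
def audit_steps(a, b):
    """Evaluate both sides of each line of the 2 = 1 'proof' for
    concrete equal values a and b; return the first line that fails."""
    steps = [
        ("a = b",                a,                 b),
        ("a^2 = ab",             a * a,             a * b),
        ("a^2 - b^2 = ab - b^2", a * a - b * b,     a * b - b * b),
        ("(a-b)(a+b) = b(a-b)",  (a - b) * (a + b), b * (a - b)),
        ("a + b = b",            a + b,             b),   # division by a-b = 0 happened here
        ("2b = b",               b + b,             b),
        ("2 = 1",                2,                 1),
    ]
    for name, lhs, rhs in steps:
        if lhs != rhs:
            return name
    return None

print(audit_steps(1, 1))
# → a + b = b
```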

Mathematical analysis, as the mathematical study of change and limits, can lead to mathematical fallacies if the properties of integrals and differentials are ignored. For instance, a naive use of integration by parts can be used to give a false proof that 0 = 1.^{[7]} Letting *u* = 1/log *x* and *dv* = *dx*/*x*, we may write:

$$\int \frac{dx}{x \log x} = 1 + \int \frac{dx}{x \log x},$$
after which the antiderivatives may be cancelled, yielding 0 = 1. The problem is that antiderivatives are only defined up to a constant, and shifting them by 1 or indeed any number is allowed. The error really comes to light when we introduce arbitrary integration limits *a* and *b*:

$$\int_a^b \frac{dx}{x \log x} = 1 \Big|_a^b + \int_a^b \frac{dx}{x \log x} = \int_a^b \frac{dx}{x \log x}.$$

Since the difference between two values of a constant function vanishes, the same definite integral appears on both sides of the equation.

Main article: Multivalued function

Many functions do not have a unique inverse. For instance, while squaring a number gives a unique value, there are two possible square roots of a positive number. The square root is multivalued. One value can be chosen by convention as the principal value; in the case of the square root the non-negative value is the principal value, but there is no guarantee that the square root given as the principal value of the square of a number will be equal to the original number (e.g. the principal square root of the square of −2 is 2). This remains true for nth roots.
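This behaviour of the principal square root is easy to check numerically. The following Python sketch (the helper name is mine) shows that the principal root of a square returns the absolute value, not necessarily the original number:

```python
import math

def sqrt_of_square(x):
    """Principal square root of x**2; equals abs(x), not necessarily x."""
    return math.sqrt(x * x)

print(sqrt_of_square(-2))   # 2.0: the principal root has discarded the sign
print(sqrt_of_square(3))    # 3.0: for non-negative inputs the original is recovered
```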

Care must be taken when taking the square root of both sides of an equality. Failing to do so can result in a "proof" that 5 = 4.^{[8]}

Proof:

1. Start from −20 = −20
2. Write this as 25 − 45 = 16 − 36
3. Rewrite as 5^{2} − 9 · 5 = 4^{2} − 9 · 4
4. Add 81/4 on both sides: 5^{2} − 9 · 5 + 81/4 = 4^{2} − 9 · 4 + 81/4
5. These are perfect squares: (5 − 9/2)^{2} = (4 − 9/2)^{2}
6. Take the square root of both sides: 5 − 9/2 = 4 − 9/2
7. Add 9/2 on both sides: 5 = 4

*Q.E.D.*

The fallacy is in the second to last line, where the square root of both sides is taken: *a*^{2} = *b*^{2} only implies *a* = *b* if *a* and *b* have the same sign, which is not the case here. In this case, it implies that *a* = −*b*, so the equation should read

5 − 9/2 = −(4 − 9/2),

which, by adding 9/2 on both sides, correctly reduces to 5 = 5.

Another example illustrating the danger of taking the square root of both sides of an equation involves the following fundamental identity^{[9]}

$$\cos^2 x = 1 - \sin^2 x,$$

which holds as a consequence of the Pythagorean theorem. Then, by taking a square root,

$$\cos x = \sqrt{1 - \sin^2 x}.$$

Evaluating this when *x* = π, we get that

$$-1 = \sqrt{1 - 0},$$

or

$$-1 = 1,$$

which is incorrect.

The error in each of these examples fundamentally lies in the fact that any equation of the form

$$x^2 = a^2,$$

where $a \neq 0$, has two solutions:

$$x = \pm a,$$
and it is essential to check which of these solutions is relevant to the problem at hand.^{[10]} In the above fallacy, the square root that allowed the second equation to be deduced from the first is valid only when cos *x* is positive. In particular, when *x* is set to π, the second equation is rendered invalid.
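The sign dependence described above can be checked numerically. In this Python sketch, the principal root √(1 − sin²*x*) reproduces |cos *x*| rather than cos *x*, so it disagrees with the cosine at *x* = π, where the cosine is negative:

```python
import math

def principal_root_side(x):
    """Right-hand side of the square-rooted identity: sqrt(1 - sin^2 x)."""
    return math.sqrt(1.0 - math.sin(x) ** 2)

x = math.pi
print(math.cos(x))             # -1.0
print(principal_root_side(x))  # 1.0: equals |cos x|, not cos x
```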

Invalid proofs utilizing powers and roots are often of the following kind:

$$1 = \sqrt{1} = \sqrt{(-1)(-1)} = \sqrt{-1}\sqrt{-1} = -1.$$

The fallacy is that the rule $\sqrt{xy} = \sqrt{x}\sqrt{y}$ is generally valid only if at least one of $x$ and $y$ is non-negative (when dealing with real numbers), which is not the case here.^{[11]}

Alternatively, imaginary roots are obfuscated in the following:

$$i = \sqrt{-1} = (-1)^{\frac{1}{2}} = \left((-1)^2\right)^{\frac{1}{4}} = 1^{\frac{1}{4}} = 1.$$

The error here lies in the incorrect usage of multiple-valued functions: $(-1)^{\frac{1}{2}}$ has two values $i$ and $-i$ without a prior choice of branch, while $\sqrt{-1}$ only denotes the principal value $i$.^{[12]} Similarly, $1^{\frac{1}{4}}$ has four different values $1$, $i$, $-1$, and $-i$, of which only $i$ is equal to the left side of the first equality.

When a number is raised to a complex power, the result is not uniquely defined (see Exponentiation § Failure of power and logarithm identities). If this property is not recognized, then errors such as the following can result:

$$\begin{aligned} e^{2\pi i} &= 1 \\ \left(e^{2\pi i}\right)^i &= 1^i \\ e^{-2\pi} &= 1 \end{aligned}$$

The error here is that the rule of multiplying exponents, as in $\left(e^x\right)^y = e^{xy}$, does not apply unmodified with complex exponents when going to the third line, even if only the principal value is chosen when putting both sides to the power *i*. When treated as multivalued functions, both sides produce the same set of values, being $\left\{ e^{2\pi n} : n \in \mathbb{Z} \right\}$.

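The discrepancy between the exponent-multiplication route and the principal value can be reproduced with Python's cmath (a sketch; all computations below use principal values):

```python
import cmath
import math

# "Multiply the exponents" route: (e^{2πi})^i computed as e^{2πi·i} = e^{-2π}
exponent_rule = cmath.exp(2j * cmath.pi * 1j)   # about 0.00187, not 1

# Principal-value route: 1^i = exp(i · Log 1) = exp(0) = 1
principal = 1 ** 1j

print(abs(exponent_rule))   # ≈ 0.00187, i.e. e^{-2π}
print(principal)            # (1+0j)
```

The two routes genuinely differ, which is exactly the failure of the exponent rule that the derivation above exploits.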
Many mathematical fallacies in geometry arise from using, in an additive equality involving oriented quantities (such as adding vectors along a given line or adding oriented angles in the plane), a valid identity that fixes only the absolute value of one of these quantities. The quantity is then incorporated into the equation with the wrong orientation, producing an absurd conclusion. The wrong orientation is usually suggested implicitly by an imprecise diagram of the situation, in which the relative positions of points or lines are chosen in a way that is actually impossible under the hypotheses of the argument, but non-obviously so.

In general, such a fallacy is easy to expose by drawing a precise picture of the situation, in which some relative positions will be different from those in the provided diagram. In order to avoid such fallacies, a correct geometric argument using addition or subtraction of distances or angles should always prove that quantities are being incorporated with their correct orientation.

The fallacy of the isosceles triangle, from (Maxwell 1959, Chapter II, § 1), purports to show that every triangle is isosceles, meaning that two sides of the triangle are congruent. This fallacy was known to Lewis Carroll and may have been discovered by him. It was published in 1899.^{[13]}^{[14]}

Given a triangle △ABC, prove that AB = AC:

- Draw a line bisecting ∠A.
- Draw the perpendicular bisector of segment BC, which bisects BC at a point D.
- Let these two lines meet at a point O.
- Draw line OR perpendicular to AB, line OQ perpendicular to AC.
- Draw lines OB and OC.
- By AAS, △RAO ≅ △QAO (∠ORA = ∠OQA = 90°; ∠RAO = ∠QAO; AO = AO (common side)).
- By RHS,^{[note 2]} △ROB ≅ △QOC (∠BRO = ∠CQO = 90°; BO = OC (hypotenuse); RO = OQ (leg)).
- Thus, AR = AQ, RB = QC, and AB = AR + RB = AQ + QC = AC.

*Q.E.D.*

As a corollary, one can show that all triangles are equilateral, by showing that AB = BC and AC = BC in the same way.

The error in the proof is the assumption in the diagram that the point O is *inside* the triangle. In fact, O always lies on the circumcircle of the △ABC (except for isosceles and equilateral triangles where AO and OD coincide). Furthermore, it can be shown that, if AB is longer than AC, then R will lie *within* AB, while Q will lie *outside* of AC, and vice versa (in fact, any diagram drawn with sufficiently accurate instruments will verify the above two facts). Because of this, AB is still AR + RB, but AC is actually AQ − QC; and thus the lengths are not necessarily the same.
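The claim that O lies on the circumcircle can be verified numerically. The following Python sketch uses an arbitrarily chosen scalene triangle (the coordinates are mine, purely for illustration), intersects the internal bisector of ∠A with the perpendicular bisector of BC, and checks that the intersection sits at circumradius distance from the circumcenter:

```python
import math

# A deliberately scalene triangle; BC is horizontal to keep the
# perpendicular bisector of BC a simple vertical line x = mx.
A, B, C = (0.0, 3.0), (-1.0, 0.0), (2.0, 0.0)

def unit(p, q):
    """Unit vector pointing from p to q."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    n = math.hypot(dx, dy)
    return (dx / n, dy / n)

# Internal bisector of angle A: its direction is the sum of the
# unit vectors A->B and A->C.
ub, uc = unit(A, B), unit(A, C)
d = (ub[0] + uc[0], ub[1] + uc[1])

# Intersect the bisector A + t*d with the line x = mx (AB != AC here,
# so d is not vertical and the division is safe).
mx = (B[0] + C[0]) / 2
t = (mx - A[0]) / d[0]
O = (A[0] + t * d[0], A[1] + t * d[1])

# Circumcenter also lies on x = mx; solve |P-A| = |P-B| for its y.
y = ((A[0] - mx) ** 2 + A[1] ** 2 - (B[0] - mx) ** 2 - B[1] ** 2) \
    / (2 * (A[1] - B[1]))
P = (mx, y)
R = math.hypot(A[0] - P[0], A[1] - P[1])          # circumradius

dist_OP = math.hypot(O[0] - P[0], O[1] - P[1])
print(abs(dist_OP - R) < 1e-9)   # True: O is on the circumcircle
print(O[1] < 0)                  # True: O lies below BC, outside the triangle
```

Note that O lands below BC, on the opposite side from A, confirming that the fallacious diagram's placement of O inside the triangle is impossible.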

There exist several fallacious proofs by induction in which one of the components, basis case or inductive step, is incorrect. Intuitively, proofs by induction work by arguing that if a statement is true in one case, it is true in the next case, and hence by repeatedly applying this, it can be shown to be true for all cases. The following "proof" shows that all horses are the same colour.^{[15]}^{[note 3]}

1. Let us say that any group of *N* horses is all of the same colour.
2. If we remove a horse from the group, we have a group of *N* − 1 horses of the same colour. If we add another horse, we have another group of *N* horses. By our previous assumption, all the horses are of the same colour in this new group, since it is a group of *N* horses.
3. Thus we have constructed two groups of *N* horses all of the same colour, with *N* − 1 horses in common. Since these two groups have some horses in common, the two groups must be of the same colour as each other.
4. Therefore, combining all the horses used, we have a group of *N* + 1 horses of the same colour.
5. Thus if any *N* horses are all the same colour, any *N* + 1 horses are the same colour.
6. This is clearly true for *N* = 1 (i.e., one horse is a group where all the horses are the same colour). Thus, by induction, *N* horses are the same colour for any positive integer *N*, and so all horses are the same colour.

The fallacy in this proof arises in line 3. For *N* = 1, the two groups of horses have *N* − 1 = 0 horses in common, and thus are not necessarily the same colour as each other, so the group of *N* + 1 = 2 horses is not necessarily all of the same colour. The implication "if every group of *N* horses is of the same colour, then every group of *N* + 1 horses is of the same colour" works for any *N* > 1, but fails when *N* = 1. The basis case is correct, but the induction step has a fundamental flaw.
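The gap at *N* = 1 can be exhibited mechanically. In this Python sketch, the two size-*N* groups used in the induction step on a herd of *N* + 1 horses overlap in *N* − 1 horses, so the overlap is empty exactly when *N* = 1 and nothing links the two groups' colours:

```python
def overlap_size(n):
    """Size of the intersection of the two groups of n horses
    used in the induction step on a herd of n + 1 horses."""
    herd = list(range(n + 1))    # horses labelled 0 .. n
    group1 = set(herd[:-1])      # remove the last horse
    group2 = set(herd[1:])       # remove the first horse instead
    return len(group1 & group2)

print(overlap_size(1))   # 0: the step has no shared horse to chain on
print(overlap_size(4))   # 3: for n > 1 the two groups really are linked
```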