Basic concepts
We will mostly consider functions defined on the domain $\{-1,1\}^n$. Sometimes it is more convenient to work with the domain $\{0,1\}^n$ instead. If $f$ is defined on $\{-1,1\}^n$, then the corresponding function defined on $\{0,1\}^n$ is

$$f_{01}(x_1,\ldots,x_n) = f((-1)^{x_1},\ldots,(-1)^{x_n}).$$

Similarly, for us a Boolean function is a $\{-1,1\}$-valued function, though often it is more convenient to consider $\{0,1\}$-valued functions instead.
Fourier expansion
Every real-valued function $f\colon \{-1,1\}^n \to \mathbb{R}$ has a unique expansion as a multilinear polynomial:

$$f(x) = \sum_{S \subseteq [n]} \hat{f}(S)\,\chi_S(x), \qquad \chi_S(x) = \prod_{i \in S} x_i.$$

(Note that even if the function is 0-1 valued this is not a sum mod 2, but just an ordinary sum of real numbers.)

This is the Hadamard transform of the function $f$, which is the Fourier transform in the group $\mathbb{Z}_2^n$. The coefficients $\hat{f}(S)$ are known as Fourier coefficients, and the entire sum is known as the Fourier expansion of $f$. The functions $\chi_S$ are known as Fourier characters, and they form an orthonormal basis for the space of all functions over $\{-1,1\}^n$, with respect to the inner product $\langle f,g \rangle = 2^{-n} \sum_{x \in \{-1,1\}^n} f(x)g(x)$.
The Fourier coefficients can be calculated using an inner product:

$$\hat{f}(S) = \langle f, \chi_S \rangle.$$

In particular, this shows that $\hat{f}(\emptyset) = \operatorname{E}[f]$, where the expected value is taken with respect to the uniform distribution over $\{-1,1\}^n$. Parseval's identity states that

$$\|f\|^2 = \operatorname{E}[f^2] = \sum_S \hat{f}(S)^2.$$

If we skip $S = \emptyset$, then we get the variance of $f$:

$$\operatorname{Var}[f] = \sum_{S \neq \emptyset} \hat{f}(S)^2.$$
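As a concrete illustration, the definitions above can be verified by brute force for small $n$. The following Python sketch (the helper names and the majority example are ours, not from the text) computes all Fourier coefficients and checks Parseval's identity:

```python
from itertools import product
from math import prod

def fourier_coefficients(f, n):
    """All Fourier coefficients of f : {-1,1}^n -> R, computed as
    f_hat(S) = <f, chi_S> = E_x[f(x) * prod_{i in S} x_i]."""
    points = list(product([-1, 1], repeat=n))
    return {
        S: sum(f(x) * prod(x[i] for i in range(n) if S[i]) for x in points)
           / len(points)
        for S in product([0, 1], repeat=n)  # S encoded as an indicator vector
    }

# Majority of 3 bits; its expansion is (x1 + x2 + x3)/2 - (x1*x2*x3)/2.
maj3 = lambda x: 1 if sum(x) > 0 else -1
coeffs = fourier_coefficients(maj3, 3)

# Parseval: for a {-1,1}-valued function, the squared coefficients sum to 1.
parseval = sum(c * c for c in coeffs.values())
```

For the majority of three bits this recovers the coefficients $1/2$ on the three singletons and $-1/2$ on $\{1,2,3\}$.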
Fourier degree and Fourier levels
The degree of a function $f\colon \{-1,1\}^n \to \mathbb{R}$ is the maximum $d$ such that $\hat{f}(S) \neq 0$ for some set $S$ of size $d$. In other words, the degree of $f$ is its degree as a multilinear polynomial.

It is convenient to decompose the Fourier expansion into levels: the Fourier coefficient $\hat{f}(S)$ is on level $|S|$.

The degree $d$ part of $f$ is

$$f^{=d} = \sum_{|S|=d} \hat{f}(S)\,\chi_S.$$

It is obtained from $f$ by zeroing out all Fourier coefficients not on level $d$.

We similarly define $f^{>d}, f^{<d}, f^{\geq d}, f^{\leq d}$.
Influence
The $i$'th influence of a function $f\colon \{-1,1\}^n \to \mathbb{R}$ can be defined in two equivalent ways:

$$\operatorname{Inf}_i[f] = \operatorname{E}\left[\left(\frac{f - f^{\oplus i}}{2}\right)^2\right] = \sum_{S \ni i} \hat{f}(S)^2, \qquad f^{\oplus i}(x_1,\ldots,x_n) = f(x_1,\ldots,x_{i-1},-x_i,x_{i+1},\ldots,x_n).$$

If $f$ is Boolean then $\operatorname{Inf}_i[f]$ is the probability that flipping the $i$'th coordinate flips the value of the function:

$$\operatorname{Inf}_i[f] = \Pr[f(x) \neq f^{\oplus i}(x)].$$

If $\operatorname{Inf}_i[f] = 0$ then $f$ doesn't depend on the $i$'th coordinate.

The total influence of $f$ is the sum of all of its influences:

$$\operatorname{Inf}[f] = \sum_{i=1}^n \operatorname{Inf}_i[f] = \sum_S |S|\,\hat{f}(S)^2.$$

The total influence of a Boolean function is also the average sensitivity of the function. The sensitivity of a Boolean function $f$ at a given point is the number of coordinates $i$ such that if we flip the $i$'th coordinate, the value of the function changes. The average value of this quantity is exactly the total influence.
The total influence can also be defined using the discrete Laplacian of the Hamming graph, suitably normalized: $\operatorname{Inf}[f] = \langle f, Lf \rangle$.
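For Boolean functions the flip-probability definition lends itself to a direct computation. A minimal sketch (names ours), again using majority on three bits:

```python
from itertools import product

def influence(f, n, i):
    """i'th influence (0-indexed) of a Boolean f : {-1,1}^n -> {-1,1},
    computed as Pr[f(x) != f(x with coordinate i flipped)]."""
    points = list(product([-1, 1], repeat=n))
    flips = 0
    for x in points:
        y = list(x)
        y[i] = -y[i]
        if f(x) != f(tuple(y)):
            flips += 1
    return flips / len(points)

# For Maj_3, flipping one vote changes the outcome exactly when the other
# two votes disagree, so each influence is 1/2 and the total influence is
# 3/2; the spectral formula sum_S |S| f_hat(S)^2 = 3/4 + 3/4 agrees.
maj3 = lambda x: 1 if sum(x) > 0 else -1
total_influence = sum(influence(maj3, 3, i) for i in range(3))
```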
A generalized form of influence is the $\rho$-stable influence, defined by

$$\operatorname{Inf}_i^{(\rho)}[f] = \operatorname{Stab}_\rho[\operatorname{D}_i f] = \sum_{S \ni i} \rho^{|S|-1}\hat{f}(S)^2.$$

The corresponding total influence is

$$\operatorname{I}^{(\rho)}[f] = \frac{d}{d\rho}\operatorname{Stab}_\rho[f] = \sum_S |S|\,\rho^{|S|-1}\hat{f}(S)^2.$$
One can prove that a function $f\colon\{-1,1\}^n\to[-1,1]$ has at most “constantly” many “stably-influential” coordinates: the number of coordinates $i$ satisfying $\operatorname{Inf}_i^{(\rho)}[f] \geq \varepsilon$ is at most $\frac{1}{(1-\rho)\varepsilon}$, independently of $n$.
Noise stability
Given $-1 \leq \rho \leq 1$, we say that two random vectors $x,y \in \{-1,1\}^n$ are $\rho$-correlated if the marginal distributions of $x,y$ are uniform, and $\operatorname{E}[x_i y_i] = \rho$. Concretely, we can generate a pair of $\rho$-correlated random variables by first choosing $x,z \in \{-1,1\}^n$ uniformly at random, and then choosing $y$ according to one of the following two equivalent rules, applied independently to each coordinate (the first rule requires $\rho \geq 0$):

$$y_i = \begin{cases} x_i & \text{w.p. } \rho, \\ z_i & \text{w.p. } 1-\rho, \end{cases} \qquad\text{or}\qquad y_i = \begin{cases} x_i & \text{w.p. } \frac{1+\rho}{2}, \\ -x_i & \text{w.p. } \frac{1-\rho}{2}. \end{cases}$$

We denote this distribution by $y \sim N_\rho(x)$.

The noise stability of a function $f\colon\{-1,1\}^n\to\mathbb{R}$ at $\rho$ can be defined in two equivalent ways:

$$\operatorname{Stab}_\rho[f] = \operatorname{E}_{x;\,y\sim N_\rho(x)}[f(x)f(y)] = \sum_{S\subseteq[n]} \rho^{|S|}\hat{f}(S)^2.$$

For $\delta \in [0,1]$, the noise sensitivity of $f$ at $\delta$ is

$$\operatorname{NS}_\delta[f] = \frac{1}{2} - \frac{1}{2}\operatorname{Stab}_{1-2\delta}[f].$$

If $f$ is Boolean, then this is the probability that the value of $f$ changes if we flip each coordinate with probability $\delta$, independently.
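The equivalence of the two definitions can be checked exactly for small $n$; this sketch (our own, using exact enumeration rather than sampling) evaluates $\operatorname{E}[f(x)f(y)]$ over all pairs, weighting each by the probability that $y \sim N_\rho(x)$:

```python
from itertools import product

def noise_stability(f, n, rho):
    """Stab_rho[f] = E[f(x) f(y)] for rho-correlated (x, y): x is uniform
    and each y_i independently equals x_i with probability (1 + rho)/2."""
    points = list(product([-1, 1], repeat=n))
    total = 0.0
    for x in points:
        for y in points:
            p = 1.0
            for xi, yi in zip(x, y):
                p *= (1 + rho) / 2 if xi == yi else (1 - rho) / 2
            total += f(x) * f(y) * p
    return total / len(points)

# For Maj_3 the spectral formula gives Stab_rho = (3*rho + rho**3)/4.
maj3 = lambda x: 1 if sum(x) > 0 else -1
rho = 0.6
stab = noise_stability(maj3, 3, rho)
```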
Noise operator
The noise operator $T_\rho$ is an operator taking a function $f\colon\{-1,1\}^n\to\mathbb{R}$ and returning another function $T_\rho f\colon\{-1,1\}^n\to\mathbb{R}$ given by

$$(T_\rho f)(x) = \operatorname{E}_{y\sim N_\rho(x)}[f(y)] = \sum_{S\subseteq[n]} \rho^{|S|}\hat{f}(S)\,\chi_S(x).$$

When $\rho > 0$, the noise operator can also be defined using a continuous-time Markov chain in which each bit is flipped independently with rate 1. The operator $T_\rho$ corresponds to running this Markov chain for time $\frac{1}{2}\ln\frac{1}{\rho}$ starting at $x$, and taking the average value of $f$ at the final state. This Markov chain is generated by the Laplacian of the Hamming graph, and this relates total influence to the noise operator.

Noise stability can be defined in terms of the noise operator: $\operatorname{Stab}_\rho[f] = \langle f, T_\rho f \rangle$.
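A direct implementation of the expectation formula (a sketch; the names are ours) also confirms $\operatorname{Stab}_\rho[f] = \langle f, T_\rho f\rangle$ on a small example:

```python
from itertools import product

def noise_operator(f, n, rho):
    """(T_rho f)(x) = E_{y ~ N_rho(x)}[f(y)], with each y_i independently
    equal to x_i with probability (1 + rho)/2."""
    points = list(product([-1, 1], repeat=n))
    def Tf(x):
        total = 0.0
        for y in points:
            p = 1.0
            for xi, yi in zip(x, y):
                p *= (1 + rho) / 2 if xi == yi else (1 - rho) / 2
            total += p * f(y)
        return total
    return Tf

# Smoothing the dictatorship f(x) = x_1: T_rho f = rho * x_1, so the
# inner product <f, T_rho f> equals rho, matching Stab_rho[x_1] = rho.
f = lambda x: x[0]
Tf = noise_operator(f, 2, 0.5)
stab = sum(f(x) * Tf(x) for x in product([-1, 1], repeat=2)) / 4
```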
Hypercontractivity
For $1 \leq q < \infty$, the $L_q$-norm of a function $f\colon\{-1,1\}^n\to\mathbb{R}$ is defined by

$$\|f\|_q = \sqrt[q]{\operatorname{E}[|f|^q]}.$$

We also define $\|f\|_\infty = \max_x |f(x)|$.

The hypercontractivity theorem states that for any $q > 2$ and $\rho \leq 1/\sqrt{q-1}$,

$$\|T_\rho f\|_q \leq \|f\|_2 \quad\text{and}\quad \|T_\rho f\|_2 \leq \|f\|_{q'},$$

where $q' = q/(q-1)$ is the Hölder conjugate of $q$.

Hypercontractivity is closely related to the logarithmic Sobolev inequalities of functional analysis.[2]

A similar result for $q < 2$ is known as reverse hypercontractivity.[3]
p-Biased analysis
In many situations the input to the function is not uniformly distributed over $\{-1,1\}^n$, but instead has a bias toward $-1$ or $1$. In these situations it is customary to consider functions over the domain $\{0,1\}^n$. For $0 < p < 1$, the $p$-biased measure $\mu_p$ is given by

$$\mu_p(x) = p^{\sum_i x_i}(1-p)^{\sum_i (1-x_i)}.$$

This measure can be generated by choosing each coordinate independently to be 1 with probability $p$ and 0 with probability $1-p$.

The classical Fourier characters are no longer orthogonal with respect to this measure. Instead, we use the following characters:

$$\omega_S(x) = \left(\sqrt{\frac{p}{1-p}}\right)^{|\{i\in S \,:\, x_i=0\}|}\left(-\sqrt{\frac{1-p}{p}}\right)^{|\{i\in S \,:\, x_i=1\}|}.$$

The $p$-biased Fourier expansion of $f$ is the expansion of $f$ as a linear combination of $p$-biased characters:

$$f = \sum_{S\subseteq[n]} \hat{f}(S)\,\omega_S.$$

We can extend the definitions of influence and the noise operator to the $p$-biased setting by using their spectral definitions.
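A quick numeric check (our own sketch) confirms that the characters $\omega_S$ are orthonormal with respect to $\mu_p$:

```python
from itertools import product
from math import sqrt

p = 0.3  # an arbitrary bias for the check

def mu(x):
    """p-biased measure of a point x in {0,1}^n."""
    return p ** sum(x) * (1 - p) ** (len(x) - sum(x))

def omega(S, x):
    """p-biased character omega_S (S is a tuple of coordinates)."""
    zeros = sum(1 for i in S if x[i] == 0)
    ones = sum(1 for i in S if x[i] == 1)
    return sqrt(p / (1 - p)) ** zeros * (-sqrt((1 - p) / p)) ** ones

def inner(S, T, n):
    """<omega_S, omega_T> with respect to mu_p."""
    return sum(mu(x) * omega(S, x) * omega(T, x)
               for x in product([0, 1], repeat=n))
```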
Influence
The $i$'th influence is given by

$$\operatorname{Inf}_i[f] = \sum_{S\ni i}\hat{f}(S)^2 = p(1-p)\operatorname{E}[(f-f^{\oplus i})^2].$$

The total influence is the sum of the individual influences:

$$\operatorname{Inf}[f] = \sum_{i=1}^n \operatorname{Inf}_i[f] = \sum_S |S|\,\hat{f}(S)^2.$$
Noise operator
A pair of $\rho$-correlated random variables can be obtained by choosing $x,z \sim \mu_p$ independently and $y \sim N_\rho(x)$, where $y$ is given by

$$y_i = \begin{cases} x_i & \text{w.p. } \rho, \\ z_i & \text{w.p. } 1-\rho. \end{cases}$$

The noise operator is then given by

$$(T_\rho f)(x) = \sum_{S\subseteq[n]} \rho^{|S|}\hat{f}(S)\,\omega_S(x) = \operatorname{E}_{y\sim N_\rho(x)}[f(y)].$$

Using this we can define the noise stability and the noise sensitivity, as before.
Russo–Margulis formula
The Russo–Margulis formula (also called the Margulis–Russo formula[1]) states that for monotone Boolean functions $f\colon\{0,1\}^n\to\{0,1\}$,

$$\frac{d}{dp}\operatorname{E}_{x\sim\mu_p}[f(x)] = \frac{\operatorname{Inf}[f]}{p(1-p)} = \sum_{i=1}^n \Pr[f \neq f^{\oplus i}].$$

Both the influence and the probabilities are taken with respect to $\mu_p$, and on the right-hand side we have the average sensitivity of $f$. If we think of $f$ as a property, then the formula states that as $p$ varies, the derivative of the probability that $f$ occurs at $p$ equals the average sensitivity at $p$.

The Russo–Margulis formula is key for proving sharp threshold theorems such as Friedgut's.
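The formula can be verified numerically on a small monotone example; the sketch below (names ours) uses OR on two bits, where $\operatorname{E}_{\mu_p}[f] = 1-(1-p)^2$ and flipping bit $i$ changes the value exactly when the other bit is 0:

```python
from itertools import product

f = lambda x: x[0] | x[1]  # OR on {0,1}^2, a monotone Boolean function

def mu(x, p):
    return p ** sum(x) * (1 - p) ** (len(x) - sum(x))

def expectation(p):
    return sum(mu(x, p) * f(x) for x in product([0, 1], repeat=2))

def avg_sensitivity(p):
    """Right-hand side: sum_i Pr_{mu_p}[f != f with bit i flipped]."""
    total = 0.0
    for i in range(2):
        for x in product([0, 1], repeat=2):
            y = list(x)
            y[i] = 1 - y[i]
            if f(x) != f(tuple(y)):
                total += mu(x, p)
    return total

# Left-hand side: numerical derivative of p -> E_{mu_p}[f] at p = 0.3.
p, h = 0.3, 1e-6
derivative = (expectation(p + h) - expectation(p - h)) / (2 * h)
```

Both sides equal $2(1-p)$, as the closed form of the expectation predicts.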
Gaussian space
One of the deepest results in the area, the invariance principle, connects the distribution of functions on the Boolean cube $\{-1,1\}^n$ to their distribution on Gaussian space, which is the space $\mathbb{R}^n$ endowed with the standard $n$-dimensional Gaussian measure.

Many of the basic concepts of Fourier analysis on the Boolean cube have counterparts in Gaussian space:

- The counterpart of the Fourier expansion in Gaussian space is the Hermite expansion, which is an expansion to an infinite sum (converging in $L^2$) of multivariate Hermite polynomials.
- The counterpart of total influence or average sensitivity for the indicator function of a set is Gaussian surface area, which is the Minkowski content of the boundary of the set.
- The counterpart of the noise operator is the Ornstein–Uhlenbeck operator (related to the Mehler transform), given by $(U_\rho f)(x) = \operatorname{E}_{z\sim N(0,I)}[f(\rho x + \sqrt{1-\rho^2}\,z)]$, or alternatively by $(U_\rho f)(x) = \operatorname{E}[f(y)]$, where $(x,y)$ is a pair of $\rho$-correlated standard Gaussians.
- Hypercontractivity holds (with appropriate parameters) in Gaussian space as well.

Gaussian space is more symmetric than the Boolean cube (for example, it is rotation invariant), and supports continuous arguments which may be harder to carry out in the discrete setting of the Boolean cube. The invariance principle links the two settings, and allows deducing results on the Boolean cube from results on Gaussian space.
Basic results
Friedgut–Kalai–Naor theorem
If $f\colon\{-1,1\}^n\to\{-1,1\}$ has degree at most 1, then $f$ is either constant, equal to a coordinate, or equal to the negation of a coordinate. In particular, $f$ is a dictatorship: a function depending on at most one coordinate.

The Friedgut–Kalai–Naor theorem,[4] also known as the FKN theorem, states that if $f$ almost has degree 1 then it is close to a dictatorship. Quantitatively, if $f\colon\{-1,1\}^n\to\{-1,1\}$ and $\|f^{>1}\|^2 < \varepsilon$, then $f$ is $O(\varepsilon)$-close to a dictatorship, that is, $\Pr[f \neq g] = O(\varepsilon)$ for some Boolean dictatorship $g$, or equivalently, $\|f-g\|^2 = O(\varepsilon)$ for some Boolean dictatorship $g$.

Similarly, a Boolean function of degree at most $d$ depends on at most $C\cdot 2^d$ coordinates, making it a junta (a function depending on a constant number of coordinates), where $C$ is an absolute constant equal to at least 1.5, and at most 4.41, as shown by Wellens.[5] The Kindler–Safra theorem[6] generalizes the Friedgut–Kalai–Naor theorem to this setting. It states that if $f\colon\{-1,1\}^n\to\{-1,1\}$ satisfies $\|f^{>d}\|^2 = \varepsilon$ then $f$ is $O(\varepsilon)$-close to a Boolean function of degree at most $d$.
Kahn–Kalai–Linial theorem
The Poincaré inequality for the Boolean cube (which follows from formulas appearing above) states that for a function $f\colon\{-1,1\}^n\to\mathbb{R}$,

$$\operatorname{Var}[f] \leq \operatorname{Inf}[f] \leq \deg f \cdot \operatorname{Var}[f].$$

This implies that $\max_i \operatorname{Inf}_i[f] \geq \operatorname{Var}[f]/n$.

The Kahn–Kalai–Linial theorem,[7] also known as the KKL theorem, states that if $f$ is Boolean then $\max_i \operatorname{Inf}_i[f] = \Omega\left(\operatorname{Var}[f]\cdot\frac{\log n}{n}\right)$.

The bound given by the Kahn–Kalai–Linial theorem is tight, and is achieved by the Tribes function of Ben-Or and Linial:[8]

$$(x_{1,1}\land\cdots\land x_{1,w})\lor\cdots\lor(x_{2^w,1}\land\cdots\land x_{2^w,w}).$$

The Kahn–Kalai–Linial theorem was one of the first results in the area, and was the one introducing hypercontractivity into the context of Boolean functions.
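For intuition, the Tribes function is an OR of $2^w$ disjoint ANDs (“tribes”) of width $w$, on $n = w2^w$ variables. A small sketch (ours):

```python
from itertools import product

def tribes(w, x):
    """Tribes: OR of 2**w disjoint ANDs of width w; x has w * 2**w bits."""
    assert len(x) == w * 2 ** w
    return any(all(x[t * w : (t + 1) * w]) for t in range(2 ** w))

# Each tribe is all-True with probability 2**-w, so
# Pr[Tribes = True] = 1 - (1 - 2**-w)**(2**w), which tends to 1 - 1/e,
# keeping the function roughly balanced.
w = 2  # 4 tribes of width 2, i.e. n = 8 variables
n = w * 2 ** w
prob = sum(tribes(w, x) for x in product([False, True], repeat=n)) / 2 ** n
```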
Friedgut's junta theorem
If $f\colon\{-1,1\}^n\to\{-1,1\}$ is an $M$-junta (a function depending on at most $M$ coordinates) then $\operatorname{Inf}[f] \leq M$ according to the Poincaré inequality.

Friedgut's theorem[9] is a converse to this result. It states that for any $\varepsilon > 0$, the function $f$ is $\varepsilon$-close to a Boolean junta depending on $e^{O(\operatorname{Inf}[f]/\varepsilon)}$ coordinates.

Combined with the Russo–Margulis lemma, Friedgut's junta theorem implies that for every $p$, every monotone function is close to a junta with respect to $\mu_q$ for some $q \approx p$.
Invariance principle
The invariance principle[10] generalizes the Berry–Esseen theorem to non-linear functions.

The Berry–Esseen theorem states (among other things) that if $f = \sum_{i=1}^n c_i x_i$ and no $c_i$ is too large compared to the rest, then the distribution of $f$ over $\{-1,1\}^n$ is close to a normal distribution with the same mean and variance.

The invariance principle (in a special case) informally states that if $f$ is a multilinear polynomial of bounded degree over $x_1,\ldots,x_n$ and all influences of $f$ are small, then the distribution of $f$ under the uniform measure over $\{-1,1\}^n$ is close to its distribution in Gaussian space.

More formally, let $\psi$ be a univariate Lipschitz function, let $f = \sum_S \hat{f}(S)\chi_S$, let $k = \deg f$, and let $\varepsilon = \max_i \operatorname{Inf}_i[f]$. Suppose that $\sum_S \hat{f}(S)^2 \leq 1$. Then

$$\left|\operatorname{E}_{x\sim\{-1,1\}^n}[\psi(f(x))] - \operatorname{E}_{g\sim N(0,I)}[\psi(f(g))]\right| = O(k 9^k \varepsilon).$$

By choosing appropriate $\psi$, this implies that the distributions of $f$ under both measures are close in CDF distance, which is given by $\sup_t |\Pr[f(x) \leq t] - \Pr[f(g) \leq t]|$.
The invariance principle was the key ingredient in the original proof of the Majority is Stablest theorem.
Some applications
Linearity testing
A Boolean function $f\colon\{-1,1\}^n\to\{-1,1\}$ is linear if it satisfies $f(xy) = f(x)f(y)$, where $xy = (x_1y_1,\ldots,x_ny_n)$. It is not hard to show that the Boolean linear functions are exactly the characters $\chi_S$.

In property testing we want to test whether a given function is linear. It is natural to try the following test: choose $x,y\in\{-1,1\}^n$ uniformly at random, and check that $f(xy) = f(x)f(y)$. If $f$ is linear then it always passes the test. Blum, Luby and Rubinfeld[11] showed that if the test passes with probability $1-\varepsilon$ then $f$ is $O(\varepsilon)$-close to a Fourier character. Their proof was combinatorial.

Bellare et al.[12] gave an extremely simple Fourier-analytic proof, which also shows that if the test succeeds with probability $\frac{1}{2}+\varepsilon$, then $f$ is $\Omega(\varepsilon)$-correlated with a Fourier character. Their proof relies on the following formula for the success probability of the test:

$$\frac{1}{2} + \frac{1}{2}\sum_{S\subseteq[n]}\hat{f}(S)^3.$$
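The formula is exact and easy to confirm by enumeration for small $n$ (a sketch; the names are ours):

```python
from itertools import product
from math import prod

def blr_success(f, n):
    """Exact success probability of the BLR test:
    Pr[f(x*y) = f(x) * f(y)] over independent uniform x, y in {-1,1}^n."""
    pts = list(product([-1, 1], repeat=n))
    good = sum(f(tuple(a * b for a, b in zip(x, y))) == f(x) * f(y)
               for x in pts for y in pts)
    return good / len(pts) ** 2

def spectral_formula(f, n):
    """1/2 + 1/2 * sum_S f_hat(S)^3."""
    pts = list(product([-1, 1], repeat=n))
    total = 0.0
    for S in product([0, 1], repeat=n):
        fhat = sum(f(x) * prod(x[i] for i in range(n) if S[i])
                   for x in pts) / len(pts)
        total += fhat ** 3
    return 0.5 + 0.5 * total

# A character always passes; Maj_3 (coefficients 1/2, 1/2, 1/2, -1/2)
# passes with probability 1/2 + (3/8 - 1/8)/2 = 5/8.
maj3 = lambda x: 1 if sum(x) > 0 else -1
```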
Arrow's theorem
Arrow's impossibility theorem states that for three or more candidates, the only unanimous voting rule for which there is always a Condorcet winner is a dictatorship.

The usual proof of Arrow's theorem is combinatorial. Kalai[13] gave an alternative proof of this result in the case of three candidates using Fourier analysis. If $f\colon\{-1,1\}^n\to\{-1,1\}$ is the rule that assigns a winner among two candidates given their relative orders in the votes, then the probability that there is a Condorcet winner given a uniformly random vote is $\frac{3}{4} - \frac{3}{4}\operatorname{Stab}_{-1/3}[f]$, from which the theorem easily follows.

The FKN theorem implies that if $f$ is a rule for which there is almost always a Condorcet winner, then $f$ is close to a dictatorship.
Sharp thresholds
A classical result in the theory of random graphs states that the probability that a $G(n,p)$ random graph is connected tends to $e^{-e^{-c}}$ if $p = \frac{\ln n + c}{n}$. This is an example of a sharp threshold: the width of the "threshold window", which is $O(1/n)$, is asymptotically smaller than the threshold itself, which is roughly $\frac{\ln n}{n}$. In contrast, the probability that a $G(n,p)$ graph contains a triangle tends to $e^{-c^3/6}$ when $p = \frac{c}{n}$. Here both the threshold window and the threshold itself are $\Theta(1/n)$, and so this is a coarse threshold.

Friedgut's sharp threshold theorem[14] states, roughly speaking, that a monotone graph property (a graph property is a property which doesn't depend on the names of the vertices) has a sharp threshold unless it is correlated with the appearance of small subgraphs. This theorem has been widely applied to analyze random graphs and percolation.

On a related note, the KKL theorem implies that the width of the threshold window is always at most $O(1/\log n)$.[15]
Majority is stablest
Let $\operatorname{Maj}_n$ denote the majority function on $n$ coordinates. Sheppard's formula gives the asymptotic noise stability of majority:

$$\operatorname{Stab}_\rho[\operatorname{Maj}_n] \longrightarrow 1 - \frac{2}{\pi}\arccos\rho.$$

This is related to the probability that if we choose $x\in\{-1,1\}^n$ uniformly at random and form $y$ by flipping each bit of $x$ with probability $\frac{1-\rho}{2}$, then the majority stays the same: $\operatorname{Stab}_\rho[\operatorname{Maj}_n] = 2\Pr[\operatorname{Maj}_n(x) = \operatorname{Maj}_n(y)] - 1$.
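Sheppard's asymptotic can already be observed at small $n$. In this sketch (our own) the exact stability of $\operatorname{Maj}_n$, computed by enumeration, decreases toward $1 - \frac{2}{\pi}\arccos\rho$ as $n$ grows through odd values:

```python
from itertools import product
from math import acos, pi

def maj(x):
    return 1 if sum(x) > 0 else -1

def stability(n, rho):
    """Exact Stab_rho[Maj_n]: E[f(x) f(y)] with y_i = x_i w.p. (1+rho)/2."""
    pts = list(product([-1, 1], repeat=n))
    total = 0.0
    for x in pts:
        for y in pts:
            p = 1.0
            for xi, yi in zip(x, y):
                p *= (1 + rho) / 2 if xi == yi else (1 - rho) / 2
            total += maj(x) * maj(y) * p
    return total / len(pts)

rho = 0.5
limit = 1 - 2 / pi * acos(rho)  # Sheppard's limit; equals 1/3 at rho = 1/2
stab3, stab5 = stability(3, rho), stability(5, rho)
```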
There are Boolean functions with larger noise stability. For example, a dictatorship $x_i$ has noise stability $\rho$.

The Majority is Stablest theorem states, informally, that the only functions having noise stability larger than majority have influential coordinates. Formally, for every $0 \leq \rho < 1$ and $\varepsilon > 0$ there exists $\tau > 0$ such that if $f\colon\{-1,1\}^n\to[-1,1]$ has expectation zero and $\max_i \operatorname{Inf}_i[f] \leq \tau$, then $\operatorname{Stab}_\rho[f] \leq 1 - \frac{2}{\pi}\arccos\rho + \varepsilon$.
The first proof of this theorem used the invariance principle in conjunction with an isoperimetric theorem of Borell in Gaussian space; since then more direct proofs were devised.[16][17]
Majority is Stablest implies that the Goemans–Williamson approximation algorithm for MAX-CUT is optimal, assuming the unique games conjecture. This implication, due to Khot et al.,[18] was the impetus behind proving the theorem.