In probability theory and statistics, a standardized moment of a probability distribution is a moment (often a higher degree central moment) that is normalized, typically by a power of the standard deviation, rendering the moment scale invariant. The shape of different probability distributions can be compared using standardized moments.[1]

## Standard normalization

Let X be a random variable with a probability distribution P and mean value ${\textstyle \mu =\mathrm {E} [X]}$ (i.e. the first raw moment or moment about zero), the operator E denoting the expected value of X. Then the standardized moment of degree k is ${\displaystyle {\frac {\mu _{k}}{\sigma ^{k}}},}$[2] that is, the ratio of the kth moment about the mean

${\displaystyle \mu _{k}=\operatorname {E} \left[(X-\mu )^{k}\right]=\int _{-\infty }^{\infty }(x-\mu )^{k}P(x)\,dx,}$

to the kth power of the standard deviation,

${\displaystyle \sigma ^{k}=\mu _{2}^{k/2}=\left({\sqrt {\mathrm {E} \left[(X-\mu )^{2}\right]}}\right)^{k}.}$

The kth power is used because moments scale as ${\displaystyle x^{k},}$ meaning that ${\displaystyle \mu _{k}(\lambda X)=\lambda ^{k}\mu _{k}(X):}$ they are homogeneous functions of degree k, so the standardized moment is scale invariant. Equivalently, moments carry dimension (the kth moment has the dimension of X raised to the kth power); in the above ratio defining standardized moments, the dimensions cancel, so standardized moments are dimensionless numbers.
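A small numerical sketch (an illustration assumed here, not part of the original text) of this homogeneity property: rescaling the data by λ = 5 multiplies the kth central moment by 5^k, while the standardized moment is unchanged.

```python
import random

# Sample central moment: mu_k = (1/n) * sum((x - mean)**k).
def central_moment(xs, k):
    m = sum(xs) / len(xs)
    return sum((x - m) ** k for x in xs) / len(xs)

# Standardized moment: mu_k / sigma**k, with sigma**k = mu_2**(k/2).
def standardized_moment(xs, k):
    return central_moment(xs, k) / central_moment(xs, 2) ** (k / 2)

random.seed(0)
xs = [random.expovariate(1.0) for _ in range(100_000)]
scaled = [5.0 * x for x in xs]          # the rescaled variable 5X

# mu_3(5X) = 5**3 * mu_3(X): central moments are homogeneous of degree k.
print(central_moment(scaled, 3) / central_moment(xs, 3))   # ≈ 125
# The standardized moment is the same for X and 5X: scale invariance.
print(standardized_moment(xs, 3), standardized_moment(scaled, 3))
```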

The first four standardized moments can be written as:

| Degree k | Formula | Comment |
|---|---|---|
| 1 | ${\displaystyle {\tilde {\mu }}_{1}={\frac {\mu _{1}}{\sigma ^{1}}}={\frac {\operatorname {E} \left[(X-\mu )^{1}\right]}{(\operatorname {E} \left[(X-\mu )^{2}\right])^{1/2}}}={\frac {\mu -\mu }{\sqrt {\operatorname {E} \left[(X-\mu )^{2}\right]}}}=0}$ | The first standardized moment is zero, because the first moment about the mean is always zero. |
| 2 | ${\displaystyle {\tilde {\mu }}_{2}={\frac {\mu _{2}}{\sigma ^{2}}}={\frac {\operatorname {E} \left[(X-\mu )^{2}\right]}{(\operatorname {E} \left[(X-\mu )^{2}\right])^{2/2}}}=1}$ | The second standardized moment is one, because the second moment about the mean is equal to the variance σ². |
| 3 | ${\displaystyle {\tilde {\mu }}_{3}={\frac {\mu _{3}}{\sigma ^{3}}}={\frac {\operatorname {E} \left[(X-\mu )^{3}\right]}{(\operatorname {E} \left[(X-\mu )^{2}\right])^{3/2}}}}$ | The third standardized moment is a measure of skewness. |
| 4 | ${\displaystyle {\tilde {\mu }}_{4}={\frac {\mu _{4}}{\sigma ^{4}}}={\frac {\operatorname {E} \left[(X-\mu )^{4}\right]}{(\operatorname {E} \left[(X-\mu )^{2}\right])^{4/2}}}}$ | The fourth standardized moment is a measure of kurtosis. |
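The table can be checked empirically. The sketch below (an assumed example, not from the original text) estimates the first four standardized moments from a large sample of the exponential distribution with rate 1, whose theoretical skewness is 2 and kurtosis is 9; the first two moments come out 0 and 1 by construction.

```python
import random

# Estimate the first four standardized moments mu_k / sigma**k from a sample.
def standardized_moments(xs, max_k=4):
    n = len(xs)
    mean = sum(xs) / n
    # Central moments mu_1 .. mu_max_k about the sample mean.
    mu = {k: sum((x - mean) ** k for x in xs) / n for k in range(1, max_k + 1)}
    sigma = mu[2] ** 0.5
    return [mu[k] / sigma ** k for k in range(1, max_k + 1)]

random.seed(42)
sample = [random.expovariate(1.0) for _ in range(200_000)]
m1, m2, m3, m4 = standardized_moments(sample)
print(m1)  # 0 by construction (up to rounding)
print(m2)  # 1 by construction
print(m3)  # sample skewness, near the theoretical value 2
print(m4)  # sample kurtosis, near the theoretical value 9
```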

For skewness and kurtosis, alternative definitions exist, which are based on the third and fourth cumulant respectively.

## Other normalizations


Another scale-invariant, dimensionless measure for characteristics of a distribution is the coefficient of variation, ${\displaystyle {\frac {\sigma }{\mu }}}$. However, this is not a standardized moment: firstly because it is a reciprocal (the standard deviation appears in the numerator rather than the denominator), and secondly because ${\displaystyle \mu }$ is the first moment about zero (the mean), not the first moment about the mean (which is zero).
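A brief numerical contrast (an assumed illustration, not part of the original text): the coefficient of variation is unchanged when the data are rescaled, but, because its denominator is the mean rather than a central moment, it does change when the data are merely shifted.

```python
import statistics

# Coefficient of variation: population standard deviation over the mean.
def cv(xs):
    return statistics.pstdev(xs) / statistics.fmean(xs)

xs = [2.0, 4.0, 6.0, 8.0]
print(cv(xs))                       # sigma / mu for the original data
print(cv([3 * x for x in xs]))      # unchanged: sigma and mu both scale by 3
print(cv([x + 10 for x in xs]))     # changed: mu shifts but sigma does not
```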

See Normalization (statistics) for further normalizing ratios.

## References

2. ^ Weisstein, Eric W. "Standardized Moment". mathworld.wolfram.com. Retrieved 2016-03-30.