In statistics, expected mean squares (EMS) are the expected values of certain statistics arising in partitions of sums of squares in the analysis of variance (ANOVA). They can be used for ascertaining which statistic should appear in the denominator in an F-test for testing a null hypothesis that a particular effect is absent.

## Definition

When the total corrected sum of squares in an ANOVA is partitioned into several components, each attributed to the effect of a particular predictor variable, each of the sums of squares in that partition is a random variable that has an expected value. That expected value divided by the corresponding number of degrees of freedom is the expected mean square for that predictor variable.

## Example

The following example is from *Longitudinal Data Analysis* by Donald Hedeker and Robert D. Gibbons.[1]

Each of $s$ treatments (one of which may be a placebo) is administered to a sample of $N_h$ randomly chosen patients, on whom certain measurements $Y_{hij}$ are observed at each of $n$ specified times, for $h=1,\ldots,s$, $i=1,\ldots,N_h$ (thus the numbers of patients receiving different treatments may differ), and $j=1,\ldots,n$. We assume the sets of patients receiving different treatments are disjoint, so patients are nested within treatments and not crossed with treatments. We have

$$Y_{hij}=\mu +\gamma _{h}+\tau _{j}+(\gamma \tau )_{hj}+\pi _{i(h)}+\varepsilon _{hij}$$

where

• $\mu$ = grand mean (fixed),
• $\gamma_h$ = effect of treatment $h$ (fixed),
• $\tau_j$ = effect of time $j$ (fixed),
• $(\gamma\tau)_{hj}$ = interaction effect of treatment $h$ and time $j$ (fixed),
• $\pi_{i(h)}$ = individual difference effect for patient $i$ nested within treatment $h$ (random),
• $\varepsilon_{hij}$ = error for patient $i$ in treatment $h$ at time $j$ (random),
• $\sigma_\pi^2$ = variance of the random effect of patients nested within treatments,
• $\sigma_\varepsilon^2$ = error variance.
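The nesting structure of this model can be sketched as a short simulation. All parameter values below (treatment effects, time effects, variances) are illustrative assumptions, not taken from the text; the point is that each patient's random effect $\pi_{i(h)}$ is drawn once and shared across that patient's time points.

```python
import random

# Minimal simulation of the nested model above.
# All numeric values are illustrative assumptions, not from the source.
random.seed(0)

s, n = 3, 4                       # treatments, time points
N_h = [5, 6, 5]                   # patients per treatment (may differ)
mu = 10.0                         # grand mean
gamma = [0.0, 1.5, -0.5]          # fixed treatment effects
tau = [0.0, 0.2, 0.4, 0.6]        # fixed time effects
gt = [[0.0] * n for _ in range(s)]  # interaction effects (zero here)
sigma_pi, sigma_eps = 1.0, 0.5    # random-effect and error std. deviations

Y = {}                            # Y[(h, i, j)] = observation
for h in range(s):
    for i in range(N_h[h]):
        # Patient effect: drawn once per patient, nested within treatment h,
        # shared across all n time points for that patient.
        pi = random.gauss(0, sigma_pi)
        for j in range(n):
            eps = random.gauss(0, sigma_eps)   # fresh error per observation
            Y[(h, i, j)] = mu + gamma[h] + tau[j] + gt[h][j] + pi + eps

print(len(Y))   # n * (5 + 6 + 5) = 64 observations
```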

The total corrected sum of squares is

$$\sum _{hij}(Y_{hij}-{\overline {Y}})^{2}\quad {\text{where }}{\overline {Y}}={\frac {1}{nN}}\sum _{hij}Y_{hij}{\text{ and }}N=\sum _{h}N_{h}.$$

The ANOVA table below partitions the sum of squares (where $N=\sum_h N_h$):

| source of variability | degrees of freedom | sum of squares | mean square | expected mean square |
|---|---|---|---|---|
| treatment | $s-1$ | $\text{SS}_\text{Tr}=n\sum_{h=1}^{s}N_h(\overline{Y}_{h\cdot\cdot}-\overline{Y}_{\cdot\cdot\cdot})^2$ | $\dfrac{\text{SS}_\text{Tr}}{s-1}$ | $\sigma_\varepsilon^2+n\sigma_\pi^2+D_\text{Tr}$ |
| time | $n-1$ | $\text{SS}_\text{T}=N\sum_{j=1}^{n}(\overline{Y}_{\cdot\cdot j}-\overline{Y}_{\cdot\cdot\cdot})^2$ | $\dfrac{\text{SS}_\text{T}}{n-1}$ | $\sigma_\varepsilon^2+D_\text{T}$ |
| treatment × time | $(s-1)(n-1)$ | $\text{SS}_\text{Tr T}=\sum_{h=1}^{s}\sum_{j=1}^{n}N_h(\overline{Y}_{h\cdot j}-\overline{Y}_{h\cdot\cdot}-\overline{Y}_{\cdot\cdot j}+\overline{Y}_{\cdot\cdot\cdot})^2$ | $\dfrac{\text{SS}_\text{Tr T}}{(s-1)(n-1)}$ | $\sigma_\varepsilon^2+D_\text{Tr T}$ |
| patients within treatments | $N-s$ | $\text{SS}_\text{S(Tr)}=n\sum_{h=1}^{s}\sum_{i=1}^{N_h}(\overline{Y}_{hi\cdot}-\overline{Y}_{h\cdot\cdot})^2$ | $\dfrac{\text{SS}_\text{S(Tr)}}{N-s}$ | $\sigma_\varepsilon^2+n\sigma_\pi^2$ |
| error | $(N-s)(n-1)$ | $\text{SS}_\text{E}=\sum_{h=1}^{s}\sum_{i=1}^{N_h}\sum_{j=1}^{n}(Y_{hij}-\overline{Y}_{h\cdot j}-\overline{Y}_{hi\cdot}+\overline{Y}_{h\cdot\cdot})^2$ | $\dfrac{\text{SS}_\text{E}}{(N-s)(n-1)}$ | $\sigma_\varepsilon^2$ |
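As a numerical check, the following sketch simulates data from the model (all parameter values are illustrative assumptions) and verifies that the five sums of squares in the table partition the total corrected sum of squares exactly.

```python
import random

# Check numerically that the five sums of squares in the ANOVA table
# add up to the total corrected sum of squares.
# All parameter values are illustrative, not from the source.
random.seed(1)
s, n = 3, 4
N_h = [5, 6, 5]
N = sum(N_h)

# Y[h][i][j]: treatment h, patient i (nested in h), time j.
Y = []
for h in range(s):
    patients = []
    for i in range(N_h[h]):
        pi = random.gauss(0, 1.0)                # patient random effect
        patients.append([10 + 0.5 * h + 0.3 * j + pi + random.gauss(0, 0.7)
                         for j in range(n)])
    Y.append(patients)

# Marginal means used in the table's formulas.
g = sum(y for trt in Y for p in trt for y in p) / (n * N)   # grand mean
m_h = [sum(y for p in Y[h] for y in p) / (n * N_h[h]) for h in range(s)]
m_j = [sum(Y[h][i][j] for h in range(s) for i in range(N_h[h])) / N
       for j in range(n)]
m_hj = [[sum(Y[h][i][j] for i in range(N_h[h])) / N_h[h] for j in range(n)]
        for h in range(s)]
m_hi = [[sum(p) / n for p in Y[h]] for h in range(s)]

ss_tr = n * sum(N_h[h] * (m_h[h] - g) ** 2 for h in range(s))
ss_t = N * sum((m_j[j] - g) ** 2 for j in range(n))
ss_trt = sum(N_h[h] * (m_hj[h][j] - m_h[h] - m_j[j] + g) ** 2
             for h in range(s) for j in range(n))
ss_str = n * sum((m_hi[h][i] - m_h[h]) ** 2
                 for h in range(s) for i in range(N_h[h]))
ss_e = sum((Y[h][i][j] - m_hj[h][j] - m_hi[h][i] + m_h[h]) ** 2
           for h in range(s) for i in range(N_h[h]) for j in range(n))
ss_total = sum((y - g) ** 2 for trt in Y for p in trt for y in p)

# The partition is exact (up to floating-point error).
assert abs(ss_total - (ss_tr + ss_t + ss_trt + ss_str + ss_e)) < 1e-8
```

Note that the decomposition remains exact even with unequal group sizes $N_h$, because the design is balanced over time within each patient.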

### Use in F-tests

A null hypothesis of interest is that there is no difference between effects of different treatments, thus no difference among treatment means. This may be expressed by saying $D_\text{Tr}=0$ (with the notation used in the table above). Under this null hypothesis, the expected mean square for effects of treatments is $\sigma_\varepsilon^2+n\sigma_\pi^2$.

The numerator in the F-statistic for testing this hypothesis is the mean square due to differences among treatments, i.e. it is $\text{SS}_\text{Tr}/(s-1)$. The denominator, however, is not $\text{SS}_\text{E}/\big((N-s)(n-1)\big)$. The reason is that the random variable below, although it has an F-distribution under the null hypothesis, is not observable (it is not a statistic), because its value depends on the unobservable parameters $\sigma_\pi^2$ and $\sigma_\varepsilon^2$.

$$\frac{\left.\dfrac{\text{SS}_\text{Tr}}{\sigma_\varepsilon^2+n\sigma_\pi^2}\right/(s-1)}{\left.\dfrac{\text{SS}_\text{E}}{\sigma_\varepsilon^2}\right/\big((N-s)(n-1)\big)}\neq \frac{\text{SS}_\text{Tr}/(s-1)}{\text{SS}_\text{E}/\big((N-s)(n-1)\big)}$$

Instead, one uses as the test statistic the following random variable that is not defined in terms of $\text{SS}_\text{E}$:

$$F=\frac{\left.\dfrac{\text{SS}_\text{Tr}}{\sigma_\varepsilon^2+n\sigma_\pi^2}\right/(s-1)}{\left.\dfrac{\text{SS}_\text{S(Tr)}}{\sigma_\varepsilon^2+n\sigma_\pi^2}\right/(N-s)}=\frac{\text{SS}_\text{Tr}/(s-1)}{\text{SS}_\text{S(Tr)}/(N-s)}$$
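The key point is that the unknown factor $\sigma_\varepsilon^2+n\sigma_\pi^2$ cancels, so $F$ is computable from the data alone. A sketch on simulated null data (all parameter values are illustrative assumptions) shows that only $\text{SS}_\text{Tr}$ and $\text{SS}_\text{S(Tr)}$ are needed:

```python
import random

# Compute the observable F statistic above from simulated data.
# Data are generated under the null hypothesis (no treatment effect);
# all numeric values are illustrative, not from the source.
random.seed(2)
s, n = 3, 4
N_h = [5, 6, 5]
N = sum(N_h)

Y = []
for h in range(s):
    patients = []
    for i in range(N_h[h]):
        pi = random.gauss(0, 1.0)            # patient random effect
        patients.append([10 + pi + random.gauss(0, 0.7) for j in range(n)])
    Y.append(patients)

g = sum(y for trt in Y for p in trt for y in p) / (n * N)   # grand mean
m_h = [sum(y for p in Y[h] for y in p) / (n * N_h[h]) for h in range(s)]
m_hi = [[sum(p) / n for p in Y[h]] for h in range(s)]

ss_tr = n * sum(N_h[h] * (m_h[h] - g) ** 2 for h in range(s))
ss_str = n * sum((m_hi[h][i] - m_h[h]) ** 2
                 for h in range(s) for i in range(N_h[h]))

# Denominator is the mean square for patients within treatments,
# whose EMS matches the treatment EMS under the null hypothesis.
F = (ss_tr / (s - 1)) / (ss_str / (N - s))
print(F)   # compare with F quantiles on (s - 1, N - s) degrees of freedom
```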

## Notes and references

1. Donald Hedeker and Robert D. Gibbons. *Longitudinal Data Analysis*. Wiley-Interscience, 2006, pp. 21–24.