In the statistical theory of the design of experiments, blocking is the arranging of experimental units that are similar to one another into groups (blocks) based on one or more variables. These variables are chosen carefully to minimize the impact of their variability on the observed outcomes. There are different ways that blocking can be implemented, resulting in different confounding effects. However, the different methods share the same purpose: to control variability introduced by specific factors that could influence the outcome of an experiment. Blocking has its roots in the work of the statistician Ronald Fisher, following his development of ANOVA.^{[1]}
The use of blocking in experimental design has an evolving history that spans multiple disciplines. The foundational concepts of blocking date back to the early 20th century with statisticians like Ronald A. Fisher. His work in developing analysis of variance (ANOVA) laid the groundwork for grouping experimental units to control for extraneous variables. Blocking evolved over the years, leading to the formalization of randomized block designs and Latin square designs.^{[1]} Today, blocking still plays a pivotal role in experimental design, and in recent years, advancements in statistical software and computational capabilities have allowed researchers to explore more intricate blocking designs.
Blocking reduces unexplained variability. Its principle lies in the fact that variability which cannot be overcome (e.g. needing two batches of raw material to produce one container of a chemical) is confounded, or aliased, with a higher-order interaction to eliminate its influence on the end product.^{[2]} Higher-order interactions are usually of the least importance (consider that the temperature of a reactor or the batch of raw materials is more important than the combination of the two; this is especially true when more factors—3, 4, or more—are present); thus it is preferable to confound this variability with a higher-order interaction.^{[2]}
In the examples listed above, a nuisance variable is a variable that is not the primary focus of the study but can affect the outcomes of the experiment.^{[3]} Nuisance variables are potential sources of variability that, if not controlled or accounted for, may confound the interpretation of the relationship between the independent and dependent variables.
To address nuisance variables, researchers can employ different methods such as blocking or randomization. Blocking involves grouping experimental units based on levels of the nuisance variable to control for its influence. Randomization helps distribute the effects of nuisance variables evenly across treatment groups.
By using one of these methods to account for nuisance variables, researchers can enhance the internal validity of their experiments, ensuring that the effects observed are more likely attributable to the manipulated variables rather than extraneous influences.
In the first example provided above, the sex of the patient would be a nuisance variable. For example, suppose the drug is a diet pill and the researchers want to test the effect of the diet pill on weight loss. The explanatory variable is the diet pill and the response variable is the amount of weight loss. Although the sex of the patient is not the main focus of the experiment (the effect of the drug is), it is possible that the sex of the individual will affect the amount of weight lost.
In the statistical theory of the design of experiments, blocking is the arranging of experimental units in groups (blocks) that are similar to one another. Typically, a blocking factor is a source of variability that is not of primary interest to the experimenter.^{[3]}^{[4]}
In probability theory, the blocks method consists of splitting a sample into blocks (groups) separated by smaller subblocks so that the blocks can be considered almost independent.^{[5]} The blocks method helps prove limit theorems in the case of dependent random variables.
The blocks method was introduced by S. Bernstein.^{[6]} The method was successfully applied in the theory of sums of dependent random variables and in extreme value theory.^{[7]}^{[8]}^{[9]}
In our previous diet pill example, a blocking factor could be the sex of a patient. We could put individuals into one of two blocks (male or female), and within each block, we can randomly assign the patients to either the diet pill (treatment) or the placebo pill (control). By blocking on sex, this source of variability is controlled, allowing for a clearer interpretation of how the diet pills affect weight loss.
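The blocked assignment just described can be sketched in code. This is a minimal illustration, not a production randomization routine; the patient identifiers, block sizes, and seed are made-up values:

```python
import random

def blocked_assignment(subjects, block_of, treatments, seed=0):
    """Randomly assign treatments within each block (here, sex),
    cycling through treatments so counts stay balanced per block."""
    rng = random.Random(seed)
    blocks = {}
    for s in subjects:
        blocks.setdefault(block_of[s], []).append(s)
    assignment = {}
    for members in blocks.values():
        rng.shuffle(members)  # randomize order within the block
        for i, subject in enumerate(members):
            assignment[subject] = treatments[i % len(treatments)]
    return assignment

# hypothetical patients for the diet-pill example
patients = ["p1", "p2", "p3", "p4", "p5", "p6", "p7", "p8"]
sex = {"p1": "F", "p2": "F", "p3": "M", "p4": "M",
       "p5": "F", "p6": "M", "p7": "F", "p8": "M"}
assign = blocked_assignment(patients, sex, ["pill", "placebo"])
```

Because treatments are cycled within each block, every block ends up with an equal number of treatment and control patients, which is what makes the sex effect separable from the drug effect.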
A nuisance factor is used as a blocking factor if every level of the primary factor occurs the same number of times with each level of the nuisance factor.^{[3]} The analysis of the experiment will focus on the effect of varying levels of the primary factor within each block of the experiment.
The general rule is:
Blocking is used to remove the effects of a few of the most important nuisance variables. Randomization is then used to reduce the contaminating effects of the remaining nuisance variables. For important nuisance variables, blocking will yield higher significance in the variables of interest than randomizing.^{[10]}
Implementing blocking in experimental design involves a series of steps to effectively control for extraneous variables and enhance the precision of treatment effect estimates.
Identify potential factors that are not the primary focus of the study but could introduce variability.
Carefully choose blocking factors based on their relevance to the study as well as their potential to confound the primary factors of interest.^{[11]}
There are consequences to partitioning a certain sized experiment into a certain number of blocks as the number of blocks determines the number of confounded effects.^{[12]}
Experimental units may be randomly assigned to treatment conditions within each block, which helps ensure that any unaccounted-for variability is spread evenly across treatment groups. However, depending on how treatments are assigned to blocks, a different number of effects may be confounded.^{[4]} Because both the number of confounded effects and which specific effects are confounded can be chosen, deliberate assignment of treatments to blocks is superior to random assignment.^{[4]}
By running a different design for each replicate, where a different effect gets confounded each time, the interaction effects are partially confounded instead of completely sacrificing one single effect.^{[4]} Replication enhances the reliability of results and allows for a more robust assessment of treatment effects.^{[12]}
One useful way to look at a randomized block experiment is to consider it as a collection of completely randomized experiments, each run within one of the blocks of the total experiment.^{[3]}
Name of design | Number of factors k | Number of runs n |
---|---|---|
2-factor RBD | 2 | L_{1} * L_{2} |
3-factor RBD | 3 | L_{1} * L_{2} * L_{3} |
4-factor RBD | 4 | L_{1} * L_{2} * L_{3} * L_{4} |
k-factor RBD | k | L_{1} * L_{2} * ... * L_{k} |
with L_{i} = the number of levels (settings) of factor i.
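The run counts in the table are simply the product of the factors' level counts; a minimal sketch, with example level counts chosen arbitrarily:

```python
from math import prod

def rbd_runs(levels):
    """Number of runs n for a k-factor randomized block design,
    where levels[i-1] is L_i, the number of levels of factor i."""
    return prod(levels)

# e.g. a 3-factor design with 2, 3 and 4 levels per factor
n = rbd_runs([2, 3, 4])  # 2 * 3 * 4 = 24 runs
```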
Suppose engineers at a semiconductor manufacturing facility want to test whether different wafer implant material dosages have a significant effect on resistivity measurements after a diffusion process taking place in a furnace. They have four different dosages they want to try and enough experimental wafers from the same lot to run three wafers at each of the dosages.
The nuisance factor they are concerned with is "furnace run" since it is known that each furnace run differs from the last and impacts many process parameters.
An ideal way to run this experiment would be to run all the 4x3=12 wafers in the same furnace run. That would eliminate the nuisance furnace factor completely. However, regular production wafers have furnace priority, and only a few experimental wafers are allowed into any furnace run at the same time.
A non-blocked way to run this experiment would be to run each of the twelve experimental wafers, in random order, one per furnace run. That would increase the experimental error of each resistivity measurement by the run-to-run furnace variability and make it more difficult to study the effects of the different dosages. The blocked way to run this experiment, assuming you can convince manufacturing to let you put four experimental wafers in a furnace run, would be to put four wafers with different dosages in each of three furnace runs. The only randomization would be choosing which of the three wafers with dosage 1 would go into furnace run 1, and similarly for the wafers with dosages 2, 3 and 4.
Let X_{1} be dosage "level" and X_{2} be the blocking factor furnace run. Then the experiment can be described as follows:
Before randomization, the design trials look like:
X_{1} | X_{2} |
---|---|
1 | 1 |
1 | 2 |
1 | 3 |
2 | 1 |
2 | 2 |
2 | 3 |
3 | 1 |
3 | 2 |
3 | 3 |
4 | 1 |
4 | 2 |
4 | 3 |
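The design trials above, and the blocked wafer schedule described earlier, can be sketched as follows. The wafer labels and the random seed are illustrative assumptions; the only randomization is which replicate wafer of each dosage goes into which furnace run:

```python
from itertools import product
import random

dosages = [1, 2, 3, 4]    # X1: treatment levels (implant dosage)
furnace_runs = [1, 2, 3]  # X2: blocking factor levels (furnace run)

# before randomization: every dosage appears once in every furnace run
trials = list(product(dosages, furnace_runs))

# three replicate wafers exist for each dosage (hypothetical labels)
wafers = {d: [f"wafer_{d}_{r}" for r in range(1, 4)] for d in dosages}

# randomize which replicate of each dosage lands in which furnace run
rng = random.Random(1)
schedule = {}
for d in dosages:
    order = wafers[d][:]
    rng.shuffle(order)
    for run, wafer in zip(furnace_runs, order):
        schedule.setdefault(run, []).append((wafer, d))
```

Each furnace run (block) ends up containing exactly one wafer at each of the four dosages, which is what keeps the dosage effect free of run-to-run furnace variability.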
An alternate way of summarizing the design trials would be to use a 4x3 matrix whose 4 rows are the levels of the treatment X_{1} and whose columns are the 3 levels of the blocking variable X_{2}. The cells in the matrix have indices that match the X_{1}, X_{2} combinations above.
Treatment | Block 1 | Block 2 | Block 3 |
---|---|---|---|
1 | 1 | 1 | 1 |
2 | 1 | 1 | 1 |
3 | 1 | 1 | 1 |
4 | 1 | 1 | 1 |
By extension, note that the trials for any k-factor randomized block design are simply the cell indices of a k-dimensional matrix.
The model for a randomized block design with one nuisance variable is
Y_{ij} = μ + T_{i} + B_{j} + random error
where
Y_{ij} is any observation for which X_{1} = i and X_{2} = j (X_{1} is the primary factor and X_{2} is the blocking factor),
μ is the general location parameter (i.e., the mean),
T_{i} is the effect of being in treatment i (of factor X_{1}), and
B_{j} is the effect of being in block j (of factor X_{2}).
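The randomized block model can be illustrated by simulation. The mean, effect sizes, and noise scale below are made-up values (the effects are chosen to sum to zero, the usual identifiability constraint), and the effect estimates are simply row and column means minus the grand mean:

```python
import random

# simulate Y_ij = mu + T_i + B_j + noise for the 4-dosage, 3-run example
rng = random.Random(2)
mu = 10.0
T = {1: -1.0, 2: 0.0, 3: 0.5, 4: 0.5}   # treatment (dosage) effects
B = {1: -0.8, 2: 0.2, 3: 0.6}           # block (furnace run) effects

Y = {(i, j): mu + T[i] + B[j] + rng.gauss(0, 0.1)
     for i in T for j in B}

grand = sum(Y.values()) / len(Y)
# estimated effects: treatment (row) / block (column) means minus grand mean
T_hat = {i: sum(Y[i, j] for j in B) / len(B) - grand for i in T}
B_hat = {j: sum(Y[i, j] for i in T) / len(T) - grand for j in B}
```

With the small noise scale used here, the estimates land close to the true effects, showing how blocking lets the block contribution be separated out rather than inflating the error term.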