Mental chronometry is the scientific study of processing speed or reaction time on cognitive tasks to infer the content, duration, and temporal sequencing of mental operations. Reaction time (RT; sometimes referred to as "response time") is measured by the elapsed time between stimulus onset and an individual's response on elementary cognitive tasks (ECTs), which are relatively simple perceptual-motor tasks typically administered in a laboratory setting. Mental chronometry is one of the core methodological paradigms of human experimental, cognitive, and differential psychology, but is also commonly analyzed in psychophysiology, cognitive neuroscience, and behavioral neuroscience to help elucidate the biological mechanisms underlying perception, attention, and decision-making in humans and other species.
Mental chronometry uses measurements of elapsed time between sensory stimulus onsets and subsequent behavioral responses to study the time course of information processing in the nervous system. Distributional characteristics of response times such as means and variance are considered useful indices of processing speed and efficiency, indicating how fast an individual can execute task-relevant mental operations. Behavioral responses are typically button presses, but eye movements, vocal responses, and other observable behaviors are often used. Reaction time is thought to be constrained by the speed of signal transmission in white matter as well as the processing efficiency of neocortical gray matter.
The use of mental chronometry in psychological research is far-ranging, encompassing nomothetic models of information processing in the human auditory and visual systems, as well as differential psychology topics such as the role of individual differences in RT in human cognitive ability, aging, and a variety of clinical and psychiatric outcomes. The experimental approach to mental chronometry includes topics such as the empirical study of vocal and manual latencies, visual and auditory attention, temporal judgment and integration, language and reading, movement time and motor response, perceptual and decision time, memory, and subjective time perception. Conclusions about information processing drawn from RT are often made with consideration of task experimental design, limitations in measurement technology, and mathematical modeling.
The conception of human reaction to an external stimulus being mediated by a biological interface (such as a nerve) is nearly as old as the philosophical discipline of science itself. Enlightenment thinkers like René Descartes proposed that the reflexive response to pain, for example, is carried by some sort of fiber—what we would recognize as part of the nervous system today—up to the brain, where it is then processed as the subjective experience of pain. However, this biological stimulus-response reflex was thought by Descartes and others to occur instantaneously, and therefore not to be subject to objective measurement.
The first documentation of human reaction time as a scientific variable would come several centuries later, from practical concerns that arose in the field of astronomy. In 1820, German astronomer Friedrich Bessel applied himself to the problem of accuracy in recording stellar transits, which was typically done by using the ticking of a metronome to estimate the time at which a star passed the hairline of a telescope. Bessel noticed timing discrepancies under this method between records of multiple astronomers, and sought to improve accuracy by taking these individual differences in timing into account. This led various astronomers to seek out ways to minimize these differences between individuals, which came to be known as the "personal equation" of astronomical timing. This phenomenon was explored in detail by English statistician Karl Pearson, who designed one of the first apparatuses to measure it.
Purely psychological inquiries into the nature of reaction time came about in the mid-1850s. Psychology as a quantitative, experimental science has historically been considered as principally divided into two disciplines: experimental and differential psychology. The scientific study of mental chronometry, one of the earliest developments in scientific psychology, took on a microcosm of this division as early as the mid-1800s, when scientists such as Hermann von Helmholtz and Wilhelm Wundt designed reaction time tasks to attempt to measure the speed of neural transmission. Wundt, for example, conducted experiments to test whether emotional provocations affected pulse and breathing rate using a kymograph.
Sir Francis Galton is typically credited as the founder of differential psychology, which seeks to determine and explain the mental differences between individuals. He was the first to use rigorous RT tests with the express intention of determining averages and ranges of individual differences in mental and behavioral traits in humans. Galton hypothesized that differences in intelligence would be reflected in variation of sensory discrimination and speed of response to stimuli, and he built various machines to test different measures of this, including RT to visual and auditory stimuli. His tests involved a selection of over 10,000 men, women and children from the London public.
Welford (1980) notes that the historical study of human reaction times was broadly concerned with five distinct classes of research problems, some of which evolved into paradigms that are still in use today. These domains are broadly described as sensory factors, response characteristics, preparation, choice, and conscious accompaniments.
Early researchers noted that varying the sensory qualities of the stimulus affected response times, wherein increasing the perceptual salience of stimuli tends to decrease reaction times. This variation can be brought about by a number of manipulations, several of which are discussed below. In general, the variation in reaction times produced by manipulating sensory factors is likely more a result of differences in peripheral mechanisms than of central processes.
One of the earliest attempts to mathematically model the effects of the sensory qualities of stimuli on reaction time duration came from the observation that increasing the intensity of a stimulus tended to produce shorter response times. For example, Henri Piéron (1920) proposed formulae to model this relationship of the general form:

RT = t₀ + kI^(−β)

where I represents stimulus intensity, k represents a reducible time value, t₀ represents an irreducible time value, and β represents a variable exponent that differs across senses and conditions. This formulation reflects the observation that reaction time decreases as stimulus intensity increases, down to the constant t₀, which represents a theoretical lower limit below which human physiology cannot meaningfully operate.
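As a numerical sketch of Piéron's relationship (the function name and all parameter values below are illustrative assumptions, not published estimates):

```python
# Pieron's law: RT = t0 + k * I**(-beta)
# t0 (irreducible time), k (reducible time), and beta are illustrative values.

def pieron_rt(intensity, t0=0.15, k=0.5, beta=0.33):
    """Predicted reaction time (seconds) for a stimulus of a given intensity."""
    return t0 + k * intensity ** (-beta)

# RT shrinks toward the irreducible minimum t0 as intensity grows.
for i in (1, 10, 100, 1000):
    print(i, round(pieron_rt(i), 3))
```

Whatever the particular constants, the predicted RT falls steeply at low intensities and flattens out near t₀ at high intensities, matching the observation described above.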
The effects of stimulus intensity on reducing RTs were found to be relative rather than absolute in the early 1930s. One of the first observations of this phenomenon comes from the research of Carl Hovland, who demonstrated with a series of candles placed at different focal distances that the effects of stimulus intensity on RT depended on previous level of adaptation.
In addition to stimulus intensity, varying stimulus strength (that is, "amount" of stimulus available to the sensory apparatus per unit time) can also be achieved by increasing either the area or the duration of the presented stimulus in an RT task. This effect was documented in early research for response times to the sense of taste by varying the area over taste buds for detection of a taste stimulus, and for the size of visual stimuli as amount of area in the visual field. Similarly, increasing the duration of a stimulus available in a reaction time task was found to produce slightly faster reaction times to visual and auditory stimuli, though these effects tend to be small and are largely a consequence of the sensitivity of the sensory receptors.
Reaction time to a stimulus administered in a given sensory modality is highly dependent on the afferent conduction times, state-change properties, and range of sensory discrimination inherent to that modality. For example, early researchers found that an auditory signal is able to reach central processing mechanisms within 8–10 ms, while a visual stimulus tends to take around 20–40 ms. Animal senses also differ considerably in their ability to rapidly change state, with some systems able to change almost instantaneously and others much more slowly. For example, the vestibular system, which controls the perception of one's position in space, updates much more slowly than does the auditory system. The range of sensory discrimination of a given sense also varies considerably both within and across sensory modality. For example, Kiesow (1903) found in a reaction time task of taste that human subjects are more sensitive to the presence of salt on the tongue than of sugar, reflected in an RT faster by more than 100 ms to salt than to sugar.
Early studies of the effects of response characteristics on reaction times were chiefly concerned with the physiological factors that influence the speed of response. For example, Travis (1929) found in a key-pressing RT task that 75% of participants tended to incorporate the down-phase of the common tremor rate of an extended finger, which is about 8–12 tremors per second, in depressing a key in response to a stimulus. This tendency suggested that response time distributions have an inherent periodicity, and that a given RT is influenced by the point during the tremor cycle at which a response is solicited. This finding was further supported by subsequent work in the mid-1900s showing that responses were less variable when stimuli were presented near the top or bottom points of the tremor cycle.
Anticipatory muscle tension is another physiological factor that early researchers identified as a predictor of response times, wherein muscle tension is interpreted as an index of cortical arousal level. That is, if physiological arousal is high upon stimulus onset, greater preexisting muscular tension facilitates faster responses; if arousal is low, weaker muscle tension predicts slower responses. However, too much arousal (and therefore muscle tension) was also found to negatively affect performance on RT tasks as a consequence of an impaired signal-to-noise ratio.
As with many sensory manipulations, such physiological response characteristics as predictors of RT operate largely outside of central processing, which differentiates these effects from those of preparation, discussed below.
Another observation first made by early chronometric research was that a "warning" sign preceding the appearance of a stimulus typically resulted in shorter reaction times. This short warning period, referred to as "expectancy" in this foundational work, is measured in simple RT tasks as the length of the intervals between the warning and the presentation of the stimulus to be reacted to. The importance of the length and variability of expectancy in mental chronometry research was first observed in the early 1900s, and remains an important consideration in modern research. It is reflected today in modern research in the use of a variable foreperiod that precedes stimulus presentation.
This relationship can be summarized in simple terms by the equation:

RT = a + b log(1/p)

where a and b are constants related to the task and p denotes the probability of a stimulus appearing at any given time.
In simple RT tasks, constant foreperiods of about 300 ms over a series of trials tend to produce the fastest responses for a given individual, and responses lengthen as the foreperiod becomes longer, an effect that has been demonstrated up to foreperiods of many hundreds of seconds. Foreperiods of variable interval, if presented in equal frequency but in random order, tend to produce slower RTs when the intervals are shorter than the mean of the series, and can be faster or slower when greater than the mean. Whether held constant or variable, foreperiods of less than 300 ms may produce delayed RTs because processing of the warning may not have had time to complete before the stimulus arrives. This type of delay has significant implications for the question of serially organized central processing, a complex topic that has received much empirical attention in the century following this foundational work.
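The expectancy relation above can be sketched with hypothetical constants; the function name, a, b, and the probabilities below are illustrative assumptions:

```python
import math

# Expectancy relation: RT = a + b * log(1/p)
# a and b are illustrative task constants; p is the probability that the
# stimulus appears at any given moment.

def expected_rt(p, a=0.2, b=0.05):
    """Predicted RT (seconds) when the stimulus has momentary probability p."""
    return a + b * math.log(1.0 / p)

# Rarer, less-expected stimuli predict longer reaction times.
for p in (0.5, 0.1, 0.01):
    print(p, round(expected_rt(p), 3))
```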
The number of possible options was recognized early as a significant determinant of response time, with reaction times lengthening as a function of both the number of possible signals and possible responses.
The first scientist to recognize the importance of response options on RT was Franciscus Donders (1869). Donders found that simple RT is shorter than recognition RT, and that choice RT is longer than both. Donders also devised a subtraction method to analyze the time it took for mental operations to take place. By subtracting simple RT from choice RT, for example, it is possible to estimate how much time the added discrimination and response-selection processes require. This method provides a way to investigate the cognitive processes underlying simple perceptual-motor tasks, and formed the basis of subsequent developments.
Although Donders' work paved the way for future research in mental chronometry tests, it was not without its drawbacks. His insertion method, often referred to as "pure insertion", was based on the assumption that inserting a particular complicating requirement into an RT paradigm would not affect the other components of the test. This assumption—that the incremental effect on RT was strictly additive—was not able to hold up to later experimental tests, which showed that the insertions were able to interact with other portions of the RT paradigm. Despite this, Donders' theories are still of interest and his ideas are still used in certain areas of psychology, which now have the statistical tools to use them more accurately.
The interest in the content of consciousness that typified early studies of Wundt and other structuralist psychologists largely fell out of favor with the advent of behaviorism in the 1920s. Nevertheless, the study of conscious accompaniments in the context of reaction time was an important historical development in the late 1800s and early 1900s. For example, Wundt and his associate Oswald Külpe often studied reaction time by asking participants to describe the conscious process that occurred during performance on such tasks.
Chronometric measurements from standard reaction time paradigms are raw values of time elapsed between stimulus onset and motor response. These times are typically measured in milliseconds (ms), and are considered to be ratio scale measurements with equal intervals and a true zero.
Response times on chronometric tasks are typically summarized by five categories of measurement: central tendency of response time across a number of individual trials for a given person or task condition, usually captured by the arithmetic mean but occasionally by the median and less commonly the mode; intraindividual variability, the variation in individual responses within or across conditions of a task; skew, a measure of the asymmetry of reaction time distributions across trials; slope, the difference between mean RTs across tasks of different type or complexity; and accuracy or error rate, the proportion of correct responses for a given person or task condition.
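These categories can be illustrated for a single hypothetical session (the RT values and accuracy flags below are invented for illustration):

```python
import statistics

# Hypothetical RT sample (ms) for one person in one task condition.
rts = [212, 230, 198, 205, 340, 221, 215, 260, 209, 233]
correct = [True, True, True, False, True, True, True, True, True, True]

central = statistics.mean(rts)          # central tendency
variability = statistics.stdev(rts)     # intraindividual variability
median_rt = statistics.median(rts)
accuracy = sum(correct) / len(correct)  # proportion of correct responses

# A long right tail pulls the mean above the median (positive skew).
print(round(central, 1), median_rt, round(variability, 1), accuracy)
```

Slope would additionally require RTs from a second, more complex condition, from which the difference in per-condition means is computed.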
Human response times on simple reaction time tasks are usually on the order of 200 ms. The processes that occur during this brief time enable the brain to perceive the surrounding environment, identify an object of interest, decide on an action in response to the object, and issue a motor command to execute the movement. These processes span the domains of perception and movement, and involve perceptual decision making and motor planning. Many researchers consider the lower limit of a valid response time trial to be somewhere between 100 and 200 ms, which can be considered the bare minimum of time needed for physiological processes such as stimulus perception and for motor responses. Responses faster than this often result from an "anticipatory response", wherein the person's motor response has already been programmed and is in progress before the onset of the stimulus, and likely do not reflect the process of interest.
Reaction time trials from any given individual are almost always distributed non-symmetrically and skewed to the right, rarely following a normal (Gaussian) distribution. The typical observed pattern is that mean RT will be larger than median RT, and median RT will in turn be greater than the mode (the peak of the distribution). One of the most obvious reasons for this standard pattern is that while it is possible for any number of factors to extend the response time of a given trial, it is not physiologically possible to shorten RT on a given trial past the limits of human perception (typically considered to be somewhere between 100–200 ms), nor is it logically possible for the duration of a trial to be negative.
One reason for variability that extends the right tail of an individual's RT distribution is momentary attentional lapses. To improve the reliability of individual response times, researchers typically require a subject to perform multiple trials, from which a measure of the 'typical' or baseline response time can be calculated. Taking the mean of the raw response time is rarely an effective method of characterizing the typical response time, and alternative approaches (such as modeling the entire response time distribution) are often more appropriate.
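A brief simulation (with invented parameters) shows why: a handful of lapse trials inflates the mean far more than the median.

```python
import random
import statistics

random.seed(1)

# Hypothetical sketch: baseline RTs (ms) plus occasional attentional lapses
# that add a long right tail to the distribution.
baseline = [random.gauss(250, 20) for _ in range(95)]
lapses = [random.gauss(600, 50) for _ in range(5)]
trials = baseline + lapses

# Lapses drag the mean upward much more than the median, which is why the
# raw mean often mischaracterizes the 'typical' response time.
print(round(statistics.mean(trials)), round(statistics.median(trials)))
```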
A number of different approaches have been developed to analyze RT measurements, particularly in how to effectively deal with issues that arise from trimming outliers, data transformations, measurement reliability, speed-accuracy tradeoffs, mixture models, convolution models, stochastic-order comparisons, and the mathematical modeling of stochastic variation in timed responses.
Main article: Hick's law
Building on Donders' early observations of the effects of number of response options on RT duration, W. E. Hick (1952) devised an RT experiment which presented a series of nine tests in which there are n equally likely choices. The experiment measured the subject's RT based on the number of possible choices during any given trial. Hick showed that the individual's RT increased by a constant amount as a function of available choices, or the "uncertainty" involved in which reaction stimulus would appear next. Uncertainty is measured in "bits", which are defined in information theory as the quantity of information that reduces uncertainty by half. In Hick's experiment, the RT is found to be a function of the binary logarithm of the number of available choices (n). This phenomenon is called "Hick's law" and is said to be a measure of the "rate of gain of information". The law is usually expressed by the formula:
RT = a + b log₂(n)

where a and b are constants representing the intercept and slope of the function, and n is the number of alternatives. The Jensen box is a more recent application of Hick's law. Hick's law has interesting modern applications in marketing, where restaurant menus and web interfaces (among other things) take advantage of its principles in striving to achieve speed and ease of use for the consumer.
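With illustrative intercept and slope values (not fitted to any data), Hick's law predicts a constant RT increment per doubling of the number of alternatives:

```python
import math

# Hick's law: RT = a + b * log2(n)
# a (intercept) and b (slope) below are illustrative constants.

def hick_rt(n, a=0.2, b=0.15):
    """Mean RT (seconds) predicted for n equally likely alternatives."""
    return a + b * math.log2(n)

# Each doubling of the number of alternatives adds the constant b to RT.
for n in (2, 4, 8):
    print(n, round(hick_rt(n), 3))
```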
The drift-diffusion model (DDM) is a well-defined mathematical formulation to explain observed variance in response times and accuracy across trials in a (typically two-choice) reaction time task. This model and its variants account for these distributional features by partitioning a reaction time trial into a non-decision residual stage and a stochastic "diffusion" stage, where the actual response decision is generated. The distribution of reaction times across trials is determined by the rate at which evidence accumulates in neurons with an underlying "random walk" component. The drift rate (v) is the average rate at which this evidence accumulates in the presence of this random noise. The decision threshold (a) represents the width of the decision boundary, or the amount of evidence needed before a response is made. The trial terminates when the accumulating evidence reaches either the correct or the incorrect boundary.
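A minimal simulation conveys the idea; the parameter values, step size, and the symmetric boundaries at ±a are simplifying assumptions rather than a fitted model:

```python
import random

random.seed(7)

# Minimal drift-diffusion sketch: noisy evidence accumulates at drift rate v
# until it crosses the upper (correct) or lower (error) boundary at +/- a;
# non-decision time t_nd is added to the decision time.
def ddm_trial(v=0.3, a=1.0, t_nd=0.2, dt=0.001, noise=1.0):
    x, t = 0.0, 0.0
    while abs(x) < a:
        x += v * dt + noise * random.gauss(0.0, 1.0) * dt ** 0.5
        t += dt
    return t + t_nd, x >= a  # (reaction time, correct?)

trials = [ddm_trial() for _ in range(200)]
accuracy = sum(correct for _, correct in trials) / len(trials)
print(round(accuracy, 2))
```

A positive drift rate makes the correct boundary the more likely terminus, while the noise term produces the trial-to-trial variability in both RT and accuracy that the model is designed to explain.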
Modern chronometric research typically uses variations on one or more of the following broad categories of reaction time task paradigms, which need not be mutually exclusive in all cases.
Simple reaction time is the time required for an observer to respond to the presence of a stimulus. For example, a subject might be asked to press a button as soon as a light or sound appears. Mean RT for college-age individuals is about 160 milliseconds to detect an auditory stimulus, and approximately 190 milliseconds to detect a visual stimulus.
The mean RTs for sprinters at the Beijing Olympics were 166 ms for males and 169 ms for females, but in one out of 1,000 starts they can achieve 109 ms and 121 ms, respectively. This study also concluded that longer female RTs can be an artifact of the measurement method used, suggesting that the starting block sensor system might overlook a female false-start due to insufficient pressure on the pads. The authors suggested compensating for this threshold would improve false-start detection accuracy with female runners.
The IAAF has a controversial rule that if an athlete moves in less than 100 ms, it counts as a false start and the athlete may be (since 2009, must be) disqualified, despite an IAAF-commissioned study in 2009 indicating that top sprinters are sometimes able to react in 80–85 ms.
Recognition or go/no-go RT tasks require that the subject press a button when one stimulus type appears and withhold a response when another stimulus type appears. For example, the subject may have to press the button when a green light appears and not respond when a blue light appears.
Discrimination RT involves comparing pairs of simultaneously presented visual displays and then pressing one of two buttons according to which display appears brighter, longer, heavier, or greater in magnitude on some dimension of interest. Discrimination RT paradigms fall into three basic categories, involving stimuli that are administered simultaneously, sequentially, or continuously.
In a classic example of a simultaneous discrimination RT paradigm, conceived by social psychologist Leon Festinger, two vertical lines of differing lengths are shown side-by-side to participants simultaneously. Participants are asked to identify as quickly as possible whether the line on the right is longer or shorter than the line on the left. One of these lines would retain a constant length across trials, while the other took on a range of 15 different values, each one presented an equal number of times across the session.
An example of the second type of discrimination paradigm, which administers stimuli successively or serially, is a classic 1963 study in which participants are given two sequentially lifted weights and asked to judge whether the second was heavier or lighter than the first.
The third broad type of discrimination RT task, wherein stimuli are administered continuously, is exemplified by a 1955 experiment in which participants are asked to sort packs of shuffled playing cards into two piles depending on whether the card had a large or small number of dots on its back. Reaction time in such a task is often measured by the total amount of time it takes to complete the task.
Choice reaction time (CRT) tasks require distinct responses for each possible class of stimulus. In a choice reaction time task that calls for a single response to several different signals, four distinct processes are thought to occur in sequence: first, the sensory qualities of the stimuli are received by the sense organs and transmitted to the brain; second, the signal is identified and processed by the individual; third, the choice decision is made; and fourth, the motor response corresponding to that choice is initiated and carried out.
CRT tasks can be highly variable. They can involve stimuli of any sensory modality, most typically of visual or auditory nature, and require responses that are typically indicated by pressing a key or button. For example, the subject might be asked to press one button if a red light appears and a different button if a yellow light appears. The Jensen box is an example of an instrument designed to measure choice RT with visual stimuli and keypress response. Response criteria can also be in the form of vocalizations, such as in the original version of the Stroop task, where participants read aloud lists of color names printed in colored ink. Modern versions of the Stroop task, which use single stimulus pairs for each trial, are also examples of a multi-choice CRT paradigm with vocal responding.
Models of choice reaction time are closely aligned with Hick's law, which posits that average reaction times lengthen as a function of more available choices. Hick's law can be reformulated as:

RT = b log₂(n + 1)

where RT denotes mean RT across trials, b is a constant, and n + 1 represents the sum of possibilities including "no signal". This accounts for the fact that in a choice task, the subject must not only make a choice but also first detect whether a signal has occurred at all (equivalent to a in the original formulation).
With the advent of the functional neuroimaging techniques of PET and fMRI, psychologists started to modify their mental chronometry paradigms for functional imaging. Although psycho(physio)logists have been using electroencephalographic measurements for decades, the images obtained with PET have attracted great interest from other branches of neuroscience, popularizing mental chronometry among a wider range of scientists in recent years. In this context, subjects perform RT-based tasks while neuroimaging reveals the parts of the brain involved in the underlying cognitive processes.
With the invention of functional magnetic resonance imaging (fMRI), such techniques have been used to measure activity while subjects were asked to identify whether a presented digit was above or below five. According to Sternberg's additive theory, the stages involved in performing this task include: encoding, comparing against the stored representation of five, selecting a response, and then checking for error in the response. The fMRI image presents the specific locations where these stages occur in the brain while performing this simple mental chronometry task.
In the 1980s, neuroimaging experiments allowed researchers to detect activity in localized brain areas by injecting radionuclides and using positron emission tomography (PET) to detect them. fMRI has also been used to detect the precise brain areas that are active during mental chronometry tasks. Many studies have shown that a small number of widely distributed brain areas are involved in performing these cognitive tasks.
Current medical reviews indicate that signaling through the dopamine pathways originating in the ventral tegmental area is strongly positively correlated with improved (shortened) RT; e.g., dopaminergic pharmaceuticals like amphetamine have been shown to expedite responses during interval timing, while dopamine antagonists (specifically, for D2-type receptors) produce the opposite effect. Similarly, age-related loss of dopamine from the striatum, as measured by SPECT imaging of the dopamine transporter, strongly correlates with slowed RT.
The assumption that mental operations can be measured by the time required to perform them is considered foundational to modern cognitive psychology. To understand how different brain systems acquire, process and respond to stimuli through the time course of information processing by the nervous system, experimental psychologists often use response times as a dependent variable under different experimental conditions. This approach to the study of mental chronometry is typically aimed at testing theory-driven hypotheses intended to explain observed relationships between measured RT and some experimentally manipulated variable of interest, which often make precisely formulated mathematical predictions.
The distinction between this experimental approach and the use of chronometric tools to investigate individual differences is more conceptual than practical, and many modern researchers integrate tools, theories and models from both areas to investigate psychological phenomena. Nevertheless, it is a useful organizing principle to distinguish the two areas in terms of their research questions and the purposes for which a number of chronometric tasks were devised. The experimental approach to mental chronometry has been used to investigate a variety of cognitive systems and functions that are common to all humans, including memory, language processing and production, attention, and aspects of visual and auditory perception. The following is a brief overview of several well-known experimental tasks in mental chronometry.
Saul Sternberg (1966) devised an experiment wherein subjects were told to remember a set of unique digits in short-term memory. Subjects were then given a probe stimulus in the form of a digit from 0–9. The subject then answered as quickly as possible whether the probe was in the previous set of digits or not. The size of the initial set of digits determined the RT of the subject. The idea is that as the size of the set of digits increases the number of processes that need to be completed before a decision can be made increases as well. So if the subject has four items in short-term memory (STM), then after encoding the information from the probe stimulus the subject needs to compare the probe to each of the four items in memory and then make a decision. If there were only two items in the initial set of digits, then only two processes would be needed. The data from this study found that for each additional item added to the set of digits, about 38 milliseconds were added to the response time of the subject. This supported the idea that a subject did a serial exhaustive search through memory rather than a serial self-terminating search. Sternberg (1969) developed a much-improved method for dividing RT into successive or serial stages, called the additive factor method.
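Sternberg's serial-exhaustive account implies a straight-line relation between memory set size and RT; the intercept below is an illustrative value, while the 38 ms/item slope comes from the result described above:

```python
# Serial exhaustive search: each memorized item adds a fixed comparison time.
# The 38 ms/item slope is from Sternberg's data; the intercept is illustrative.

def sternberg_rt(set_size, intercept=400, slope=38):
    """Predicted RT (ms) to judge whether a probe is in the memory set."""
    return intercept + slope * set_size

# Exhaustive search predicts the same slope for 'yes' and 'no' responses,
# since every item is compared before the decision regardless of a match.
print([sternberg_rt(n) for n in (1, 2, 4, 6)])
```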
Main article: Mental rotation
Shepard and Metzler (1971) presented a pair of three-dimensional shapes that were identical or mirror-image versions of one another. RT to determine whether they were identical or not was a linear function of the angular difference between their orientations, whether in the picture plane or in depth. They concluded that the observers performed a constant-rate mental rotation to align the two objects so they could be compared. Cooper and Shepard (1973) presented a letter or digit that was either normal or mirror-reversed, and presented either upright or at angles of rotation in units of 60 degrees. The subject had to identify whether the stimulus was normal or mirror-reversed. Response time increased roughly linearly as the orientation of the letter deviated from upright (0 degrees) to inverted (180 degrees), and then decreased again as the orientation approached 360 degrees. The authors concluded that the subjects mentally rotate the image the shortest distance to upright, and then judge whether it is normal or mirror-reversed.
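The shortest-distance rotation account can be sketched as a simple function of angular departure from upright; the intercept and per-degree rate below are illustrative assumptions, not the published estimates:

```python
# RT grows with the shortest angular distance to upright, peaking at 180
# degrees; intercept and ms_per_degree are illustrative values.

def rotation_rt(angle_deg, intercept=500, ms_per_degree=2.0):
    """Predicted RT (ms) to judge a character rotated angle_deg from upright."""
    shortest = min(angle_deg % 360, 360 - angle_deg % 360)
    return intercept + ms_per_degree * shortest

# RT rises from 0 to 180 degrees, then falls symmetrically toward 360.
print([rotation_rt(a) for a in (0, 60, 120, 180, 240, 300)])
```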
Mental chronometry has been used in identifying some of the processes associated with understanding a sentence. This type of research typically revolves around the differences in processing four types of sentences: true affirmative (TA), false affirmative (FA), false negative (FN), and true negative (TN). A picture can be presented with an associated sentence that falls into one of these four categories. The subject then decides if the sentence matches the picture or does not. The type of sentence determines how many processes need to be performed before a decision can be made. According to the data from Clark and Chase (1972) and Just and Carpenter (1971), the TA sentences are the simplest and take the least time, followed by FA, FN, and TN sentences.
Hierarchical network models of memory were largely discarded due to some findings related to mental chronometry. The Teachable Language Comprehender (TLC) model proposed by Collins and Quillian (1969) had a hierarchical structure indicating that recall speed in memory should be based on the number of levels in memory traversed in order to find the necessary information. But the experimental results did not agree with this prediction. For example, a subject will reliably answer that a robin is a bird more quickly than they will answer that an ostrich is a bird, despite these questions accessing the same two levels in memory. This led to the development of spreading activation models of memory (e.g., Collins & Loftus, 1975), wherein links in memory are not organized hierarchically but by importance instead.
In the late 1960s, Michael Posner developed a series of letter-matching studies to measure the mental processing time of several tasks associated with recognition of a pair of letters. The simplest task was the physical match task, in which subjects were shown a pair of letters and had to identify whether the two letters were physically identical or not. The next task was the name match task, where subjects had to identify whether two letters had the same name. The task involving the most cognitive processes was the rule match task, in which subjects had to determine whether the two letters presented were both vowels or not.
The physical match task was the simplest: subjects had to encode the letters, compare them to each other, and make a decision. When doing the name match task, subjects were forced to add a cognitive step before making a decision: they had to search memory for the names of the letters, and then compare those before deciding. In the rule-based task they also had to categorize the letters as either vowels or consonants before making their choice. The time taken to perform the rule match task was longer than for the name match task, which in turn was longer than for the physical match task. Using the subtraction method, experimenters were able to determine the approximate amount of time that it took subjects to perform each of the cognitive processes associated with these tasks.
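The subtraction logic here reduces to differencing mean RTs across tasks assumed to differ by exactly one processing stage. The numbers below are illustrative placeholders, not Posner's published means:

```python
# Subtraction method applied to the letter-matching tasks.
# Mean RTs below are hypothetical, not Posner's published values.
physical_match_ms = 450  # encode, compare, decide
name_match_ms = 520      # adds retrieval of letter names from memory
rule_match_ms = 580      # adds vowel/consonant categorization

# Each difference estimates the duration of the single added stage
name_retrieval_ms = name_match_ms - physical_match_ms
categorization_ms = rule_match_ms - name_match_ms

print(f"name retrieval: {name_retrieval_ms} ms, categorization: {categorization_ms} ms")
```

The method's key assumption (pure insertion) is that adding a stage leaves the duration of the other stages unchanged, an assumption the additive factor method was later designed to test rather than presuppose.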
Differential psychologists frequently investigate the causes and consequences of information processing modeled by chronometric studies from experimental psychology. While traditional experimental studies of RT are conducted within-subjects with RT as a dependent measure affected by experimental manipulations, a differential psychologist studying RT will typically hold conditions constant to ascertain between-subjects variability in RT and its relationships with other psychological variables.
For more than a century, researchers have generally reported medium-sized correlations between RT and measures of intelligence: there is thus a tendency for individuals with higher IQ to be faster on RT tests. Although its mechanistic underpinnings are still debated, the relationship between RT and cognitive ability is today as well-established an empirical fact as any phenomenon in psychology. A 2008 literature review found the mean correlation between various measures of reaction time and intelligence to be −0.24 (SD = 0.07).
Empirical research into the nature of the relationship between reaction time and measures of intelligence dates back to the early 1900s, with some early researchers reporting a near-perfect correlation in a sample of five students. The first review of these incipient studies, in 1933, analyzed over two dozen studies and found a smaller but reliable association between measures of intelligence and the production of faster responses on a variety of RT tasks.
Up through the beginning of the 21st century, psychologists studying reaction time and intelligence continued to find such associations, but were largely unable to agree about the true size of the association between reaction time and psychometric intelligence in the general population. This is likely because the majority of samples studied had been selected from universities and had unusually high mental ability scores relative to the general population. In 2001, psychologist Ian J. Deary published the first large-scale study of intelligence and reaction time in a representative population sample across a range of ages, finding a correlation between psychometric intelligence and simple reaction time of −0.31 and four-choice reaction time of −0.49.
Researchers have yet to develop consensus for a unified neurophysiological theory that fully explains the basis of the relationship between RT and cognitive ability. It may reflect more efficient information processing, better attentional control, or the integrity of neuronal processes. Such a theory would need to explain several unique features of the relationship, several of which are discussed below.
Twin and adoption studies have shown that performance on chronometric tasks is heritable. Mean RT across these studies shows a heritability of around 0.44, meaning that 44% of the variance in mean RT is associated with genetic differences, while the standard deviation of RTs shows a heritability of around 0.20. Additionally, mean RTs and measures of IQ have been found to be genetically correlated in the range of 0.90, suggesting that the lower observed phenotypic correlation between IQ and mean RT reflects as-yet unknown environmental influences.
In 2016, a genome-wide association study (GWAS) of cognitive function found 36 genome-wide significant genetic variants associated with reaction time in a sample of around 95,000 individuals. These variants were found to span two regions on chromosome 2 and chromosome 12, which appear to be in or near genes involved in spermatogenesis and signaling activities by cytokine and growth factor receptors, respectively. This study additionally found significant genetic correlations between RT, memory, and verbal-numerical reasoning.
Neurophysiological research using event-related potentials (ERPs) has used P3 latency as a correlate of the "decision" stage of a reaction time task. These studies have generally found that the magnitude of the association between g and P3 latency increases with more demanding task conditions. Measures of P3 latency have also been found to be consistent with the worst performance rule, wherein the correlation between P3 latency quantile mean and cognitive assessment scores becomes more strongly negative with increasing quantile. Other ERP studies have found consilience with the interpretation of the g-RT relationship residing chiefly in the "decision" component of a task, wherein most of the g-related brain activity occurs following stimulus evaluation but before the motor response, while components involved in sensory processing change little across differences in g.
Although a unified theory of reaction time and intelligence has yet to achieve consensus among psychologists, diffusion modeling provides one promising theoretical model. Diffusion modeling partitions RT into residual "non-decision" and stochastic "diffusion" stages, the latter of which represents the generation of a decision in a two-choice task. This model successfully integrates the roles of mean reaction time, response time variability, and accuracy in modeling the rate of diffusion as a variable representing the accumulated weight of evidence that generates a decision in an RT task. Under the diffusion model, this evidence accumulates by undertaking a continuous random walk between two boundaries that represent each response choice in the task. Applications of this model have shown that the basis of the g-RT relationship is specifically the relationship of g with the rate of the diffusion process, rather than with the non-decision residual time. Diffusion modeling can also successfully explain the worst performance rule by assuming that the same measure of ability (diffusion rate) mediates performance on both simple and complex cognitive tasks, which has been theoretically and empirically supported.
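The two-boundary accumulation process described above can be sketched as a discrete random walk. Everything below (drift rate, boundary separation, non-decision time, noise level) is an illustrative parameterization, not a fitted model:

```python
import random

# Minimal random-walk sketch of a two-boundary diffusion process.
# All parameter values are hypothetical, chosen only for illustration.
def simulate_trial(rng, drift=0.3, boundary=20.0, non_decision_ms=300.0,
                   step_ms=1.0, noise=1.0):
    """Accumulate noisy evidence until one boundary is crossed.

    Returns (choice, rt_ms): 'upper' if the positive boundary was hit first,
    'lower' otherwise; rt_ms adds the residual non-decision time to the
    diffusion (decision) time.
    """
    evidence, t = 0.0, 0.0
    while abs(evidence) < boundary:
        evidence += drift + rng.gauss(0.0, noise)  # drift plus moment-to-moment noise
        t += step_ms
    choice = "upper" if evidence >= boundary else "lower"
    return choice, non_decision_ms + t

rng = random.Random(42)
trials = [simulate_trial(rng) for _ in range(500)]
accuracy = sum(choice == "upper" for choice, _ in trials) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
print(f"accuracy ~ {accuracy:.2f}, mean RT ~ {mean_rt:.0f} ms")
```

In this sketch a higher drift rate yields responses that are both faster and more accurate, which is the pattern the g-RT findings above attribute to individual differences in the diffusion process rather than in the non-decision residual.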
Main article: Neo-Piagetian theories of cognitive development
There is extensive recent research using mental chronometry for the study of cognitive development. Specifically, various measures of speed of processing were used to examine changes in the speed of information processing as a function of age. Kail (1991) showed that speed of processing increases exponentially from early childhood to early adulthood. Studies of RTs in young children of various ages are consistent with common observations of children engaged in activities not typically associated with chronometry. This includes speed of counting, reaching for things, repeating words, and other vocal and motor skills that develop quickly in growing children. Once early maturity is reached, there is a long period of stability until speed of processing begins to decline from middle age to senility (Salthouse, 2000). In fact, cognitive slowing is considered a good index of broader changes in the functioning of the brain and intelligence. Demetriou and colleagues, using various methods of measuring speed of processing, showed that it is closely associated with changes in working memory and thought (Demetriou, Mouyi, & Spanoudis, 2009). These relations are extensively discussed in the neo-Piagetian theories of cognitive development.
During senescence, RT deteriorates (as does fluid intelligence), and this deterioration is systematically associated with changes in many other cognitive processes, such as executive functions, working memory, and inferential processes. In the theory of Andreas Demetriou, one of the neo-Piagetian theories of cognitive development, change in speed of processing with age, as indicated by decreasing RT, is one of the pivotal factors of cognitive development.
Performance on simple and choice reaction time tasks is associated with a variety of health-related outcomes, including general, objective health composites as well as specific measures like cardiorespiratory integrity. The association between IQ and earlier all-cause mortality has been found to be chiefly mediated by a measure of reaction time. These studies generally find that faster and more accurate responses to reaction time tasks are associated with better health outcomes and longer lifespan.
Although a comprehensive study of personality traits and reaction time has yet to be conducted, several researchers have reported associations between RT and the Big Five personality factors of Extraversion and Neuroticism. While many of these studies suffer from low sample sizes (generally fewer than 200 individuals), their results are summarized here in brief along with the authors' proposed biologically plausible mechanisms.
A 2014 study measured choice RT in a sample of 63 high and 63 low Extraversion participants, and found that higher levels of Extraversion were associated with faster responses. Although the authors note this is likely a function of specific task demands rather than underlying individual differences, other authors have proposed the RT-Extraversion relationship as representing individual differences in motor response, which may be mediated by dopamine. However, these studies are difficult to interpret in light of their small samples and have yet to be replicated.
In a similar vein, other researchers have found a small (r < 0.20) association between RT and Neuroticism, wherein more neurotic individuals tended to be slower at RT tasks. The authors interpret this as reflecting a higher arousal threshold in response to stimuli of varying intensity, speculating that higher Neuroticism individuals may have relatively "weak" nervous systems. In a somewhat larger study of 242 college undergraduates, Neuroticism was found to be more substantially correlated (r ≈ 0.25) with response variability, with higher Neuroticism associated with greater RT standard deviations. The authors speculate that Neuroticism may confer greater variance in reaction time through the interference of "mental noise."
The neurotransmitter dopamine is released from projections originating in the midbrain. Manipulations of dopaminergic signaling profoundly influence interval timing, leading to the hypothesis that dopamine influences internal pacemaker, or "clock," activity (Maricq and Church, 1983; Buhusi and Meck, 2005, 2009; Lake and Meck, 2013). For instance, amphetamine, which increases concentrations of dopamine at the synaptic cleft (Maricq and Church, 1983; Zetterström et al., 1983), advances the start of responding during interval timing (Taylor et al., 2007), whereas antagonists of D2 type dopamine receptors typically slow timing (Drew et al., 2003; Lake and Meck, 2013). ... Depletion of dopamine in healthy volunteers impairs timing (Coull et al., 2012), while amphetamine releases synaptic dopamine and speeds up timing (Taylor et al., 2007).