Seismic tomography is a technique for imaging the subsurface of the Earth with seismic waves produced by earthquakes or explosions. P-, S-, and surface waves can be used for tomographic models of different resolutions based on seismic wavelength, wave source distance, and the seismograph array coverage. The data received at seismometers are used to solve an inverse problem, wherein the locations of reflection and refraction of the wave paths are determined. This solution can be used to create 3D images of velocity anomalies which may be interpreted as structural, thermal, or compositional variations. Geoscientists use these images to better understand core, mantle, and plate tectonic processes.
Tomography is solved as an inverse problem: seismic traveltime data are compared to an initial Earth model, and the model is modified until the best possible fit between the model predictions and observed data is found. Seismic waves would travel in straight lines if Earth were of uniform composition, but the compositional layering, tectonic structure, and thermal variations reflect and refract seismic waves. The location and magnitude of these variations can be calculated by the inversion process, although solutions to tomographic inversions are non-unique.
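The fitting procedure above can be illustrated with a toy example. The sketch below is not any published tomography code; it assumes an idealized 4×4 grid of slowness cells crossed by straight horizontal and vertical rays (a deliberate simplification, since real seismic rays curve). Observed traveltimes are compared to a uniform reference model, and a least-squares solve recovers a slowness perturbation. Because there are fewer rays than cells, the system is underdetermined and the solver returns only one of infinitely many compatible models, mirroring the non-uniqueness of real tomographic inversions.

```python
import numpy as np

# Toy straight-ray traveltime tomography on a 4x4 grid of slowness cells.
# Hypothetical acquisition: 4 horizontal and 4 vertical rays, each crossing
# one row or column of unit-width cells (path length 1 per cell).
n = 4
n_cells = n * n
G = np.zeros((2 * n, n_cells))          # ray-path (sensitivity) matrix
for i in range(n):
    G[i, i * n:(i + 1) * n] = 1.0       # horizontal ray through row i
    G[n + i, i::n] = 1.0                # vertical ray through column i

# "True" Earth: uniform slowness plus one slow (e.g. hot) cell at row 1, col 2
s_true = np.ones(n_cells)
s_true[1 * n + 2] += 0.2
t_obs = G @ s_true                      # synthetic observed traveltimes

# Initial model: uniform reference slowness, as in the inversion described above
s_ref = np.ones(n_cells)
residual = t_obs - G @ s_ref            # traveltime misfit to be explained

# 8 equations, 16 unknowns: lstsq returns the minimum-norm perturbation,
# one of infinitely many models that fit the data (non-uniqueness).
ds, *_ = np.linalg.lstsq(G, residual, rcond=None)
s_model = s_ref + ds

print(int(np.argmax(ds)))  # 6 -> cell (1, 2): anomaly located, though smeared
```

The recovered perturbation peaks at the correct cell but is smeared along the rays that cross it, a small-scale analogue of the resolution limits discussed later in this article.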
Seismic tomography is similar to medical X-ray computed tomography (CT scan) in that a computer processes receiver data to produce a 3D image, although CT scans use attenuation instead of traveltime differences. Seismic tomography must contend with curved ray paths, which are reflected and refracted within the Earth, and with uncertainty in the location of the earthquake hypocenter; CT scans use straight X-ray paths from a known source.
Seismic tomography requires large datasets of seismograms and well-located earthquake or explosion sources. These became more widely available in the 1960s with the expansion of global seismic networks and in the 1970s when digital seismograph data archives were established. These developments occurred concurrently with advancements in computing power that were required to solve inverse problems and generate theoretical seismograms for model testing.
In 1977, P-wave delay times were used to create the first seismic array-scale 2D map of seismic velocity. In the same year, P-wave data were used to determine 150 spherical harmonic coefficients for velocity anomalies in the mantle. The first model using iterative techniques, which are required when there is a large number of unknowns, was produced in 1984. It built upon the first radially anisotropic model of the Earth, which provided the initial reference frame against which successive tomographic models could be compared. Initial models had resolution of ~3000 to 5000 km, compared to the few-hundred-kilometer resolution of current models.
Seismic tomographic models improve with advancements in computing and expansion of seismic networks. Recent models of global body waves used over 10⁷ traveltimes to model 10⁵ to 10⁶ unknowns.
Seismic tomography uses seismic records to create 2D and 3D images of subsurface anomalies by solving large inverse problems that generate models consistent with observed data. Various methods are used to resolve anomalies in the crust and lithosphere, shallow mantle, whole mantle, and core based on the availability of data and the types of seismic waves that penetrate the region at a wavelength suitable for feature resolution. The accuracy of the model is limited by the availability and accuracy of seismic data, the wave type utilized, and the assumptions made in the model.
P-wave data are used in most local models and in global models in areas with sufficient earthquake and seismograph density. S- and surface wave data are used in global models when this coverage is not sufficient, such as in ocean basins and away from subduction zones. First-arrival times are the most widely used data, but reflected and refracted phases are incorporated into more complex models, such as those imaging the core. Differential traveltimes between wave phases or types are also used.
Local tomographic models are often based on a temporary seismic array targeting a specific area, unless the region is seismically active and has extensive permanent network coverage. These arrays allow for imaging of the crust and upper mantle.
Regional to global scale tomographic models are generally based on long wavelengths. These models agree better with one another than local models do because of the large feature sizes they image, such as subducted slabs and superplumes. The trade-off of whole-mantle and whole-Earth coverage is coarse resolution (hundreds of kilometers) and difficulty imaging small features (e.g. narrow plumes). Although often used to image different parts of the subsurface, P- and S-wave derived models broadly agree where their images overlap. These models use data from both permanent seismic stations and supplementary temporary arrays.
Seismic tomography can resolve anisotropy, anelasticity, density, and bulk sound velocity. Variations in these parameters may be a result of thermal or chemical differences, which are attributed to processes such as mantle plumes, subducting slabs, and mineral phase changes. Larger scale features that can be imaged with tomography include the high velocities beneath continental shields and low velocities under ocean spreading centers.
The mantle plume hypothesis proposes that areas of volcanism not readily explained by plate tectonics, called hotspots, are a result of thermal upwellings, from as deep as the core-mantle boundary, that become diapirs in the crust. This is an actively contested theory, although tomographic images suggest there are anomalies beneath some hotspots. The best imaged of these are large low-shear-velocity provinces, or superplumes, visible on S-wave models of the lower mantle and believed to reflect both thermal and compositional differences.
The Yellowstone hotspot is responsible for volcanism at the Yellowstone Caldera and a series of extinct calderas along the Snake River Plain. The Yellowstone Geodynamic Project sought to image the plume beneath the hotspot. It found a strong low-velocity body from ~30 to 250 km depth beneath Yellowstone and a weaker anomaly from 250 to 650 km depth dipping 60° west-northwest. The authors attribute these features to the mantle plume beneath the hotspot being deflected eastward by upper mantle flow seen in S-wave models.
The Hawaii hotspot produced the Hawaiian–Emperor seamount chain. Tomographic images show it to be 500 to 600 km wide and up to 2,000 km deep.
Subducting plates are colder than the mantle into which they are moving. This creates a fast anomaly that is visible in tomographic images. Both the Farallon plate that subducted beneath the west coast of North America and the northern portion of the Indian plate that has subducted beneath Asia have been imaged with tomography.
Global seismic networks have expanded steadily since the 1960s, but are still concentrated on continents and in seismically active regions. Oceans, particularly in the southern hemisphere, are under-covered. Tomographic models in these areas will improve as more data become available. The uneven distribution of earthquakes naturally biases models toward better resolution in seismically active regions.
The type of wave used in a model limits the resolution it can achieve. Longer wavelengths penetrate deeper into the Earth, but can only resolve large features. Finer resolution can be achieved with surface waves, with the trade-off that they cannot be used in models of the deep mantle. The disparity between wavelength and feature scale causes anomalies to appear reduced in magnitude and size in images. P- and S-wave models respond differently to different types of anomalies depending on the driving material property. Models based on first-arrival times naturally prefer faster pathways, giving them lower resolution of slow (often hot) features. Shallow models must also account for the significant lateral velocity variations in continental crust.
Seismic tomography provides only the current velocity anomalies. Any prior structures are unknown and the slow rates of movement in the subsurface (mm to cm per year) prohibit resolution of changes over modern timescales.
Tomographic solutions are non-unique. Although statistical methods can be used to analyze the validity of a model, unresolvable uncertainty remains. This contributes to difficulty comparing the validity of different model results.
Computing power limits the amount of seismic data, number of unknowns, mesh size, and iterations in tomographic models. This is of particular importance in ocean basins, which due to limited network coverage and earthquake density require more complex processing of distant data. Shallow oceanic models also require smaller model mesh size due to the thinner crust.
Tomographic images are typically presented with a color ramp representing the strength of the anomalies. This can make equal changes appear to differ in magnitude because of how colors are perceived: the change from orange to red, for example, reads as more subtle than the change from blue to yellow. The degree of color saturation can also visually skew interpretations. These factors should be considered when analyzing images.