Mixed reality (MR) is a term used to describe the merging of a real-world environment and a computer-generated one. Physical and virtual objects may co-exist in mixed reality environments and interact in real time. Mixed reality is largely synonymous with augmented reality.
Mixed reality that incorporates haptics has sometimes been referred to as visuo-haptic mixed reality.
In a physics context, the term "interreality system" refers to a virtual reality system coupled with its real-world counterpart. A 2007 paper describes an interreality system comprising a real physical pendulum coupled to a pendulum that only exists in virtual reality. This system has two stable states of motion: a "Dual Reality" state in which the motions of the two pendula are uncorrelated, and a "Mixed Reality" state in which the pendula exhibit stable phase-locked motion, which is highly correlated. The terms "mixed reality" and "interreality" are clearly defined in the context of physics, and their usage may differ slightly in other fields; in general, however, mixed reality is seen as "bridging the physical and virtual world".
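The transition between the two states can be illustrated with a minimal phase-oscillator sketch. This is a Kuramoto-style simplification, not the model used in the 2007 paper: each pendulum is reduced to a phase with its own natural frequency, and a coupling term pulls the phases together.

```python
import math

def phase_difference(k, w1=1.0, w2=1.2, dt=0.01, steps=20000):
    """Integrate two coupled phase oscillators (Kuramoto model) and
    return the final phase difference, wrapped to (-pi, pi]."""
    p1, p2 = 0.0, 1.0  # initial phases
    for _ in range(steps):
        pull = math.sin(p2 - p1)
        p1 += (w1 + k * pull) * dt
        p2 += (w2 - k * pull) * dt
    return (p2 - p1 + math.pi) % (2 * math.pi) - math.pi

# k = 0   -> "Dual Reality": the phases drift apart at rate w2 - w1.
# k = 0.5 -> "Mixed Reality": the oscillators lock with a constant
#            phase difference satisfying sin(d) = (w2 - w1) / (2k).
```

In this simplified model, phase locking occurs only when the coupling exceeds half the frequency mismatch, k > |w2 − w1| / 2, which mirrors the paper's observation that the coupled system has distinct correlated and uncorrelated regimes.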
Mixed reality has been used in applications across fields including design, education, entertainment, military training, healthcare, product content management, and human-in-the-loop operation of robots.
Simulation-based learning includes VR- and AR-based training and interactive, experiential learning. Mixed reality has many potential use cases in both educational and professional training settings. In education, AR has been used to simulate historical battles, providing students with an immersive experience that may enhance learning. AR has also shown effectiveness in university education for health science and medical students, within disciplines that benefit from 3D representations of models, such as physiology and anatomy.
From television shows to game consoles, mixed reality has many applications in the field of entertainment.
The 2004 British game show Bamzooki called upon child contestants to create virtual "Zooks" and watch them compete in a variety of challenges. The show used mixed reality to bring the Zooks to life, and ran until 2010.
The 2003 game show FightBox also called upon contestants to create competitive characters, using mixed reality to allow them to interact. Unlike Bamzooki's generally non-violent challenges, FightBox asked contestants to create the strongest fighter in order to win the competition.
In 2009, researchers presented at the International Symposium on Mixed and Augmented Reality (ISMAR) their social product called "BlogWall," which consisted of a projected screen on a wall. Users could post short text clips or images on the wall and play simple games such as Pong. The BlogWall also featured a poetry mode, in which it would rearrange the messages it received to form a poem, and a polling mode, in which users could ask others to answer their polls.
Mario Kart Live: Home Circuit is a mixed reality racing game for the Nintendo Switch, released in October 2020. The game allows players to use their home as a race track. Within the first week of release, 73,918 copies were sold in Japan, making it the country's best-selling game of the week.
Other research has examined the potential for mixed reality to be applied to theatre, film, and theme parks.
The first fully immersive mixed reality system was the Virtual Fixtures platform, developed in 1992 by Louis Rosenberg at the Armstrong Laboratories of the United States Air Force. It enabled human users to control robots in real-world environments that included real physical objects and 3D virtual overlays ("fixtures") added to enhance human performance of manipulation tasks. Published studies showed that introducing virtual objects into the real world could yield significant performance gains for human operators.
Combat reality can be simulated and represented using complex, layered data and visual aids, most of them delivered via head-mounted displays (HMDs), a term encompassing any display technology that can be worn on the user's head. Military training solutions are often built on commercial off-the-shelf (COTS) technologies, such as Improbable's synthetic environment platform, Virtual Battlespace 3, and VirTra, with the latter two platforms used by the United States Army. As of 2018[update], VirTra was being used by both civilian and military law enforcement to train personnel in a variety of scenarios, including active shooter, domestic violence, and military traffic stops. Mixed reality technologies have been used by the United States Army Research Laboratory to study how stress affects decision-making. With mixed reality, researchers can safely study military personnel in scenarios where soldiers would be unlikely to survive.
In 2017, the U.S. Army was developing the Synthetic Training Environment (STE), a collection of technologies for training purposes that was expected to include mixed reality. As of 2018[update], STE was still in development without a projected completion date. Recorded goals of STE included enhancing realism, increasing simulation training capabilities, and making STE available to other systems.
It was claimed that mixed-reality environments like STE could reduce training costs, for example by reducing the amount of ammunition expended during training. In 2018, it was reported that STE would be able to represent any part of the world's terrain for training purposes. STE would offer a variety of training opportunities for squads and brigade combat teams, including Stryker, armored, and infantry teams.
A blended space is a space in which a physical environment and a virtual environment are deliberately integrated in a close-knit way. The aim of blended space design is to give people the experience of a sense of presence in the blended space, acting directly on its content. Examples of blended spaces include augmented reality devices such as the Microsoft HoloLens and games such as Pokémon Go, in addition to many smartphone tourism apps, smart meeting rooms, and applications such as bus tracker systems.
The idea of blending comes from the ideas of conceptual integration, or conceptual blending introduced by Gilles Fauconnier and Mark Turner.
Manuel Imaz and David Benyon introduced blending theory to look at concepts in software engineering and human-computer interaction.
The simplest implementation of a blended space requires two features. The first is input, which can range from tactile interaction to sensed changes in the environment. The second is notifications received from the digital space. The correspondences between the physical and digital spaces have to be abstracted and exploited by the design of the blended space. Seamless integration of the two spaces is rare; blended spaces need anchoring points, or technologies, to link the spaces.
A well-designed blended space advertises and conveys its digital content in a subtle and unobtrusive way. Presence can be measured using physiological, behavioral, and subjective measures derived from the space.
Any blended space has two main components: for presence in a blended space, there must be both a physical space and a digital space. The richer the communication between the physical and digital spaces, the richer the experience. This communication happens through correspondences, which relay the state and nature of objects.
For the purpose of looking at blended spaces, the nature and characteristics of any space can be represented by these factors:
Physical space – a space that affords spatial interaction. This spatial interaction greatly impacts the user's cognitive model.
Digital space – the digital space (also called the information space) consists of all the information content, which can take any form.
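These ideas can be gathered into a rough sketch, with all class and method names hypothetical: a blended space is modeled as a set of correspondences anchoring physical objects to digital counterparts, where sensed input on the physical side produces a notification from the digital side.

```python
from dataclasses import dataclass, field

@dataclass
class BlendedSpace:
    """Minimal sketch of a blended space: correspondences (anchoring
    points) keep a digital object's state in sync with the physical
    object it represents."""
    correspondences: dict = field(default_factory=dict)  # physical id -> digital id
    digital_state: dict = field(default_factory=dict)    # digital id -> state

    def anchor(self, physical_id, digital_id):
        # An anchoring point links an object in the physical space
        # to its counterpart in the digital space.
        self.correspondences[physical_id] = digital_id

    def sense(self, physical_id, observation):
        # Input (tactile interaction, or a sensed change in the
        # environment) updates the digital space, which responds
        # with a notification.
        digital_id = self.correspondences.get(physical_id)
        if digital_id is None:
            return None  # no correspondence: the spaces stay separate
        self.digital_state[digital_id] = observation
        return f"notify: {digital_id} -> {observation}"
```

A bus tracker app fits this shape: each physical bus is anchored to a digital marker, and sensed position changes surface as notifications in the digital space.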
Mixed reality allows a global workforce of remote teams to work together and tackle an organization's business challenges. No matter where they are physically located, employees can put on a headset and noise-canceling headphones and enter a collaborative, immersive virtual environment. Because such applications can translate language in real time, language barriers are reduced. This process also increases flexibility. While many employers still use inflexible models of fixed working time and location, there is evidence that employees are more productive when they have greater autonomy over where, when, and how they work. Some employees prefer loud work environments, while others need silence. Some work best in the morning; others work best at night. Employees also benefit from autonomy in how they work because people process information differently. The classic model for learning styles differentiates between visual, auditory, and kinesthetic learners.
Machine maintenance can also be executed with the help of mixed reality. Larger companies with multiple manufacturing locations and a lot of machinery can use mixed reality to educate and instruct their employees. The machines need regular checkups and have to be adjusted every now and then. These adjustments are mostly done by humans, so employees need to be informed about needed adjustments. By using mixed reality, employees from multiple locations can wear headsets and receive live instructions about the changes. Instructors can operate the representation that every employee sees, and can glide through the production area, zooming in to technical details and explaining every change needed. Employees completing a five-minute training session with such a mixed-reality program have been shown to attain the same learning results as reading a 50-page training manual. An extension to this environment incorporates live data from operating machinery into the virtual collaborative space, associated with three-dimensional virtual models of the equipment. This enables training and execution of maintenance, operational, and safety work processes that would otherwise be difficult in a live setting, while making use of expertise no matter where the expert is physically located.
Mixed reality can be used to build mockups that combine physical and digital elements. With the use of simultaneous localization and mapping (SLAM), mockups can interact with the physical world to provide more realistic sensory experiences, such as object permanence, which would normally be infeasible or extremely difficult to track and analyze without both digital and physical aids.
Smartglasses can be incorporated into the operating room to aid in surgical procedures, for example by conveniently displaying patient data while overlaying precise visual guides for the surgeon. Mixed reality headsets like the Microsoft HoloLens have been theorized to allow efficient sharing of information between doctors, in addition to providing a platform for enhanced training. In some situations (e.g., a patient infected with a contagious disease), this can improve doctor safety and reduce PPE use. While mixed reality has considerable potential for enhancing healthcare, it also has drawbacks. The technology may never fully integrate into scenarios where a patient is present, given ethical concerns about the doctor not being able to see the patient. Mixed reality is also useful for healthcare education. For example, according to a 2022 report from the World Economic Forum, 85% of first-year medical students at Case Western Reserve University reported that using mixed reality to teach anatomy was “equivalent” or “better” than the in-person class.
Before the advent of mixed reality, product content management consisted largely of brochures, with little customer-product engagement outside this two-dimensional realm. With improvements in mixed reality technology, new forms of interactive product content management have emerged. Most notably, three-dimensional digital renderings of normally two-dimensional products have increased the reach and effectiveness of consumer-product interaction.
Recent advances in mixed-reality technologies have renewed interest in alternative modes of communication for human-robot interaction. Human operators wearing mixed reality glasses such as the HoloLens can control and monitor machines such as robots and lifting equipment on site in a digital factory setup. This use case typically requires real-time data communication between the mixed reality interface and the machine, process, or system, which can be enabled by incorporating digital twin technology.
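The digital twin's role as intermediary can be sketched as follows. This is a hypothetical minimal model, not any particular vendor's API: the twin ingests live telemetry from the physical machine, the MR overlay renders from the twin's state, and commands from the operator are queued for the machine.

```python
class DigitalTwin:
    """Hypothetical sketch of a digital twin mediating between a
    physical machine and a mixed reality interface."""

    def __init__(self, machine_id):
        self.machine_id = machine_id
        self.telemetry = {}         # latest sensor readings
        self.pending_commands = []  # commands queued by the MR interface

    def ingest(self, reading):
        # Real-time data from the physical machine updates the twin,
        # so the MR overlay always reflects the machine's latest state.
        self.telemetry.update(reading)

    def command(self, name, value):
        # The operator's MR headset queues a command; a separate
        # channel would deliver it to the real machine.
        self.pending_commands.append((name, value))

    def view(self):
        # The MR overlay renders from the twin, never by polling the
        # machine directly.
        return dict(self.telemetry)
```

The key design point is decoupling: the headset reads and writes the twin, so monitoring continues even when the machine's link is briefly interrupted.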
Mixed reality allows sellers to show customers how a given commodity will suit their demands. For example, a seller may demonstrate how a product will fit into a buyer's home. With the assistance of VR, the buyer can virtually pick up an item, rotate it, and place it where desired. This improves the buyer's confidence in making a purchase and reduces the number of returns.
Architectural firms can allow customers to virtually visit their desired homes.
While mixed reality refers to the intertwining of the virtual and physical worlds at a high level, a variety of digital mediums are used to accomplish a mixed reality environment. They range from handheld devices to entire rooms, each with practical uses in different disciplines.
Main article: Cave automatic virtual environment
The Cave Automatic Virtual Environment (CAVE) is an environment, typically a small room located in a larger outer room, in which a user is surrounded by projected displays around, above, and below them. 3D glasses and surround sound complement the projections to provide the user with a sense of perspective that aims to simulate the physical world. Since being developed, CAVE systems have been adopted by engineers developing and testing prototype products. They allow product designers to test their prototypes before expending resources to produce a physical prototype, while also opening doors for "hands-on" testing of non-tangible objects such as microscopic environments or entire factory floors. After developing the CAVE, the same researchers eventually released the CAVE2, which addresses the original CAVE's shortcomings. The original projections were replaced with 37-megapixel 3D LCD panels, network cables integrate the CAVE2 with the internet, and a more precise camera system allows the environment to shift as the user moves through it.
Main article: Head-up display
A head-up display (HUD) is a display that projects imagery directly in front of a viewer without heavily obstructing their view of the environment. A standard HUD is composed of three elements: a projector, which overlays the HUD's graphics; the combiner, the surface onto which the graphics are projected; and the computer, which integrates the other two components and performs any real-time calculations or adjustments. Prototype HUDs were first used in military applications to aid fighter pilots in combat, but eventually evolved to aid all aspects of flight, not just combat. HUDs were then standardized across commercial aviation and eventually spread to the automotive industry. One of the first automotive applications of the HUD came with Pioneer's heads-up system, which replaces the driver-side sun visor with a display that projects navigation instructions onto the road in front of the driver. Major manufacturers such as General Motors, Toyota, Audi, and BMW have since included some form of head-up display in certain models.
Main article: Head-mounted display
A head-mounted display (HMD), worn over the head or in front of the eyes, is a device that uses one or two optical displays to project an image directly in front of the user's eyes. Its applications range across medicine, entertainment, aviation, and engineering, providing a layer of visual immersion that traditional displays cannot achieve. Head-mounted displays are most popular with consumers in the entertainment market, with major tech companies developing HMDs to complement their existing products. However, these consumer head-mounted displays are generally virtual reality displays and do not integrate the physical world. Augmented reality HMDs, by contrast, have found more favor in enterprise environments. Microsoft's HoloLens is an augmented reality HMD with applications in medicine, giving doctors more profound real-time insight, as well as in engineering, overlaying important information on top of the physical world. Another notable augmented reality HMD has been developed by Magic Leap, a startup building a similar product with applications in both the private sector and the consumer market.
Main article: Mobile device
Mobile devices, including smartphones and tablets, have continued to increase in computing power and portability. Many modern mobile devices come equipped with toolkits for developing augmented reality applications, which allow developers to overlay computer graphics on live video of the physical world. The first augmented reality mobile game with widespread success was Pokémon GO, which was released in 2016 and accumulated 800 million downloads. While entertainment applications utilizing AR have proven successful, productivity and utility apps have also begun integrating AR features. Google has released updates to Google Maps that include AR navigation directions overlaid onto the streets in front of the user, and has expanded its Translate app to overlay translated text onto physical writing in over 20 languages. Mobile devices are a distinctive display technology for mixed reality because users commonly carry them at all times.
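In outline, such toolkits track the device's pose and then composite rendered graphics into each camera frame. A toy sketch of just the compositing step (ignoring tracking entirely; the function name is hypothetical) can be written with plain array arithmetic:

```python
import numpy as np

def overlay(frame, graphic, alpha, x, y):
    """Alpha-blend a rendered graphic onto a camera frame at pixel
    (x, y): a stand-in for the compositing an AR toolkit performs
    after pose tracking has placed the virtual object."""
    out = frame.astype(np.float64)
    h, w = graphic.shape[:2]
    region = out[y:y + h, x:x + w]
    # Weighted mix of virtual graphic and real camera pixels.
    out[y:y + h, x:x + w] = alpha * graphic + (1 - alpha) * region
    return out.astype(frame.dtype)
```

A real toolkit additionally estimates the camera pose each frame and re-projects the graphic accordingly, so the virtual object appears fixed in the physical world rather than fixed on the screen.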