
  • Review Article
  • Open access
  • Published: 25 October 2021

Augmented reality and virtual reality displays: emerging technologies and future perspectives

  • Jianghao Xiong 1 ,
  • En-Lin Hsiang 1 ,
  • Ziqian He 1 ,
  • Tao Zhan   ORCID: orcid.org/0000-0001-5511-6666 1 &
  • Shin-Tson Wu   ORCID: orcid.org/0000-0002-0943-0440 1  

Light: Science & Applications, volume 10, Article number: 216 (2021)



With rapid advances in high-speed communication and computation, augmented reality (AR) and virtual reality (VR) are emerging as next-generation display platforms for deeper human-digital interactions. Nonetheless, simultaneously matching the exceptional performance of human vision while keeping the near-eye display module compact and lightweight imposes unprecedented challenges on optical engineering. Fortunately, recent progress in holographic optical elements (HOEs) and lithography-enabled devices provides innovative ways to tackle obstacles in AR and VR that are otherwise difficult to overcome with traditional optics. In this review, we begin by introducing the basic structures of AR and VR headsets and then describe the operation principles of various HOEs and lithography-enabled devices. Their properties are analyzed in detail, including the strong wavelength and angular selectivity and the multiplexing ability of volume HOEs, the polarization dependency and active switching of liquid crystal HOEs, the fabrication and properties of micro-LEDs (light-emitting diodes), and the large design freedom of metasurfaces. Afterwards, we discuss how these devices help enhance AR and VR performance, with detailed descriptions and analyses of some state-of-the-art architectures. Finally, we cast a perspective on potential developments and research directions of these photonic devices for future AR and VR displays.


Introduction

Recent advances in high-speed communication and miniature mobile computing platforms have created a strong demand for deeper human-digital interactions beyond traditional flat-panel displays. Augmented reality (AR) and virtual reality (VR) headsets 1,2 are emerging as next-generation interactive displays with the ability to provide vivid three-dimensional (3D) visual experiences. Their applications include education, healthcare, engineering, and gaming, to name a few 3,4,5. VR offers a totally immersive experience, while AR promotes interaction among the user, digital content, and the real world, displaying virtual images while retaining see-through capability. In terms of display performance, AR and VR face several common challenges in satisfying demanding human vision requirements, including field of view (FoV), eyebox, angular resolution, dynamic range, and correct depth cues. Another pressing demand, although not directly related to optical performance, is ergonomics. To provide a user-friendly wearing experience, AR and VR headsets should be lightweight and ideally have a compact, glasses-like form factor. These requirements, nonetheless, often entail tradeoffs with one another, which makes the design of high-performance AR/VR glasses/headsets particularly challenging.

In the 1990s, AR/VR experienced its first boom, which quickly subsided due to the lack of eligible hardware and digital content 6. Over the past decade, the concept of immersive displays was revisited and received a new wave of attention. Emerging technologies like holography and lithography have greatly reshaped AR/VR display systems. In this article, we first review the basic requirements of AR/VR displays and their associated challenges. Then, we briefly describe the properties of two emerging technologies: holographic optical elements (HOEs) and lithography-based devices (Fig. 1). Next, we introduce VR and AR systems separately because of their different device structures and requirements. For the immersive VR system, we discuss the major challenges and how these emerging technologies help mitigate them. For the see-through AR system, we first review the present status of light engines and then introduce several architectures for optical combiners. Performance summaries of microdisplay light engines and optical combiners are provided, which serve as a comprehensive overview of current AR display systems.

figure 1

The left side illustrates HOEs and lithography-based devices. The right side shows the challenges in VR and architectures in AR, and how the emerging technologies can be applied

Key parameters of AR and VR displays

AR and VR displays face several common challenges to satisfy the demanding human vision requirements, such as FoV, eyebox, angular resolution, dynamic range, and correct depth cue, etc. These requirements often exhibit tradeoffs with one another. Before diving into detailed relations, it is beneficial to review the basic definitions of the above-mentioned display parameters.

Definition of parameters

Take a VR system (Fig. 2a) as an example. The light emitted from the display module is projected to a FoV, which can be translated to the size of the image perceived by the viewer. For reference, human vision's horizontal FoV can be as large as 160° for monocular vision and 120° for overlapped binocular vision 6. The intersection area of ray bundles forms the exit pupil, which is usually correlated with another parameter called the eyebox. The eyebox defines the region within which the whole image FoV can be viewed without vignetting. It therefore generally manifests a 3D geometry 7, whose volume is strongly dependent on the exit pupil size. A larger eyebox offers more tolerance to accommodate users' diverse interpupillary distances (IPDs) and wiggling of the headset during use. Angular resolution, defined by dividing the total resolution of the display panel by the FoV, measures the sharpness of the perceived image. For reference, a human visual acuity of 20/20 amounts to 1 arcmin angular resolution, or 60 pixels per degree (PPD), which is considered a common goal for AR and VR displays. Another important feature of a 3D display is the depth cue. A depth cue can be induced by displaying two separate images to the left and right eyes, which forms the vergence cue. But the fixed depth of the displayed image often mismatches the actual depth of the intended 3D image, which leads to incorrect accommodation cues. This mismatch causes the so-called vergence-accommodation conflict (VAC), which will be discussed in detail later. One important observation is that the VAC issue may be more serious in AR than in VR, because the image in an AR display is directly superimposed onto the real world, which carries correct depth cues. The image contrast depends on the display panel and stray light. To achieve a high dynamic range, the display panel should exhibit high brightness, a low dark level, and more than 10 bits of gray levels. Nowadays, the display brightness of a typical VR headset is about 150–200 cd/m² (or nits).

figure 2

a Schematic of a VR display defining FoV, exit pupil, eyebox, angular resolution, and accommodation cue mismatch. b Sketch of an AR display illustrating ACR

Figure 2b depicts a generic structure of an AR display. The definitions of the above parameters remain the same. One major difference is the influence of ambient light on the image contrast. For a see-through AR display, the ambient contrast ratio (ACR) 8 is commonly used to quantify the image contrast:

ACR = (L_on + L_am·T) / (L_off + L_am·T)

where L_on (L_off) represents the on (off)-state luminance (unit: nit), L_am is the ambient luminance, and T is the see-through transmittance. In general, ambient light is measured in illuminance (lux). For the convenience of comparison, we convert illuminance to luminance by dividing by a factor of π, assuming a Lambertian emission profile. In a normal living room, the illuminance is about 100 lux (i.e., L_am ≈ 30 nits), while under typical office lighting, L_am ≈ 150 nits. Outdoors, L_am ≈ 300 nits on an overcast day and ≈3000 nits on a sunny day. For AR displays, the minimum ACR should be 3:1 for recognizable images, 5:1 for adequate readability, and ≥10:1 for outstanding readability. As a simple estimate that ignores optical losses, achieving ACR = 10:1 on a sunny day (~3000 nits) requires the display to deliver a brightness of at least 30,000 nits. This poses significant challenges in finding a high-brightness microdisplay and designing a low-loss optical combiner.
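The ACR relation above is simple to evaluate numerically. The following sketch (the function and variable names are ours, not from the paper) reproduces the sunny-day estimate:

```python
import math

def acr(l_on, l_off, l_am_lux, transmittance):
    """Ambient contrast ratio of a see-through AR display:
    ACR = (L_on + L_am*T) / (L_off + L_am*T).

    l_on, l_off:   on/off-state display luminance at the eye (nits)
    l_am_lux:      ambient illuminance (lux), converted to luminance
                   by dividing by pi (Lambertian assumption, as in the text)
    transmittance: see-through transmittance T of the combiner
    """
    l_am = l_am_lux / math.pi
    return (l_on + l_am * transmittance) / (l_off + l_am * transmittance)

# Sunny day: L_am ~ 3000 nits. Ignoring optical losses (T = 1) and
# assuming a perfect dark state (L_off = 0), ACR = 10 requires
# L_on = 9 * L_am = 27,000 nits, i.e., roughly the 30,000 nits quoted above.
sunny_lux = 3000 * math.pi            # illuminance giving L_am = 3000 nits
print(acr(27000, 0, sunny_lux, 1.0))  # 10.0
```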

Tradeoffs and potential solutions

Next, let us briefly review the tradeoffs mentioned earlier. To begin with, a larger FoV leads to a lower angular resolution for a given display resolution. In theory, overcoming this tradeoff only requires a high-resolution display source, along with high-quality optics to support the corresponding modulation transfer function (MTF). Attaining 60 PPD across a 100° FoV requires a 6K resolution for each eye. This may be realizable in VR headsets because a large display panel, say 2–3 inches, can still accommodate a high resolution at acceptable manufacturing cost. However, for a glasses-like wearable AR display, the conflict between small display size and high resolution becomes obvious, as further shrinking the pixel size of a microdisplay is challenging.
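The FoV/angular-resolution tradeoff is a one-line calculation; a minimal sketch (function names are ours):

```python
def required_pixels(fov_deg, ppd):
    """Horizontal pixel count needed to sustain a target angular
    resolution (pixels per degree) across a given FoV."""
    return fov_deg * ppd

def angular_resolution(panel_pixels, fov_deg):
    """PPD delivered when a panel's pixels are spread across a FoV:
    the tradeoff described in the text."""
    return panel_pixels / fov_deg

print(required_pixels(100, 60))       # 6000 -> the ~6K per eye quoted above
print(angular_resolution(2160, 100))  # 21.6 PPD from a 2160-pixel-wide panel
```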

To circumvent this issue, the concept of the foveated display has been proposed 9,10,11,12,13. The idea is based on the fact that the human eye has high visual acuity only in the central fovea region, which accounts for about 10° of FoV. If the high-resolution image is projected only to the fovea while the peripheral image remains at low resolution, then a microdisplay with 2K resolution can satisfy the need. Regarding implementation, a straightforward way is to optically combine two display sources 9,10,11: one for the foveal and one for the peripheral FoV. This approach can be regarded as spatial multiplexing of displays. Alternatively, time multiplexing can be adopted by temporally changing the optical path to produce different magnification factors for the corresponding FoVs 12. Finally, another approach without multiplexing is to use a specially designed lens with intentional distortion to achieve a non-uniform resolution density 13. Aside from the implementation of foveation, another great challenge is to dynamically steer the foveated region as the viewer's eye moves. This task is strongly related to pupil steering, which will be discussed in detail later.

A larger eyebox or FoV usually decreases the image brightness, which often lowers the ACR. This is exactly the case for a waveguide AR system with exit pupil expansion (EPE) operating under strong ambient light. To improve the ACR, one approach is to dynamically adjust the transmittance with a tunable dimmer 14,15. Another solution is to directly boost the image brightness with a high-luminance microdisplay and efficient combiner optics. Details of this topic will be discussed in the light engine section.

Another tradeoff between FoV and eyebox in geometric optical systems results from the conservation of etendue (or the optical invariant). Increasing the system etendue requires larger optics, which in turn compromises the form factor. Finally, to address the VAC issue, the display system needs to generate a proper accommodation cue, which often requires modulation of the image depth or wavefront, neither of which can be easily achieved in a traditional geometric optical system. While remarkable progress has been made in adopting freeform surfaces 16,17,18, further advancing AR and VR systems requires additional novel optics with greater freedom in structure design and light modulation. Moreover, the employed optics should be thin and lightweight. To mitigate the above-mentioned challenges, diffractive optics is a strong contender. Unlike geometric optics, which relies on curved surfaces to refract or reflect light, diffractive optics requires only a layer a few micrometers thick to establish efficient light diffraction. Two major types of diffractive optics are HOEs based on wavefront recording and manually designed devices like surface relief gratings (SRGs) based on lithography. While SRGs offer large design freedom in local grating geometry, a recent publication 19 indicates that the combination of HOEs and freeform optics also offers great potential for arbitrary wavefront generation. Furthermore, advances in lithography have enabled optical metasurfaces beyond diffractive and refractive optics, as well as miniature display panels like micro-LEDs (light-emitting diodes). These devices hold the potential to boost the performance of current AR/VR displays while keeping a lightweight and compact form factor.
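The etendue argument can be made concrete with a simplified 2D model: take the invariant to be the product of the eyebox width and the angular extent 2·sin(FoV/2); redistributing a fixed invariant toward a larger FoV then shrinks the eyebox. This is our own illustrative simplification, not a formula from the paper:

```python
import math

def optical_invariant(eyebox_mm, fov_deg):
    """Simplified 2D etendue of a near-eye display:
    eyebox width times the angular extent 2*sin(FoV/2).
    Conserved through an ideal (lossless, aberration-free) system."""
    return eyebox_mm * 2.0 * math.sin(math.radians(fov_deg / 2.0))

g = optical_invariant(10.0, 100.0)  # 10-mm eyebox at a 100-deg FoV
# The same invariant stretched to a 140-deg FoV leaves a smaller eyebox:
eyebox_140 = g / (2.0 * math.sin(math.radians(70.0)))
print(round(eyebox_140, 2))  # 8.15 (mm)
```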

Formation and properties of HOEs

An HOE generally refers to a recorded hologram that reproduces the original light wavefront. The concept of holography was proposed by Dennis Gabor 20; it refers to the process of recording a wavefront in a medium (hologram) and later reconstructing it with a reference beam. Early holography used intensity-sensitive recording materials like silver halide emulsion, dichromated gelatin, and photopolymer 21. Among them, photopolymer stands out due to its easy fabrication and ability to capture high-fidelity patterns 22,23. It has therefore found extensive applications in holographic data storage 23 and displays 24,25. Photopolymer HOEs (PPHOEs) have a relatively small refractive index modulation and therefore exhibit strong selectivity on wavelength and incident angle. Another feature of PPHOEs is that several holograms can be recorded into one photopolymer film by consecutive exposures. Later, liquid-crystal holographic optical elements (LCHOEs) based on photoalignment polarization holography were also developed 25,26. Due to the inherent anisotropy of liquid crystals, LCHOEs are extremely sensitive to the polarization state of the input light. This feature, combined with the polarization modulation ability of liquid crystal devices, offers new possibilities for dynamic wavefront modulation in display systems.

The formation of a PPHOE is illustrated in Fig. 3a. When exposed to an interference field with alternating bright and dark fringes, monomers tend to diffuse toward the bright fringes due to the higher local monomer consumption rate there. As a result, the density and refractive index are slightly higher in the bright regions. Note that the index modulation δn here is defined as the difference between the maximum and minimum refractive indices, which may be twice the value in other definitions 27. The index modulation δn is typically in the range of 0–0.06. To understand the optical properties of PPHOEs, we simulate a transmissive grating and a reflective grating using rigorous coupled-wave analysis (RCWA) 28,29 and plot the results in Fig. 3b. Details of the grating configurations can be found in Table S1. We simulate only gratings because, for a general HOE, each local region can be treated as a grating; observations on gratings therefore offer general insight into HOEs. For a transmissive grating, the angular bandwidth (efficiency > 80%) is around 5° (λ = 550 nm), while the spectral band is relatively broad, with a bandwidth of around 175 nm (7° incidence). For a reflective grating, the spectral band is narrow, with a bandwidth of around 10 nm. The angular bandwidth varies with wavelength, ranging from 2° to 20°. The strong wavelength and angular selectivity of PPHOEs is directly related to their small δn, which can be adjusted by controlling the exposure dosage.
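For a quick estimate without a full RCWA solver, Kogelnik's coupled-wave theory gives a closed form for a lossless transmissive volume grating at Bragg incidence. The sketch below is our own illustration with hypothetical grating parameters; note that the text's δn (maximum minus minimum index) is twice Kogelnik's modulation amplitude n1:

```python
import math

def kogelnik_transmission_efficiency(dn, thickness_um, wavelength_um, theta_rad=0.0):
    """First-order diffraction efficiency, eta = sin^2(nu), with
    nu = pi * n1 * d / (lambda * cos(theta)) and n1 = dn / 2
    (dn is the peak-to-peak index modulation used in the text)."""
    n1 = dn / 2.0
    nu = math.pi * n1 * thickness_um / (wavelength_um * math.cos(theta_rad))
    return math.sin(nu) ** 2

# Hypothetical PPHOE-like grating: dn = 0.03, 16-um film, 550 nm
print(kogelnik_transmission_efficiency(0.03, 16.0, 0.55))  # ~0.96
```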

figure 3

a Schematic of the formation of PPHOE. Simulated efficiency plots for b1 transmissive and b2 reflective PPHOEs. c Working principle of multiplexed PPHOE. d Formation and molecular configurations of LCHOEs. Simulated efficiency plots for e1 transmissive and e2 reflective LCHOEs. f Illustration of polarization dependency of LCHOEs

A distinctive feature of PPHOE is the ability to multiplex several holograms into one film sample. If the exposure dosage of a recording process is controlled so that the monomers are not completely depleted in the first exposure, the remaining monomers can continue to form another hologram in the following recording process. Because the total amount of monomer is fixed, there is usually an efficiency tradeoff between multiplexed holograms. The final film sample would exhibit the wavefront modulation functions of multiple holograms (Fig. 3c ).

Liquid crystals have also been used to form HOEs. LCHOEs can generally be categorized into volume-recording and surface-alignment types. Volume-recording type LCHOEs are based either on early polarization holography recordings with azo-polymer 30,31 or on holographic polymer-dispersed liquid crystals (HPDLCs) 32,33 formed by liquid-crystal-doped photopolymer. Surface-alignment type LCHOEs are based on photoalignment polarization holography (PAPH) 34. The first step is to record the desired polarization pattern in a thin photoalignment layer, and the second step is to use it to align the bulk liquid crystal 25,35. Due to their simple fabrication process, high efficiency, and low scattering arising from the liquid crystal's self-assembly, surface-alignment type LCHOEs based on PAPH have recently attracted increasing interest in applications like near-eye displays. Here, we focus on this surface-alignment type and, for simplicity, refer to it as LCHOE hereafter.

The formation of LCHOEs is illustrated in Fig. 3d. The information of the wavefront and the local diffraction pattern is recorded in a thin photoalignment layer. The bulk liquid crystal deposited on the photoalignment layer, depending on whether it is a nematic liquid crystal or a cholesteric liquid crystal (CLC), forms a transmissive or a reflective LCHOE. In a transmissive LCHOE, the bulk nematic liquid crystal molecules generally follow the pattern of the bottom alignment layer. The smallest allowable pattern period is governed by the liquid crystal distortion free energy, which dictates that the pattern period should generally be larger than the sample thickness 36,37. This results in a maximum diffraction angle under 20°. On the other hand, in a reflective LCHOE 38,39, the bulk CLC molecules form a stable helical structure, which is tilted to match the k-vector of the bottom pattern. This structure exhibits a very low distortion free energy 40,41 and can accommodate a pattern period small enough to diffract light into the total internal reflection (TIR) regime of a glass substrate.
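The ~20° limit for transmissive LCHOEs follows directly from the grating equation once the period is bounded below by the film thickness. A small sketch with illustrative numbers of our choosing:

```python
import math

def diffraction_angle_deg(wavelength_um, period_um, n_out=1.0):
    """First-order diffraction angle from the grating equation
    n_out * sin(theta) = lambda / period."""
    s = wavelength_um / (period_um * n_out)
    if s > 1.0:
        raise ValueError("order is evanescent: period too small for this medium")
    return math.degrees(math.asin(s))

# If the period must exceed a ~1.6-um film thickness, a 550-nm beam
# in air diffracts by at most about 20 degrees:
print(round(diffraction_angle_deg(0.55, 1.6), 1))  # 20.1
# A reflective CLC hologram can support a finer period, pushing the
# in-glass angle past the TIR condition instead.
```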

The diffraction property of LCHOEs is shown in Fig. 3e . The maximum refractive index modulation of LCHOE is equal to the liquid crystal birefringence (Δ n ), which may vary from 0.04 to 0.5, depending on the molecular conjugation 42 , 43 . The birefringence used in our simulation is Δ n  = 0.15. Compared to PPHOEs, the angular and spectral bandwidths are significantly larger for both transmissive and reflective LCHOEs. For a transmissive LCHOE, its angular bandwidth is around 20° ( λ  = 550 nm), while the spectral bandwidth is around 300 nm (7° incidence). For a reflective LCHOE, its spectral bandwidth is around 80 nm and angular bandwidth could vary from 15° to 50°, depending on the wavelength.

The anisotropic nature of liquid crystal leads to LCHOE’s unique polarization-dependent response to an incident light. As depicted in Fig. 3f , for a transmissive LCHOE the accumulated phase is opposite for the conjugated left-handed circular polarization (LCP) and right-handed circular polarization (RCP) states, leading to reversed diffraction directions. For a reflective LCHOE, the polarization dependency is similar to that of a normal CLC. For the circular polarization with the same handedness as the helical structure of CLC, the diffraction is strong. For the opposite circular polarization, the diffraction is negligible.

Another distinctive property of liquid crystals is their dynamic response to an external voltage. The LC reorientation can be controlled with a relatively low voltage (<10 V_rms), and the response time is on the order of milliseconds, depending mainly on the LC viscosity and layer thickness. Methods to dynamically control LCHOEs can be categorized into active addressing and passive addressing, achieved by either directly switching the LCHOE or modulating the polarization state with an active waveplate. Detailed addressing methods will be described in the VAC section.
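The millisecond-scale response quoted above follows the standard free-relaxation scaling of a nematic cell, τ ≈ γ1·d²/(K·π²). A sketch with typical, illustrative material values (not parameters from the paper):

```python
import math

def lc_relaxation_time_ms(gamma1_pa_s, thickness_um, k11_pn):
    """Free relaxation time of a nematic LC layer,
    tau = gamma1 * d^2 / (K * pi^2): proportional to the rotational
    viscosity and the square of the layer thickness, as the text notes."""
    d = thickness_um * 1e-6  # um -> m
    k = k11_pn * 1e-12       # pN -> N
    return gamma1_pa_s * d**2 / (k * math.pi**2) * 1e3  # s -> ms

# gamma1 = 0.1 Pa*s, d = 3 um, K = 10 pN -> ~9 ms, i.e., milliseconds
print(lc_relaxation_time_ms(0.1, 3.0, 10.0))
```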

Lithography-enabled devices

Lithography technologies are used to create arbitrary patterns on wafers, laying the foundation of the modern integrated circuit industry 44. Photolithography is suitable for mass production, while electron/ion beam lithography is usually used to create photomasks for photolithography or to write structures with nanometer-scale feature sizes. Recent advances in lithography have enabled engineered structures like optical metasurfaces 45 and SRGs 46, as well as micro-LED displays 47. Metasurfaces exhibit remarkable design freedom by varying the shape of meta-atoms, which can be utilized to achieve novel functions like achromatic focusing 48 and beam steering 49. Similarly, SRGs offer large design freedom through manipulation of the local grating geometry to realize desired optical properties. Micro-LEDs, on the other hand, exhibit several unique features, such as ultrahigh peak brightness, small aperture ratio, excellent stability, and nanosecond response time. As a result, micro-LED is a promising candidate for AR and VR systems, enabling a high ACR and a high frame rate to suppress motion blur. In the following section, we briefly review the fabrication and properties of micro-LEDs and of optical modulators like metasurfaces and SRGs.

Fabrication and properties of micro-LEDs

LEDs with a chip size larger than 300 μm have been widely used in solid-state lighting and public information displays. Recently, micro-LEDs with chip sizes <5 μm have been demonstrated 50. The first micro-LED disc, with a diameter of about 12 µm, was demonstrated in 2000 51. After that, a single-color (blue or green) LED microdisplay was demonstrated in 2012 52. The high peak brightness, fast response time, true dark state, and long lifetime of micro-LEDs are attractive for display applications. Many companies have since released micro-LED prototypes or products, ranging from large-size TVs to small microdisplays for AR/VR applications 53,54. Here, we focus on micro-LEDs for near-eye display applications. Regarding fabrication, using the metal-organic chemical vapor deposition (MOCVD) method, the AlGaInP epitaxial layer is grown on a GaAs substrate for red LEDs, and GaN epitaxial layers are grown on sapphire substrates for green and blue LEDs. Next, a photolithography process is applied to define the mesas and deposit electrodes. To drive the LED array, the fabricated micro-LEDs are transferred to a CMOS (complementary metal oxide semiconductor) driver board. For the small (<2-inch) microdisplays used in AR or VR, the precision of the pick-and-place transfer process can hardly meet the high resolution density (>1000 pixels per inch) requirement. Thus, the main approach to assembling LED chips with driving circuits is flip-chip bonding 50,55,56,57, as Fig. 4a depicts. In flip-chip bonding, the mesas and electrode pads should be defined and deposited before the transfer process, while metal bonding balls should be preprocessed on the CMOS substrate. After that, a thermo-compression method is used to bond the two wafers together. However, due to the thermal mismatch between the LED chip and the driving board, as the pixel size decreases, the misalignment between the LED chip and the metal bonding ball on the CMOS substrate becomes severe. In addition, the common n-GaN layer may cause optical crosstalk between pixels, which degrades the image quality. To overcome these issues, the LED epitaxial layer can first be metal-bonded to the silicon driver board, followed by a photolithography process to define the LED mesas and electrodes. Without the need for an alignment process, the pixel size can be reduced to <5 µm 50.

figure 4

a Illustration of flip-chip bonding technology. b Simulated IQE-LED size relations for red and blue LEDs based on the ABC model. c Comparison of EQE of different LED sizes with and without KOH and ALD sidewall treatment. d Angular emission profiles of LEDs with different sizes. Metasurfaces based on e resonance tuning, f non-resonance tuning, and g a combination of both. h Replication master and i replicated SRG based on nanoimprint lithography. Reproduced from: a ref. 55 with permission from AIP Publishing; b ref. 61 with permission from PNAS; c ref. 66 with permission from IOP Publishing; d ref. 67 with permission from AIP Publishing; e ref. 69 with permission from OSA Publishing; f ref. 48 with permission from AAAS; g ref. 70 with permission from AAAS; and h, i ref. 85 with permission from OSA Publishing

In addition to the manufacturing process, the electrical and optical characteristics of an LED also depend on its chip size. Generally, due to Shockley-Read-Hall (SRH) non-radiative recombination at the sidewalls of the active area, a smaller LED chip size results in a lower internal quantum efficiency (IQE), and the peak-IQE driving point moves toward a higher current density because of the increased ratio of sidewall surface to active volume 58,59,60. In addition, compared to GaN-based green and blue LEDs, AlGaInP-based red LEDs, with their larger surface recombination and carrier diffusion length, suffer a more severe efficiency drop 61,62. Figure 4b shows the simulated IQE drop in relation to chip size for blue and red LEDs based on the ABC model 63. To alleviate the efficiency drop caused by sidewall defects, depositing passivation materials by atomic layer deposition (ALD) or plasma-enhanced chemical vapor deposition (PECVD) has proven helpful for both GaN- and AlGaInP-based LEDs 64,65. In addition, applying a KOH (potassium hydroxide) treatment after ALD can further reduce the EQE drop of micro-LEDs 66 (Fig. 4c). Small LEDs also exhibit some advantages, such as higher light extraction efficiency (LEE): compared to a 100-µm LED, the LEE of a 2-µm LED increases from 12.2% to 25.1% 67. Moreover, the radiation pattern of a micro-LED is more directional than that of a large LED (Fig. 4d), which helps improve the lens collection efficiency in AR/VR display systems.
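The size-dependent IQE drop can be reasoned about with the ABC model mentioned above. In the sketch below, shrinking the chip is represented simply as a larger effective SRH coefficient A (more sidewall per unit volume); the coefficients are order-of-magnitude placeholders, not the values used in ref. 63:

```python
def iqe_abc(n, a_srh, b_rad=1e-11, c_auger=1e-30):
    """Internal quantum efficiency from the ABC recombination model:
    IQE = B*n^2 / (A*n + B*n^2 + C*n^3)
    n: carrier density (cm^-3); a_srh: SRH coefficient (s^-1), which
    grows with the sidewall surface-to-volume ratio as the chip shrinks."""
    radiative = b_rad * n**2
    return radiative / (a_srh * n + radiative + c_auger * n**3)

n = 1e18  # carriers per cm^3
print(iqe_abc(n, a_srh=1e6))  # larger chip: ~0.83
print(iqe_abc(n, a_srh=1e8))  # smaller chip, 100x more sidewall SRH: ~0.09
```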

Metasurfaces and SRGs

Thanks to advances in lithography technology, low-loss dielectric metasurfaces operating in the visible band have recently emerged as a platform for wavefront shaping 45,48,68. They consist of an array of subwavelength-spaced structures with individually engineered wavelength-dependent polarization/phase/amplitude responses. In general, the light modulation mechanisms can be classified into resonant tuning 69 (Fig. 4e), non-resonant tuning 48 (Fig. 4f), and a combination of both 70 (Fig. 4g). Compared with non-resonant tuning (based on the geometric phase and/or dynamic propagation phase), resonant tuning (such as Fabry–Pérot or Mie resonances) is usually associated with a narrower operating bandwidth and a smaller out-of-plane aspect ratio (height/width) of the nanostructures. As a result, resonant designs are easier to fabricate but more sensitive to fabrication tolerances. For both types, materials with a higher refractive index and lower absorption loss are beneficial for reducing the aspect ratio of the nanostructures and improving the device efficiency. To this end, titanium dioxide (TiO2) and gallium nitride (GaN) are the major choices for operation across the entire visible band 68,71. While small metasurfaces (diameter <1 mm) are usually fabricated via electron-beam lithography or focused ion beam milling in the lab, the ability to mass-produce them is key to their practical adoption. Deep ultraviolet (UV) photolithography has proven feasible for reproducing centimeter-size metalenses with decent imaging performance, although it requires multiple etching steps 72. Interestingly, the recently developed UV nanoimprint lithography based on a high-index nanocomposite takes only a single step and can achieve an aspect ratio larger than 10, which shows great promise for high-volume production 73.

The arbitrary wavefront shaping capability and thinness of metasurfaces have sparked strong research interest in novel AR/VR prototypes with improved performance. Lee et al. employed nanoimprint lithography to fabricate a centimeter-size geometric-phase metalens eyepiece for full-color AR displays 74. By tailoring its polarization conversion efficiency and stacking it with a circular polarizer, the virtual image can be superimposed on the surrounding scene. The large numerical aperture (NA ~ 0.5) of the metalens eyepiece enables a wide FoV (>76°) that is difficult to obtain with conventional optics. However, a geometric-phase metalens is intrinsically a diffractive lens and thus suffers from strong chromatic aberrations. To overcome this issue, an achromatic lens can be designed by simultaneously engineering the group delay and the group delay dispersion 75,76, which will be described in detail later. Other novel and/or improved near-eye display architectures include metasurface-based contact-lens-type AR 77, achromatic-metalens-array-enabled integral-imaging light field displays 78, wide-FoV lightguide AR with polarization-dependent metagratings 79, and off-axis projection-type AR with an aberration-corrected metasurface combiner 80,81,82. Nevertheless, judging from existing AR/VR prototypes, metasurfaces still face strong tradeoffs among numerical aperture (for metalenses), chromatic aberration, monochromatic aberration, efficiency, aperture size, and fabrication complexity.
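The geometric-phase mechanism behind such a metalens is compact enough to sketch: a half-wave meta-atom rotated by θ imparts a phase of 2θ on circularly polarized light, so a lens is encoded by rotating the atoms to half the target hyperbolic phase. The sketch below illustrates the principle only; it is not the design of ref. 74:

```python
import numpy as np

def gp_metalens_rotation(r_um, focal_um, wavelength_um):
    """Meta-atom rotation angle (rad) encoding a geometric-phase lens.
    Target hyperbolic phase: phi(r) = -(2*pi/lambda) * (sqrt(r^2 + f^2) - f);
    a half-wave meta-atom rotated by theta imparts phi = 2*theta,
    so theta = phi / 2."""
    phi = -2.0 * np.pi / wavelength_um * (np.sqrt(r_um**2 + focal_um**2) - focal_um)
    return phi / 2.0

r = np.linspace(0.0, 500.0, 6)                # radial positions (um)
print(gp_metalens_rotation(r, 1000.0, 0.55))  # 0 at center, growing off-axis
```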

On the other hand, SRGs are diffractive gratings that have been researched for decades as input/output couplers for waveguides 83,84. Their surface is composed of corrugated microstructures, and different profiles, including binary, blazed, slanted, and even analog, can be designed. The parameters of the corrugated microstructures are determined by the target diffraction order, operating spectral bandwidth, and angular bandwidth. Compared with metasurfaces, SRGs have a much larger feature size and thus can be fabricated via UV photolithography and subsequent etching. They are usually replicated by nanoimprint lithography with appropriate heating and surface treatment. According to a report published a decade ago, SRGs with a height of 300 nm and a slant angle of up to 50° can be faithfully replicated with high yield and reproducibility 85 (Fig. 4h, i).

Challenges and solutions of VR displays

The fully immersive nature of VR headsets leads to a relatively fixed configuration, where the display panel is placed in front of the viewer’s eye with imaging optics in between. Regarding system performance, although inadequate angular resolution still exists in some current VR headsets, the continued improvement of display panel resolution through advanced fabrication processes is expected to solve this issue progressively. Therefore, in the following discussion, we will mainly focus on two major challenges: form factor and 3D cue generation.
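To make the angular-resolution point concrete, a minimal sketch (with hypothetical but representative numbers: a 2160-pixel-wide panel per eye over a ~100° horizontal FoV) compares current headsets against the ~60 pixels-per-degree benchmark of 20/20 visual acuity:

```python
def pixels_per_degree(h_pixels: int, h_fov_deg: float) -> float:
    """Approximate angular resolution, assuming pixels are spread
    evenly over the field of view (a simplification)."""
    return h_pixels / h_fov_deg

# Hypothetical current-generation headset: 2160 px over ~100 deg.
ppd = pixels_per_degree(2160, 100)   # ~21.6 pixels per degree

# 20/20 acuity corresponds to ~60 ppd, so the panel would need
# roughly 3x more horizontal pixels at the same FoV.
required_pixels = 60 * 100           # 6000 horizontal pixels
```

The roughly threefold gap is why panel-fabrication advances, rather than optics, are expected to close the angular-resolution deficit.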

Form factor

Compact and lightweight near-eye displays are essential for a comfortable user experience and are therefore highly desirable in VR headsets. Current mainstream VR headsets usually have a considerably larger volume than eyeglasses, and most of that volume is simply empty. This is because a certain distance is required between the display panel and the viewing optics, usually close to the focal length of the lens system, as illustrated in Fig. 5a . Conventional VR headsets employ a transmissive lens with ~4 cm focal length to offer a large FoV and eyebox. Fresnel lenses are thinner than conventional ones, but the required distance between the lens and the panel does not change significantly. In addition, the diffraction artifacts and stray light caused by the Fresnel grooves can degrade the image quality (MTF). Although the resolution density, quantified in pixels per inch (PPI), of current VR headsets is still limited, Fresnel lenses will eventually cease to be an ideal solution once high-PPI displays become available. The strong chromatic aberration of a Fresnel singlet should also be compensated if a high-quality imaging system is desired.
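The link between focal length, panel size, and FoV can be sketched with a thin-lens magnifier model (the panel sits near the focal plane, so each panel point maps to a viewing angle). The 9 cm panel width below is an illustrative assumption, not a quoted specification:

```python
import math

def fov_deg(panel_width_mm: float, focal_length_mm: float) -> float:
    """Full field of view of a simple magnifier eyepiece (thin-lens
    sketch): FoV = 2 * atan(panel_half_width / focal_length)."""
    return 2 * math.degrees(math.atan(panel_width_mm / (2 * focal_length_mm)))

# With the ~4 cm focal length quoted above and a hypothetical
# 9 cm-wide panel, the FoV is about 97 degrees -- but the panel must
# sit roughly one focal length from the lens, hence the empty volume.
wide = fov_deg(90, 40)
```

Shrinking the focal length while keeping the FoV requires a proportionally smaller (and higher-PPI) panel, which motivates the folded and holographic designs discussed next.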

figure 5

a Schematic of a basic VR optical configuration. b Achromatic metalens used as VR eyepiece. c VR based on curved display and lenslet array. d Basic working principle of a VR display based on pancake optics. e VR with pancake optics and Fresnel lens array. f VR with pancake optics based on purely HOEs. Reprinted from b ref. 87 under the Creative Commons Attribution 4.0 License. Adapted from c ref. 88 with permission from IEEE, e ref. 91 and f ref. 92 under the Creative Commons Attribution 4.0 License

It is tempting to replace the refractive elements with a single thin diffractive lens like a transmissive LCHOE. However, the diffractive nature of such a lens will result in serious color aberrations. Interestingly, metalenses can fulfil this objective without color issues. To understand how metalenses achieve achromatic focus, let us first take a glance at the general lens phase profile \(\Phi (\omega ,r)\) expanded as a Taylor series 75 :

\[\Phi \left( {\omega ,r} \right) = \varphi _0\left( \omega \right) - \frac{\omega }{c}\left( {\sqrt {r^2 + F\left( \omega \right)^2} - F\left( \omega \right)} \right) \approx \Phi \left( {\omega _0,r} \right) + \left. {\frac{{\partial \Phi \left( {\omega ,r} \right)}}{{\partial \omega }}} \right|_{\omega _0}\left( {\omega - \omega _0} \right) + \left. {\frac{1}{2}\frac{{\partial ^2\Phi \left( {\omega ,r} \right)}}{{\partial \omega ^2}}} \right|_{\omega _0}\left( {\omega - \omega _0} \right)^2\]

where \(\varphi _0(\omega )\) is the phase at the lens center, \(F\left( \omega \right)\) is the focal length as a function of frequency ω , r is the radial coordinate, and \(\omega _0\) is the central operation frequency. To realize achromatic focus, \(\partial F{{{\mathrm{/}}}}\partial \omega\) should be zero. With a designed focal length, the group delay \(\partial \Phi (\omega ,r){{{\mathrm{/}}}}\partial \omega\) and the group delay dispersion \(\partial ^2\Phi (\omega ,r){{{\mathrm{/}}}}\partial \omega ^2\) can be determined, and \(\varphi _0(\omega )\) serves as an auxiliary degree of freedom in the phase profile design. In an achromatic metalens design, the group delay is a function of the radial coordinate and monotonically increases with the metalens radius. Many designs have shown that the achievable group delay has a limited variation range 75 , 76 , 78 , 86 . According to Shrestha et al. 86 , there is an inevitable tradeoff between the maximum radius of the metalens, its NA, and the operation bandwidth. Thus, the reported achromatic metalenses in the visible usually have a limited lens aperture (e.g., diameter < 250 μm) and NA (e.g., <0.2). Such a tradeoff is undesirable in VR displays, as the eyepiece favors a large clear aperture (inch size) and a reasonably high NA (>0.3) to maintain a wide FoV and a reasonable eye relief 74 .
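A back-of-the-envelope sketch shows why the group-delay range is the bottleneck. For a fixed focal length, the radially varying part of the group delay is \(-(\sqrt{r^2+F^2}-F)/c\), so its total span across the aperture follows directly from the lens radius and NA (the geometry below uses NA = sin θ; the specific radii are illustrative):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def group_delay_span_fs(radius_m: float, na: float) -> float:
    """Group-delay variation between center and edge of an achromatic
    metalens with fixed focal length F: (sqrt(R^2 + F^2) - F) / c.
    Uses sqrt(R^2 + F^2) = R / NA and F = (R / NA) * sqrt(1 - NA^2)."""
    hyp = radius_m / na
    focal = hyp * math.sqrt(1 - na**2)
    return (hyp - focal) / C * 1e15   # span in femtoseconds

# Reported visible achromatic metalens scale (~250 um diameter, NA ~ 0.2):
small = group_delay_span_fs(125e-6, 0.2)      # ~42 fs, within nanopillar reach
# An inch-size, NA ~ 0.3 VR eyepiece needs ~150x more (~6.5 ps),
# far beyond what dispersion-engineered nanopillars can supply:
eyepiece = group_delay_span_fs(12.7e-3, 0.3)
```

The two-orders-of-magnitude jump in required group delay is exactly the limitation that the zone lens method described next is designed to bypass.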

To overcome this limitation, Li et al. 87 proposed a novel zone lens method. Unlike the traditional phase Fresnel lens, where the zones are determined by phase resets, the new approach divides the zones by group delay resets. In this way, the lens aperture and NA can be greatly enlarged, and the group delay limit is bypassed. A notable side effect of this design is the phase discontinuity at zone boundaries, which contributes to higher-order focusing. Therefore, significant effort has been devoted to finding the optimal zone transition locations and minimizing the phase discontinuities. Using this method, they demonstrated an impressive 2-mm-diameter metalens with NA = 0.7 and nearly diffraction-limited focusing at the designed wavelengths (488, 532, 658 nm) (Fig. 5b ). Such a metalens consists of 681 zones and works over the visible band from 470 to 670 nm, though the focusing efficiency is on the order of 10%. This is a great starting point for achromatic metalenses to be employed as compact, chromatic-aberration-free eyepieces in near-eye displays. Future challenges are how to further increase the aperture size, correct the off-axis aberrations, and improve the optical efficiency.

Besides replacing the refractive lens with an achromatic metalens, another way to reduce system focal length without decreasing NA is to use a lenslet array 88 . As depicted in Fig. 5c , both the lenslet array and display panel adopt a curved structure. With the latest flexible OLED panel, the display can be easily curved in one dimension. The system exhibits a large diagonal FoV of 180° with an eyebox of 19 by 12 mm. The geometry of each lenslet is optimized separately to achieve an overall performance with high image quality and reduced distortions.

Aside from trying to shorten the system focal length, another way to reduce the total track is to fold the optical path. Recently, polarization-based folded lenses, also known as pancake optics, have been under active development for VR applications 89 , 90 . Figure 5d depicts the structure of an exemplary singlet pancake VR lens system. Pancake lenses can offer better imaging performance in a compact form factor because there are more degrees of freedom in the design and the actual light path is folded thrice. By using a reflective surface with positive power, the field curvature of positive refractive lenses can be compensated. Moreover, the reflective surface has no chromatic aberration and contributes considerable optical power to the system. Therefore, the optical power of the refractive lenses can be smaller, resulting in an even weaker chromatic aberration. Compared to Fresnel lenses, pancake lenses have smooth surfaces and far fewer diffraction artifacts and stray light. However, such a pancake lens design is not perfect either; its major shortcoming is low light efficiency. With two incidences of light on the half mirror, the maximum system efficiency is limited to 25% for polarized input light and 12.5% for unpolarized input light. Moreover, due to the multiple surfaces in the system, stray light caused by surface reflections and polarization leakage may lead to noticeable ghost images. As a result, a catadioptric pancake VR headset usually manifests darker imagery and lower contrast than the corresponding dioptric VR.
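The efficiency ceiling quoted above follows from simple bookkeeping of the two encounters with the 50/50 half mirror (an idealized sketch ignoring coating and polarizer losses):

```python
def pancake_efficiency(polarized_input: bool = True) -> float:
    """Ideal throughput of a singlet pancake lens: light meets the 50/50
    half mirror twice (one transmission, one reflection), so at best
    0.5 * 0.5 = 25% survives; an unpolarized source loses another half
    at the entrance circular polarizer."""
    eff = 0.5 * 0.5              # two passes through/off the half mirror
    if not polarized_input:
        eff *= 0.5               # entrance-polarizer loss
    return eff
```

Real systems fall below these ideal 25% / 12.5% ceilings once polarizer absorption, coating losses, and leakage are included, which is why pancake headsets tend to look dimmer than dioptric designs.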

Interestingly, the lenslet and pancake optics can be combined to further reduce the system form factor. Bang et al. 91 demonstrated a compact VR system with pancake optics and a Fresnel lenslet array. The pancake optics serves to fold the optical path between the display panel and the lenslet array (Fig. 5e ). Another Fresnel lens is used to collect the light from the lenslet array. The system has a decent horizontal FoV of 102° and an eyebox of 8 mm. However, a certain degree of image discontinuity and crosstalk is still present, which can be improved with further optimization of the Fresnel lens and the lenslet array.

One step further, replacing all conventional optics in catadioptric VR headset with holographic optics can make the whole system even thinner. Maimone and Wang demonstrated such a lightweight, high-resolution, and ultra-compact VR optical system using purely HOEs 92 . This holographic VR optics was made possible by combining several innovative optical components, including a reflective PPHOE, a reflective LCHOE, and a PPHOE-based directional backlight with laser illumination, as shown in Fig. 5f . Since all the optical power is provided by the HOEs with negligible weight and volume, the total physical thickness can be reduced to <10 mm. Also, unlike conventional bulk optics, the optical power of a HOE is independent of its thickness, only subject to the recording process. Another advantage of using holographic optical devices is that they can be engineered to offer distinct phase profiles for different wavelengths and angles of incidence, adding extra degrees of freedom in optical designs for better imaging performance. Although only a single-color backlight has been demonstrated, such a PPHOE has the potential to achieve full-color laser backlight with multiplexing ability. The PPHOE and LCHOE in the pancake optics can also be optimized at different wavelengths for achieving high-quality full-color images.

Vergence-accommodation conflict

Conventional VR displays suffer from VAC, a common issue for stereoscopic 3D displays 93 . In current VR display modules, the distance between the display panel and the viewing optics is fixed, which means the VR imagery is displayed at a single depth. However, the image contents are generated by parallax rendering in three dimensions, offering distinct images to the two eyes. This approach provides a proper stimulus to vergence but completely ignores the accommodation cue, which leads to the well-known VAC that can cause an uncomfortable user experience. Since the beginning of this century, numerous methods have been proposed to solve this critical issue. Methods to produce an accommodation cue include multifocal/varifocal displays 94 , holographic displays 95 , and integral imaging displays 96 . Alternatively, eliminating the accommodation cue with a Maxwellian-view display 93 also helps to mitigate the VAC. However, holographic displays and Maxwellian-view displays generally require a totally different optical architecture than current VR systems; they are therefore more suitable for AR displays, which will be discussed later. Integral imaging, on the other hand, has an inherent tradeoff between view number and resolution. For current VR headsets pursuing high resolution to match human visual acuity, it may not be an appealing solution. Therefore, multifocal/varifocal displays that rely on depth modulation are a relatively practical and effective solution for VR headsets. Regarding the working mechanism, multifocal displays present multiple images at different depths to imitate the original 3D scene. Varifocal displays, in contrast, show only one image in each time frame, whose depth matches the viewer’s vergence depth. Nonetheless, knowing the viewer’s vergence depth in advance requires an additional eye-tracking module.
Despite different operation principles, a varifocal display can often be converted to a multifocal display as long as the varifocal module has enough modulation bandwidth to support multiple depths in a time frame.

To achieve depth modulation in a VR system, traditional liquid lenses 97 , 98 with tunable focus suffer from small aperture size and large aberrations. The Alvarez lens 99 is another tunable-focus solution, but it requires mechanical adjustment, which adds to system volume and complexity. In comparison, transmissive LCHOEs with polarization dependency can achieve focus adjustment with electronic driving. Their ultra-thinness also satisfies the requirement of small form factor in VR headsets. The diffractive behavior of transmissive LCHOEs is often interpreted through the mechanism of Pancharatnam-Berry phase (also known as geometric phase) 100 . They are therefore often called Pancharatnam-Berry optical elements (PBOEs). The corresponding lens component is referred to as a Pancharatnam-Berry lens (PBL).

Two main approaches are used to switch the focus of a PBL: active addressing and passive addressing. In active addressing, the PBL itself (made of LC) can be switched by an applied voltage (Fig. 6a ). The optical power of liquid crystal PBLs can be turned on and off by controlling the voltage. Stacking multiple active PBLs can produce \(2^N\) depths, where N is the number of PBLs. The drawback of using active PBLs, however, is the limited spectral bandwidth, since their diffraction efficiency is usually optimized at a single wavelength. In passive addressing, the depth modulation is achieved by changing the polarization state of the input light with a switchable half-wave plate (HWP) (Fig. 6b ). The focal length can therefore be switched thanks to the polarization sensitivity of PBLs. Although this approach has a slightly more complicated structure, its overall performance can be better than the active one, because PBLs made of liquid crystal polymer can be designed to manifest high efficiency across the entire visible spectrum 101 , 102 .
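The \(2^N\)-depth claim for stacked active PBLs can be enumerated directly: each lens contributes its optical power when on and nothing when off, so the stack realizes every subset sum. The diopter values and base eyepiece power below are illustrative assumptions:

```python
from itertools import product

def focal_depths(pbl_powers_diopter, base_power=20.0):
    """Total optical powers reachable by stacking actively switchable
    PBLs (sketch): each PBL adds its power when on, zero when off,
    giving up to 2**N distinct states for N lenses. base_power models
    a fixed eyepiece; all values are illustrative."""
    depths = set()
    for states in product((0, 1), repeat=len(pbl_powers_diopter)):
        total = base_power + sum(s * p for s, p in zip(states, pbl_powers_diopter))
        depths.add(round(total, 6))
    return depths

# Two PBLs (+0.5 D and +1.0 D) -> 2**2 = 4 focal depths:
# {20.0, 20.5, 21.0, 21.5} diopters
four = focal_depths([0.5, 1.0])
```

Choosing powers in a binary progression (0.5, 1.0, 2.0, …) keeps all subset sums distinct, which is how a six-PBL module can reach the 64 depths mentioned below.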

figure 6

Working principles of a depth switching PBL module based on a active addressing and b passive addressing. c A four-depth multifocal display based on time multiplexing. d A two-depth multifocal display based on polarization multiplexing. Reproduced from c ref. 103 with permission from OSA Publishing and d ref. 104 with permission from OSA Publishing

With the PBL module, multifocal displays can be built using time-multiplexing technique. Zhan et al. 103 demonstrated a four-depth multifocal display using two actively switchable liquid crystal PBLs (Fig. 6c ). The display is synchronized with the PBL module, which lowers the frame rate by the number of depths. Alternatively, multifocal displays can also be achieved by polarization-multiplexing, as demonstrated by Tan et al. 104 . The basic principle is to adjust the polarization state of local pixels so the image content on two focal planes of a PBL can be arbitrarily controlled (Fig. 6d ). The advantage of polarization multiplexing is that it does not sacrifice the frame rate, but it can only support two planes because only two orthogonal polarization states are available. Still, it can be combined with time-multiplexing to reduce the frame rate sacrifice by half. Naturally, varifocal displays can also be built with a PBL module. A fast-response 64-depth varifocal module with six PBLs has been demonstrated 105 .
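The frame-rate accounting in this paragraph reduces to a small formula: time multiplexing divides the panel refresh rate by the number of depth planes, and polarization multiplexing serves two planes per time slot. A sketch (treating the depth count as even for simplicity):

```python
def effective_frame_rate(panel_hz, n_depths, polarization_mux=False):
    """Per-depth refresh rate of a multifocal display (sketch).
    Time multiplexing cycles through depth planes within one panel
    frame budget; polarization multiplexing shows two planes per
    time slot, halving the number of slots needed."""
    slots = n_depths // 2 if polarization_mux else n_depths
    return panel_hz / max(slots, 1)

# A 240 Hz panel driving a four-depth multifocal display:
pure_time = effective_frame_rate(240, 4)          # 60 Hz per depth
combined  = effective_frame_rate(240, 4, True)    # 120 Hz per depth
```

This is why polarization multiplexing, despite supporting only two planes on its own, is attractive as a multiplier on top of time multiplexing.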

The compact structure of the PBL module leads to a natural solution: integrating it with the above-mentioned pancake optics. A compact VR headset with dynamic depth modulation to solve the VAC is therefore practically feasible. Still, due to the inherent diffractive nature of PBLs, the PBL module faces the issue of chromatic dispersion of focal length. Compensating the different focal depths of the RGB colors may require additional digital corrections during image rendering.

Architectures of AR displays

Unlike VR displays with their relatively fixed optical configuration, a vast number of architectures exist for AR displays. Therefore, instead of following the narrative of tackling different challenges, a more appropriate way to review AR displays is to introduce each architecture separately and discuss its associated engineering challenges. An AR display usually consists of a light engine and an optical combiner. The light engine serves as the image source, while the combiner delivers the displayed images to the viewer’s eye and meanwhile transmits the environment light. Some performance parameters, like frame rate and power consumption, are mainly determined by the light engine. Parameters like FoV, eyebox, and MTF are primarily dependent on the combiner optics. Moreover, attributes like image brightness, overall efficiency, and form factor are influenced by both the light engine and the combiner. In this section, we will firstly discuss the light engine, where the latest advances in on-chip micro-LEDs are reviewed and compared with existing microdisplay systems. Then, we will introduce two main types of combiners: free-space combiners and waveguide combiners.

Light engine

The light engine determines several essential properties of the AR system like image brightness, power consumption, frame rate, and basic etendue. Several types of microdisplays have been used in AR, including micro-LED, micro-organic-light-emitting-diodes (micro-OLED), liquid-crystal-on-silicon (LCoS), digital micromirror device (DMD), and laser beam scanning (LBS) based on micro-electromechanical system (MEMS). We will firstly describe the working principles of these devices and then analyze their performance. For those who are more interested in final performance parameters than details, Table 1 provides a comprehensive summary.

Working principles

Micro-LED and micro-OLED are self-emissive display devices. They are usually more compact than LCoS and DMD because no illumination optics is required. The fundamentally different material systems of LED and OLED lead to different approaches for achieving full-color displays. Due to the “green gap” in LEDs, red LEDs are manufactured on a different semiconductor material from green and blue LEDs. Therefore, achieving full color in high-resolution-density microdisplays is quite a challenge for micro-LEDs. Among the several solutions under research, two main approaches stand out. The first is to combine three separate red, green and blue (RGB) micro-LED microdisplay panels 106 . Three single-color micro-LED microdisplays are manufactured separately through flip-chip transfer technology. Then, the projected images from the three microdisplay panels are combined by a trichroic prism (Fig. 7a ).

figure 7

a RGB micro-LED microdisplays combined by a trichroic prism. b QD-based micro-LED microdisplay. c Micro-OLED display with 4032 PPI. Working principles of d LCoS, e DMD, and f MEMS-LBS display modules. Reprinted from a ref. 106 with permission from IEEE, b ref. 108 with permission from Chinese Laser Press, c ref. 121 with permission from John Wiley and Sons, d ref. 124 with permission from Springer Nature, e ref. 126 with permission from Springer and f ref. 128 under the Creative Commons Attribution 4.0 License

Another solution is to assemble color-conversion materials like quantum dots (QDs) on top of blue or ultraviolet (UV) micro-LEDs 107 , 108 , 109 (Fig. 7b ). The quantum dot color filter (QDCF) on top of the micro-LED array is mainly fabricated by inkjet printing or photolithography 110 , 111 . However, the performance of color-conversion micro-LED displays is restricted by low color-conversion efficiency, blue light leakage, and color crosstalk. Extensive efforts have been made to improve the QD-micro-LED performance. To boost the QD conversion efficiency, structure designs like nanorings 112 and nanoholes 113 , 114 have been proposed, which utilize the Förster resonance energy transfer mechanism to transfer excess excitons in the LED active region to the QDs. To prevent blue light leakage, methods using color filters or reflectors like a distributed Bragg reflector (DBR) 115 or a CLC film 116 on top of the QDCF have been proposed. Compared to color filters that absorb blue light, the DBR and CLC film help recycle the leaked blue light to further excite the QDs. Other methods to achieve full-color micro-LED displays, such as vertically stacked RGB micro-LED arrays 61 , 117 , 118 and monolithic wavelength-tunable nanowire LEDs 119 , are also under investigation.

Micro-OLED displays can be generally categorized into RGB OLED and white OLED (WOLED). RGB OLED displays have separate sub-pixel structures and optical cavities, which resonate at the desired wavelengths of the RGB channels, respectively. To deposit organic materials onto the separated RGB sub-pixels, a fine metal mask (FMM) that defines the deposition area is required. However, high-resolution RGB OLED microdisplays still face challenges due to the shadow effect during deposition through the FMM. To break this limitation, a silicon nitride film with a reduced shadow effect has been proposed as a mask for high-resolution deposition above 2000 PPI (9.3 µm) 120 .

WOLED displays use color filters to generate color images. Without the process of depositing patterned organic materials, a high-resolution density up to 4000 PPI has been achieved 121 (Fig. 7c ). However, compared to RGB OLED, the color filters in WOLED absorb about 70% of the emitted light, which limits the maximum brightness of the microdisplay. To improve the efficiency and peak brightness of WOLED microdisplays, in 2019 Sony proposed to apply newly designed cathodes (InZnO) and microlens arrays on OLED microdisplays, which increased the peak brightness from 1600 nits to 5000 nits 120 . In addition, OLEDWORKs has proposed a multi-stacked OLED 122 with optimized microcavities whose emission spectra match the transmission bands of the color filters. The multi-stacked OLED shows a higher luminous efficiency (cd/A), but also requires a higher driving voltage. Recently, by using meta-mirrors as bottom reflective anodes, patterned microcavities with more than 10,000 PPI have been obtained 123 . The high-resolution meta-mirrors generate different reflection phases in the RGB sub-pixels to achieve desirable resonant wavelengths. The narrow emission spectra from the microcavity help to reduce the loss from color filters or even eliminate the need of color filters.

LCoS and DMD are light-modulating displays that generate images by controlling the reflection of each pixel. For LCoS, the light modulation is achieved by manipulating the polarization state of the output light through independently controlling the liquid crystal reorientation in each pixel 124 , 125 (Fig. 7d ). Both phase-only and amplitude modulators have been employed. DMD is an amplitude modulation device; the modulation is achieved by controlling the tilt angle of bi-stable micromirrors 126 (Fig. 7e ). To generate an image, both LCoS and DMD rely on illumination systems with LEDs or lasers as light sources. For LCoS, color images can be generated either with RGB color filters on the LCoS (using white LEDs) or by color-sequential addressing (using RGB LEDs or lasers). However, LCoS requires a linearly polarized light source. For an unpolarized LED light source, a polarization recycling system 127 is usually implemented to improve the optical efficiency. For a single-panel DMD, the color image is mainly obtained through color-sequential addressing. In addition, DMD does not require polarized light, so it generally exhibits a higher efficiency than LCoS when an unpolarized light source is employed.

MEMS-based LBS 128 , 129 utilizes micromirrors to directly scan RGB laser beams to form two-dimensional (2D) images (Fig. 7f ). Different gray levels are achieved by pulse width modulation (PWM) of the employed laser diodes. In practice, 2D scanning can be achieved either through a 2D scanning mirror or two 1D scanning mirrors with an additional focusing lens after the first mirror. The small size of MEMS mirror offers a very attractive form factor. At the same time, the output image has a large depth-of-focus (DoF), which is ideal for projection displays. One shortcoming, though, is that the small system etendue often hinders its applications in some traditional display systems.
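Because an LBS engine writes pixels serially, its required laser modulation rate scales with the full pixel count per frame. A rough sketch (ignoring blanking and overscan; the 1920×1080 @ 60 Hz raster is an illustrative assumption, not a quoted system):

```python
def lbs_pixel_rate_mhz(h_res, v_res, fps):
    """Rough laser pixel clock for a MEMS beam scanner: every pixel of
    every frame is written serially, so the clock is the product of
    resolution and frame rate (blanking/overscan ignored)."""
    return h_res * v_res * fps / 1e6

# A hypothetical 1920x1080 raster at 60 Hz already needs a ~124 MHz
# pixel clock; PWM gray levels multiply the required laser modulation
# bandwidth on top of that.
rate = lbs_pixel_rate_mhz(1920, 1080, 60)
```

This serial-writing cost, together with the MEMS resonance-frequency limits discussed later, is why pushing LBS beyond ~1K resolution at 60 Hz is challenging.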

Comparison of light engine performance

There are several important parameters for a light engine, including image resolution, brightness, frame rate, contrast ratio, and form factor. The resolution requirement (>2K) is similar for all types of light engines, and the improvement of resolution is usually accomplished through the manufacturing process. Thus, here we shall focus on the other four parameters: brightness, frame rate, form factor, and contrast ratio.

Image brightness usually refers to the measured luminance of a light-emitting object. This measurement, however, may not be meaningful for a light engine, as the light from the engine only forms an intermediate image that is not directly viewed by the user. On the other hand, focusing solely on the brightness of a light engine can be misleading for a wearable display system like AR. Nowadays, data projectors with thousands of lumens are available, but their power consumption is too high for a battery-powered wearable AR display. Therefore, a more appropriate way to evaluate a light engine’s brightness is its luminous efficacy (lm/W), measured by dividing the final output luminous flux (lm) by the input electric power (W). For a self-emissive device like a micro-LED or micro-OLED, the luminous efficacy is directly determined by the device itself. However, for LCoS and DMD, the overall luminous efficacy should take into account the light source luminous efficacy, the efficiency of the illumination optics, and the efficiency of the employed spatial light modulator (SLM). For a MEMS LBS engine, the efficiency of the MEMS mirror can be considered unity, so the luminous efficacy basically equals that of the employed laser sources.

As mentioned earlier, each light engine has a different scheme for generating color images. Therefore, we separately list the luminous efficacy of each scheme for a more inclusive comparison. For micro-LEDs, the situation is more complicated because the external quantum efficiency (EQE) depends on the chip size. Based on previous studies 130 , 131 , 132 , 133 , we separately calculate the luminous efficacy for RGB micro-LEDs with chip size ≈ 20 µm. For the scheme of directly combining RGB micro-LEDs, the luminous efficacy is around 5 lm/W. For QD conversion with blue micro-LEDs, the luminous efficacy is around 10 lm/W under the assumption of 100% color conversion efficiency, which has been demonstrated using structure engineering 114 . For micro-OLEDs, the calculated luminous efficacy is about 4–8 lm/W 120 , 122 . However, the lifetime and EQE of blue OLED materials depend on the driving current. Continuously displaying an image with brightness higher than 10,000 nits may dramatically shorten the device lifetime. The reason we compare the light engines at 10,000 nits is that the displayed image should reach 1000 nits to keep the ambient contrast ratio (ACR) above 3:1 with a typical AR combiner, whose optical efficiency is lower than 10%.

For an LCoS engine using a white LED as light source, the typical optical efficiency of the whole engine is around 10% 127 , 134 . Then the engine luminous efficacy is estimated to be 12 lm/W with a 120 lm/W white LED source. For a color sequential LCoS using RGB LEDs, the absorption loss from color filters is eliminated, but the luminous efficacy of RGB LED source is also decreased to about 30 lm/W due to lower efficiency of red and green LEDs and higher driving current 135 . Therefore, the final luminous efficacy of the color sequential LCoS engine is also around 10 lm/W. If RGB linearly polarized lasers are employed instead of LEDs, then the LCoS engine efficiency can be quite high due to the high degree of collimation. The luminous efficacy of RGB laser source is around 40 lm/W 136 . Therefore, the laser-based LCoS engine is estimated to have a luminous efficacy of 32 lm/W, assuming the engine optical efficiency is 80%. For a DMD engine with RGB LEDs as light source, the optical efficiency is around 50% 137 , 138 , which leads to a luminous efficacy of 15 lm/W. By switching to laser light sources, the situation is similar to LCoS, with the luminous efficacy of about 32 lm/W. Finally, for MEMS-based LBS engine, there is basically no loss from the optics so that the final luminous efficacy is 40 lm/W. Detailed calculations of luminous efficacy can be found in Supplementary Information .
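The engine-level estimates above all follow one pattern: source efficacy multiplied by the downstream optical efficiency. A minimal sketch reproducing the numbers quoted in the text:

```python
def engine_efficacy(source_lm_per_w, optics_efficiency):
    """Engine luminous efficacy (lm/W) = light-source luminous efficacy
    x optical efficiency of everything after the source (sketch
    reproducing the approximate estimates in the text)."""
    return source_lm_per_w * optics_efficiency

white_led_lcos = engine_efficacy(120, 0.10)  # ~12 lm/W
laser_lcos     = engine_efficacy(40, 0.80)   # ~32 lm/W
led_dmd        = engine_efficacy(30, 0.50)   # ~15 lm/W
mems_lbs       = engine_efficacy(40, 1.00)   # ~40 lm/W (near-lossless optics)
```

The comparison makes the LBS advantage explicit: with essentially unity optical efficiency, the engine inherits the full ~40 lm/W of the laser source.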

Another aspect of a light engine is the frame rate, which determines the volume of information it can deliver per unit time. A high volume of information is vital for constructing a 3D light field to solve the VAC issue. For micro-LEDs, the device response time is around several nanoseconds, which allows for visible light communication with bandwidth up to 1.5 Gbit/s 139 . For an OLED microdisplay, a fast OLED with ~200 MHz bandwidth has been demonstrated 140 . Therefore, for both micro-LED and OLED, the frame rate is limited by the driving circuits. Another consideration regarding the driving circuit is the tradeoff between resolution and frame rate, as a higher-resolution panel means more scanning lines in each frame. So far, an OLED display with a 480 Hz frame rate has been demonstrated 141 . For LCoS, the frame rate is mainly limited by the LC response time. Depending on the LC material used, the response time is around 1 ms for nematic LC or 200 µs for ferroelectric LC (FLC) 125 . Nematic LC allows analog driving, which accommodates gray levels, typically with 8-bit depth. FLC is bistable, so PWM is used to generate gray levels. DMD is also a binary device. Its frame rate can reach 30 kHz, which is mainly constrained by the response time of the micromirrors. For MEMS-based LBS, the frame rate is limited by the scanning frequency of the MEMS mirrors. A frame rate of 60 Hz with around 1K resolution already requires a resonance frequency of around 50 kHz, with a Q-factor up to 145,000 128 . A higher frame rate or resolution requires a higher Q-factor and larger laser modulation bandwidth, which may be challenging.

Form factor is another crucial aspect of light engines for near-eye displays. Among self-emissive displays, both micro-OLEDs and QD-based micro-LEDs can achieve full color with a single panel, so they are quite compact. A micro-LED display with separate RGB panels naturally has a larger form factor. In applications requiring a direct-view full-color panel, the extra combining optics may also increase the volume. It should be pointed out, however, that the combining optics may not be necessary for some applications like waveguide displays, because the EPE process renders the system insensitive to the spatial positions of the input RGB images. Therefore, the form factor of using three RGB micro-LED panels is medium. For LCoS and DMD with RGB LEDs as the light source, the form factor is larger due to the illumination optics. Still, if a lower luminous efficacy is acceptable, a smaller form factor can be achieved by using simpler optics 142 . If RGB lasers are used, the collimation optics can be eliminated, which greatly reduces the form factor 143 . For MEMS-LBS, the form factor can be extremely compact due to the tiny size of the MEMS mirror and laser module.

Finally, contrast ratio (CR) also plays an important role in the observed image quality 8 . Micro-LEDs and micro-OLEDs are self-emissive, so their CR can exceed 10⁶:1. For a laser beam scanner, the CR can also reach 10⁶:1 because the laser can be turned off completely in the dark state. On the other hand, LCoS and DMD are reflective displays, and their CR is around 2000:1 to 5000:1 144 , 145 . It is worth pointing out that the CR of a display engine plays a significant role only in dark ambient conditions. As the ambient brightness increases, the ACR is mainly governed by the display’s peak brightness, as previously discussed.
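The claim that engine CR matters only in the dark can be checked with a simple see-through ACR model, where both the bright and dark states of the image sit on top of the transmitted ambient light (the 80% combiner transmittance and luminance values are illustrative assumptions):

```python
def acr(ambient_nits, image_nits, cr, transmittance=0.8):
    """Ambient contrast ratio of a see-through display (sketch):
    ACR = (L_ambient*T + L_on) / (L_ambient*T + L_off),
    with the dark-state leakage L_off = L_on / CR."""
    see_through = ambient_nits * transmittance
    return (see_through + image_nits) / (see_through + image_nits / cr)

# Dark room: the engine's native CR dominates (~1000:1 here).
dark = acr(ambient_nits=1, image_nits=1000, cr=5000)
# Bright outdoors: ambient washes out the contrast regardless of CR.
street = acr(ambient_nits=5000, image_nits=1000, cr=5000)
```

In the outdoor case the result barely changes whether the engine CR is 5000:1 or 10⁶:1, which is why peak brightness, not CR, governs outdoor legibility.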

The performance parameters of the different light engines are summarized in Table 1 . Micro-LEDs and micro-OLEDs have similar levels of luminous efficacy, but micro-OLEDs still face burn-in and lifetime issues when driven at high current, which to some extent hinders their use as a high-brightness image source. Micro-LEDs are still under active development, and improvements in luminous efficacy can be expected as the fabrication process matures. Both devices have nanosecond response times and can potentially achieve a high frame rate with a well-designed integrated circuit. The frame rate of the driving circuit ultimately determines the motion picture response time 146 . Their self-emissive feature also leads to a small form factor and high contrast ratio. LCoS and DMD engines have similar luminous efficacy, form factor, and contrast ratio. In terms of light modulation, DMD can provide a higher 1-bit frame rate, while LCoS can offer both phase and amplitude modulation. MEMS-based LBS exhibits the highest luminous efficacy so far. It also offers an excellent form factor and contrast ratio, but the presently demonstrated 60-Hz frame rate (limited by the MEMS mirrors) could cause image flickering.

Free-space combiners

The term ‘free-space’ generally refers to light propagating freely in space, as opposed to being trapped in a waveguide by TIR. The combiner can be a partial mirror, as commonly used in AR systems based on traditional geometric optics. Alternatively, it can be a reflective HOE. The strong chromatic dispersion of an HOE necessitates the use of a laser source, which usually leads to a Maxwellian-type system.

Traditional geometric designs

Several systems based on geometric optics are illustrated in Fig. 8 . The simplest design uses a single freeform half-mirror 6 , 147 to directly collimate the displayed images to the viewer’s eye (Fig. 8a ). This design can achieve a large FoV (up to 90°) 147 , but the limited design freedom of a single freeform surface leads to image distortions, also called pupil swim 6 . The placement of the half-mirror also results in a relatively bulky form factor. Another design, using so-called birdbath optics 6 , 148 , is shown in Fig. 8b . Compared to the single-combiner design, the birdbath design has extra optics on the display side, which provide space for aberration correction. The integration of a beam splitter provides a folded optical path, which reduces the form factor to some extent. Another way to fold the optical path is to use a TIR prism. Cheng et al. 149 designed a freeform TIR-prism combiner (Fig. 8c ) offering a diagonal FoV of 54° and an exit pupil diameter of 8 mm. All the surfaces are freeform, which offers excellent image quality. To cancel the optical power for the transmitted environmental light, a compensator is added to the TIR prism. The whole system has a well-balanced performance between FoV, eyebox, and form factor. To free up the space in front of the viewer’s eye, relay optics can be used to form an intermediate image near the combiner 150 , 151 , as illustrated in Fig. 8d . Although this design offers more optical surfaces for aberration correction, the extra lenses also add to the system weight and form factor.

figure 8

a Single freeform surface as the combiner. b Birdbath optics with a beam splitter and a half mirror. c Freeform TIR prism with a compensator. d Relay optics with a half mirror. Adapted from c ref. 149 with permission from OSA Publishing and d ref. 151 with permission from OSA Publishing

Regarding the approaches to solve the VAC issue, the most straightforward way is to integrate a tunable lens into the optical path, like a liquid lens 152 or Alvarez lens 99 , to form a varifocal system. Alternatively, integral imaging 153 , 154 can also be used, by replacing the original display panel with the central depth plane of an integral imaging module. The integral imaging can also be combined with varifocal approach to overcome the tradeoff between resolution and depth of field (DoF) 155 , 156 , 157 . However, the inherent tradeoff between resolution and view number still exists in this case.

Overall, AR displays based on traditional geometric optics have a relatively simple design with a decent FoV (~60°) and eyebox (8 mm) 158 . They also exhibit reasonable efficiency. To quantify the efficiency of an AR combiner, an appropriate metric is the output luminance (unit: nit) divided by the input luminous flux (unit: lm), which we denote as combiner efficiency. For a fixed input luminous flux, the output luminance, or image brightness, is related to the FoV and exit pupil of the combiner system. If we assume no light waste in the combiner system, then the maximum combiner efficiency for a typical diagonal FoV of 60° and exit pupil (10 mm square) is around 17,000 nit/lm (Eq. S2 ). To estimate the combiner efficiency of geometric combiners, we assume 50% half-mirror transmittance and 50% efficiency for the other optics. The final combiner efficiency is then about 4200 nit/lm, which is high in comparison with waveguide combiners. Nonetheless, further shrinking the system size or improving the system performance ultimately encounters the etendue conservation issue. In addition, it is hard for AR systems with traditional geometric optics to achieve a configuration resembling normal flat glasses, because the half-mirror has to be tilted to some extent.
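As a rough sanity check on these numbers, the ideal combiner efficiency can be estimated by spreading all input flux over the FoV solid angle and the exit pupil area. This is a back-of-envelope model, not the exact Eq. S2, so it lands near but not exactly on the quoted 17,000 nit/lm:

```python
import math

def combiner_efficiency(fov_h_deg, fov_v_deg, pupil_area_m2):
    """Upper-bound output luminance per input flux (nit/lm), assuming all
    flux is spread uniformly over the FoV solid angle and exit pupil area.
    Rectangular-field solid angle: 4 * arcsin(sin(a/2) * sin(b/2))."""
    a, b = math.radians(fov_h_deg), math.radians(fov_v_deg)
    omega = 4 * math.asin(math.sin(a / 2) * math.sin(b / 2))
    return 1.0 / (omega * pupil_area_m2)

side = 60 / math.sqrt(2)                            # square field, 60-deg diagonal
ideal = combiner_efficiency(side, side, 0.01**2)    # ~1.9e4 nit/lm
geometric = ideal * 0.5 * 0.5    # 50% half-mirror x 50% other optics
```

With the two 50% loss factors the estimate drops to a few thousand nit/lm, consistent with the ~4200 nit/lm quoted above.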

Maxwellian-type systems

The Maxwellian view, proposed by James Clerk Maxwell (1860), refers to imaging a point light source in the eye pupil 159 . If the light beam is modulated in the imaging process, a corresponding image can be formed on the retina (Fig. 9a ). Because the point source is much smaller than the eye pupil, the image is always in focus on the retina irrespective of the eye lens’ focus. For AR display applications, the point source is usually a laser with narrow angular and spectral bandwidths. LED light sources can also be used to build a Maxwellian system by adding an angular filtering module 160 . Regarding the combiner, although in theory a half-mirror could also be used, HOEs are generally preferred because they offer an off-axis configuration that places the combiner in a position similar to eyeglasses. In addition, HOEs have a lower reflection of environment light, which provides a more natural appearance of the user behind the display.

figure 9

a Schematic of the working principle of Maxwellian displays. Maxwellian displays based on b SLM and laser diode light source and c MEMS-LBS with a steering mirror as additional modulation method. Generation of depth cues by d computational digital holography and e scanning of steering mirror to produce multiple views. Adapted from b, d ref. 143 and c, e ref. 167 under the Creative Commons Attribution 4.0 License

To modulate the light, an SLM like LCoS or DMD can be placed in the light path, as shown in Fig. 9b . Alternatively, an LBS system can also be used (Fig. 9c ), where the intensity modulation occurs in the laser diode itself. Besides operation in a normal Maxwellian view, both implementations offer additional degrees of freedom for light modulation.

For an SLM-based system, there are several options to arrange the SLM pixels 143 , 161 . Maimone et al. 143 demonstrated a Maxwellian AR display with two modes, offering either a large-DoF Maxwellian view or a holographic view (Fig. 9d ), the latter often referred to as computer-generated holography (CGH) 162 . To show an always-in-focus image with a large DoF, the image can be directly displayed on an amplitude SLM, or via amplitude encoding on a phase-only SLM 163 . Alternatively, if a 3D scene with correct depth cues is to be presented, optimization algorithms for CGH can be used to generate a hologram for the SLM. The generated holographic image exhibits the natural focus-and-blur effect of a real 3D object (Fig. 9d ). To better understand this feature, we need to again exploit the concept of etendue. The laser light source can be considered to have a very small etendue due to its excellent collimation. Therefore, the system etendue is provided by the SLM. The micron-sized pixel pitch of the SLM sets a maximum diffraction angle, which, multiplied by the SLM size, equals the system etendue. By varying the display content on the SLM, the final exit pupil size can be changed accordingly. In the case of a large-DoF Maxwellian view, the exit pupil size is small, accompanied by a large FoV. For the holographic display mode, the reduced DoF requires a larger exit pupil, with dimensions close to the eye pupil, but the FoV is reduced accordingly due to etendue conservation. Another common concern with CGH is the computation time. Achieving a real-time CGH rendering flow with excellent image quality is quite a challenge. Fortunately, with recent advances in algorithms 164 and the introduction of convolutional neural networks (CNNs) 165 , 166 , this issue is being solved at an encouraging pace. Lately, Liang et al. 166 demonstrated a real-time CGH synthesis pipeline with high image quality. 
The pipeline comprises an efficient CNN model to generate a complex hologram from a 3D scene and an improved encoding algorithm to convert the complex hologram to a phase-only one. An impressive frame rate of 60 Hz has been achieved on a desktop computing unit.
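The etendue tradeoff described above can be made concrete with a 1D estimate. The wavelength, pixel pitch, and panel size below are assumed values for illustration: the SLM’s pixel pitch sets the maximum diffraction half-angle via sin θ = λ/(2p), and the product of panel size and diffraction cone is conserved at the exit pupil.

```python
import math

wavelength = 520e-9   # green laser (assumed)
pitch      = 4e-6     # SLM pixel pitch (assumed)
slm_size   = 0.015    # 15 mm square SLM panel (assumed)

# Maximum half diffraction angle of the SLM: sin(theta) = lambda / (2p)
theta = math.asin(wavelength / (2 * pitch))

# 1D etendue invariant: (panel size) x (2 sin theta) = (pupil) x (2 sin FoV/2)
invariant = slm_size * 2 * math.sin(theta)

def exit_pupil_mm(fov_deg):
    """Exit pupil width that etendue conservation allows for a given FoV."""
    return 1e3 * invariant / (2 * math.sin(math.radians(fov_deg) / 2))

wide   = exit_pupil_mm(80)   # large FoV -> ~1.5 mm pupil (Maxwellian mode)
narrow = exit_pupil_mm(20)   # small FoV -> ~5.6 mm pupil (holographic mode)
```

The same invariant thus forces the Maxwellian mode (large FoV, tiny pupil) and the holographic mode (eye-sized pupil, reduced FoV) described in the text.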

For an LBS-based system, the additional modulation can be achieved by integrating a steering module, as demonstrated by Jang et al. 167 . The steering mirror can shift the focal point (viewpoint) within the eye pupil, thereby effectively expanding the system etendue. When the steering process is fast and the image content is updated simultaneously, correct 3D cues can be generated, as shown in Fig. 9e . However, there exists a tradeoff between the number of viewpoints and the final image frame rate, because the total frames are equally divided among the viewpoints. Boosting the frame rate of MEMS-LBS systems by the number of views (e.g., 3 by 3) may be challenging.
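The tradeoff is simple arithmetic (the numbers below are illustrative): the scanner’s frame budget is divided equally among the viewpoints.

```python
def per_view_rate_hz(scanner_hz, n_views):
    """Frame rate delivered to each viewpoint when the scanner's total
    frames are divided equally among the views."""
    return scanner_hz / n_views

rate = per_view_rate_hz(60, 3 * 3)   # ~6.7 Hz per view: visible flicker
# Conversely, flicker-free 60 Hz at every view would demand a 540 Hz scanner.
required_hz = 60 * 3 * 3
```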

Maxwellian-type systems offer several advantages. The system efficiency is usually very high because nearly all the light is delivered into the viewer’s eye. The system FoV is determined by the f /# of the combiner, and a large FoV (~80° horizontal) can be achieved 143 . The VAC issue can be mitigated with an infinite-DoF image that removes the accommodation cue, or completely solved by generating a true-3D scene as discussed above. Despite these advantages, one major weakness of Maxwellian-type systems is the tiny exit pupil, or eyebox. A small deviation of the eye pupil location from the viewpoint results in the complete disappearance of the image. Therefore, expanding the eyebox is considered one of the most important challenges for Maxwellian-type systems.

Pupil duplication and steering

Methods to expand the eyebox can be generally categorized into pupil duplication 168 , 169 , 170 , 171 , 172 and pupil steering 9 , 13 , 167 , 173 . Pupil duplication simply generates multiple viewpoints to cover a large area. In contrast, pupil steering dynamically shifts the viewpoint position depending on the pupil location. Before reviewing detailed implementations of these two methods, it is worth discussing some of their general features. The multiple viewpoints in pupil duplication usually means that the total light intensity is equally divided among them. In each time frame, however, it is preferable that only one viewpoint enters the user’s eye pupil, to avoid ghost images. This requirement therefore reduces the total light efficiency, while also constraining the viewpoint separation to be larger than the pupil diameter. At the same time, the separation should not be too large, to avoid gaps between viewpoints. Considering that the human pupil diameter changes in response to environment illuminance, the design of the viewpoint separation needs special attention. Pupil steering, on the other hand, produces only one viewpoint at each time frame. It is therefore more light-efficient and free from ghost images. But determining the viewpoint position requires information about the eye pupil location, which demands a real-time eye-tracking module 9 . Another observation is that pupil steering can accommodate multiple viewpoints by nature. Therefore, a pupil steering system can often be easily converted to a pupil duplication system by simultaneously generating all available viewpoints.
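A toy 1D model makes the separation dilemma explicit: for a fixed viewpoint grid, a pupil smaller than the separation can fall between viewpoints (a gap), while a pupil larger than the separation can capture two of them at once (a ghost). The 2–8 mm pupil range is a typical physiological span, used here purely for illustration.

```python
def duplication_status(separation_mm, pupil_mm):
    """Worst-case 1D check of a fixed viewpoint grid against pupil size."""
    if separation_mm > pupil_mm:
        return "gap"     # eye can sit between viewpoints with none inside
    if separation_mm < pupil_mm:
        return "ghost"   # two viewpoints can enter the pupil at once
    return "ok"

# No single separation works across the pupil's 2-8 mm dynamic range:
bright = duplication_status(4.0, 2.0)   # constricted pupil -> "gap"
dark   = duplication_status(4.0, 8.0)   # dilated pupil -> "ghost"
```

This is why the text notes that viewpoint-separation design needs special attention as ambient illuminance (and hence pupil size) changes.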

To generate multiple viewpoints, one can focus on modulating either the incident light or the combiner. Recall that the viewpoint is the image of the light source. Duplicating or shifting the light source can therefore achieve pupil duplication or steering accordingly, as illustrated in Fig. 10a . Several schemes of light modulation are depicted in Fig. 10b–e . An array of light sources can be generated with multiple laser diodes (Fig. 10b ); turning on all of the sources, or one at a time, achieves pupil duplication or steering. A light source array can also be produced by projecting light on an array-type PPHOE 168 (Fig. 10c ). Apart from direct adjustment of the light sources, modulating the light along its path can also effectively steer/duplicate them. Using a mechanical steering mirror, the beam can be deflected 167 (Fig. 10d ), which is equivalent to shifting the light source position. Other devices like a grating or beam splitter can also serve as ray deflectors/splitters 170 , 171 (Fig. 10e ).

figure 10

a Schematic of duplicating (or shift) viewpoint by modulation of incident light. Light modulation by b multiple laser diodes, c HOE lens array, d steering mirror and e grating or beam splitters. f Pupil duplication with multiplexed PPHOE. g Pupil steering with LCHOE. Reproduced from c ref. 168 under the Creative Commons Attribution 4.0 License, e ref. 169 with permission from OSA Publishing, f ref. 171 with permission from OSA Publishing and g ref. 173 with permission from OSA Publishing

Nonetheless, one problem of the light-source duplication/shifting methods for pupil duplication/steering is that the aberrations in peripheral viewpoints are often serious 168 , 173 . The HOE combiner is usually recorded at one incident angle. For other incident angles with large deviations, considerable aberrations occur, especially in the off-axis configuration. To solve this problem, the modulation can instead be focused on the combiner. While mechanical shifting of the combiner 9 can achieve continuous pupil steering, its integration into an AR display with a small form factor remains a challenge. Alternatively, the versatile functions of HOEs offer possible solutions for combiner modulation. Kim and Park 169 demonstrated a pupil duplication system with a multiplexed PPHOE (Fig. 10f ). Wavefronts of several viewpoints can be recorded into one PPHOE sample. Three viewpoints with a separation of 3 mm were achieved. However, a slight degree of ghost image and gap can be observed in the viewpoint transition. For a PPHOE to achieve pupil steering, the multiplexed PPHOE needs to record different focal points at different incident angles. If each hologram has no angular crosstalk, then with an additional device to change the light incident angle, the viewpoint can be steered. Alternatively, Xiong et al. 173 demonstrated a pupil steering system with LCHOEs in a simpler configuration (Fig. 10g ). The polarization-sensitive nature of the LCHOE enables control, via a polarization converter (PC), over which LCHOE functions. When the PC is off, the incident RCP light is focused by the right-handed LCHOE. When the PC is turned on, the RCP light is first converted to LCP light and passes through the right-handed LCHOE; it is then focused by the left-handed LCHOE into another viewpoint. Adding more viewpoints requires stacking more pairs of PCs and LCHOEs, which can be done in a compact manner with thin glass substrates. 
In addition, to realize pupil duplication only requires the stacking of multiple low-efficiency LCHOEs. For both PPHOEs and LCHOEs, because the hologram for each viewpoint is recorded independently, the aberrations can be eliminated.

Regarding the system performance, in theory the FoV is not limited and can reach a large value, such as 80° in the horizontal direction 143 . The definition of eyebox differs from that of traditional imaging systems. For a single viewpoint, it has the same size as the eye pupil diameter, but due to the viewpoint steering/duplication capability, the total system eyebox can be expanded accordingly. The combiner efficiency for pupil steering systems can reach 47,000 nit/lm for a FoV of 80° by 80° and a pupil diameter of 4 mm (Eq. S2 ). At such a high brightness level, eye safety could be a concern 174 . For a pupil duplication system, the combiner efficiency is divided by the number of viewpoints. With a 4-by-4 viewpoint array, it can still reach 3000 nit/lm. Despite the potential gain of pupil duplication/steering, when the rotation of the eyeball is considered, the situation becomes much more complicated 175 . A perfect pupil steering system requires 5D steering, which poses a challenge for practical implementation.
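These numbers can be reproduced with the same flux-over-solid-angle estimate used for geometric combiners, now with a circular 4-mm exit pupil. This is a sketch of the logic behind Eq. S2, not the equation itself:

```python
import math

def steering_efficiency(fov_h_deg, fov_v_deg, pupil_diameter_m):
    """Luminance per unit input flux (nit/lm), assuming lossless delivery
    of all flux into the FoV solid angle through one circular viewpoint.
    Rectangular-field solid angle: 4 * arcsin(sin(a/2) * sin(b/2))."""
    a, b = math.radians(fov_h_deg), math.radians(fov_v_deg)
    omega = 4 * math.asin(math.sin(a / 2) * math.sin(b / 2))
    area = math.pi * (pupil_diameter_m / 2) ** 2
    return 1.0 / (omega * area)

steering = steering_efficiency(80, 80, 4e-3)   # ~47,000 nit/lm
duplication_4x4 = steering / 16                # ~3,000 nit/lm per viewpoint
```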

Pin-light systems

Recently, another type of display closely related to the Maxwellian view, called the pin-light display 148 , 176 , has been proposed. Its general working principle is illustrated in Fig. 11a . Each pin-light source is a Maxwellian view with a large DoF. When the eye pupil is no longer placed near the source point as in the Maxwellian view, each image source can only form an elemental view with a small FoV on the retina. However, if the image source array is arranged in a proper form, the elemental views can be integrated to form a large FoV. Depending on the specific optical architecture, pin-light displays can take different forms of implementation. In the initial feasibility demonstration, Maimone et al. 176 used a side-lit waveguide plate as the point light source (Fig. 11b ). The light inside the waveguide plate is extracted by etched divots, forming a pin-light source array. A transmissive SLM (LCD) is placed behind the waveguide plate to modulate the light intensity and form the image. The display has an impressive FoV of 110° thanks to the large scattering angle range. However, the direct placement of the LCD in front of the eye raises issues of insufficient resolution density and diffraction of background light.

figure 11

a Schematic drawing of the working principle of pin-light display. b Pin-light display utilizing a pin-light source and a transmissive SLM. c An example of pin-mirror display with a birdbath optics. d SWD system with LBS image source and off-axis lens array. Reprinted from b ref. 176 under the Creative Commons Attribution 4.0 License and d ref. 180 with permission from OSA Publishing

To avoid these issues, architectures using pin-mirrors 177 , 178 , 179 have been proposed. In these systems, the final combiner is an array of tiny mirrors 178 , 179 or gratings 177 , in contrast to their counterparts using large-area combiners. An exemplary system with a birdbath design is depicted in Fig. 11c . In this case, the pin-mirrors replace the original beam splitter in the birdbath and can thus shrink the system volume, while at the same time providing large-DoF pin-light images. Nonetheless, such a system may still face the etendue conservation issue. Meanwhile, the pin-mirrors cannot be made too small, or diffraction will degrade the resolution density. Their influence on the see-through background should therefore also be considered in the system design.

To overcome etendue conservation and improve see-through quality, Xiong et al. 180 proposed another type of pin-light system exploiting the etendue expansion property of a waveguide, also referred to as a scanning waveguide display (SWD). As illustrated in Fig. 11d , the system uses an LBS as the image source. The collimated scanned laser rays are trapped in the waveguide and encounter an array of off-axis lenses. Upon each encounter, the lens out-couples the laser rays and forms a pin-light source. SWD has the merits of good see-through quality and large etendue. A large FoV of 100° was demonstrated with the help of an ultra-low f /# lens array based on an LCHOE. However, some issues like insufficient image resolution density and image non-uniformity remain to be overcome. Further improving the system may require optimization of the Gaussian beam profile and an additional EPE module 180 .

Overall, pin-light systems inherit the large DoF from the Maxwellian view. With an adequate number of pin-light sources, the FoV and eyebox can be expanded accordingly. Nonetheless, across the different forms of implementation, a common issue of pin-light systems is image uniformity. The overlapped region of elemental views has a higher light intensity than the non-overlapped region, which becomes even more complicated considering the dynamic change of pupil size. In theory, the displayed image can be pre-processed to compensate for the optical non-uniformity, but that would require knowledge of the precise pupil location (and possibly size) and therefore an accurate eye-tracking module 176 . Regarding system performance, pin-mirror systems modified from other free-space systems generally share a similar FoV and eyebox with the original systems. The combiner efficiency may be lower due to the small size of the pin-mirrors. SWD, on the other hand, shares the large FoV and DoF of the Maxwellian view, and the large eyebox of waveguide combiners. Its combiner efficiency may also be lower due to the EPE process.

Waveguide combiner

Besides free-space combiners, another common architecture in AR displays is the waveguide combiner. The term ‘waveguide’ indicates that the light is trapped in a substrate by the TIR process. One distinctive feature of a waveguide combiner is the EPE process, which effectively enlarges the system etendue. In the EPE process, a portion of the trapped light is coupled out of the waveguide at each TIR bounce, thereby enlarging the effective eyebox. According to the features of the couplers, we divide waveguide combiners into two types, diffractive and achromatic, as described in the following sections.
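The pupil-replication geometry behind EPE is straightforward: each TIR round trip advances the ray laterally by 2t·tanθ, which is the spacing between out-coupled copies of the pupil. The 0.5-mm substrate thickness and propagation angles below are assumed values for illustration:

```python
import math

def replica_spacing_mm(thickness_mm, prop_angle_deg):
    """Lateral distance between consecutive out-coupling events for a ray
    bouncing at prop_angle (measured from the surface normal) inside a
    plate of the given thickness: 2 * t * tan(theta)."""
    return 2 * thickness_mm * math.tan(math.radians(prop_angle_deg))

# A thin plate replicates the pupil densely across a ~10 mm eyebox ...
dense = replica_spacing_mm(0.5, 50)    # ~1.2 mm between replicas
# ... but near-grazing angles space the replicas much farther apart,
# one source of output non-uniformity at the edge of the FoV.
sparse = replica_spacing_mm(0.5, 80)   # ~5.7 mm between replicas
```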

Diffractive waveguides

As the name implies, diffractive-type waveguides use diffractive elements as couplers. The in-coupler is usually a diffraction grating, and the out-coupler in most cases is also a grating with the same period as the in-coupler, though it can also be an off-axis lens with a small curvature to generate an image with finite depth. Three major diffractive couplers have been developed: SRGs, photopolymer gratings (PPGs), and liquid crystal gratings (grating-type LCHOEs; also known as polarization volume gratings (PVGs)). Some general protocols for coupler design are that the in-coupler should have a relatively high efficiency and the out-coupler should have a uniform light output. A uniform light output usually requires a low-efficiency coupler with extra degrees of freedom for local modulation of the coupling efficiency. Both in-coupler and out-coupler should have an adequate angular bandwidth to accommodate a reasonable FoV. In addition, the out-coupler should be optimized to avoid undesired diffractions, including the outward diffraction of TIR light and the diffraction of environment light into the user’s eyes, which are referred to as light leakage and rainbow, respectively. Suppression of these unwanted diffractions should be considered in the waveguide optimization process, along with performance parameters like efficiency and uniformity.

The basic working principles of diffractive waveguide-based AR systems are illustrated in Fig. 12 . For the SRG-based waveguides 6 , 8 (Fig. 12a ), the in-coupler can be a transmissive-type or a reflective-type 181 , 182 . The grating geometry can be optimized for coupling efficiency with a large degree of freedom 183 . For the out-coupler, a reflective SRG with a large slant angle to suppress the transmission orders is preferred 184 . In addition, a uniform light output usually requires a gradient efficiency distribution in order to compensate for the decreased light intensity in the out-coupling process. This can be achieved by varying the local grating configurations like height and duty cycle 6 . For the PPG-based waveguides 185 (Fig. 12b ), the small angular bandwidth of a high-efficiency transmissive PPG prohibits its use as in-coupler. Therefore, both in-coupler and out-coupler are usually reflective types. The gradient efficiency can be achieved by space-variant exposure to control the local index modulation 186 or local Bragg slant angle variation through freeform exposure 19 . Due to the relatively small angular bandwidth of PPG, to achieve a decent FoV usually requires stacking two 187 or three 188 PPGs together for a single color. The PVG-based waveguides 189 (Fig. 12c ) also prefer reflective PVGs as in-couplers because the transmissive PVGs are much more difficult to fabricate due to the LC alignment issue. In addition, the angular bandwidth of transmissive PVGs in Bragg regime is also not large enough to support a decent FoV 29 . For the out-coupler, the angular bandwidth of a single reflective PVG can usually support a reasonable FoV. To obtain a uniform light output, a polarization management layer 190 consisting of a LC layer with spatially variant orientations can be utilized. It offers an additional degree of freedom to control the polarization state of the TIR light. 
The diffraction efficiency can therefore be locally controlled due to the strong polarization sensitivity of PVG.

figure 12

Schematics of waveguide combiners based on a SRGs, b PPGs and c PVGs. Reprinted from a ref. 85 with permission from OSA Publishing, b ref. 185 with permission from John Wiley and Sons and c ref. 189 with permission from OSA Publishing

The above discussion describes the basic working principle of 1D EPE. Nonetheless, for a 1D EPE to produce a large eyebox, the exit pupil in the unexpanded direction of the original image must be large, which poses design challenges for the light engine. Therefore, a 2D EPE is favored for practical applications. To extend EPE in two dimensions, two consecutive 1D EPEs can be used 191 , as depicted in Fig. 13a . The first 1D EPE occurs in the turning grating, where the light is duplicated in the y direction and then turned into the x direction. The light rays then encounter the out-coupler and are expanded in the x direction. To better understand the 2D EPE process, the k -vector diagram (Fig. 13b ) can be used. For light propagating in air with wavenumber k 0 , its possible k -values in the x and y directions ( k x and k y ) fall within the circle of radius k 0 . When the light is trapped in TIR, k x and k y lie outside the circle of radius k 0 and inside the circle of radius nk 0 , where n is the refractive index of the substrate. k x and k y stay unchanged during TIR and change only in each diffraction process. The central red box in Fig. 13b indicates the possible k values within the system FoV. At the in-coupler, the grating k -vector is added to the k values, shifting them into the TIR region. The turning grating then applies another k -vector and shifts the k values to near the x -axis. Finally, the k values are shifted by the out-coupler and return to the free propagation region in air. One observation is that the size of the red box is mostly limited by the width of the TIR band. To accommodate a larger FoV, the outer boundary of the TIR band needs to be expanded, which amounts to increasing the waveguide refractive index. Another important fact is that when k x and k y are near the outer boundary, the uniformity of the output light worsens, because the light propagation angle in the waveguide approaches 90°. 
The spatial distance between two consecutive TIRs then becomes so large that the out-coupled beams are spatially separated to an unacceptable degree. The range of practically usable k values is therefore further reduced.
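This FoV limit can be expressed compactly: fitting the FoV’s k-span, 2·sin(FoV/2), into the usable TIR band of width n·(sin θmax − sin θmin) gives sin(FoV/2) ≤ n(sin θmax − sin θmin)/2, which reduces to (n − 1)/2 for the ideal band between the critical angle and 90°. The 75° cap and 5° margin below are illustrative practical bounds, not values from this review:

```python
import math

def max_fov_deg(n, theta_max_deg=90.0, margin_deg=0.0):
    """In-air FoV (degrees) that fits in the TIR band of a waveguide with
    refractive index n. The band runs from just above the critical angle
    (plus an optional margin) up to theta_max."""
    t_min = math.degrees(math.asin(1.0 / n)) + margin_deg
    half_band = n * (math.sin(math.radians(theta_max_deg))
                     - math.sin(math.radians(t_min))) / 2
    return 2 * math.degrees(math.asin(min(1.0, half_band)))

ideal = max_fov_deg(2.0)           # ideal band -> 2*asin((n-1)/2) = 60 deg
practical = max_fov_deg(2.0, theta_max_deg=75.0, margin_deg=5.0)   # ~46 deg
```

With a practical angular band, a high-index (n ≈ 2) substrate supports roughly 45–50°, consistent with the FoV of current diffractive waveguide combiners discussed below.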

figure 13

a Schematic of 2D EPE based on two consecutive 1D EPEs. Gray/black arrows indicate light in air/TIR. Black dots denote TIRs. b k-diagram of the two-1D-EPE scheme. c Schematic of 2D EPE with a 2D hexagonal grating d k-diagram of the 2D-grating scheme

Aside from two consecutive 1D EPEs, 2D EPE can also be implemented directly with a 2D grating 192 . An example using a hexagonal grating is depicted in Fig. 13c . The hexagonal grating can provide k -vectors in six directions. In the k -diagram (Fig. 13d ), after in-coupling, the k values are distributed into six regions due to multiple diffractions. Out-coupling occurs simultaneously with pupil expansion. Besides a concise out-coupler configuration, the 2D EPE scheme offers more degrees of design freedom than two 1D EPEs because the local grating parameters can be adjusted in a 2D manner. The higher design freedom has the potential to reach better output light uniformity, but at the cost of a higher computational demand for optimization. Furthermore, the unslanted grating geometry usually leads to large light leakage and possibly low efficiency. Adding slant to the geometry helps alleviate the issue, but the associated fabrication may be more challenging.

Finally, we discuss the generation of full-color images. One important point to clarify is that although diffraction gratings are used here, the final image generally has no color dispersion, even with a broadband light source like an LED. This can be easily understood in the 1D EPE scheme: the in-coupler and out-coupler have opposite k -vectors, which cancel each other’s color dispersion. In the 2D EPE schemes, the k -vectors always form a closed loop from in-coupled light to out-coupled light, so the color dispersion likewise vanishes. The real issue with using a single waveguide for full-color images lies in the FoV and light uniformity. The spread of propagation angles across colors results in varied out-coupling conditions for each color. To be more specific, if the red and blue channels use the same in-coupler, the propagation angle for red light is larger than that for blue light. The red light in the peripheral FoV is therefore more susceptible to the large-angle non-uniformity issue mentioned above. To acquire a decent FoV and light uniformity, two or three layers of waveguides with different grating pitches are usually adopted.
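The angular breakup across colors can be checked with the grating equation at normal incidence, n·sin θ = λ/Λ (first order). The refractive index and grating pitch below are assumed values, chosen so that both colors are trapped by TIR:

```python
import math

def tir_angle_deg(wavelength_nm, pitch_nm, n):
    """In-glass propagation angle after in-coupling at normal incidence,
    from the first-order grating equation n * sin(theta) = lambda / pitch."""
    s = wavelength_nm / (pitch_nm * n)
    return math.degrees(math.asin(s)) if s <= 1 else float("nan")

n, pitch = 1.8, 380.0                        # illustrative index and pitch
critical = math.degrees(math.asin(1 / n))    # ~33.7 deg: TIR threshold
blue = tir_angle_deg(460, pitch, n)          # ~42 deg: safely trapped
red  = tir_angle_deg(640, pitch, n)          # ~69 deg: near-grazing
```

With one shared in-coupler the red channel propagates far steeper than the blue, pushing it toward the near-90° non-uniformity regime and motivating the multi-layer, multi-pitch waveguide stacks mentioned above.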

Regarding system performance, the eyebox is generally large enough (~10 mm) to accommodate different users’ IPDs and alignment shifts during operation. A parameter of significant concern for a waveguide combiner is its FoV. From the k -vector analysis, we can conclude that the theoretical upper limit is determined by the waveguide refractive index. But the light/color uniformity also influences the effective FoV, beyond which the degradation of image quality becomes unacceptable. Current diffractive waveguide combiners generally achieve a FoV of about 50°. To further increase the FoV, a straightforward method is to use a higher-refractive-index waveguide. Another is to tile the FoV through direct stacking of multiple waveguides or by using polarization-sensitive couplers 79 , 193 . As for optical efficiency, a typical value for a diffractive waveguide combiner is around 50–200 nit/lm 6 , 189 . In addition, waveguide combiners adopting grating out-couplers generate an image with a fixed depth at infinity, which leads to the VAC issue. To tackle VAC in waveguide architectures, the most practical way is to generate multiple depths and use a varifocal or multifocal driving scheme, similar to those mentioned for VR systems. But adding more depths usually means stacking multiple layers of waveguides 194 . Considering the additional waveguide layers for RGB colors, the final waveguide thickness would undoubtedly increase.

Other parameters specific to waveguides include light leakage, see-through ghost, and rainbow. Light leakage refers to out-coupled light that goes outwards to the environment, as depicted in Fig. 14a . Aside from decreased efficiency, the leakage also brings the drawbacks of an unnatural “bright-eye” appearance of the user and a privacy issue. Optimization of the grating structure, like the geometry of an SRG, may reduce the leakage. See-through ghost is formed by consecutive in-coupling and out-coupling events caused by the out-coupler grating, as sketched in Fig. 14b . Through this process, a real object with finite depth may produce a ghost image shifted in both FoV and depth. Generally, an out-coupler with higher efficiency suffers more see-through ghost. Rainbow is caused by the diffraction of environment light into the user’s eye, as sketched in Fig. 14c . Color dispersion occurs in this case because there is no cancellation of the k -vector. Using the k -diagram, we can obtain a deeper insight into the formation of rainbow, taking the EPE structure in Fig. 13a as an example. As depicted in Fig. 14d , after diffraction by the turning grating and the out-coupler grating, the k values are distributed in two circles shifted from the origin by the grating k -vectors. Some diffracted light can enter the see-through FoV and form rainbow. To reduce rainbow, a straightforward way is to use a higher-index substrate. With a higher refractive index, the outer boundary of the k -diagram is expanded, which can accommodate larger grating k -vectors. The enlarged k -vectors “push” these two circles outwards, decreasing their overlap with the see-through FoV. Alternatively, an optimized grating structure can also help reduce the rainbow effect by suppressing the unwanted diffraction.

figure 14

Sketches of formations of a light leakage, b see-through ghost and c rainbow. d Analysis of rainbow formation with k-diagram

Achromatic waveguide

Achromatic waveguide combiners use achromatic elements as couplers, which offers the advantage of realizing a full-color image with a single waveguide. A typical example of an achromatic element is a mirror. A waveguide with partial mirrors as the out-coupler is often referred to as a geometric waveguide 6 , 195 , as depicted in Fig. 15a . The in-coupler in this case is usually a prism, to avoid the color dispersion that diffractive elements would otherwise introduce. The mirrors couple out TIR light consecutively to produce a large eyebox, similar to a diffractive waveguide. Thanks to the excellent optical properties of mirrors, the geometric waveguide usually exhibits an image superior to its diffractive counterparts regarding MTF and color uniformity. Still, the spatially discontinuous configuration of the mirrors results in gaps in the eyebox, which may be alleviated by using a dual-layer structure 196 . Wang et al. 195 designed a geometric waveguide display with five partial mirrors (Fig. 15b ). It exhibits a remarkable FoV of 50° by 30° (Fig. 15c ) and an exit pupil of 4 mm with a 1D EPE. To achieve 2D EPE, architectures similar to Fig. 13a can be used by integrating a turning mirror array as the first 1D EPE module 197 . Unfortunately, the k -vector diagrams in Fig. 13b, d cannot be used here because the in-plane k values are no longer conserved in the in-coupling and out-coupling processes. But some general conclusions remain valid: a higher refractive index leads to a larger FoV, and gradient out-coupling efficiency improves light uniformity.
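
The gradient out-coupling efficiency mentioned above can be made concrete for a partial-mirror array: in an idealized lossless case, setting the reflectance of the i-th of N mirrors to 1/(N − i + 1) makes every mirror extract an equal share of the input power. A sketch of this well-known design rule (an idealization; real coatings have loss and angular dependence):

```python
def uniform_mirror_reflectances(n_mirrors):
    """Reflectance of each partial mirror so that all mirrors out-couple
    equal power in the lossless case: r_i = 1 / (N - i + 1), i = 1..N."""
    return [1.0 / (n_mirrors - i) for i in range(n_mirrors)]

def out_coupled_fractions(reflectances):
    """Fraction of the input power out-coupled by each successive mirror."""
    remaining, out = 1.0, []
    for r in reflectances:
        out.append(remaining * r)
        remaining *= (1.0 - r)
    return out

rs = uniform_mirror_reflectances(5)
print([round(r, 3) for r in rs])                         # [0.2, 0.25, 0.333, 0.5, 1.0]
print([round(p, 3) for p in out_coupled_fractions(rs)])  # [0.2, 0.2, 0.2, 0.2, 0.2]
```

The reflectance must grow along the propagation direction — the last mirror dumps everything that remains — which is exactly the gradient behavior referred to in the text.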

figure 15

a Schematic of the system configuration. b Geometric waveguide with five partial mirrors. c Image photos demonstrating system FoV. Adapted from b , c ref. 195 with permission from OSA Publishing

The fabrication process of a geometric waveguide involves coating mirrors on cut-apart pieces and integrating them back together, which may result in a high cost, especially for the 2D EPE architecture. Another way to implement an achromatic coupler is to use a multiplexed PPHOE 198 , 199 to mimic the behavior of a tilted mirror (Fig. 16a ). To understand the working principle, we can use the diagram in Fig. 16b . The law of reflection states that the angle of reflection equals the angle of incidence. Translated into k -vector language, this means the mirror can apply a k -vector of any length along its surface normal direction, while the k -vector length of the reflected light always equals that of the incident light. This puts the condition that the k -vector triangle is isosceles, and a simple geometric deduction then recovers the law of reflection. The behavior of a general grating, however, is very different. For simplicity, we only consider the main diffraction order. The grating can only apply a k -vector with a fixed k x due to the basic diffraction law. For light with a different incident angle, it needs to apply a different k z to produce diffracted light with a k -vector length equal to that of the incident light. For a grating with a broad angular bandwidth like an SRG, the range of k z is wide, forming a lengthy vertical line in Fig. 16b . For a PPG with a narrow angular bandwidth, the line is short and resembles a dot. If many of these tiny dots are distributed along the oblique line corresponding to a mirror, the final multiplexed PPGs can imitate the behavior of a tilted mirror. Such a PPHOE is sometimes referred to as a skew mirror 198 . In theory, to better imitate the mirror, a large number of multiplexed PPGs is preferred, with each PPG having a small index modulation δn . But this poses a greater challenge in device fabrication. Recently, Utsugi et al. 199 demonstrated an impressive skew-mirror waveguide based on 54 multiplexed PPGs (Fig. 16c, d ). The display exhibits an effective FoV of 35° by 36°. In the peripheral FoV, there still exists some non-uniformity (Fig. 16e ) due to the out-coupling gap, which is an inherent feature of flat-type out-couplers.
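
The k -vector argument above can be checked numerically: the coupler vector required to mimic specular reflection is K = k out − k in = −2(k in · n̂)n̂, which always points along the mirror normal n̂; only its length varies with incidence angle. That is exactly the distribution of k -space "dots" a multiplexed PPHOE must record. A sketch under assumed illustrative parameters (n = 1.5, 532 nm, 30° mirror tilt — not values from the cited device):

```python
import math

def required_grating_vector(theta_in_deg, mirror_tilt_deg, n=1.5, wavelength=0.532):
    """k-vector (kx, kz) a reflective coupler must supply so that light
    incident at theta_in (measured from the z-axis, inside the medium)
    is specularly reflected by a mirror tilted by mirror_tilt from the
    waveguide plane.  Returns K = k_out - k_in."""
    k = 2 * math.pi * n / wavelength
    t_in = math.radians(theta_in_deg)
    k_in = (k * math.sin(t_in), -k * math.cos(t_in))
    # reflect k_in about the mirror surface normal
    phi = math.radians(mirror_tilt_deg)
    nx, nz = math.sin(phi), math.cos(phi)  # unit mirror normal
    dot = k_in[0] * nx + k_in[1] * nz
    k_out = (k_in[0] - 2 * dot * nx, k_in[1] - 2 * dot * nz)
    return (k_out[0] - k_in[0], k_out[1] - k_in[1])

# All required grating vectors point along the mirror normal (30 deg
# from z here); only their lengths change with incidence angle.
for th in (40, 50, 55):
    Kx, Kz = required_grating_vector(th, 30)
    print(f"theta = {th} deg: K direction = {math.degrees(math.atan2(Kx, Kz)):.1f} deg from z")
```

Each PPG supplies one such fixed K, so stacking many of them along this common direction approximates the continuous behavior of the tilted mirror.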

figure 16

a System configuration. b Diagram demonstrating how multiplexed PPGs resemble the behavior of a mirror. Photos showing c the system and d image. e Picture demonstrating effective system FoV. Adapted from c – e ref. 199 with permission from ITE

Finally, it is worth mentioning that metasurfaces are also promising for delivering achromatic gratings 200 , 201 for waveguide couplers, owing to their versatile wavefront shaping capability. The mechanism of the achromatic gratings is similar to that of the achromatic lenses discussed previously. However, the development of achromatic metagratings is still in its infancy. Much effort is needed to improve the in-coupling optical efficiency, control the higher diffraction orders to eliminate ghost images, and enable large-size designs for EPE.

Generally, achromatic waveguide combiners exhibit a FoV and eyebox comparable to diffractive combiners, but with a higher efficiency. For a partial-mirror combiner, the combiner efficiency is around 650 nit/lm 197 (2D EPE). For a skew-mirror combiner, although the efficiency of the multiplexed PPHOE is relatively low (~1.5%) 199 , the final combiner efficiency of the 1D EPE system is still high (>3000 nit/lm) due to multiple out-couplings.
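
The seemingly contradictory numbers — ~1.5% per-interaction efficiency yet a high overall combiner efficiency — are reconciled by the multiple out-couplings: the guided light meets the out-coupler many times, so the extracted fraction grows as 1 − (1 − η)^N. A sketch (the interaction counts below are illustrative, not figures from the cited system):

```python
def total_out_coupled(eta_per_pass, n_interactions):
    """Total fraction of guided power out-coupled after n interactions
    with an out-coupler of per-pass efficiency eta: 1 - (1 - eta)^n."""
    return 1.0 - (1.0 - eta_per_pass) ** n_interactions

# ~1.5% per interaction, as reported for the multiplexed-PPHOE skew mirror
for n in (10, 50, 100):
    print(f"{n} interactions: {total_out_coupled(0.015, n):.1%} extracted")
```

With enough interactions the majority of the guided power is eventually extracted toward the eyebox, which is why the 1D EPE system can still exceed 3000 nit/lm despite the weak individual gratings.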

Table 2 summarizes the performance of different AR combiners. When combining the luminous efficacy in Table 1 and the combiner efficiency in Table 2 , we can make a comprehensive estimate of the total luminance efficiency (nit/W) for different types of systems. Generally, Maxwellian-type combiners with pupil steering have the highest luminance efficiency when partnered with laser-based light engines like laser-backlit LCoS/DMD or MEMS-LBS. Geometric optical combiners offer well-balanced image performance, but further shrinking the system size remains a challenge. Diffractive waveguides have a relatively low combiner efficiency, which can be remedied by an efficient light engine like MEMS-LBS. Further development of the coupler and EPE scheme would also improve the system efficiency and FoV. Achromatic waveguides have a decent combiner efficiency, and the single-layer design also enables a smaller form factor. With advances in the fabrication process, they may become strong contenders to the presently widely used diffractive waveguides.
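
The unit bookkeeping behind this combined estimate is a simple product: (lm/W from Table 1) × (nit/lm from Table 2) = nit/W. A sketch with hypothetical numbers chosen purely for illustration:

```python
def system_luminance_efficiency(engine_lm_per_w, combiner_nit_per_lm):
    """Total luminance delivered to the eyebox per watt of electrical
    power: (lm/W) * (nit/lm) = nit/W."""
    return engine_lm_per_w * combiner_nit_per_lm

# Hypothetical illustrative pairing (not measured values from the tables):
# a 5 lm/W light engine with a 650 nit/lm partial-mirror combiner.
print(system_luminance_efficiency(5, 650), "nit/W")  # 3250 nit/W
```

The same one-line product lets any light-engine/combiner pairing from the two tables be compared on a common nit/W scale.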

Conclusions and perspectives

VR and AR carry high expectations of revolutionizing the way we interact with the digital world. Accompanying these expectations are the engineering challenges of squeezing a high-performance display system into a tightly packed module for daily wear. Although etendue conservation constitutes a great obstacle on this path, remarkable progress with innovative optics and photonics continues to take place. Ultra-thin optical elements like PPHOEs and LCHOEs provide alternative solutions to traditional optics. Their unique features of multiplexing capability and polarization dependency further expand the possibilities of novel wavefront modulation. At the same time, nanoscale-engineered metasurfaces/SRGs provide large design freedom to achieve novel functions beyond conventional geometric optical devices. Newly emerged micro-LEDs open an opportunity for compact microdisplays with high peak brightness and good stability. Further advances in device engineering and manufacturing processes are expected to boost the performance of metasurfaces/SRGs and micro-LEDs for AR and VR applications.

Data availability

All data needed to evaluate the conclusions in the paper are present in the paper. Additional data related to this paper may be requested from the authors.

Cakmakci, O. & Rolland, J. Head-worn displays: a review. J. Disp. Technol. 2 , 199–216 (2006).


Zhan, T. et al. Augmented reality and virtual reality displays: perspectives and challenges. iScience 23 , 101397 (2020).

Rendon, A. A. et al. The effect of virtual reality gaming on dynamic balance in older adults. Age Ageing 41 , 549–552 (2012).


Choi, S., Jung, K. & Noh, S. D. Virtual reality applications in manufacturing industries: past research, present findings, and future directions. Concurrent Eng. 23 , 40–63 (2015).

Li, X. et al. A critical review of virtual and augmented reality (VR/AR) applications in construction safety. Autom. Constr. 86 , 150–162 (2018).

Kress, B. C. Optical Architectures for Augmented-, Virtual-, and Mixed-Reality Headsets (Bellingham: SPIE Press, 2020).

Cholewiak, S. A. et al. A perceptual eyebox for near-eye displays. Opt. Express 28 , 38008–38028 (2020).

Lee, Y. H., Zhan, T. & Wu, S. T. Prospects and challenges in augmented reality displays. Virtual Real. Intell. Hardw. 1 , 10–20 (2019).

Kim, J. et al. Foveated AR: dynamically-foveated augmented reality display. ACM Trans. Graph. 38 , 99 (2019).

Tan, G. J. et al. Foveated imaging for near-eye displays. Opt. Express 26 , 25076–25085 (2018).

Lee, S. et al. Foveated near-eye display for mixed reality using liquid crystal photonics. Sci. Rep. 10 , 16127 (2020).

Yoo, C. et al. Foveated display system based on a doublet geometric phase lens. Opt. Express 28 , 23690–23702 (2020).

Akşit, K. et al. Manufacturing application-driven foveated near-eye displays. IEEE Trans. Vis. Computer Graph. 25 , 1928–1939 (2019).

Zhu, R. D. et al. High-ambient-contrast augmented reality with a tunable transmittance liquid crystal film and a functional reflective polarizer. J. Soc. Inf. Disp. 24 , 229–233 (2016).

Lincoln, P. et al. Scene-adaptive high dynamic range display for low latency augmented reality. In Proc. 21st ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games . (ACM, San Francisco, CA, 2017).

Duerr, F. & Thienpont, H. Freeform imaging systems: fermat’s principle unlocks “first time right” design. Light.: Sci. Appl. 10 , 95 (2021).

Bauer, A., Schiesser, E. M. & Rolland, J. P. Starting geometry creation and design method for freeform optics. Nat. Commun. 9 , 1756 (2018).

Rolland, J. P. et al. Freeform optics for imaging. Optica 8 , 161–176 (2021).

Jang, C. et al. Design and fabrication of freeform holographic optical elements. ACM Trans. Graph. 39 , 184 (2020).

Gabor, D. A new microscopic principle. Nature 161 , 777–778 (1948).

Kostuk, R. K. Holography: Principles and Applications (Boca Raton: CRC Press, 2019).

Lawrence, J. R., O'Neill, F. T. & Sheridan, J. T. Photopolymer holographic recording material. Optik 112 , 449–463 (2001).

Guo, J. X., Gleeson, M. R. & Sheridan, J. T. A review of the optimisation of photopolymer materials for holographic data storage. Phys. Res. Int. 2012 , 803439 (2012).

Jang, C. et al. Recent progress in see-through three-dimensional displays using holographic optical elements [Invited]. Appl. Opt. 55 , A71–A85 (2016).

Xiong, J. H. et al. Holographic optical elements for augmented reality: principles, present status, and future perspectives. Adv. Photonics Res. 2 , 2000049 (2021).

Tabiryan, N. V. et al. Advances in transparent planar optics: enabling large aperture, ultrathin lenses. Adv. Optical Mater. 9 , 2001692 (2021).

Zanutta, A. et al. Photopolymeric films with highly tunable refractive index modulation for high precision diffractive optics. Optical Mater. Express 6 , 252–263 (2016).

Moharam, M. G. & Gaylord, T. K. Rigorous coupled-wave analysis of planar-grating diffraction. J. Optical Soc. Am. 71 , 811–818 (1981).

Xiong, J. H. & Wu, S. T. Rigorous coupled-wave analysis of liquid crystal polarization gratings. Opt. Express 28 , 35960–35971 (2020).

Xie, S., Natansohn, A. & Rochon, P. Recent developments in aromatic azo polymers research. Chem. Mater. 5 , 403–411 (1993).

Shishido, A. Rewritable holograms based on azobenzene-containing liquid-crystalline polymers. Polym. J. 42 , 525–533 (2010).

Bunning, T. J. et al. Holographic polymer-dispersed liquid crystals (H-PDLCs). Annu. Rev. Mater. Sci. 30 , 83–115 (2000).

Liu, Y. J. & Sun, X. W. Holographic polymer-dispersed liquid crystals: materials, formation, and applications. Adv. Optoelectron. 2008 , 684349 (2008).

Xiong, J. H. & Wu, S. T. Planar liquid crystal polarization optics for augmented reality and virtual reality: from fundamentals to applications. eLight 1 , 3 (2021).

Yaroshchuk, O. & Reznikov, Y. Photoalignment of liquid crystals: basics and current trends. J. Mater. Chem. 22 , 286–300 (2012).

Sarkissian, H. et al. Periodically aligned liquid crystal: potential application for projection displays. Mol. Cryst. Liq. Cryst. 451 , 1–19 (2006).

Komanduri, R. K. & Escuti, M. J. Elastic continuum analysis of the liquid crystal polarization grating. Phys. Rev. E 76 , 021701 (2007).

Kobashi, J., Yoshida, H. & Ozaki, M. Planar optics with patterned chiral liquid crystals. Nat. Photonics 10 , 389–392 (2016).

Lee, Y. H., Yin, K. & Wu, S. T. Reflective polarization volume gratings for high efficiency waveguide-coupling augmented reality displays. Opt. Express 25 , 27008–27014 (2017).

Lee, Y. H., He, Z. Q. & Wu, S. T. Optical properties of reflective liquid crystal polarization volume gratings. J. Optical Soc. Am. B 36 , D9–D12 (2019).

Xiong, J. H., Chen, R. & Wu, S. T. Device simulation of liquid crystal polarization gratings. Opt. Express 27 , 18102–18112 (2019).

Czapla, A. et al. Long-period fiber gratings with low-birefringence liquid crystal. Mol. Cryst. Liq. Cryst. 502 , 65–76 (2009).

Dąbrowski, R., Kula, P. & Herman, J. High birefringence liquid crystals. Crystals 3 , 443–482 (2013).

Mack, C. Fundamental Principles of Optical Lithography: The Science of Microfabrication (Chichester: John Wiley & Sons, 2007).

Genevet, P. et al. Recent advances in planar optics: from plasmonic to dielectric metasurfaces. Optica 4 , 139–152 (2017).

Guo, L. J. Nanoimprint lithography: methods and material requirements. Adv. Mater. 19 , 495–513 (2007).

Park, J. et al. Electrically driven mid-submicrometre pixelation of InGaN micro-light-emitting diode displays for augmented-reality glasses. Nat. Photonics 15 , 449–455 (2021).

Khorasaninejad, M. et al. Metalenses at visible wavelengths: diffraction-limited focusing and subwavelength resolution imaging. Science 352 , 1190–1194 (2016).

Li, S. Q. et al. Phase-only transmissive spatial light modulator based on tunable dielectric metasurface. Science 364 , 1087–1090 (2019).

Liang, K. L. et al. Advances in color-converted micro-LED arrays. Jpn. J. Appl. Phys. 60 , SA0802 (2020).

Jin, S. X. et al. GaN microdisk light emitting diodes. Appl. Phys. Lett. 76 , 631–633 (2000).

Day, J. et al. Full-scale self-emissive blue and green microdisplays based on GaN micro-LED arrays. In Proc. SPIE 8268, Quantum Sensing and Nanophotonic Devices IX (SPIE, San Francisco, California, United States, 2012).

Huang, Y. G. et al. Mini-LED, micro-LED and OLED displays: present status and future perspectives. Light.: Sci. Appl. 9 , 105 (2020).

Parbrook, P. J. et al. Micro-light emitting diode: from chips to applications. Laser Photonics Rev. 15 , 2000133 (2021).

Day, J. et al. III-Nitride full-scale high-resolution microdisplays. Appl. Phys. Lett. 99 , 031116 (2011).

Liu, Z. J. et al. 360 PPI flip-chip mounted active matrix addressable light emitting diode on silicon (LEDoS) micro-displays. J. Disp. Technol. 9 , 678–682 (2013).

Zhang, L. et al. Wafer-scale monolithic hybrid integration of Si-based IC and III–V epi-layers—A mass manufacturable approach for active matrix micro-LED micro-displays. J. Soc. Inf. Disp. 26 , 137–145 (2018).

Tian, P. F. et al. Size-dependent efficiency and efficiency droop of blue InGaN micro-light emitting diodes. Appl. Phys. Lett. 101 , 231110 (2012).

Olivier, F. et al. Shockley-Read-Hall and Auger non-radiative recombination in GaN based LEDs: a size effect study. Appl. Phys. Lett. 111 , 022104 (2017).

Konoplev, S. S., Bulashevich, K. A. & Karpov, S. Y. From large-size to micro-LEDs: scaling trends revealed by modeling. Phys. Status Solidi (A) 215 , 1700508 (2018).

Li, L. Z. et al. Transfer-printed, tandem microscale light-emitting diodes for full-color displays. Proc. Natl Acad. Sci. USA 118 , e2023436118 (2021).

Oh, J. T. et al. Light output performance of red AlGaInP-based light emitting diodes with different chip geometries and structures. Opt. Express 26 , 11194–11200 (2018).

Shen, Y. C. et al. Auger recombination in InGaN measured by photoluminescence. Appl. Phys. Lett. 91 , 141101 (2007).

Wong, M. S. et al. High efficiency of III-nitride micro-light-emitting diodes by sidewall passivation using atomic layer deposition. Opt. Express 26 , 21324–21331 (2018).

Han, S. C. et al. AlGaInP-based Micro-LED array with enhanced optoelectrical properties. Optical Mater. 114 , 110860 (2021).

Wong, M. S. et al. Size-independent peak efficiency of III-nitride micro-light-emitting-diodes using chemical treatment and sidewall passivation. Appl. Phys. Express 12 , 097004 (2019).

Ley, R. T. et al. Revealing the importance of light extraction efficiency in InGaN/GaN microLEDs via chemical treatment and dielectric passivation. Appl. Phys. Lett. 116 , 251104 (2020).

Moon, S. W. et al. Recent progress on ultrathin metalenses for flat optics. iScience 23 , 101877 (2020).

Arbabi, A. et al. Efficient dielectric metasurface collimating lenses for mid-infrared quantum cascade lasers. Opt. Express 23 , 33310–33317 (2015).

Yu, N. F. et al. Light propagation with phase discontinuities: generalized laws of reflection and refraction. Science 334 , 333–337 (2011).

Liang, H. W. et al. High performance metalenses: numerical aperture, aberrations, chromaticity, and trade-offs. Optica 6 , 1461–1470 (2019).

Park, J. S. et al. All-glass, large metalens at visible wavelength using deep-ultraviolet projection lithography. Nano Lett. 19 , 8673–8682 (2019).

Yoon, G. et al. Single-step manufacturing of hierarchical dielectric metalens in the visible. Nat. Commun. 11 , 2268 (2020).

Lee, G. Y. et al. Metasurface eyepiece for augmented reality. Nat. Commun. 9 , 4562 (2018).

Chen, W. T. et al. A broadband achromatic metalens for focusing and imaging in the visible. Nat. Nanotechnol. 13 , 220–226 (2018).

Wang, S. M. et al. A broadband achromatic metalens in the visible. Nat. Nanotechnol. 13 , 227–232 (2018).

Lan, S. F. et al. Metasurfaces for near-eye augmented reality. ACS Photonics 6 , 864–870 (2019).

Fan, Z. B. et al. A broadband achromatic metalens array for integral imaging in the visible. Light.: Sci. Appl. 8 , 67 (2019).

Shi, Z. J., Chen, W. T. & Capasso, F. Wide field-of-view waveguide displays enabled by polarization-dependent metagratings. In Proc. SPIE 10676, Digital Optics for Immersive Displays (SPIE, Strasbourg, France, 2018).

Hong, C. C., Colburn, S. & Majumdar, A. Flat metaform near-eye visor. Appl. Opt. 56 , 8822–8827 (2017).

Bayati, E. et al. Design of achromatic augmented reality visors based on composite metasurfaces. Appl. Opt. 60 , 844–850 (2021).

Nikolov, D. K. et al. Metaform optics: bridging nanophotonics and freeform optics. Sci. Adv. 7 , eabe5112 (2021).

Tamir, T. & Peng, S. T. Analysis and design of grating couplers. Appl. Phys. 14 , 235–254 (1977).

Miller, J. M. et al. Design and fabrication of binary slanted surface-relief gratings for a planar optical interconnection. Appl. Opt. 36 , 5717–5727 (1997).

Levola, T. & Laakkonen, P. Replicated slanted gratings with a high refractive index material for in and outcoupling of light. Opt. Express 15 , 2067–2074 (2007).

Shrestha, S. et al. Broadband achromatic dielectric metalenses. Light.: Sci. Appl. 7 , 85 (2018).

Li, Z. Y. et al. Meta-optics achieves RGB-achromatic focusing for virtual reality. Sci. Adv. 7 , eabe4458 (2021).

Ratcliff, J. et al. ThinVR: heterogeneous microlens arrays for compact, 180 degree FOV VR near-eye displays. IEEE Trans. Vis. Computer Graph. 26 , 1981–1990 (2020).

Wong, T. L. et al. Folded optics with birefringent reflective polarizers. In Proc. SPIE 10335, Digital Optical Technologies 2017 (SPIE, Munich, Germany, 2017).

Li, Y. N. Q. et al. Broadband cholesteric liquid crystal lens for chromatic aberration correction in catadioptric virtual reality optics. Opt. Express 29 , 6011–6020 (2021).

Bang, K. et al. Lenslet VR: thin, flat and wide-FOV virtual reality display using fresnel lens and lenslet array. IEEE Trans. Vis. Computer Graph. 27 , 2545–2554 (2021).

Maimone, A. & Wang, J. R. Holographic optics for thin and lightweight virtual reality. ACM Trans. Graph. 39 , 67 (2020).

Kramida, G. Resolving the vergence-accommodation conflict in head-mounted displays. IEEE Trans. Vis. Computer Graph. 22 , 1912–1931 (2016).

Zhan, T. et al. Multifocal displays: review and prospect. PhotoniX 1 , 10 (2020).

Shimobaba, T., Kakue, T. & Ito, T. Review of fast algorithms and hardware implementations on computer holography. IEEE Trans. Ind. Inform. 12 , 1611–1622 (2016).

Xiao, X. et al. Advances in three-dimensional integral imaging: sensing, display, and applications [Invited]. Appl. Opt. 52 , 546–560 (2013).

Kuiper, S. & Hendriks, B. H. W. Variable-focus liquid lens for miniature cameras. Appl. Phys. Lett. 85 , 1128–1130 (2004).

Liu, S. & Hua, H. Time-multiplexed dual-focal plane head-mounted display with a liquid lens. Opt. Lett. 34 , 1642–1644 (2009).

Wilson, A. & Hua, H. Design and demonstration of a vari-focal optical see-through head-mounted display using freeform Alvarez lenses. Opt. Express 27 , 15627–15637 (2019).

Zhan, T. et al. Pancharatnam-Berry optical elements for head-up and near-eye displays [Invited]. J. Optical Soc. Am. B 36 , D52–D65 (2019).

Oh, C. & Escuti, M. J. Achromatic diffraction from polarization gratings with high efficiency. Opt. Lett. 33 , 2287–2289 (2008).

Zou, J. Y. et al. Broadband wide-view Pancharatnam-Berry phase deflector. Opt. Express 28 , 4921–4927 (2020).

Zhan, T., Lee, Y. H. & Wu, S. T. High-resolution additive light field near-eye display by switchable Pancharatnam–Berry phase lenses. Opt. Express 26 , 4863–4872 (2018).

Tan, G. J. et al. Polarization-multiplexed multiplane display. Opt. Lett. 43 , 5651–5654 (2018).

Lanman, D. R. Display systems research at facebook reality labs (conference presentation). In Proc. SPIE 11310, Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR) (SPIE, San Francisco, California, United States, 2020).

Liu, Z. J. et al. A novel BLU-free full-color LED projector using LED on silicon micro-displays. IEEE Photonics Technol. Lett. 25 , 2267–2270 (2013).

Han, H. V. et al. Resonant-enhanced full-color emission of quantum-dot-based micro LED display technology. Opt. Express 23 , 32504–32515 (2015).

Lin, H. Y. et al. Optical cross-talk reduction in a quantum-dot-based full-color micro-light-emitting-diode display by a lithographic-fabricated photoresist mold. Photonics Res. 5 , 411–416 (2017).

Liu, Z. J. et al. Micro-light-emitting diodes with quantum dots in display technology. Light.: Sci. Appl. 9 , 83 (2020).

Kim, H. M. et al. Ten micrometer pixel, quantum dots color conversion layer for high resolution and full color active matrix micro-LED display. J. Soc. Inf. Disp. 27 , 347–353 (2019).

Xuan, T. T. et al. Inkjet-printed quantum dot color conversion films for high-resolution and full-color micro light-emitting diode displays. J. Phys. Chem. Lett. 11 , 5184–5191 (2020).

Chen, S. W. H. et al. Full-color monolithic hybrid quantum dot nanoring micro light-emitting diodes with improved efficiency using atomic layer deposition and nonradiative resonant energy transfer. Photonics Res. 7 , 416–422 (2019).

Krishnan, C. et al. Hybrid photonic crystal light-emitting diode renders 123% color conversion effective quantum yield. Optica 3 , 503–509 (2016).

Kang, J. H. et al. RGB arrays for micro-light-emitting diode applications using nanoporous GaN embedded with quantum dots. ACS Applied Mater. Interfaces 12 , 30890–30895 (2020).

Chen, G. S. et al. Monolithic red/green/blue micro-LEDs with HBR and DBR structures. IEEE Photonics Technol. Lett. 30 , 262–265 (2018).

Hsiang, E. L. et al. Enhancing the efficiency of color conversion micro-LED display with a patterned cholesteric liquid crystal polymer film. Nanomaterials 10 , 2430 (2020).

Kang, C. M. et al. Hybrid full-color inorganic light-emitting diodes integrated on a single wafer using selective area growth and adhesive bonding. ACS Photonics 5 , 4413–4422 (2018).

Geum, D. M. et al. Strategy toward the fabrication of ultrahigh-resolution micro-LED displays by bonding-interface-engineered vertical stacking and surface passivation. Nanoscale 11 , 23139–23148 (2019).

Ra, Y. H. et al. Full-color single nanowire pixels for projection displays. Nano Lett. 16 , 4608–4615 (2016).

Motoyama, Y. et al. High-efficiency OLED microdisplay with microlens array. J. Soc. Inf. Disp. 27 , 354–360 (2019).

Fujii, T. et al. 4032 ppi High-resolution OLED microdisplay. J. Soc. Inf. Disp. 26 , 178–186 (2018).

Hamer, J. et al. High-performance OLED microdisplays made with multi-stack OLED formulations on CMOS backplanes. In Proc. SPIE 11473, Organic and Hybrid Light Emitting Materials and Devices XXIV . Online Only (SPIE, 2020).

Joo, W. J. et al. Metasurface-driven OLED displays beyond 10,000 pixels per inch. Science 370 , 459–463 (2020).

Vettese, D. Liquid crystal on silicon. Nat. Photonics 4 , 752–754 (2010).

Zhang, Z. C., You, Z. & Chu, D. P. Fundamentals of phase-only liquid crystal on silicon (LCOS) devices. Light.: Sci. Appl. 3 , e213 (2014).

Hornbeck, L. J. The DMD TM projection display chip: a MEMS-based technology. MRS Bull. 26 , 325–327 (2001).

Zhang, Q. et al. Polarization recycling method for light-pipe-based optical engine. Appl. Opt. 52 , 8827–8833 (2013).

Hofmann, U., Janes, J. & Quenzer, H. J. High-Q MEMS resonators for laser beam scanning displays. Micromachines 3 , 509–528 (2012).

Holmström, S. T. S., Baran, U. & Urey, H. MEMS laser scanners: a review. J. Microelectromechanical Syst. 23 , 259–275 (2014).

Bao, X. Z. et al. Design and fabrication of AlGaInP-based micro-light-emitting-diode array devices. Opt. Laser Technol. 78 , 34–41 (2016).

Olivier, F. et al. Influence of size-reduction on the performances of GaN-based micro-LEDs for display application. J. Lumin. 191 , 112–116 (2017).

Liu, Y. B. et al. High-brightness InGaN/GaN Micro-LEDs with secondary peak effect for displays. IEEE Electron Device Lett. 41 , 1380–1383 (2020).

Qi, L. H. et al. 848 ppi high-brightness active-matrix micro-LED micro-display using GaN-on-Si epi-wafers towards mass production. Opt. Express 29 , 10580–10591 (2021).

Chen, E. G. & Yu, F. H. Design of an elliptic spot illumination system in LED-based color filter-liquid-crystal-on-silicon pico projectors for mobile embedded projection. Appl. Opt. 51 , 3162–3170 (2012).

Darmon, D., McNeil, J. R. & Handschy, M. A. 70.1: LED-illuminated pico projector architectures. Soc. Inf. Disp. Int. Symp . Dig. Tech. Pap. 39 , 1070–1073 (2008).

Essaian, S. & Khaydarov, J. State of the art of compact green lasers for mobile projectors. Optical Rev. 19 , 400–404 (2012).

Sun, W. S. et al. Compact LED projector design with high uniformity and efficiency. Appl. Opt. 53 , H227–H232 (2014).

Sun, W. S., Chiang, Y. C. & Tsuei, C. H. Optical design for the DLP pocket projector using LED light source. Phys. Procedia 19 , 301–307 (2011).

Chen, S. W. H. et al. High-bandwidth green semipolar (20–21) InGaN/GaN micro light-emitting diodes for visible light communication. ACS Photonics 7 , 2228–2235 (2020).

Yoshida, K. et al. 245 MHz bandwidth organic light-emitting diodes used in a gigabit optical wireless data link. Nat. Commun. 11 , 1171 (2020).

Park, D. W. et al. 53.5: High-speed AMOLED pixel circuit and driving scheme. Soc. Inf. Disp. Int. Symp . Dig. Tech. Pap. 41 , 806–809 (2010).

Tan, L., Huang, H. C. & Kwok, H. S. 78.1: Ultra compact polarization recycling system for white light LED based pico-projection system. Soc. Inf. Disp. Int. Symp. Dig. Tech. Pap. 41 , 1159–1161 (2010).

Maimone, A., Georgiou, A. & Kollin, J. S. Holographic near-eye displays for virtual and augmented reality. ACM Trans. Graph. 36 , 85 (2017).

Pan, J. W. et al. Portable digital micromirror device projector using a prism. Appl. Opt. 46 , 5097–5102 (2007).

Huang, Y. et al. Liquid-crystal-on-silicon for augmented reality displays. Appl. Sci. 8 , 2366 (2018).

Peng, F. L. et al. Analytical equation for the motion picture response time of display devices. J. Appl. Phys. 121 , 023108 (2017).

Pulli, K. 11-2: invited paper: meta 2: immersive optical-see-through augmented reality. Soc. Inf. Disp. Int. Symp . Dig. Tech. Pap. 48 , 132–133 (2017).

Lee, B. & Jo, Y. in Advanced Display Technology: Next Generation Self-Emitting Displays (eds Kang, B., Han, C. W. & Jeong, J. K.) 307–328 (Springer, 2021).

Cheng, D. W. et al. Design of an optical see-through head-mounted display with a low f -number and large field of view using a freeform prism. Appl. Opt. 48 , 2655–2668 (2009).

Zheng, Z. R. et al. Design and fabrication of an off-axis see-through head-mounted display with an x–y polynomial surface. Appl. Opt. 49 , 3661–3668 (2010).

Wei, L. D. et al. Design and fabrication of a compact off-axis see-through head-mounted display using a freeform surface. Opt. Express 26 , 8550–8565 (2018).

Liu, S., Hua, H. & Cheng, D. W. A novel prototype for an optical see-through head-mounted display with addressable focus cues. IEEE Trans. Vis. Computer Graph. 16 , 381–393 (2010).

Hua, H. & Javidi, B. A 3D integral imaging optical see-through head-mounted display. Opt. Express 22 , 13484–13491 (2014).

Song, W. T. et al. Design of a light-field near-eye display using random pinholes. Opt. Express 27 , 23763–23774 (2019).

Wang, X. & Hua, H. Depth-enhanced head-mounted light field displays based on integral imaging. Opt. Lett. 46 , 985–988 (2021).

Huang, H. K. & Hua, H. Generalized methods and strategies for modeling and optimizing the optics of 3D head-mounted light field displays. Opt. Express 27 , 25154–25171 (2019).

Huang, H. K. & Hua, H. High-performance integral-imaging-based light field augmented reality display using freeform optics. Opt. Express 26 , 17578–17590 (2018).

Cheng, D. W. et al. Design and manufacture AR head-mounted displays: a review and outlook. Light.: Adv. Manuf. 2 , 24 (2021).


Westheimer, G. The Maxwellian view. Vis. Res. 6 , 669–682 (1966).

Do, H., Kim, Y. M. & Min, S. W. Focus-free head-mounted display based on Maxwellian view using retroreflector film. Appl. Opt. 58 , 2882–2889 (2019).

Park, J. H. & Kim, S. B. Optical see-through holographic near-eye-display with eyebox steering and depth of field control. Opt. Express 26 , 27076–27088 (2018).

Chang, C. L. et al. Toward the next-generation VR/AR optics: a review of holographic near-eye displays from a human-centric perspective. Optica 7 , 1563–1578 (2020).

Hsueh, C. K. & Sawchuk, A. A. Computer-generated double-phase holograms. Appl. Opt. 17 , 3874–3883 (1978).

Chakravarthula, P. et al. Wirtinger holography for near-eye displays. ACM Trans. Graph. 38 , 213 (2019).

Peng, Y. F. et al. Neural holography with camera-in-the-loop training. ACM Trans. Graph. 39 , 185 (2020).

Shi, L. et al. Towards real-time photorealistic 3D holography with deep neural networks. Nature 591 , 234–239 (2021).

Jang, C. et al. Retinal 3D: augmented reality near-eye display via pupil-tracked light field projection on retina. ACM Trans. Graph. 36 , 190 (2017).

Jang, C. et al. Holographic near-eye display with expanded eye-box. ACM Trans. Graph. 37 , 195 (2018).

Kim, S. B. & Park, J. H. Optical see-through Maxwellian near-to-eye display with an enlarged eyebox. Opt. Lett. 43 , 767–770 (2018).

Shrestha, P. K. et al. Accommodation-free head mounted display with comfortable 3D perception and an enlarged eye-box. Research 2019 , 9273723 (2019).

Lin, T. G. et al. Maxwellian near-eye display with an expanded eyebox. Opt. Express 28 , 38616–38625 (2020).

Jo, Y. et al. Eye-box extended retinal projection type near-eye display with multiple independent viewpoints [Invited]. Appl. Opt. 60 , A268–A276 (2021).

Xiong, J. H. et al. Aberration-free pupil steerable Maxwellian display for augmented reality with cholesteric liquid crystal holographic lenses. Opt. Lett. 46 , 1760–1763 (2021).

Viirre, E. et al. Laser safety analysis of a retinal scanning display system. J. Laser Appl. 9 , 253–260 (1997).

Ratnam, K. et al. Retinal image quality in near-eye pupil-steered systems. Opt. Express 27 , 38289–38311 (2019).

Maimone, A. et al. Pinlight displays: wide field of view augmented reality eyeglasses using defocused point light sources. In Proc. ACM SIGGRAPH 2014 Emerging Technologies (ACM, Vancouver, Canada, 2014).

Jeong, J. et al. Holographically printed freeform mirror array for augmented reality near-eye display. IEEE Photonics Technol. Lett. 32 , 991–994 (2020).

Ha, J. & Kim, J. Augmented reality optics system with pin mirror. US Patent 10,989,922 (2021).

Park, S. G. Augmented and mixed reality optical see-through combiners based on plastic optics. Inf. Disp. 37 , 6–11 (2021).

Xiong, J. H. et al. Breaking the field-of-view limit in augmented reality with a scanning waveguide display. OSA Contin. 3 , 2730–2740 (2020).

Levola, T. 7.1: invited paper: novel diffractive optical components for near to eye displays. Soc. Inf. Disp. Int. Symp . Dig. Tech. Pap. 37 , 64–67 (2006).

Laakkonen, P. et al. High efficiency diffractive incouplers for light guides. In Proc. SPIE 6896, Integrated Optics: Devices, Materials, and Technologies XII . (SPIE, San Jose, California, United States, 2008).

Bai, B. F. et al. Optimization of nonbinary slanted surface-relief gratings as high-efficiency broadband couplers for light guides. Appl. Opt. 49 , 5454–5464 (2010).

Äyräs, P., Saarikko, P. & Levola, T. Exit pupil expander with a large field of view based on diffractive optics. J. Soc. Inf. Disp. 17 , 659–664 (2009).

Yoshida, T. et al. A plastic holographic waveguide combiner for light-weight and highly-transparent augmented reality glasses. J. Soc. Inf. Disp. 26 , 280–286 (2018).

Yu, C. et al. Highly efficient waveguide display with space-variant volume holographic gratings. Appl. Opt. 56 , 9390–9397 (2017).

Shi, X. L. et al. Design of a compact waveguide eyeglass with high efficiency by joining freeform surfaces and volume holographic gratings. J. Optical Soc. Am. A 38 , A19–A26 (2021).

Han, J. et al. Portable waveguide display system with a large field of view by integrating freeform elements and volume holograms. Opt. Express 23 , 3534–3549 (2015).

Weng, Y. S. et al. Liquid-crystal-based polarization volume grating applied for full-color waveguide displays. Opt. Lett. 43 , 5773–5776 (2018).

Lee, Y. H. et al. Compact see-through near-eye display with depth adaption. J. Soc. Inf. Disp. 26 , 64–70 (2018).

Tekolste, R. D. & Liu, V. K. Outcoupling grating for augmented reality system. US Patent 10,073,267 (2018).

Grey, D. & Talukdar, S. Exit pupil expanding diffractive optical waveguiding device. US Patent 10,073, 267 (2019).

Yoo, C. et al. Extended-viewing-angle waveguide near-eye display with a polarization-dependent steering combiner. Opt. Lett. 45 , 2870–2873 (2020).

Schowengerdt, B. T., Lin, D. & St. Hilaire, P. Multi-layer diffractive eyepiece with wavelength-selective reflector. US Patent 10,725,223 (2020).

Wang, Q. W. et al. Stray light and tolerance analysis of an ultrathin waveguide display. Appl. Opt. 54 , 8354–8362 (2015).

Wang, Q. W. et al. Design of an ultra-thin, wide-angle, stray-light-free near-eye display with a dual-layer geometrical waveguide. Opt. Express 28 , 35376–35394 (2020).

Frommer, A. Lumus: maximus: large FoV near to eye display for consumer AR glasses. In Proc. SPIE 11764, AVR21 Industry Talks II . Online Only (SPIE, 2021).

Ayres, M. R. et al. Skew mirrors, methods of use, and methods of manufacture. US Patent 10,180,520 (2019).

Utsugi, T. et al. Volume holographic waveguide using multiplex recording for head-mounted display. ITE Trans. Media Technol. Appl. 8 , 238–244 (2020).

Aieta, F. et al. Multiwavelength achromatic metasurfaces by dispersive phase compensation. Science 347 , 1342–1345 (2015).

Arbabi, E. et al. Controlling the sign of chromatic dispersion in diffractive optics with dielectric metasurfaces. Optica 4 , 625–632 (2017).

Download references

Acknowledgements

The authors are indebted to Goertek Electronics for the financial support and Guanjun Tan for helpful discussions.

Author information

Authors and Affiliations

College of Optics and Photonics, University of Central Florida, Orlando, FL, 32816, USA

Jianghao Xiong, En-Lin Hsiang, Ziqian He, Tao Zhan & Shin-Tson Wu


Contributions

J.X. conceived the idea and initiated the project. J.X. mainly wrote the manuscript and produced the figures. E.-L.H., Z.H., and T.Z. contributed to parts of the manuscript. S.W. supervised the project and edited the manuscript.

Corresponding author

Correspondence to Shin-Tson Wu.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Xiong, J., Hsiang, E.-L., He, Z. et al. Augmented reality and virtual reality displays: emerging technologies and future perspectives. Light Sci. Appl. 10, 216 (2021). https://doi.org/10.1038/s41377-021-00658-8


Received: 06 June 2021

Revised: 26 September 2021

Accepted: 04 October 2021

Published: 25 October 2021

DOI: https://doi.org/10.1038/s41377-021-00658-8


Systematic Review Article

A Systematic Review of 10 Years of Augmented Reality Usability Studies: 2005 to 2014


  • 1 Empathic Computing Laboratory, University of South Australia, Mawson Lakes, SA, Australia
  • 2 Human Interface Technology Lab New Zealand (HIT Lab NZ), University of Canterbury, Christchurch, New Zealand
  • 3 Mississippi State University, Starkville, MS, United States

Augmented Reality (AR) interfaces have been studied extensively over the last few decades, with a growing number of user-based experiments. In this paper, we systematically review 10 years of the most influential AR user studies, from 2005 to 2014. A total of 291 papers with 369 individual user studies have been reviewed and classified based on their application areas. The primary contribution of the review is to present the broad landscape of user-based AR research, and to provide a high-level view of how that landscape has changed. We summarize the high-level contributions from each category of papers, and present examples of the most influential user studies. We also identify areas where there have been few user studies, and opportunities for future research. Among other things, we find that there is a growing trend toward handheld AR user studies, and that most studies are conducted in laboratory settings and do not involve pilot testing. This research will be useful for AR researchers who want to follow best practices in designing their own AR user studies.

1. Introduction

Augmented Reality (AR) is a technology field that involves the seamless overlay of computer-generated virtual images on the real world, in such a way that the virtual content is aligned with real-world objects, and can be viewed and interacted with in real time (Azuma, 1997). AR research and development has made rapid progress in the last few decades, moving from research laboratories to widespread availability on consumer devices. Since the early beginnings in the 1960s, more advanced and portable hardware has become available, and registration accuracy, graphics quality, and device size have been largely addressed to a satisfactory level, which has led to a rapid growth in the adoption of AR technology. AR is now being used in a wide range of application domains, including Education (Furió et al., 2013; Fonseca et al., 2014a; Ibáñez et al., 2014), Engineering (Henderson and Feiner, 2009; Henderson S. J. and Feiner, 2011; Irizarry et al., 2013), and Entertainment (Dow et al., 2007; Haugstvedt and Krogstie, 2012; Vazquez-Alvarez et al., 2012). However, to be widely accepted by end users, AR usability and user experience issues still need to be improved.

To help the AR community improve usability, this paper provides an overview of 10 years of AR user studies, from 2005 to 2014. Our work builds on the previous reviews of AR usability research shown in Table 1 . These years were chosen because they cover an important gap in other reviews, and also are far enough from the present to enable the impact of the papers to be measured. Our goals are to provide a broad overview of user-based AR research, to help researchers find example papers that contain related studies, to help identify areas where there have been few user studies conducted, and to highlight exemplary user studies that embody best practices. We therefore hope the scholarship in this paper leads to new research contributions by providing outstanding examples of AR user studies that can help current AR researchers.


Table 1 . Summary of earlier surveys of AR usability studies.

1.1. Previous User Study Survey Papers

Expanding on the studies shown in Table 1, Swan and Gabbard (2005) conducted the first comprehensive survey of AR user studies. They reviewed 1,104 AR papers published in four important venues between 1992 and 2004; among these papers they found only 21 that reported formal user studies. They classified these user study papers into three categories: (1) low-level perceptual and cognitive issues such as depth perception, (2) interaction techniques such as virtual object manipulation, and (3) collaborative tasks. The next comprehensive survey was by Dünser et al. (2008), who used a list of search queries across several common bibliographic databases, and found 165 AR-related publications reporting user studies. In addition to classifying the papers into the same categories as Swan and Gabbard (2005), they additionally classified the papers based on user study methods such as objective, subjective, qualitative, and informal. In another literature survey, Bai and Blackwell (2012) reviewed 71 AR papers reporting user studies, but they only considered papers published in the International Symposium on Mixed and Augmented Reality (ISMAR) between 2001 and 2010. They also followed the classification of Swan and Gabbard (2005), but additionally identified a new category of studies that investigated user experience (UX) issues. Their review thoroughly reported the evaluation goals, performance measures, UX factors investigated, and measurement instruments used. Additionally, they also reviewed the demographics of the studies' participants. However, there has been no comprehensive survey covering work since 2010, and none of these earlier studies used an impact measure to determine the significance of the papers reviewed.

1.1.1. Survey Papers of AR Subsets

Some researchers have also published review papers focused on more specific classes of user studies. For example, Kruijff et al. (2010) reviewed AR papers focusing on the perceptual pipeline, and identified challenges that arise from the environment, capturing, augmentation, display technologies, and user. Similarly, Livingston et al. (2013) published a review of user studies in the AR X-ray vision domain. As such, their review deeply analyzed perceptual studies in a niche AR application area. Finally, Rankohi and Waugh (2013) reviewed AR studies in the construction industry, although their review additionally considers papers without user studies. In addition to these papers, many other AR papers have included literature reviews which may include a few related user studies such as Wang et al. (2013) , Carmigniani et al. (2011) , and Papagiannakis et al. (2008) .

1.2. Novelty and Contribution

These reviews are valued by the research community, as shown by the number of times they have been cited (e.g., 166 Google Scholar citations for Dünser et al., 2008). However, due to a number of factors, there is a need for a more recent review. Firstly, while early research in AR was primarily based on head-mounted displays (HMDs), in the last few years there has been a rapid increase in the use of handheld AR devices, and more advanced hardware and sensors have become available. These new wearable and mobile devices have created new research directions, which have likely impacted the categories and methods used in AR user studies. In addition, in recent years the AR field has expanded, resulting in a dramatic increase in the number of published AR papers, and papers with user studies in them. Therefore, there is a need for a new categorization of current AR user research, as well as the opportunity to consider new classification measures such as paper impact, since reviewing all published papers has become less feasible. Finally, AR papers are now appearing in a wider range of research venues, so it is important to have a survey that covers many different journals and conferences.

1.2.1. New Contributions Over Existing Surveys

Compared to these earlier reviews, there are a number of important differences with the current survey, including:

• we have considered a larger number of publications from a wide range of sources

• our review covers more recent years than earlier surveys

• we have used paper impact to help filter the papers reviewed

• we consider a wider range of classification categories

• we also review issues experienced by the users.

1.2.2. New Aims of This Survey

To capture the latest trends in usability research in AR, we have conducted a thorough, systematic literature review of 10 years of AR papers published between 2005 and 2014 that contain a user study. We classified these papers based on their application areas, methodologies used, and type of display examined. Our aims are to:

1. identify the primary application areas for user research in AR

2. describe the methodologies and environments that are commonly used

3. propose future research opportunities and guidelines for making AR more user friendly.

The rest of the paper is organized as follows: section 2 details the method we followed to select the papers to review, and how we conducted the reviews. Section 3 then provides a high-level overview of the papers and studies, and introduces the classifications. The following sections report on each of the classifications in more detail, highlighting one of the more impactful user studies from each classification type. Section 5 concludes by summarizing the review and identifying opportunities for future research. Finally, in the appendix we have included a list of all papers reviewed in each of the categories with detailed information.

2. Methodology

We followed a systematic review process divided into two phases: the search process and the review process.

2.1. Search Process

One of our goals was to make this review as inclusive as practically possible. We therefore considered all papers published in conferences and journals between 2005 and 2014, which include the term “Augmented Reality,” and involve user studies. We searched the Scopus bibliographic database, using the same search terms that were used by Dünser et al. (2008) (Table 2 ). This initial search resulted in a total of 1,147 unique papers. We then scanned each one to identify whether or not it actually reported on AR research; excluding papers not related to AR reduced the number to 1,063. We next removed any paper that did not actually report on a user study, which reduced our pool to 604 papers. We then examined these 604 papers, and kept only those papers that provided all of the following information: (i) participant demographics (number, age, and gender), (ii) design of the user study, and (iii) the experimental task. Only 396 papers satisfied all three of these criteria. Finally, unlike previous surveys of AR usability studies, we next considered how much impact each paper had, to ensure that we were reviewing papers that others had cited. For each paper we used Google Scholar to find the total citations to date, and calculated its Average Citation Count (ACC):

ACC = total citations / lifetime in years (where lifetime = 2015 − publication year)

For example, if a paper was published in 2010 (a 5-year lifetime until 2014) and had a total of 10 citations in Google Scholar in April 2015, its ACC would be 10/5 = 2.0. Based on this formula, we included all papers that had an ACC of at least 1.5, showing that they had at least a moderate impact in the field. This resulted in a final set of 291 papers that we reviewed in detail. We deliberately excluded more recent papers because most of them had not yet gathered significant citations.
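The ACC-based filtering step above can be sketched in a few lines. This is an illustrative sketch, not the review's actual tooling: the record fields and sample papers are hypothetical, and the real citation counts came from Google Scholar in April 2015.

```python
# Illustrative sketch of the ACC filter described above.
# Field names and sample records are hypothetical.
CENSUS_YEAR = 2015  # citations were collected in April 2015

def average_citation_count(total_citations, publication_year):
    """ACC = total citations / lifetime, where a paper published in
    2010 has a 5-year lifetime until 2014 (2015 - 2010)."""
    return total_citations / (CENSUS_YEAR - publication_year)

def filter_by_impact(papers, threshold=1.5):
    """Keep papers meeting the review's ACC inclusion threshold."""
    return [p for p in papers
            if average_citation_count(p["citations"], p["year"]) >= threshold]

papers = [
    {"title": "A", "year": 2010, "citations": 10},  # ACC 10/5 = 2.0, kept
    {"title": "B", "year": 2012, "citations": 3},   # ACC 3/3 = 1.0, excluded
]
print([p["title"] for p in filter_by_impact(papers)])  # prints ['A']
```

Applying this threshold to the 396 papers that met the reporting criteria is what reduced the pool to the 291 papers reviewed.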


Table 2 . Search terms used in the Scopus database.

2.2. Reviewing Process

In order to review this many papers, we randomly divided them among the authors for individual review. However, we first performed a norming process, where all of the authors first reviewed the same five randomly selected papers. We then met to discuss our reviews, and reached a consensus about what review data would be captured. We determined that our reviews would focus on the following attributes:

• application areas and keywords

• experimental design (within-subjects, between-subjects, or mixed-factorial)

• type of data collected (qualitative or quantitative)

• participant demographics (age, gender, number, etc.)

• experimental tasks and environments

• type of experiment (pilot, formal, field, heuristic, or case study)

• senses augmented (visual, haptic, olfactory, etc.)

• type of display used (handheld, head-mounted display, desktop, etc.).

In order to systematically enter this information for each paper, we developed a Google Form. During the reviews we also flagged certain papers for additional discussion. Overall, this reviewing phase took approximately 2 months. During this time, we regularly met and discussed the flagged papers; we also clarified any concerns and generally strove to maintain consistency. At the end of the review process we had identified a small number of papers where the classification was unclear, so we held a final meeting to arrive at a consensus view.
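As a rough sketch, each completed form entry maps naturally onto a structured record like the one below. The field names and defaults here are our own illustration; the authors' actual Google Form schema was not published.

```python
# Hypothetical record mirroring the per-paper attributes captured above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PaperReview:
    application_area: str   # e.g. "Interaction" or "Perception"
    design: str             # "within", "between", or "mixed"
    data_types: List[str]   # "quantitative" and/or "qualitative"
    participants: int       # total participants across the paper's studies
    study_type: str         # "pilot", "formal", "field", ...
    display: str            # "handheld", "HMD", "desktop", ...
    senses: List[str] = field(default_factory=lambda: ["visual"])
    flagged: bool = False   # marked for group discussion

entry = PaperReview(application_area="Perception", design="within",
                    data_types=["quantitative", "qualitative"],
                    participants=16, study_type="formal", display="HMD")
print(entry.senses)  # prints ['visual']
```

Collecting every review as a uniform record like this is what makes the summary statistics in the next section straightforward to compute.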

2.3. Limitations and Validity Concerns

Although we strove to be systematic and thorough as we selected and reviewed these 291 papers, we can identify several limitations and validity concerns with our methods. The first involves using the Scopus bibliographic database. Although using such a database has the advantage of covering a wide range of publication venues and topics, and although it did cover all of the venues where the authors are used to seeing AR research, it remains possible that Scopus missed publication venues and papers that should have been included. Second, although the search terms we used seem intuitive (Table 2 ), there may have been papers that did not use “Augmented Reality” as a keyword when describing an AR experience. For example, some papers may have used the term “Mixed Reality,” or “Artificial Reality.”

Finally, although using the ACC as a selection factor narrowed the initial 604 papers to 291, it is possible that the ACC excluded papers that should have been included. In particular, because citations are accumulated over time, it is quite likely that we missed some papers from the last several years of our 10-year review period that may soon prove influential.

3. High-Level Overview of Reviewed Papers

Overall, the 291 papers report a total of 369 studies. Table 3 gives summary statistics for the papers, and Table 4 gives summary statistics for the studies. These tables contain bar graphs that visually depict the magnitude of the numbers; each color indicates how many columns are spanned by the bars. For example, in Table 3 the columns Paper, Mean ACC, and Mean Author Count are summarized individually, and the longest bar in each column is scaled according to the largest number in that column. However, Publications spans two columns, where the largest value is 59, and so all of the other bars for Publications are scaled according to 59.


Table 3 . Summary of the 291 reviewed papers.


Table 4 . Summary of the 369 user studies reported by the 291 reviewed papers.

Figure 1 further summarizes the 291 papers through four graphs, all of which indicate changes over the 10 year period between 2005 and 2014. Figure 1A shows the fraction of the total number of AR papers that report user studies, Figure 1B analyzes the kind of display used, Figure 1C categorizes the experiments into application areas, and Figure 1D categorizes the papers according to the kind of experiment that was conducted.


Figure 1 . Throughout the 10 years, less than 10% of all published AR papers had a user study (A) . Out of the 291 reviewed papers, since 2011 most papers have examined handheld displays, rather than HMDs (B) . We filtered the papers based on ACC and categorized them into nine application areas; the largest areas are Perception and Interaction (C) . Most of the experiments were in controlled laboratory environments (D) .

3.1. Fraction of User Studies Over Time

Figure 1A shows the total number of AR papers published between 2005 and 2014, categorized by papers with and without a user study. As the graph shows, the number of AR papers published in 2014 is five times that published in 2005. However, the proportion of user study papers among all AR papers has remained low, at less than 10% of all publications in each year.

3.2. Study Design

As shown in Table 4 , most of the papers (213, or 73%) used a within-subjects design, 43 papers (15%) used a between-subjects design, and 12 papers (4%) used a mixed-factorial design. However, there were 23 papers (8%) which used different study designs than the ones mentioned above, such as Baudisch et al. (2013) , Benko et al. (2014) , and Olsson et al. (2009) .

3.3. Study Type

We found that it was relatively rare for researchers to report on conducting pilot studies before their main study. Only 55 papers (19%) reported conducting at least one pilot study in their experimentation process, and just 25 of them reported the pilot studies with adequate details such as study design, participants, and results. This shows that the importance of pilot studies is not well recognized. The majority of the papers (221, or 76%) conducted the experiments in controlled laboratory environments, while only 44 papers (15%) conducted the experiments in a natural environment or as a field study (Figure 1D). This shows a lack of experimentation in real-world conditions. Most of the experiments were formal user studies, and there were almost no heuristic studies, which may indicate that heuristics for AR applications are not fully developed and that there is a need for AR-specific heuristics and standardization.

3.4. Data Type

In terms of data collection, a total of 139 papers (48%) collected both quantitative and qualitative data, 78 (27%) papers only qualitative, and 74 (25%) only quantitative. For the experimental task, we found that the most popular task involved performance (178, or 61%), followed by filling out questionnaires (146, or 50%), perceptual tasks (53, or 18%), interviews (41, or 14%) and collaborative tasks (21, or 7%). In terms of dependent measures, subjective ratings were the most popular with 167 papers (57%), followed by error/accuracy measures (130, or 45%), and task completion time (123, or 42%). We defined task as any activity that was carried out by the participants to provide data—both quantitative and/or qualitative—about the experimental system(s). Note that many experiments used more than one experimental task or dependent measure, so the percentages sum to more than 100%. Finally, the bulk of the user studies were conducted in an indoor environment (246, or 83%), not outdoors (43, or 15%), or a combination of both settings (6, or 2%).

3.5. Senses

As expected, an overwhelming majority of papers (281, or 96%) augmented the visual sense. Haptic and Auditory senses were augmented in 27 (9%) and 21 (7%) papers respectively. Only six papers (2%) reported augmenting only the auditory sense and five (2%) papers reported augmenting only the haptic sense. This shows that there is an opportunity for conducting more user studies exploring non-visual senses.

3.6. Participants

The demographics of the participants showed that most of the studies were run with young participants, mostly university students. A total of 182 papers (62%) used participants with an approximate mean age of less than 30 years. A total of 227 papers (78%) reported involving female participants in their experiments, but the ratio of female to male participants was low (43% of total participants in those 227 papers). When all 291 papers are considered, only 36% of participants were female. Many papers (117, or 40%) did not explicitly mention the source of participant recruitment. Of those that did, most (102, or 35%) sourced their participants from universities, whereas only 36 papers (12%) mentioned sourcing participants from the general public. This shows that many AR user studies use young male university students as their subjects, rather than a more representative cross-section of the population.

3.7. Displays

We also recorded the displays used in these experiments (Table 3 ). Most of the papers used either HMDs (102 papers, or 35%) or handhelds (100 papers, or 34%), including six papers that used both. Since 2009, the number of papers using HMDs started to decrease while the number of papers using handheld displays increased (Figure 1B ). For example, between 2010 and 2014 (204 papers in our review), 50 papers used HMDs and 79 used handhelds, including one paper that used both, and since 2011 papers using handheld displays consistently outnumbered papers using HMDs. This trend—that handheld mobile AR has recently become the primary display for AR user studies—is of course driven by the ubiquity of smartphones.

3.8. Categorization

We categorized the papers into nine different application areas (Tables 3 , 4 ): (i) Perception (51 papers, or 18%), (ii) Medical (43, or 15%), (iii) Education (42, or 14%), (iv) Entertainment and Gaming (14, or 5%), (v) Industrial (30, or 10%), (vi) Navigation and Driving (24, or 9%), (vii) Tourism and Exploration (8, or 2%), (viii) Collaboration (12, or 4%), and (ix) Interaction (67, or 23%). Figure 1C shows the change over time in number of AR papers with user studies in these categories. The Perception and Interaction categories are rather general areas of AR research, and contain work that reports on more low-level experiments, possibly across multiple application areas. Our analysis shows that there are fewer AR user studies published in Collaboration, Tourism and Exploration, and Entertainment and Gaming, identifying future application areas for user studies. There is also a noticeable increase in the number of user studies in educational applications over time. The drop in number of papers in 2014 is due to the selection criteria of papers having at least 1.5 average citations per year, as these papers were too recent to be cited often. Interestingly, although there were relatively few of them, papers in Collaboration, Tourism and Exploration categories received noticeably higher ACC scores than other categories.
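The category shares quoted above can be re-derived from the paper counts. The tally below uses the counts from the text; note that whole-percent rounding differs by a point from the quoted figures for a couple of the smaller categories.

```python
# Recomputing the application-area percentages from the counts in the text.
from collections import Counter

category_counts = Counter({
    "Perception": 51, "Medical": 43, "Education": 42,
    "Entertainment and Gaming": 14, "Industrial": 30,
    "Navigation and Driving": 24, "Tourism and Exploration": 8,
    "Collaboration": 12, "Interaction": 67,
})
total = sum(category_counts.values())
shares = {cat: round(100 * n / total) for cat, n in category_counts.items()}
print(total)                  # prints 291
print(shares["Interaction"])  # prints 23
```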

3.9. Average Authors

As shown in Table 3, most categories had a similar average number of authors for each paper, ranging between 3.24 (Education) and 3.87 (Industrial). However, papers in the Medical domain had the highest average number of authors (6.02), which indicates the multidisciplinary nature of this research area. In contrast to all other categories, most of the papers in the Medical category were published in journals rather than the common AR publication venues, which are mostly conferences. Entertainment and Gaming (4.71), and Navigation and Driving (4.58) also had considerably higher numbers of authors per paper on average.

3.10. Individual Studies

While a total of 369 studies were reported in these 291 papers (Table 4), the majority of the papers (231, or 80%) reported only one user study. Forty-seven (16.2%), nine (3.1%), two (<1%), and one (<1%) papers reported two, three, four, and five studies respectively, including pilot studies. In terms of the median number of participants per study, Tourism and Exploration, and Education were the highest among all categories with 28 participants per study. Other categories used between 12 and 18 participants per study, while the overall median stands at 16 participants. Based on this insight, it can be claimed that 12 to 18 participants per study is a typical range in the AR community. Out of the 369 studies, 31 (8.4%) were pilot studies, six (1.6%) were heuristic evaluations, 54 (14.6%) were field studies, and the remaining 278 (75.3%) were formal controlled user studies. Most of the studies (272, or 73.7%) were designed as within-subjects, 52 (14.1%) as between-subjects, and 16 (4.3%) as mixed-factors (Table 4).

In the following section we review user studies in each of the nine application areas separately. We provide a commentary on each category and also discuss a representative paper with the highest ACCs in each application area, so that readers can understand typical user studies from that domain. We present tables summarizing all of the papers from these areas at the end of the paper.

4. Application Areas

4.1. Collaboration

A total of 15 studies were reported in 12 papers in the Collaboration application area. The majority of the studies investigated some form of remote collaboration (Table 5), although Henrysson et al. (2005a) presented a face-to-face collaborative AR game. Interestingly, out of the 15 studies, eight reported using handheld displays, seven used HMDs, and six used some form of desktop display. This makes sense, as collaborative interfaces often require at least one collaborator to be stationary, and desktop displays can be beneficial in such setups. One noticeable feature was the low number of studies performed in the wild or in natural settings (field studies). Only three out of 15 studies were performed in natural settings and no pilot studies were reported, which is an area for potential improvement. While 14 out of 15 studies were designed to be within-subjects, only 12 participants were recruited per study. On average, roughly one-third of the participants were female across all studies considered together. All studies were performed in indoor locations except for Gauglitz et al. (2014b), which was performed outdoors. While a majority of the studies (8) collected both objective (quantitative) and subjective (qualitative) data, five studies were based on only subjective data, and two studies were based on only objective data, both of which were reported in one paper (Henrysson et al., 2005a). Besides subjective feedback or ratings, task completion time and error/accuracy were other prominent dependent variables used. Only one study used NASA TLX (Wang and Dunston, 2011).


Table 5. Summary of user studies in the Collaboration application area.

4.1.1. Representative Paper

As an example of the type of collaborative AR experiments conducted, we discuss the paper of Henrysson et al. (2005a) in more detail. They developed an AR-based face-to-face collaboration tool using a mobile phone and reported on two user studies. This paper received an ACC of 22.9, the highest in this category of papers. In the first study, six pairs of participants played a table-top tennis game in three conditions: face-to-face AR, face-to-face non-AR, and non-face-to-face collaboration. In the second experiment, the authors added (and varied) audio and haptic feedback in the games and evaluated only the face-to-face AR condition. The same six pairs were recruited for this study as well. The authors collected both quantitative and qualitative (survey and interview) data, although they focused more on the latter. They asked questions regarding the usability of the system and asked participants to rank the conditions. They explored several usability issues and provided design guidelines for developing face-to-face collaborative AR applications using handheld displays, for example, designing applications that focus on a single shared workspace.

4.1.2. Discussion

The work done in this category is mostly directed toward remote collaboration. With the advent of modern head-mounted devices such as the Microsoft HoloLens, new types of collaboration can be created, including opportunities for enhanced face-to-face collaboration. Work needs to be done toward making AR-based remote collaboration akin to the real world, with not only a shared understanding of the task but also a shared understanding of the other collaborators' emotional and physiological states. New gesture- and gaze-based interactions, and collaboration across multiple platforms (e.g., between AR and virtual reality users), are novel future research directions in this area.

4.2. Education

Fifty-five studies were reported in 42 papers in the Education application area (Table 6). As expected, all studies reported some kind of teaching or learning application, with a few niche areas such as music training, educational games, and teaching body movements. Out of 55 studies, 24 used handheld displays, 8 used HMDs, 16 used some form of desktop display, and 11 used spatial or large-scale displays. One study augmented only sound feedback and used a head-mounted speaker (Hatala and Wakkary, 2005). Again, the trend of using handheld displays is prominent in this application area. Among all the studies reported, 13 were pilot studies, 14 were field studies, and 28 were controlled lab-based experiments. Thirty-one studies were designed as within-subjects and 16 as between-subjects. Six studies tested only one condition. The median number of participants was 28, jointly the highest among all application areas. Almost 43% of participants were female. Forty-nine studies were performed in indoor locations, four in outdoor locations, and two studies were performed in both. Twenty-five studies collected only subjective data, 10 only objective data, and 20 studies collected both types of data. While subjective rating was the primary dependent measure in most of the studies, some specific measures were also noticed, such as pre- and post-test scores, number of items remembered, and engagement. Among the keywords used in the papers, learning was the most common, while interactivity, users, and environments also received noticeable importance from the authors.


Table 6. Summary of user studies in the Education application area.

4.2.1. Representative Paper

The paper by Fonseca et al. (2014a) received the highest ACC (22) in the Education application area of AR. The authors developed a mobile phone-based AR teaching tool for 3D model visualization and architectural projects for classroom learning. They recruited a total of 57 students (29 female) and collected qualitative data through questionnaires and quantitative data through pre- and post-tests. This data was collected over several months of instruction. The primary dependent variable was the improvement in the students' academic performance. The authors used five-point Likert-scale questions as the primary instrument. They reported that using the AR tool in the classroom was correlated with increased motivation and academic achievement. This type of longitudinal study is not common in the AR literature, but it is helpful in measuring the actual real-world impact of an application or intervention.

4.2.2. Discussion

The papers in this category covered a diverse range of education and training application areas. Some papers used AR to teach physically or cognitively impaired patients, while a couple more promoted physical activity. This set of papers focused on both objective and subjective outcomes. For example, Anderson and Bischof (2014) reported a system called ARM trainer to train amputees in the use of myoelectric prostheses, which provided an improved user experience over the current standard of care. In similar work, Gama et al. (2012) presented a pilot study on upper-body motor movements in which users were taught to move body parts according to the instructions of an expert such as a physiotherapist, and showed that the AR-based system was preferred by the participants. Their system can be applied to teach other kinds of upper-body movements beyond rehabilitation. In another paper, Chang et al. (2013) reported a study where AR helped cognitively impaired people gain vocational job skills, and the gained skills were maintained even after the intervention. Hsiao et al. (2012) and Hsiao (2010) presented a couple of studies where physical activity was included in the learning experience to promote “learning while exercising.” A few other papers gamified the AR learning content and primarily focused on subjective data. Iwata et al. (2011) presented ARGo, an AR version of the game Go, to investigate and promote self-learning. Juan et al. (2011b) developed the ARGreenet game to create awareness of recycling. Three papers investigated educational content themed around tourism and mainly focused on subjective opinion. For example, Hatala and Wakkary (2005) created a museum guide educating users about the objects in the museum, and Szymczak et al. (2012) created a multi-sensory application for teaching about the historic sites in a city.
Several other papers proposed and evaluated different pedagogical approaches using AR, including two papers specifically designed for teaching music, such as Liarokapis (2005) and Weing et al. (2013). Overall, these papers show that in the education space a variety of evaluation methods can be used, focusing both on educational outcomes and on application usability. Integrating methods from intelligent tutoring systems (Anderson et al., 1985) with AR could provide effective tools for education. Another interesting area to explore further is making these educational interfaces adaptive to the user's cognitive load.

4.3. Entertainment and Gaming

We reviewed a total of 14 papers in the Entertainment and Gaming area, which reported 18 studies (Table 7). A majority of the papers reported gaming applications, while fewer reported other forms of entertainment applications. Out of the 18 studies, nine were carried out using handheld displays and four used HMDs. One of the reported studies, interestingly, did not use any display (Xu et al., 2011). Again, the increasing use of handheld displays is expected, as this kind of display provides greater mobility than HMDs. Five studies were conducted as field studies and the remaining 13 were controlled lab-based experiments. Fourteen studies were designed as within-subjects and two as between-subjects. The median number of participants in these studies was 17. Roughly 41.5% of participants were female. Thirteen studies were performed in indoor areas, four in outdoor locations, and one study was conducted in both. Eight studies collected only subjective data, another eight collected both subjective and objective data, and the remaining two collected only objective data. Subjective preference was the primary measure of interest; however, task completion time was also an important measure. Unlike in other areas, error/accuracy was not used as a measure in these studies. In terms of the keywords used by the authors, besides games, mobile and handheld were also prominent. These results highlight the utility of handheld displays for AR Entertainment and Gaming studies.


Table 7. Summary of user studies in the Entertainment and Gaming application area.

4.3.1. Representative Paper

Dow et al. (2007) presented a qualitative user study exploring the impact of immersive technologies on presence and engagement using interactive drama, where players had to converse with characters and manipulate objects in the scene. This paper received the highest ACC (9.5) in this category of papers. The authors compared two versions of desktop 3D-based interfaces with an immersive AR-based interface in a lab-based environment. Participants communicated in the desktop versions using keyboards and voice. The AR version used a video see-through HMD. They recruited 12 participants (six female) in the within-subjects study, each of whom experienced the interactive dramas. This paper is unusual because user data was collected mostly from open-ended interviews and observation of participant behaviors, not task performance or subjective questions. They reported that immersive AR caused an increased level of user presence; however, higher presence did not always lead to more engagement.

4.3.2. Discussion

It is clear that advances in mobile connectivity, CPU and GPU processing capabilities, wearable form factors, tracking robustness, and accessibility of commercial-grade game creation tools are leading to more interest in AR for entertainment. There is significant evidence from both AR and VR research of the power of immersion to provide a deeper sense of presence, leading to new opportunities for enjoyment in Mixed Reality spaces (a continuum encompassing both AR and VR; Milgram et al., 1995). Natural user interaction will be key to sustaining the use of AR in entertainment, as users will shy away from long-term use of technologies that induce fatigue. In this sense, wearable AR will probably be more attractive for entertainment AR applications. In these types of entertainment applications, new types of evaluation measures will need to be used, as shown by the work of Dow et al. (2007).

4.4. Industrial

A total of 30 papers reviewed focused on Industrial applications, and together they reported 36 user studies. A majority of the studies reported maintenance and manufacturing/assembly-related tasks (Table 8). Eleven studies used handheld displays, 21 used HMDs, four used spatial or large-screen displays, and two used desktop displays. The prevalence of HMDs was expected, as most of the applications in this area require the use of both hands at times, making HMDs more suitable displays. Twenty-nine studies were executed in a formal lab-based environment and only six in their natural setups. We believe performing more industrial AR studies in the natural environment will lead to more usable results, as controlled environments may not expose users to the issues that they face in real-world setups. Twenty-eight studies were designed as within-subjects and six as between-subjects. One study was designed to collect exploratory feedback from a focus group (Olsson et al., 2009). The median number of participants in these studies was 15, and roughly 23% of them were female. Thirty-two studies were performed in indoor locations and four in outdoor locations. Five studies were based on only subjective data, four on only objective data, and the remaining 27 collected both kinds of data. Use of NASA TLX was very common in this application area, which was expected given the nature of the tasks. Time and error/accuracy were other commonly used measurements, along with subjective feedback. The keywords used by the authors to describe their papers highlight a strong interest in interaction, interfaces, and users. Guidance and maintenance were other prominent keywords.


Table 8. Summary of user studies in the Industrial area.
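Since NASA TLX appears so often as a workload measure in these industrial studies, a minimal scoring sketch may help readers unfamiliar with it. Six subscales are rated 0–100; the "raw TLX" is simply their mean, while the weighted variant uses tallies from 15 pairwise comparisons. The ratings and weights below are invented for illustration, not drawn from any reviewed study.

```python
# Sketch of NASA TLX scoring. The six subscale ratings (0-100) and the
# pairwise-comparison weights below are hypothetical example values.

SCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def raw_tlx(ratings):
    """Unweighted (raw) TLX: the mean of the six subscale ratings."""
    return sum(ratings[s] for s in SCALES) / len(SCALES)

def weighted_tlx(ratings, weights):
    """Weighted TLX: tally-weighted mean. The weights come from 15
    pairwise comparisons between subscales, so they must sum to 15."""
    assert sum(weights.values()) == 15
    return sum(ratings[s] * weights[s] for s in SCALES) / 15

ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 60, "frustration": 35}
weights = {"mental": 5, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 1}

print(raw_tlx(ratings))               # mean of the six ratings
print(weighted_tlx(ratings, weights)) # pairwise-weighted score
```

Reporting both variants costs nothing extra once the pairwise comparisons have been collected, and the weighted score downplays subscales a participant considers irrelevant to the task.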

4.4.1. Representative Paper

As an example of the papers written in this area, Henderson S. and Feiner (2011) published work exploring AR documentation for maintenance and repair tasks in a military vehicle, which received the highest ACC (26.25) in the Industrial area. They used a video see-through HMD to implement the study application. In the within-subjects study, the authors recruited six male participants, all professional military mechanics, who performed the tasks in field settings. Participants performed 18 different maintenance tasks under three conditions: AR, LCD, and HUD. Both quantitative and qualitative (questionnaire) data were collected. As dependent variables, the authors used task completion time, task localization time, head movement, and errors. The AR condition resulted in faster task localization and fewer head movements. Qualitatively, AR was also reported to be more intuitive and satisfying. This paper provides an outstanding example of how to collect both qualitative and quantitative measures in an industrial setting, and thereby get a better indication of the user experience.

4.4.2. Discussion

The majority of the work in this category focused on maintenance and assembly tasks, whereas a few papers investigated architecture and planning tasks. Another prominent line of work in this category is military applications. Some work also covers surveying and item selection (stock picking). It would be interesting to investigate non-verbal communication cues in collaborative industrial applications, so that people from multiple cultural backgrounds can easily work together. As most industrial tasks require specific training and working in a particular environment, we assert that more studies need to recruit participants from the real user population and be performed in the field when possible.

4.5. Interaction

There were 71 papers in the Interaction design area, and 83 user studies were reported in these papers (see Table 9). Interaction is a very general area in AR, and the topics covered by these papers were diverse. Forty studies used handheld displays, 33 used HMDs, eight used desktop displays, 12 used spatial or large-screen displays, and 10 studies used a combination of multiple display types. Seventy-one studies were conducted in a lab-based environment, five were field studies, and six were pilot studies. Jones et al. (2013) were the only authors to conduct a heuristic evaluation. The median number of participants in these studies was 14, and approximately 32% of participants were female. Seventy-five studies were performed in indoor locations, seven in outdoor locations, and one study used both. Sixteen studies collected only subjective data, 14 only objective data, and 53 studies collected both types of data. Task completion time and error/accuracy were the most commonly used dependent variables. A few studies used the NASA TLX workload survey (Robertson et al., 2007; Henze and Boll, 2010b), and most used some form of subjective rating, such as ranking conditions or rating on a Likert scale. The keywords used by the authors indicate that the papers in general focused on interaction, interface, user, mobile, and display devices.


Table 9. Summary of user studies in the Interaction application area.

4.5.1. Representative Paper

Boring et al. (2010) presented a user study of remote manipulation of content on distant displays using their system, named Touch Projector, which was implemented on an iPhone 3G. This paper received the highest ACC (31) in the Interaction category of papers. They implemented multiple interaction methods in this application, e.g., manual zoom, automatic zoom, and freezing. The user study involved 12 volunteers (four female) and was designed as a within-subjects study. In the experiment, participants selected targets and dragged targets between displays using the different conditions. Both quantitative and qualitative (informal feedback) data were collected. The main dependent variables were task completion time, failed trials, and docking offset. The authors reported that participants achieved the highest performance with automatic zooming and temporary image freezing. This is a typical AR study conducted within a controlled laboratory environment. As usual in interaction studies, a significant part of the study focused on user performance under different input conditions, and this paper shows the benefit of capturing different types of performance measures, not just task completion time.

4.5.2. Discussion

User interaction is a cross-cutting focus of research, and as such, does not fall neatly within an application category, but deeply influences user experience in all categories. The balance of expressiveness and efficiency is a core concept in general human-computer interaction, but is of even greater importance in AR interaction, because of the desire to interact while on the go, the danger of increased fatigue, and the need to interact seamlessly with both real and virtual content. Both qualitative and quantitative evaluations will continue to be important in assessing usability in AR applications, and we encourage researchers to continue with this approach. It is also important to capture as many different performance measures as possible from the interaction user study to fully understand how a user interacts with the system.

4.6. Medicine

One of the most promising areas for applying AR is the medical sciences. However, most medical AR papers were published in medical journals rather than the most common AR publication venues. As we considered all venues in our review, we were able to identify 43 medical papers reporting AR studies, which in total reported 54 user studies. The specific topics were diverse, including laparoscopic surgery, rehabilitation and recovery, phobia treatment, and other medical training. This application area was dominated by desktop displays (34 studies), while 16 studies used HMDs, and handheld displays were used in only one study. This is very much expected, as medical setups often require a clear view and free hands without adding any physical load. As expected, all studies were performed in indoor locations. Thirty-six studies were within-subjects and 11 between-subjects. The median number of participants was 13, and only approximately 14.2% of participants were female, which is considerably lower than the gender ratio in the medical profession. Twenty-two studies collected only objective data, 19 only subjective data, and 13 studies collected both types of data. Besides time and accuracy, various domain-specific surveys and other instruments were used in these studies, as shown in Table 10.


Table 10. Summary of user studies in Medical application areas.

The keywords used by the authors suggest that AR-based research was primarily used in training and simulation. Laparoscopy, rehabilitation, and phobia were topics of primary interest. One difference between the keywords used in medical science vs. other AR fields is the omission of the word user, which indicates that the interfaces designed for medical AR were primarily focused on achieving higher precision rather than on user experience. This is understandable, as the users are highly trained professionals who need to learn to use new complex interfaces. The precision of the interface is of utmost importance, as poor performance can be life-threatening.

4.6.1. Representative Paper

Archip et al. (2007) reported on a study that used AR visualization for image-guided neurosurgery, which received the highest ACC (15.6) in this category of papers. The researchers recruited 11 patients (six female) with brain tumors who underwent surgery. Quantitative data about alignment accuracy was collected as the dependent variable. They found that using AR produced a significant improvement in alignment accuracy compared to the non-AR system already in use. An interesting aspect of the paper is that it focused purely on one performance measure, alignment accuracy, with no qualitative data captured about how users felt about the system. This appears to be typical of many medical AR papers.

4.6.2. Discussion

AR medical applications are typically designed for highly trained medical practitioners, who are a specialist set of users compared to those in other types of user studies. The overwhelming focus is on improving user performance in medical tasks, and so most of the user studies are heavily performance-focused. However, there is an opportunity to include more qualitative measures in medical AR studies, especially those that relate to users' estimation of their physical and cognitive workload, such as the NASA TLX survey. In many cases, medical AR interfaces aim to improve user performance in medical tasks compared to traditional medical systems. This means that comparative evaluations will need to be carried out, and previous experience with the existing systems will need to be taken into account.

4.7. Navigation and Driving

A total of 24 papers reported 28 user studies in the Navigation and Driving application area (see Table 11). A majority of the studies reported applications for car driving; however, there were also pedestrian navigation applications for both indoors and outdoors. Fifteen studies used handheld displays, five used HMDs, and two used heads-up displays (HUDs). Spatial or large-screen displays were used in four studies. Twenty-three of the studies were performed in controlled setups and the remaining five were executed in the field. Twenty-two studies were designed as within-subjects, three as between-subjects, and the remaining three were mixed-factors studies. Approximately 38% of participants in these studies were female, and the median number of participants was 18. Seven studies were performed in an outdoor environment and the rest in indoor locations. This indicates an opportunity to design and test hybrid AR navigation applications that can be used in both indoor and outdoor locations. Seven studies collected only objective data, 18 collected a combination of both objective and subjective data, and only three were based solely on subjective data. Task completion time and error/accuracy were the most commonly used dependent variables. Other domain-specific variables included headway variation (deviation from the intended path), targets found, and number of steps.


Table 11. Summary of user studies in the Navigation and Driving application area.

Analysis of author-specified keywords suggests that mobile received strong importance, which is also evident in the profuse use of handheld displays in these studies, since these applications are about mobility. Acceptance was another noticeable keyword, indicating that the studies intended to investigate whether or not a navigation interface is acceptable to users, given that, in many cases, a navigational tool can affect the safety of the user.

4.7.1. Representative Paper

Morrison et al. (2009) published a paper reporting on a field study that compared a mobile augmented reality map (MapLens) with a 2D map in a between-subjects design, which received the highest ACC (16.3) in this application area of our review. MapLens was implemented on a Nokia N95 mobile phone and used AR to show virtual points of interest overlaid on a real map. The experimental task was to play a location-based treasure-hunt-type game outdoors using either MapLens or a 2D map. The researchers collected both quantitative and qualitative (photos, videos, field notes, and questionnaires) data. A total of 37 participants (20 female) took part in the study. The authors found that the AR map created more collaboration between players, and argued that AR maps are more useful as a collaboration tool. This work is important because it provides an outstanding example of an AR field study evaluation, which is not very common in the AR domain. User testing in the field can uncover usability issues that normal lab-based testing cannot identify, particularly in the Navigation application area. For example, Morrison et al. (2009) were able to identify the challenges of using a handheld AR device while trying to maintain awareness of the surrounding world.

4.7.2. Discussion

Navigation is an area where AR technology could provide significant benefit, due to the ability to overlay virtual cues on the real world. This will become increasingly important as AR displays become more common in cars (e.g., windscreen heads-up displays) and consumers begin to wear head-mounted displays outdoors. Most navigation studies have related to vehicle driving, so there is a significant opportunity for pedestrian navigation studies. However, human movement is more complex and erratic than driving, so these types of studies will be more challenging. Navigation studies will need to take into consideration the user's spatial ability, how to convey depth cues, and methods for spatial information display. The current user studies show how important it is to conduct navigation studies outdoors in a realistic testing environment, and the need to capture a variety of qualitative and quantitative data.

4.8. Perception

Similar to Interaction, Perception is another general field of study within AR, and appears in 51 papers in our review. A total of 71 studies were reported in these papers. The primary focus was on visual perception (see Table 12), such as perception of depth/distance, color, and text. A few studies also reported on perception of touch (haptic feedback). AR X-ray vision was also a common interface reported in this area. Perception of egocentric distance received significant attention, while exocentric distance was studied less. Also, near- to medium-field distance estimation was studied more than far-field distances. A comprehensive review of depth perception studies in AR can be found in Dey and Sandor (2014), which reports similar findings about AR perceptual studies to those in this review.


Table 12. Summary of user studies in the Perception application area.

Twenty-one studies used handheld displays, 34 used HMDs, and nine used desktop displays. The Phantom haptic display was used in the two studies of haptic feedback. Sixty studies were performed as controlled lab-based experiments, and only three were performed in the field. Seven studies were pilot studies and there was one heuristic study (Veas et al., 2012). Fifty-three studies were within-subjects, 12 between-subjects, and six mixed-factors. Overall, the median number of participants in these studies was 16, and 27.3% of participants were female. Fifty-two studies were performed in indoor locations, only 17 outdoors, and two used both. This indicates that indoor visual perception is well studied, whereas more work is needed to investigate outdoor visual perception. Outdoor locations present additional challenges for visualization, such as brightness, screen glare, and tracking (when mobile). This is an area for the research community to focus on. Thirty-two studies were based on only objective data, 14 used only subjective data, and 25 collected both kinds of data. Time and error/accuracy were the most commonly used dependent measures, along with subjective feedback.

Keywords used by authors indicate an emphasis on depth and visual perception, which is expected, as most AR interfaces augment the visual sense. Other prominent keywords were X-ray and see-through, areas that have received a significant amount of attention from the community over the last decade.

4.8.1. Representative Paper

A recent paper by Suzuki et al. (2013), reporting on the interaction of exteroceptive and interoceptive signals in virtual cardiac rubber-hand perception, received the highest ACC (13.5) in this category of papers. The authors reported on a lab-based within-subjects user study with 21 participants (11 female) who wore a head-mounted display and experienced tactile feedback simulating cardiac sensation. Both quantitative and qualitative (survey) data were collected. The main dependent variables were proprioceptive drift and virtual hand ownership. The authors reported that ownership of the virtual hand was significantly higher when the tactile sensation was presented synchronously with the participant's heartbeat than when it was provided asynchronously. This shows the benefit of combining perceptual cues to improve the user experience.

4.8.2. Discussion

A key focus of AR is trying to create the perceptual illusion that AR content is seamlessly part of the user's real world. To measure how well this is occurring, it is important to conduct perceptual user studies. Most studies to date have focused on visual perception, but there is a significant opportunity to conduct studies on non-visual cues, such as audio and haptic perception. One of the challenges of such studies is measuring the user's perception of an AR cue, and also their confidence in how well they can perceive it, for example, asking users to estimate the distance of an AR object from them, and how sure they are about that estimate. New experimental methods may need to be developed to do this well.

4.9. Tourism and Exploration

Tourism is one of the relatively less explored areas of AR user studies, represented by only eight papers in our review (Table 13). A total of nine studies were reported, and the primary focus of the papers was on museum-based applications (five papers). Three studies used handheld displays, three used large-screen or spatial displays, and the remaining three used head-mounted displays. Six studies were conducted in the field, in the environment where the applications were meant to be used, and only three were performed in lab-based controlled environments. Six studies were designed to be within-subjects. Studies in this area used a markedly higher number of participants than other areas, with a median of 28 participants, approximately 38% of them female. All studies were performed in indoor locations. While we are aware of studies in this area that have been performed in outdoor locations, these did not meet the inclusion criteria of our review. Seven studies were based completely on subjective data and the other two used both subjective and objective data. As the interfaces primarily delivered personal experiences, the reliance on subjective data is understandable. An analysis of keywords in the papers found that the focus was on museums. User was the most prominent keyword of all, which is very much expected for an interface technology such as AR.


Table 13. Summary of user studies in the Tourism and Exploration application area.

4.9.1. Representative Paper

The highest ACC (19) in this application area was received by an article published by Olsson et al. (2013) about expectations of the user experience of mobile augmented reality (MAR) services in a shopping context. The authors used semi-structured interviews as their research methodology and conducted 16 interview sessions with 28 participants (16 female) in two different shopping centers; hence, their collected data was purely qualitative. The interviews were conducted individually, in pairs, and in groups. The authors reported on (1) the characteristics of the expected user experience and (2) central user requirements related to MAR in a shopping context. Users expected the MAR systems to be playful, inspiring, lively, collective, and surprising, along with providing context-aware and awareness-increasing services. This type of exploratory study is not common in the AR domain; however, it is a good example of how qualitative data can be used to identify user expectations and conceptualize user-centered AR applications. It is also an interesting study because people were asked what they expected of a mobile AR service without actually seeing or trying the service.

4.9.2. Discussion

One of the big advantages of studies in this area is the relatively large sample sizes, as well as the common use of “in the wild” studies that assess users outside of controlled environments. For these reasons, we see this application area as well suited to exploring applied user interface designs with real end-users in real environments. We also expect this category to remain attractive for applications on handheld devices, as opposed to head-worn AR devices, since handhelds are ubiquitous and can be put aside when visitors want to appreciate the physical works themselves.

5. Conclusion

5.1. Overall Summary

In this paper, we reported on 10 years of user studies published in AR papers. We reviewed papers from a wide range of journals and conferences as indexed by Scopus, which yielded 291 papers and 369 individual studies. Overall, user study papers made up less than 10% of all AR papers published over the 10-year period we reviewed. Our exploration shows that although the absolute number of studies has increased, the relative percentage has remained about the same. In addition, since 2011 there has been a shift toward more studies using handheld displays. Most studies were formal user studies, with little field testing and even fewer heuristic evaluations. Over the years there was an increase in AR user studies of educational applications, but there were few collaborative user studies. The use of pilot studies was also less common than expected. The most popular data collection method was questionnaires, which made subjective ratings the most widely used dependent measure.

5.2. Findings and Suggestions

This analysis suggests opportunities for more user studies in collaboration, greater use of field studies, and a wider range of evaluation methods. We also find that participant populations are dominated by young, educated, male participants, which suggests the field could benefit from recruiting a more diverse selection of participants. On a similar note, except for the Education and Tourism application categories, the median number of participants in AR studies was between 12 and 18, which appears low compared to other fields of human-subjects research. We have also noticed that within-subjects designs are dominant in AR, and these require fewer participants to achieve adequate statistical power. This is in contrast to general research in psychology, where between-subjects designs dominate.
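As a rough illustration of why within-subjects designs need fewer participants (this sketch is ours, not from the surveyed papers), the standard normal-approximation sample-size formula can be compared for the two designs; the medium effect size d = 0.5 and the 80% power target are assumed values chosen only for the example.

```python
from statistics import NormalDist

def required_n(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size for a two-sided z-test (normal approximation)."""
    z = NormalDist().inv_cdf
    return ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2

d = 0.5  # assumed medium standardized effect size

# Within-subjects: a single group is tested on the paired differences,
# so only n participants are needed in total.
n_within = required_n(d)             # ≈ 31 participants

# Between-subjects: two independent groups; the variance of the group
# difference doubles, so each group needs twice the paired n, and the
# total is roughly 4x (before any extra benefit from correlated measures).
n_between_total = 4 * required_n(d)  # ≈ 126 participants

print(round(n_within), round(n_between_total))
```

Under these assumptions, a within-subjects study reaches the same power with roughly a quarter of the participants, which is consistent with the small median sample sizes observed in the AR literature.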

Although formal, lab-based experiments dominated overall, the Education and Tourism application areas had higher ratios of field studies to formal lab-based studies, which required more participants. Researchers working in other application areas of AR could take inspiration from Education and Tourism papers and seek to perform more studies in real-world usage scenarios.

Similarly, because the social and environmental impact of outdoor locations differs from that of indoor locations, results obtained from indoor studies cannot be directly generalized to outdoor environments. Therefore, more user studies conducted outdoors are needed, especially ethnographic observational studies that report on how people naturally use AR applications. Finally, out of our initial 615 papers, 219 (35%) did not report participant demographics, study design, or experimental task, and so could not be included in our survey. A user study reported without these details is hard to replicate, and its results cannot be accurately generalized. This suggests a general need to improve the reporting quality of user studies and to educate researchers in the field on how to conduct good AR user studies.

5.3. Final Thoughts and Future Plans

For this survey, our goal has been to provide a comprehensive account of the AR user studies performed over the last decade. We hope that researchers and practitioners in a particular application area can use the respective summaries when planning their own research agendas. In the future, we plan to explore each individual application area in more depth, and create more detailed and focused reviews. We would also like to create a publicly-accessible, open database containing AR user study papers, where new papers can be added and accessed to inform and plan future research.

Author Contributions

All authors contributed significantly to the whole review process and the manuscript. AD initiated the process with Scopus database search, initial data collection, and analysis. AD, MB, RL, and JS all reviewed and collected data for an equal number of papers. All authors contributed almost equally to writing the paper, where AD and MB took the lead.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Ajanki, A., Billinghurst, M., Gamper, H., Järvenpää, T., Kandemir, M., Kaski, S., et al. (2011). An augmented reality interface to contextual information. Virt. Real. 15, 161–173. doi: 10.1007/s10055-010-0183-5

Akinbiyi, T., Reiley, C. E., Saha, S., Burschka, D., Hasser, C. J., Yuh, D. D., et al. (2006). “Dynamic augmented reality for sensory substitution in robot-assisted surgical systems,” in Annual International Conference of the IEEE Engineering in Medicine and Biology - Proceedings , 567–570.

Albrecht, U.-V., Folta-Schoofs, K., Behrends, M., and Von Jan, U. (2013). Effects of mobile augmented reality learning compared to textbook learning on medical students: randomized controlled pilot study. J. Med. Int. Res. 15. doi: 10.2196/jmir.2497

Allen, M., Regenbrecht, H., and Abbott, M. (2011). “Smart-phone augmented reality for public participation in urban planning,” in Proceedings of the 23rd Australian Computer-Human Interaction Conference, OzCHI 2011 , 11–20.

Almeida, I., Oikawa, M., Carres, J., Miyazaki, J., Kato, H., and Billinghurst, M. (2012). “AR-based video-mediated communication: a social presence enhancing experience,” in Proceedings - 2012 14th Symposium on Virtual and Augmented Reality, SVR 2012 , 125–130.

Alvarez-Santos, V., Iglesias, R., Pardo, X., Regueiro, C., and Canedo-Rodriguez, A. (2014). Gesture-based interaction with voice feedback for a tour-guide robot. J. Vis. Commun. Image Represent. 25, 499–509. doi: 10.1016/j.jvcir.2013.03.017

Anderson, F., and Bischof, W. F. (2014). Augmented reality improves myoelectric prosthesis training. Int. J. Disabil. Hum. Dev. 13, 349–354. doi: 10.1515/ijdhd-2014-0327

Anderson, J. R., Boyle, C. F., and Reiser, B. J. (1985). Intelligent tutoring systems. Science 228, 456–462.

Anderson, F., Grossman, T., Matejka, J., and Fitzmaurice, G. (2013). “YouMove: enhancing movement training with an augmented reality mirror,” in UIST 2013 - Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology , 311–320.

Archip, N., Clatz, O., Whalen, S., Kacher, D., Fedorov, A., Kot, A., et al. (2007). Non-rigid alignment of pre-operative MRI, fMRI, and DT-MRI with intra-operative MRI for enhanced visualization and navigation in image-guided neurosurgery. Neuroimage 35, 609–624. doi: 10.1016/j.neuroimage.2006.11.060

Arning, K., Ziefle, M., Li, M., and Kobbelt, L. (2012). “Insights into user experiences and acceptance of mobile indoor navigation devices,” in Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia, MUM 2012 .

Arvanitis, T., Petrou, A., Knight, J., Savas, S., Sotiriou, S., Gargalakos, M., et al. (2009). Human factors and qualitative pedagogical evaluation of a mobile augmented reality system for science education used by learners with physical disabilities. Pers. Ubiquit. Comput. 13, 243–250. doi: 10.1007/s00779-007-0187-7

Asai, K., Kobayashi, H., and Kondo, T. (2005). “Augmented instructions - A fusion of augmented reality and printed learning materials,” in Proceedings - 5th IEEE International Conference on Advanced Learning Technologies, ICALT 2005 , Vol. 2005, 213–215.

Asai, K., Sugimoto, Y., and Billinghurst, M. (2010). “Exhibition of lunar surface navigation system facilitating collaboration between children and parents in Science Museum,” in Proceedings - VRCAI 2010, ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Application to Industry , 119–124.

Avery, B., Thomas, B. H., and Piekarski, W. (2008). “User evaluation of see-through vision for mobile outdoor augmented reality,” in Proceedings - 7th IEEE International Symposium on Mixed and Augmented Reality 2008, ISMAR 2008 , 69–72.

Axholt, M., Cooper, M., Skoglund, M., Ellis, S., O'Connell, S., and Ynnerman, A. (2011). “Parameter estimation variance of the single point active alignment method in optical see-through head mounted display calibration,” in Proceedings - IEEE Virtual Reality , 27–34.

Azuma, R. T. (1997). A survey of augmented reality. Presence 6, 355–385.

Bai, Z., and Blackwell, A. F. (2012). Analytic review of usability evaluation in ISMAR. Interact. Comput. 24, 450–460. doi: 10.1016/j.intcom.2012.07.004

Bai, H., Lee, G. A., and Billinghurst, M. (2012). “Freeze view touch and finger gesture based interaction methods for handheld augmented reality interfaces,” in ACM International Conference Proceeding Series , 126–131.

Bai, H., Gao, L., El-Sana, J. B. J., and Billinghurst, M. (2013a). “Markerless 3D gesture-based interaction for handheld augmented reality interfaces,” in SIGGRAPH Asia 2013 Symposium on Mobile Graphics and Interactive Applications, SA 2013 .

Bai, Z., Blackwell, A. F., and Coulouris, G. (2013b). “Through the looking glass: pretend play for children with autism,” in 2013 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2013 , 49–58.

Bai, H., Lee, G. A., and Billinghurst, M. (2014). “Using 3D hand gestures and touch input for wearable AR interaction,” in Conference on Human Factors in Computing Systems - Proceedings , 1321–1326.

Baldauf, M., Lasinger, K., and Fröhlich, P. (2012). “Private public screens - Detached multi-user interaction with large displays through mobile augmented reality,” in Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia, MUM 2012 .

Baričević, D., Lee, C., Turk, M., Höllerer, T., and Bowman, D. (2012). “A hand-held AR magic lens with user-perspective rendering,” in ISMAR 2012 - 11th IEEE International Symposium on Mixed and Augmented Reality 2012, Science and Technology Papers , 197–206.

Baudisch, P., Pohl, H., Reinicke, S., Wittmers, E., Lühne, P., Knaust, M., et al. (2013). “Imaginary reality gaming: Ball games without a ball,” in UIST 2013 - Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology (St. Andrews, UK), 405–410.

Benko, H., and Feiner, S. (2007). “Balloon selection: a multi-finger technique for accurate low-fatigue 3D selection,” in IEEE Symposium on 3D User Interfaces 2007 - Proceedings, 3DUI 2007 , 79–86.

Benko, H., Wilson, A., and Zannier, F. (2014). “Dyadic projected spatial augmented reality,” in UIST 2014 - Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (Honolulu, HI), 645–656.

Bichlmeier, C., Heining, S., Rustaee, M., and Navab, N. (2007). “Laparoscopic virtual mirror for understanding vessel structure: evaluation study by twelve surgeons,” in 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, ISMAR .

Blum, T., Wieczorek, M., Aichert, A., Tibrewal, R., and Navab, N. (2010). “The effect of out-of-focus blur on visual discomfort when using stereo displays,” in 9th IEEE International Symposium on Mixed and Augmented Reality 2010: Science and Technology, ISMAR 2010 - Proceedings , 13–17.

Boring, S., Baur, D., Butz, A., Gustafson, S., and Baudisch, P. (2010). “Touch projector: Mobile interaction through video,” in Conference on Human Factors in Computing Systems - Proceedings , Vol. 4, (Atlanta, GA), 2287–2296.

Boring, S., Gehring, S., Wiethoff, A., Blöckner, M., Schöning, J., and Butz, A. (2011). “Multi-user interaction on media facades through live video on mobile devices,” in Conference on Human Factors in Computing Systems - Proceedings , 2721–2724.

Botden, S., Buzink, S., Schijven, M., and Jakimowicz, J. (2007). Augmented versus virtual reality laparoscopic simulation: what is the difference? A comparison of the ProMIS augmented reality laparoscopic simulator versus LapSim virtual reality laparoscopic simulator. World J. Surg. 31, 764–772. doi: 10.1007/s00268-006-0724-y

Botden, S., Buzink, S., Schijven, M., and Jakimowicz, J. (2008). ProMIS augmented reality training of laparoscopic procedures face validity. Simul. Healthc. 3, 97–102. doi: 10.1097/SIH.0b013e3181659e91

Botella, C., Juan, M., Baños, R., Alcañiz, M., Guillén, V., and Rey, B. (2005). Mixing realities? An application of augmented reality for the treatment of cockroach phobia. Cyberpsychol. Behav. 8, 162–171. doi: 10.1089/cpb.2005.8.162

Botella, C., Bretón-López, J., Quero, S., Baños, R., and García-Palacios, A. (2010). Treating cockroach phobia with augmented reality. Behav. Ther. 41, 401–413. doi: 10.1016/j.beth.2009.07.002

Botella, C., Breton-López, J., Quero, S., Baños, R., García-Palacios, A., Zaragoza, I., et al. (2011). Treating cockroach phobia using a serious game on a mobile phone and augmented reality exposure: a single case study. Comput. Hum. Behav. 27, 217–227. doi: 10.1016/j.chb.2010.07.043

Bretón-López, J., Quero, S., Botella, C., García-Palacios, A., Baños, R., and Alcañiz, M. (2010). An augmented reality system validation for the treatment of cockroach phobia. Cyberpsychol. Behav. Soc. Netw. 13, 705–710. doi: 10.1089/cyber.2009.0170

Brinkman, W., Havermans, S., Buzink, S., Botden, S., Jakimowicz, J. E., and Schoot, B. (2012). Single versus multimodality training basic laparoscopic skills. Surg. Endosc. Other Intervent. Tech. 26, 2172–2178. doi: 10.1007/s00464-012-2184-9

Bruno, F., Cosco, F., Angilica, A., and Muzzupappa, M. (2010). “Mixed prototyping for products usability evaluation,” in Proceedings of the ASME Design Engineering Technical Conference , Vol. 3, 1381–1390.

Bunnun, P., Subramanian, S., and Mayol-Cuevas, W. W. (2013). In Situ interactive image-based model building for Augmented Reality from a handheld device. Virt. Real. 17, 137–146. doi: 10.1007/s10055-011-0206-x

Cai, S., Chiang, F.-K., and Wang, X. (2013). Using the augmented reality 3D technique for a convex imaging experiment in a physics course. Int. J. Eng. Educ. 29, 856–865.

Cai, S., Wang, X., and Chiang, F.-K. (2014). A case study of Augmented Reality simulation system application in a chemistry course. Comput. Hum. Behav. 37, 31–40. doi: 10.1016/j.chb.2014.04.018

Carmigniani, J., Furht, B., Anisetti, M., Ceravolo, P., Damiani, E., and Ivkovic, M. (2011). Augmented reality technologies, systems and applications. Multim. Tools Appl. 51, 341–377. doi: 10.1007/s11042-010-0660-6

Chang, Y.-J., Kang, Y.-S., and Huang, P.-C. (2013). An augmented reality (AR)-based vocational task prompting system for people with cognitive impairments. Res. Dev. Disabil. 34, 3049–3056. doi: 10.1016/j.ridd.2013.06.026

Chastine, J., Nagel, K., Zhu, Y., and Yearsovich, L. (2007). “Understanding the design space of referencing in collaborative augmented reality environments,” in Proceedings - Graphics Interface , 207–214.

Chen, S., Chen, M., Kunz, A., Yantaç, A., Bergmark, M., Sundin, A., et al. (2013). “SEMarbeta: mobile sketch-gesture-video remote support for car drivers,” in ACM International Conference Proceeding Series , 69–76.

Chiang, T., Yang, S., and Hwang, G.-J. (2014). Students' online interactive patterns in augmented reality-based inquiry activities. Comput. Educ. 78, 97–108. doi: 10.1016/j.compedu.2014.05.006

Chintamani, K., Cao, A., Ellis, R., and Pandya, A. (2010). Improved telemanipulator navigation during display-control misalignments using augmented reality cues. IEEE Trans. Syst. Man Cybern. A Syst. Humans 40, 29–39. doi: 10.1109/TSMCA.2009.2030166

Choi, J., and Kim, G. J. (2013). Usability of one-handed interaction methods for handheld projection-based augmented reality. Pers. Ubiquit. Comput. 17, 399–409. doi: 10.1007/s00779-011-0502-1

Choi, J., Jang, B., and Kim, G. J. (2011). Organizing and presenting geospatial tags in location-based augmented reality. Pers. Ubiquit. Comput. 15, 641–647. doi: 10.1007/s00779-010-0343-3

Chun, W. H., and Höllerer, T. (2013). “Real-time hand interaction for augmented reality on mobile phones,” in International Conference on Intelligent User Interfaces, Proceedings IUI , 307–314.

Cocciolo, A., and Rabina, D. (2013). Does place affect user engagement and understanding?: mobile learner perceptions on the streets of New York. J. Document. 69, 98–120. doi: 10.1108/00220411311295342

Datcu, D., and Lukosch, S. (2013). “Free-hands interaction in augmented reality,” in SUI 2013 - Proceedings of the ACM Symposium on Spatial User Interaction , 33–40.

Denning, T., Dehlawi, Z., and Kohno, T. (2014). “In situ with bystanders of augmented reality glasses: perspectives on recording and privacy-mediating technologies,” in Conference on Human Factors in Computing Systems - Proceedings , 2377–2386.

Dey, A., and Sandor, C. (2014). Lessons learned: evaluating visualizations for occluded objects in handheld augmented reality. Int. J. Hum. Comput. Stud. 72, 704–716. doi: 10.1016/j.ijhcs.2014.04.001

Dey, A., Cunningham, A., and Sandor, C. (2010). “Evaluating depth perception of photorealistic Mixed Reality visualizations for occluded objects in outdoor environments,” in Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST , 211–218.

Dey, A., Jarvis, G., Sandor, C., and Reitmayr, G. (2012). “Tablet versus phone: depth perception in handheld augmented reality,” in ISMAR 2012 - 11th IEEE International Symposium on Mixed and Augmented Reality 2012, Science and Technology Papers , 187–196.

Dierker, A., Mertes, C., Hermann, T., Hanheide, M., and Sagerer, G. (2009). “Mediated attention with multimodal augmented reality,” in ICMI-MLMI'09 - Proceedings of the International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interfaces , 245–252.

Dixon, B., Daly, M., Chan, H., Vescan, A., Witterick, I., and Irish, J. (2011). Augmented image guidance improves skull base navigation and reduces task workload in trainees: a preclinical trial. Laryngoscope 121, 2060–2064. doi: 10.1002/lary.22153

Dow, S., Mehta, M., Harmon, E., MacIntyre, B., and Mateas, M. (2007). “Presence and engagement in an interactive drama,” in Conference on Human Factors in Computing Systems - Proceedings (San Jose, CA), 1475–1484.

Dünser, A., Grasset, R., and Billinghurst, M. (2008). A Survey of Evaluation Techniques Used in Augmented Reality Studies. Technical Report.

Dünser, A., Billinghurst, M., Wen, J., Lehtinen, V., and Nurminen, A. (2012a). Exploring the use of handheld AR for outdoor navigation. Comput. Graph. 36, 1084–1095. doi: 10.1016/j.cag.2012.10.001

Dünser, A., Walker, L., Horner, H., and Bentall, D. (2012b). “Creating interactive physics education books with augmented reality,” in Proceedings of the 24th Australian Computer-Human Interaction Conference, OzCHI 2012 , 107–114.

Espay, A., Baram, Y., Dwivedi, A., Shukla, R., Gartner, M., Gaines, L., et al. (2010). At-home training with closed-loop augmented-reality cueing device for improving gait in patients with Parkinson disease. J. Rehabil. Res. Dev. 47, 573–582. doi: 10.1682/JRRD.2009.10.0165

Fichtinger, G., Deguet, A., Masamune, K., Balogh, E., Fischer, G., Mathieu, H., et al. (2005). Image overlay guidance for needle insertion in CT scanner. IEEE Trans. Biomed. Eng. 52, 1415–1424. doi: 10.1109/TBME.2005.851493

Fichtinger, G. D., Deguet, A., Fischer, G., Iordachita, I., Balogh, E. B., Masamune, K., et al. (2005). Image overlay for CT-guided needle insertions. Comput. Aided Surg. 10, 241–255. doi: 10.3109/10929080500230486

Fiorentino, M., Debernardis, S., Uva, A. E., and Monno, G. (2013). Augmented reality text style readability with see-through head-mounted displays in industrial context. Presence 22, 171–190. doi: 10.1162/PRES_a_00146

Fiorentino, M., Uva, A. E., Gattullo, M., Debernardis, S., and Monno, G. (2014). Augmented reality on large screen for interactive maintenance instructions. Comput. Indust. 65, 270–278. doi: 10.1016/j.compind.2013.11.004

Fonseca, D., Redondo, E., and Villagrasa, S. (2014b). “Mixed-methods research: a new approach to evaluating the motivation and satisfaction of university students using advanced visual technologies,” in Universal Access in the Information Society .

Fonseca, D., Martí, N., Redondo, E., Navarro, I., and Sánchez, A. (2014a). Relationship between student profile, tool use, participation, and academic performance with the use of Augmented Reality technology for visualized architecture models. Comput. Hum. Behav. 31, 434–445. doi: 10.1016/j.chb.2013.03.006

Freitas, R., and Campos, P. (2008). “SMART: a system of augmented reality for teaching 2nd grade students,” in Proceedings of the 22nd British HCI Group Annual Conference on People and Computers: Culture, Creativity, Interaction, BCS HCI 2008 , Vol. 2, 27–30.

Fröhlich, P., Simon, R., Baillie, L., and Anegg, H. (2006). “Comparing conceptual designs for mobile access to geo-spatial information,” in ACM International Conference Proceeding Series , Vol. 159, 109–112.

Fröhlich, P., Baldauf, M., Hagen, M., Suette, S., Schabus, D., and Kun, A. (2011). “Investigating safety services on the motorway: the role of realistic visualization,” in Proceedings of the 3rd International Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2011 , 143–150.

Furió, D., González-Gancedo, S., Juan, M.-C., Seguí, I., and Rando, N. (2013). Evaluation of learning outcomes using an educational iPhone game vs. traditional game. Comput. Educ. 64, 1–23. doi: 10.1016/j.compedu.2012.12.001

Gabbard, J., and Swan II, J. (2008). Usability engineering for augmented reality: employing user-based studies to inform design. IEEE Trans. Visual. Comput. Graph. 14, 513–525. doi: 10.1109/TVCG.2008.24

Gabbard, J., Schulman, R., Edward Swan II, J., Lucas, J., Hix, D., and Gupta, D. (2005). “An empirical user-based study of text drawing styles and outdoor background textures for augmented reality,” in Proceedings - IEEE Virtual Reality , 11–18.

Gabbard, J., Swan II, J., and Hix, D. (2006). The effects of text drawing styles, background textures, and natural lighting on text legibility in outdoor augmented reality. Presence 15, 16–32. doi: 10.1162/pres.2006.15.1.16

Gabbard, J., Swan II, J., Hix, D., Kim, S.-J., and Fitch, G. (2007). “Active text drawing styles for outdoor augmented reality: a user-based study and design implications,” in Proceedings - IEEE Virtual Reality , 35–42.

Gama, A. D., Chaves, T., Figueiredo, L., and Teichrieb, V. (2012). “Guidance and movement correction based on therapeutics movements for motor rehabilitation support systems,” in Proceedings - 2012 14th Symposium on Virtual and Augmented Reality, SVR 2012 (Rio de Janeiro), 191–200.

Gandy, M., Catrambone, R., MacIntyre, B., Alvarez, C., Eiriksdottir, E., Hilimire, M., et al. (2010). “Experiences with an AR evaluation test bed: presence, performance, and physiological measurement,” in 9th IEEE International Symposium on Mixed and Augmented Reality 2010: Science and Technology, ISMAR 2010 - Proceedings , 127–136.

Gauglitz, S., Lee, C., Turk, M., and Höllerer, T. (2012). “Integrating the physical environment into mobile remote collaboration,” in MobileHCI'12 - Proceedings of the 14th International Conference on Human Computer Interaction with Mobile Devices and Services , 241–250.

Gauglitz, S., Nuernberger, B., Turk, M., and Höllerer, T. (2014a). “In touch with the remote world: remote collaboration with augmented reality drawings and virtual navigation,” in Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST , 197–205.

Gauglitz, S., Nuernberger, B., Turk, M., and Höllerer, T. (2014b). “World-stabilized annotations and virtual scene navigation for remote collaboration,” in UIST 2014 - Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (Honolulu, HI), 449–460.

Gavish, N., Gutiérrez, T., Webel, S., Rodríguez, J., Peveri, M., Bockholt, U., et al. (2013). Evaluating virtual reality and augmented reality training for industrial maintenance and assembly tasks. Interact. Learn. Environ . 23, 778–798. doi: 10.1080/10494820.2013.815221

Gee, A., Webb, M., Escamilla-Ambrosio, J., Mayol-Cuevas, W., and Calway, A. (2011). A topometric system for wide area augmented reality. Comput. Graph. (Pergamon) 35, 854–868. doi: 10.1016/j.cag.2011.04.006

Goldiez, B., Ahmad, A., and Hancock, P. (2007). Effects of augmented reality display settings on human wayfinding performance. IEEE Trans. Syst. Man Cybern. C Appl. Rev. 37, 839–845. doi: 10.1109/TSMCC.2007.900665

Grasset, R., Lamb, P., and Billinghurst, M. (2005). “Evaluation of mixed-space collaboration,” in Proceedings - Fourth IEEE and ACM International Symposium on Symposium on Mixed and Augmented Reality, ISMAR 2005 , Vol. 2005, 90–99.

Grasset, R., Langlotz, T., Kalkofen, D., Tatzgern, M., and Schmalstieg, D. (2012). “Image-driven view management for augmented reality browsers,” in ISMAR 2012 - 11th IEEE International Symposium on Mixed and Augmented Reality 2012, Science and Technology Papers , 177–186.

Grasso, R., Faiella, E., Luppi, G., Schena, E., Giurazza, F., Del Vescovo, R., et al. (2013). Percutaneous lung biopsy: comparison between an augmented reality CT navigation system and standard CT-guided technique. Int. J. Comput. Assist. Radiol. Surg. 8, 837–848. doi: 10.1007/s11548-013-0816-8

Grechkin, T. Y., Nguyen, T. D., Plumert, J. M., Cremer, J. F., and Kearney, J. K. (2010). How does presentation method and measurement protocol affect distance estimation in real and virtual environments? ACM Trans. Appl. Percept. 7:26. doi: 10.1145/1823738.1823744

Grubert, J., Morrison, A., Munz, H., and Reitmayr, G. (2012). “Playing it real: magic lens and static peephole interfaces for games in a public space,” in MobileHCI'12 - Proceedings of the 14th International Conference on Human Computer Interaction with Mobile Devices and Services , 231–240.

Gupta, A., Fox, D., Curless, B., and Cohen, M. (2012). “DuploTrack: a real-time system for authoring and guiding duplo block assembly,” in UIST'12 - Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology , 389–401.

Gustafsson, A., and Gyllenswärd, M. (2005). “The power-aware cord: Energy awareness through ambient information display,” in Conference on Human Factors in Computing Systems - Proceedings , 1423–1426.

Ha, T., and Woo, W. (2010). “An empirical evaluation of virtual hand techniques for 3D object manipulation in a tangible augmented reality environment,” in 3DUI 2010 - IEEE Symposium on 3D User Interfaces 2010, Proceedings , 91–98.

Ha, T., Billinghurst, M., and Woo, W. (2012). An interactive 3D movement path manipulation method in an augmented reality environment. Interact. Comput. 24, 10–24. doi: 10.1016/j.intcom.2011.06.006

Hakkarainen, M., Woodward, C., and Billinghurst, M. (2008). “Augmented assembly using a mobile phone,” in Proceedings - 7th IEEE International Symposium on Mixed and Augmented Reality 2008, ISMAR 2008 , 167–168.

Hartl, A., Grubert, J., Schmalstieg, D., and Reitmayr, G. (2013). “Mobile interactive hologram verification,” in 2013 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2013 , 75–82.

Hatala, M., and Wakkary, R. (2005). Ontology-based user modeling in an augmented audio reality system for museums. User Modell. User Adapt. Interact. 15, 339–380. doi: 10.1007/s11257-005-2304-5

Hatala, M., Wakkary, R., and Kalantari, L. (2005). Rules and ontologies in support of real-time ubiquitous application. Web Semant. 3, 5–22. doi: 10.1016/j.websem.2005.05.004

Haugstvedt, A.-C., and Krogstie, J. (2012). “Mobile augmented reality for cultural heritage: A technology acceptance study,” in ISMAR 2012 - 11th IEEE International Symposium on Mixed and Augmented Reality 2012, Science and Technology Papers (Atlanta, GA), 247–255.

Heller, F., Krämer, A., and Borchers, J. (2014). “Simplifying orientation measurement for mobile audio augmented reality applications,” in Conference on Human Factors in Computing Systems - Proceedings , 615–623.

Henderson, S. J., and Feiner, S. (2008). “Opportunistic controls: leveraging natural affordances as tangible user interfaces for augmented reality,” in Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST , 211–218.

Henderson, S. J., and Feiner, S. (2009). “Evaluating the benefits of augmented reality for task localization in maintenance of an armored personnel carrier turret,” in Science and Technology Proceedings - IEEE 2009 International Symposium on Mixed and Augmented Reality, ISMAR 2009 (Orlando, FL), 135–144.

Henderson, S., and Feiner, S. (2010). Opportunistic tangible user interfaces for augmented reality. IEEE Trans. Visual. Comput. Graph. 16, 4–16. doi: 10.1109/TVCG.2009.91

Henderson, S., and Feiner, S. (2011). Exploring the benefits of augmented reality documentation for maintenance and repair. IEEE Trans Visual. Comput. Graphics 17, 1355–1368. doi: 10.1109/TVCG.2010.245

Henderson, S. J., and Feiner, S. K. (2011). “Augmented reality in the psychomotor phase of a procedural task,” in 2011 10th IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2011 (Basel), 191–200.

Henrysson, A., Billinghurst, M., and Ollila, M. (2005a). “Face to face collaborative AR on mobile phones,” in Proceedings - Fourth IEEE and ACM International Symposium on Symposium on Mixed and Augmented Reality, ISMAR 2005 , Vol. 2005, 80–89.

Henrysson, A., Billinghurst, M., and Ollila, M. (2005b). “Virtual object manipulation using a mobile phone,” in ACM International Conference Proceeding Series , Vol. 157, 164–171.

Henrysson, A., Marshall, J., and Billinghurst, M. (2007). “Experiments in 3D interaction for mobile phone AR,” in Proceedings - GRAPHITE 2007, 5th International Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia , 187–194.

Henze, N., and Boll, S. (2010a). “Designing a CD augmentation for mobile phones,” in Conference on Human Factors in Computing Systems - Proceedings , 3979–3984.

Henze, N., and Boll, S. (2010b). “Evaluation of an off-screen visualization for magic lens and dynamic peephole interfaces,” in ACM International Conference Proceeding Series (Lisbon), 191–194.

Hincapié-Ramos, J., Roscher, S., Büschel, W., Kister, U., Dachselt, R., and Irani, P. (2014). “CAR: contact augmented reality with transparent-display mobile devices,” in PerDis 2014 - Proceedings: 3rd ACM International Symposium on Pervasive Displays 2014 , 80–85.

Hoang, T. N., and Thomas, B. H. (2010). “Augmented viewport: an action at a distance technique for outdoor AR using distant and zoom lens cameras,” in Proceedings - International Symposium on Wearable Computers, ISWC .

Horeman, T., Rodrigues, S., Van Den Dobbelsteen, J., Jansen, F.-W., and Dankelman, J. (2012). Visual force feedback in laparoscopic training. Surg. Endosc. Other Intervent. Techniq. 26, 242–248. doi: 10.1007/s00464-011-1861-4

Horeman, T., Van Delft, F., Blikkendaal, M., Dankelman, J., Van Den Dobbelsteen, J., and Jansen, F.-W. (2014). Learning from visual force feedback in box trainers: tissue manipulation in laparoscopic surgery. Surg. Endosc. Other Intervent. Techniq. 28, 1961–1970. doi: 10.1007/s00464-014-3425-x

Hou, L., and Wang, X. (2013). A study on the benefits of augmented reality in retaining working memory in assembly tasks: a focus on differences in gender. Automat. Construct. 32, 38–45. doi: 10.1016/j.autcon.2012.12.007

Hsiao, K.-F., Chen, N.-S., and Huang, S.-Y. (2012). Learning while exercising for science education in augmented reality among adolescents. Interact. Learn. Environ. 20, 331–349. doi: 10.1080/10494820.2010.486682

Hsiao, K.-F. (2010). Can we combine learning with augmented reality physical activity? J. Cyber Ther. Rehabil. 3, 51–62.

Hunter, S., Kalanithi, J., and Merrill, D. (2010). “Make a Riddle and TeleStory: designing children's applications for the Siftables platform,” in Proceedings of IDC2010: The 9th International Conference on Interaction Design and Children , 206–209.

Hürst, W., and Van Wezel, C. (2013). Gesture-based interaction via finger tracking for mobile augmented reality. Multimedia Tools Appl. 62, 233–258. doi: 10.1007/s11042-011-0983-y

Ibáñez, M., Di Serio, A., Villarán, D., and Delgado Kloos, C. (2014). Experimenting with electromagnetism using augmented reality: impact on flow student experience and educational effectiveness. Comput. Educ. 71, 1–13. doi: 10.1016/j.compedu.2013.09.004

Irizarry, J., Gheisari, M., Williams, G., and Walker, B. (2013). InfoSPOT: a mobile augmented reality method for accessing building information through a situation awareness approach. Automat. Construct. 33, 11–23. doi: 10.1016/j.autcon.2012.09.002

Iwai, D., Yabiki, T., and Sato, K. (2013). View management of projected labels on nonplanar and textured surfaces. IEEE Trans. Visual. Comput. Graph. 19, 1415–1424. doi: 10.1109/TVCG.2012.321

Iwata, T., Yamabe, T., and Nakajima, T. (2011). “Augmented reality go: extending traditional game play with interactive self-learning support,” in Proceedings - 17th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications, RTCSA 2011 , Vol. 1, (Toyama), 105–114.

Jankowski, J., Samp, K., Irzynska, I., Jozwowicz, M., and Decker, S. (2010). “Integrating text with video and 3D graphics: the effects of text drawing styles on text readability,” in Conference on Human Factors in Computing Systems - Proceedings , Vol. 2, 1321–1330.

Jeon, S., and Choi, S. (2011). Real stiffness augmentation for haptic augmented reality. Presence 20, 337–370. doi: 10.1162/PRES_a_00051

Jeon, S., and Harders, M. (2012). “Extending haptic augmented reality: modulating stiffness during two-point squeezing,” in Haptics Symposium 2012, HAPTICS 2012 - Proceedings , 141–146.

Jeon, S., Choi, S., and Harders, M. (2012). Rendering virtual tumors in real tissue Mock-Ups using haptic augmented reality. IEEE Trans. Hapt. 5, 77–84. doi: 10.1109/TOH.2011.40

Jo, H., Hwang, S., Park, H., and Ryu, J.-H. (2011). Aroundplot: focus+context interface for off-screen objects in 3D environments. Comput. Graph. (Pergamon) 35, 841–853.

Jones, J., Swan, J., Singh, G., Kolstad, E., and Ellis, S. (2008). “The effects of virtual reality, augmented reality, and motion parallax on egocentric depth perception,” in APGV 2008 - Proceedings of the Symposium on Applied Perception in Graphics and Visualization , 9–14.

Jones, J., Swan II, J., Singh, G., and Ellis, S. (2011). “Peripheral visual information and its effect on distance judgments in virtual and augmented environments,” in Proceedings - APGV 2011: ACM SIGGRAPH Symposium on Applied Perception in Graphics and Visualization , 29–36.

Jones, B., Benko, H., Ofek, E., and Wilson, A. (2013). “IllumiRoom: peripheral projected illusions for interactive experiences,” in Conference on Human Factors in Computing Systems - Proceedings (Paris), 869–878.

Juan, M., and Joele, D. (2011). A comparative study of the sense of presence and anxiety in an invisible marker versus a marker augmented reality system for the treatment of phobia towards small animals. Int. J. Hum. Comput. Stud. 69, 440–453. doi: 10.1016/j.ijhcs.2011.03.002

Juan, M., and Pérez, D. (2010). Using augmented and virtual reality for the development of acrophobic scenarios. Comparison of the levels of presence and anxiety. Comput. Graph. (Pergamon) 34, 756–766. doi: 10.1016/j.cag.2010.08.001

Juan, M., Carrizo, M., Abad, F., and Giménez, M. (2011a). “Using an augmented reality game to find matching pairs,” in 19th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, WSCG 2011 - In Co-operation with EUROGRAPHICS, Full Papers Proceedings , 59–66.

Juan, M., Furió, D., Alem, L., Ashworth, P., and Cano, J. (2011b). “ARGreenet and BasicGreenet: Two mobile games for learning how to recycle,” in 19th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, WSCG 2011 - In Co-operation with EUROGRAPHICS, Full Papers Proceedings (Plzen), 25–32.

Kasahara, S., and Rekimoto, J. (2014). “JackIn: integrating first-person view with out-of-body vision generation for human-human augmentation,” in ACM International Conference Proceeding Series .

Kellner, F., Bolte, B., Bruder, G., Rautenberg, U., Steinicke, F., Lappe, M., et al. (2012). Geometric calibration of head-mounted displays and its effects on distance estimation. IEEE Trans. Visual. Comput. Graph. 18, 589–596. doi: 10.1109/TVCG.2012.45

Kerber, F., Lessel, P., Mauderer, M., Daiber, F., Oulasvirta, A., and Krüger, A. (2013). “Is autostereoscopy useful for handheld AR?,” in Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia, MUM 2013 .

Kern, D., Stringer, M., Fitzpatrick, G., and Schmidt, A. (2006). “Curball - A prototype tangible game for inter-generational play,” in Proceedings of the Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises, WETICE , 412–417.

Kerr, S., Rice, M., Teo, Y., Wan, M., Cheong, Y., Ng, J., et al. (2011). “Wearable mobile augmented reality: evaluating outdoor user experience,” in Proceedings of VRCAI 2011: ACM SIGGRAPH Conference on Virtual-Reality Continuum and its Applications to Industry , 209–216.

Kim, M. J. (2013). A framework for context immersion in mobile augmented reality. Automat. Construct. 33, 79–85. doi: 10.1016/j.autcon.2012.10.020

King, M., Hale, L., Pekkari, A., Persson, M., Gregorsson, M., and Nilsson, M. (2010). An affordable, computerised, table-based exercise system for stroke survivors. Disabil. Rehabil. Assist. Technol. 5, 288–293. doi: 10.3109/17483101003718161

Kjeldskov, J., Skov, M. B., Nielsen, G. W., Thorup, S., and Vestergaard, M. (2013). Digital urban ambience: mediating context on mobile devices in a city. Pervasive Mobile Comput. 9, 738–749. doi: 10.1016/j.pmcj.2012.05.002

Knörlein, B., Di Luca, M., and Harders, M. (2009). “Influence of visual and haptic delays on stiffness perception in augmented reality,” in Science and Technology Proceedings - IEEE 2009 International Symposium on Mixed and Augmented Reality, ISMAR 2009 , 49–52.

Ko, S. M., Chang, W. S., and Ji, Y. G. (2013). Usability principles for augmented reality applications in a smartphone environment. Int. J. Hum. Comput. Interact. 29, 501–515. doi: 10.1080/10447318.2012.722466

Kron, A., and Schmidt, G. (2005). “Haptic telepresent control technology applied to disposal of explosive ordnances: principles and experimental results,” in IEEE International Symposium on Industrial Electronics , Vol. IV, 1505–1510.

Kruijff, E., Swan II, J. E., and Feiner, S. (2010). “Perceptual issues in augmented reality revisited,” in Mixed and Augmented Reality (ISMAR), 2010 9th IEEE International Symposium on (Seoul), 3–12.

Kurt, S. (2010). From information to experience: place-based augmented reality games as a model for learning in a globally networked society. Teach. Coll. Rec. 112, 2565–2602.

Langlotz, T., Regenbrecht, H., Zollmann, S., and Schmalstieg, D. (2013). “Audio stickies: visually-guided spatial audio annotations on a mobile augmented reality platform,” in Proceedings of the 25th Australian Computer-Human Interaction Conference: Augmentation, Application, Innovation, Collaboration, OzCHI 2013 , 545–554.

Lau, M., Hirose, M., Ohgawara, A., Mitani, J., and Igarashi, T. (2012). “Situated modeling: a shape-stamping interface with tangible primitives,” in Proceedings of the 6th International Conference on Tangible, Embedded and Embodied Interaction, TEI 2012 , 275–282.

Leblanc, F., Senagore, A., Ellis, C., Champagne, B., Augestad, K., Neary, P., et al. (2010). Hand-assisted laparoscopic sigmoid colectomy skills acquisition: augmented reality simulator versus human cadaver training models. J. Surg. Educ. 67, 200–204. doi: 10.1016/j.jsurg.2010.06.004

Lee, M., and Billinghurst, M. (2008). “A Wizard of Oz Study for an AR Multimodal Interface,” in ICMI'08: Proceedings of the 10th International Conference on Multimodal Interfaces , 249–256.

Lee, G. A., and Billinghurst, M. (2011). “A user study on the Snap-To-Feature interaction method,” in 2011 10th IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2011 , 245–246.

Lee, C., Bonebrake, S., Höllerer, T., and Bowman, D. (2009). “A replication study testing the validity of AR simulation in VR for controlled experiments,” in Science and Technology Proceedings - IEEE 2009 International Symposium on Mixed and Augmented Reality, ISMAR 2009 , 203–204.

Lee, C., Bonebrake, S., Höllerer, T., and Bowman, D. (2010). “The role of latency in the validity of AR simulation,” in Proceedings - IEEE Virtual Reality , 11–18.

Lee, J., Kim, Y., and Kim, G. J. (2012). “Funneling and saltation effects for tactile interaction with virtual objects,” in Conference on Human Factors in Computing Systems - Proceedings , 3141–3148.

Lee, C., Rincon, G., Meyer, G., Höllerer, T., and Bowman, D. (2013a). The effects of visual realism on search tasks in mixed reality simulation. IEEE Trans. Visual. Comput. Graph. 19, 547–556. doi: 10.1109/TVCG.2013.41

Lee, J., Olwal, A., Ishii, H., and Boulanger, C. (2013b). “SpaceTop: integrating 2D and spatial 3D interactions in a see-through desktop environment,” in Conference on Human Factors in Computing Systems - Proceedings , 189–192.

Lee, M., Billinghurst, M., Baek, W., Green, R., and Woo, W. (2013c). A usability study of multimodal input in an augmented reality environment. Virt. Real. 17, 293–305. doi: 10.1007/s10055-013-0230-0

Lee, S., Lee, J., Lee, A., Park, N., Lee, S., Song, S., et al. (2013d). Augmented reality intravenous injection simulator based 3D medical imaging for veterinary medicine. Veter. J. 196, 197–202. doi: 10.1016/j.tvjl.2012.09.015

Lehtinen, V., Nurminen, A., and Oulasvirta, A. (2012). “Integrating spatial sensing to an interactive mobile 3D map,” in IEEE Symposium on 3D User Interfaces 2012, 3DUI 2012 - Proceedings , 11–14.

Leithinger, D., Follmer, S., Olwal, A., Luescher, S., Hogge, A., Lee, J., et al. (2013). “Sublimate: state-changing virtual and physical rendering to augment interaction with shape displays,” in Conference on Human Factors in Computing Systems - Proceedings , 1441–1450.

Li, N., Gu, Y., Chang, L., and Duh, H.-L. (2011). “Influences of AR-supported simulation on learning effectiveness in face-to-face collaborative learning for physics,” in Proceedings of the 2011 11th IEEE International Conference on Advanced Learning Technologies, ICALT 2011 , 320–322.

Liarokapis, F. (2005). “Augmented reality scenarios for guitar learning,” in Theory and Practice of Computer Graphics 2005, TPCG 2005 - Eurographics UK Chapter Proceedings , 163–170.

Lin, T.-J., Duh, H.-L., Li, N., Wang, H.-Y., and Tsai, C.-C. (2013). An investigation of learners' collaborative knowledge construction performances and behavior patterns in an augmented reality simulation system. Comput. Educ. 68, 314–321. doi: 10.1016/j.compedu.2013.05.011

Lindeman, R., Noma, H., and De Barros, P. (2007). “Hear-through and mic-through augmented reality: using bone conduction to display spatialized audio,” in 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, ISMAR .

Liu, S., Hua, H., and Cheng, D. (2010). A novel prototype for an optical see-through head-mounted display with addressable focus cues. IEEE Trans. Visual. Comput. Graph. 16, 381–393. doi: 10.1109/TVCG.2009.95

Liu, C., Huot, S., Diehl, J., MacKay, W., and Beaudouin-Lafon, M. (2012). “Evaluating the benefits of real-time feedback in mobile Augmented Reality with hand-held devices,” in Conference on Human Factors in Computing Systems - Proceedings , 2973–2976.

Livingston, M. A., and Ai, Z. (2008). “The effect of registration error on tracking distant augmented objects,” in Proceedings - 7th IEEE International Symposium on Mixed and Augmented Reality 2008, ISMAR 2008 , 77–86.

Livingston, M., Zanbaka, C., Swan II, J. E., and Smallman, H. (2005). “Objective measures for the effectiveness of augmented reality,” in Proceedings - IEEE Virtual Reality , 287–288.

Livingston, M., Ai, Z., Swan II, J., and Smallman, H. (2009a). “Indoor vs. outdoor depth perception for mobile augmented reality,” in Proceedings - IEEE Virtual Reality , 55–62.

Livingston, M. A., Ai, Z., and Decker, J. W. (2009b). “A user study towards understanding stereo perception in head-worn augmented reality displays,” in Science and Technology Proceedings - IEEE 2009 International Symposium on Mixed and Augmented Reality, ISMAR 2009 , 53–56.

Livingston, M. A., Barrow, J. H., and Sibley, C. M. (2009c). “Quantification of contrast sensitivity and color perception using Head-Worn augmented reality displays,” in Proceedings - IEEE Virtual Reality , 115–122.

Livingston, M. A., Ai, Z., Karsch, K., and Gibson, G. O. (2011). User interface design for military AR applications. Virt. Real. 15, 175–184. doi: 10.1007/s10055-010-0179-1

Livingston, M. A., Dey, A., Sandor, C., and Thomas, B. H. (2013). Pursuit of “X-Ray Vision” for Augmented Reality . New York, NY: Springer.

Livingston, M. A. (2007). “Quantification of visual capabilities using augmented reality displays,” in Proceedings - ISMAR 2006: Fifth IEEE and ACM International Symposium on Mixed and Augmented Reality , 3–12.

Looser, J., Billinghurst, M., Grasset, R., and Cockburn, A. (2007). “An evaluation of virtual lenses for object selection in augmented reality,” in Proceedings - GRAPHITE 2007, 5th International Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia , 203–210.

Lu, W., Duh, B.-L., and Feiner, S. (2012). “Subtle cueing for visual search in augmented reality,” in ISMAR 2012 - 11th IEEE International Symposium on Mixed and Augmented Reality 2012, Science and Technology Papers , 161–166.

Luckin, R., and Fraser, D. (2011). Limitless or pointless? An evaluation of augmented reality technology in the school and home. Int. J. Technol. Enhanced Learn. 3, 510–524. doi: 10.1504/IJTEL.2011.042102

Luo, X., Kline, T., Fischer, H., Stubblefield, K., Kenyon, R., and Kamper, D. (2005a). “Integration of augmented reality and assistive devices for post-stroke hand opening rehabilitation,” in Annual International Conference of the IEEE Engineering in Medicine and Biology - Proceedings , Vol. 7, 6855–6858.

Luo, X., Kenyon, R., Kline, T., Waldinger, H., and Kamper, D. (2005b). “An augmented reality training environment for post-stroke finger extension rehabilitation,” in Proceedings of the 2005 IEEE 9th International Conference on Rehabilitation Robotics , Vol. 2005, 329–332.

Lv, Z. (2013). “Wearable smartphone: wearable hybrid framework for hand and foot gesture interaction on smartphone,” in Proceedings of the IEEE International Conference on Computer Vision , 436–443.

Magnusson, C., Molina, M., Rassmus-Gröhn, K., and Szymczak, D. (2010). “Pointing for non-visual orientation and navigation,” in NordiCHI 2010: Extending Boundaries - Proceedings of the 6th Nordic Conference on Human-Computer Interaction , 735–738.

Maier, P., Dey, A., Waechter, C. A. L., Sandor, C., Tönnis, M., and Klinker, G. (2011). “An empiric evaluation of confirmation methods for optical see-through head-mounted display calibration,” in 2011 10th IEEE International Symposium on Mixed and Augmented Reality , 267–268.

Markov-Vetter, D., Moll, E., and Staadt, O. (2012). “Evaluation of 3D selection tasks in parabolic flight conditions: pointing task in augmented reality user interfaces,” in Proceedings - VRCAI 2012: 11th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry , 287–293.

Markovic, M., Dosen, S., Cipriani, C., Popovic, D., and Farina, D. (2014). Stereovision and augmented reality for closed-loop control of grasping in hand prostheses. J. Neural Eng. 11:046001. doi: 10.1088/1741-2560/11/4/046001

Marner, M. R., Irlitti, A., and Thomas, B. H. (2013). “Improving procedural task performance with Augmented Reality annotations,” in 2013 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2013 , 39–48.

Martin-Gutierrez, J. (2011). “Generic user manual for maintenance of mountain bike brakes based on augmented reality,” in Proceedings of the 28th International Symposium on Automation and Robotics in Construction, ISARC 2011 , 1401–1406.

Mercier-Ganady, J., Lotte, F., Loup-Escande, E., Marchal, M., and Lecuyer, A. (2014). “The Mind-Mirror: see your brain in action in your head using EEG and augmented reality,” in Proceedings - IEEE Virtual Reality , 33–38.

Milgram, P., Takemura, H., Utsumi, A., and Kishino, F. (1995). “Augmented reality: a class of displays on the reality-virtuality continuum,” in Telemanipulator and Telepresence Technologies , Vol. 2351 (International Society for Optics and Photonics), 282–293.

Morrison, A., Oulasvirta, A., Peltonen, P., Lemmelä, S., Jacucci, G., Reitmayr, G., et al. (2009). “Like bees around the hive: a comparative study of a mobile augmented reality map,” in Conference on Human Factors in Computing Systems - Proceedings (Boston, MA), 1889–1898.

Mossel, A., Venditti, B., and Kaufmann, H. (2013a). “3Dtouch and homer-s: intuitive manipulation techniques for one-handed handheld augmented reality,” in ACM International Conference Proceeding Series .

Mossel, A., Venditti, B., and Kaufmann, H. (2013b). “Drillsample: precise selection in dense handheld augmented reality environments,” in ACM International Conference Proceeding Series .

Moussa, G., Radwan, E., and Hussain, K. (2012). Augmented reality vehicle system: left-turn maneuver study. Transport. Res. C Emerging Technol. 21, 1–16. doi: 10.1016/j.trc.2011.08.005

Mulloni, A., Wagner, D., and Schmalstieg, D. (2008). “Mobility and social interaction as core gameplay elements in multi-player augmented reality,” in Proceedings - 3rd International Conference on Digital Interactive Media in Entertainment and Arts, DIMEA 2008 , 472–478.

Mulloni, A., Seichter, H., and Schmalstieg, D. (2011a). “Handheld augmented reality indoor navigation with activity-based instructions,” in Mobile HCI 2011 - 13th International Conference on Human-Computer Interaction with Mobile Devices and Services , 211–220.

Mulloni, A., Seichter, H., and Schmalstieg, D. (2011b). “User experiences with augmented reality aided navigation on phones,” in 2011 10th IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2011 , 229–230.

Mulloni, A., Ramachandran, M., Reitmayr, G., Wagner, D., Grasset, R., and Diaz, S. (2013). “User friendly SLAM initialization,” in 2013 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2013 , 153–162.

Möller, A., Kranz, M., Huitl, R., Diewald, S., and Roalter, L. (2012). “A mobile indoor navigation system interface adapted to vision-based localization,” in Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia, MUM 2012 .

Möller, A., Kranz, M., Diewald, S., Roalter, L., Huitl, R., Stockinger, T., et al. (2014). “Experimental evaluation of user interfaces for visual indoor navigation,” in Conference on Human Factors in Computing Systems - Proceedings , 3607–3616.

Ng-Thow-Hing, V., Bark, K., Beckwith, L., Tran, C., Bhandari, R., and Sridhar, S. (2013). “User-centered perspectives for automotive augmented reality,” in 2013 IEEE International Symposium on Mixed and Augmented Reality - Arts, Media, and Humanities, ISMAR-AMH 2013 , 13–22.

Nicolau, S., Garcia, A., Pennec, X., Soler, L., and Ayache, N. (2005). An augmented reality system to guide radio-frequency tumour ablation. Comput. Animat. Virt. Worlds 16, 1–10. doi: 10.1002/cav.52

Nilsson, S., and Johansson, B. (2007). “Fun and usable: augmented Reality instructions in a hospital setting,” in Australasian Computer-Human Interaction Conference, OZCHI'07 , 123–130.

Oda, O., and Feiner, S. (2009). “Interference avoidance in multi-user hand-held augmented reality,” in Science and Technology Proceedings - IEEE 2009 International Symposium on Mixed and Augmented Reality, ISMAR 2009 , 13–22.

Ofek, E., Iqbal, S. T., and Strauss, K. (2013). “Reducing disruption from subtle information delivery during a conversation: mode and bandwidth investigation,” in Conference on Human Factors in Computing Systems - Proceedings , 3111–3120.

Oh, S., and Byun, Y. (2012). “The design and implementation of augmented reality learning systems,” in Proceedings - 2012 IEEE/ACIS 11th International Conference on Computer and Information Science, ICIS 2012 , 651–654.

Oh, J.-Y., and Hua, H. (2007). “User evaluations on form factors of tangible magic lenses,” in Proceedings - ISMAR 2006: Fifth IEEE and ACM International Symposium on Mixed and Augmented Reality , 23–32.

Olsson, T., and Salo, M. (2011). “Online user survey on current mobile augmented reality applications,” in 2011 10th IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2011 , 75–84.

Olsson, T., and Salo, M. (2012). “Narratives of satisfying and unsatisfying experiences of current mobile Augmented Reality applications,” in Conference on Human Factors in Computing Systems - Proceedings , 2779–2788.

Olsson, T., Ihamäki, P., Lagerstam, E., Ventä-Olkkonen, L., and Väänänen-Vainio-Mattila, K. (2009). “User expectations for mobile mixed reality services: an initial user study,” in VTT Symp. (Valtion Teknillinen Tutkimuskeskus) (Helsinki), 177–184.

Olsson, T., Kärkkäinen, T., Lagerstam, E., and Ventä-Olkkonen, L. (2012). User evaluation of mobile augmented reality scenarios. J. Ambient Intell. Smart Environ. 4, 29–47. doi: 10.3233/AIS-2011-0127

Olsson, T., Lagerstam, E., Kärkkäinen, T., and Väänänen-Vainio-Mattila, K. (2013). Expected user experience of mobile augmented reality services: a user study in the context of shopping centres. Pers. Ubiquit. Comput. 17, 287–304. doi: 10.1007/s00779-011-0494-x

Papagiannakis, G., Singh, G., and Magnenat-Thalmann, N. (2008). A survey of mobile and wireless technologies for augmented reality systems. Comput. Anim. Virt. Worlds 19, 3–22. doi: 10.1002/cav.v19:1

Pescarin, S., Pagano, A., Wallergård, M., Hupperetz, W., and Ray, C. (2012). “Archeovirtual 2011: an evaluation approach to virtual museums,” in Proceedings of the 2012 18th International Conference on Virtual Systems and Multimedia, VSMM 2012: Virtual Systems in the Information Society , 25–32.

Petersen, N., and Stricker, D. (2009). “Continuous natural user interface: reducing the gap between real and digital world,” in Science and Technology Proceedings - IEEE 2009 International Symposium on Mixed and Augmented Reality, ISMAR 2009 , 23–26.

Peterson, S., Axholt, M., Cooper, M., and Ellis, S. (2009). “Visual clutter management in augmented reality: effects of three label separation methods on spatial judgments,” in 3DUI - IEEE Symposium on 3D User Interfaces 2009 - Proceedings , 111–118.

Poelman, R., Akman, O., Lukosch, S., and Jonker, P. (2012). “As if being there: mediated reality for crime scene investigation,” in Proceedings of the ACM Conference on Computer Supported Cooperative Work, CSCW , 1267–1276.

Porter, S. R., Marner, M. R., Smith, R. T., Zucco, J. E., and Thomas, B. H. (2010). “Validating spatial augmented reality for interactive rapid prototyping,” in 9th IEEE International Symposium on Mixed and Augmented Reality 2010: Science and Technology, ISMAR 2010 - Proceedings , 265–266.

Pucihar, K., Coulton, P., and Alexander, J. (2014). “The use of surrounding visual context in handheld AR: device vs. user perspective rendering,” in Conference on Human Factors in Computing Systems - Proceedings , 197–206.

Pusch, A., Martin, O., and Coquillart, S. (2008). “HEMP - Hand-displacement-based Pseudo-haptics: a study of a force field application,” in 3DUI - IEEE Symposium on 3D User Interfaces 2008 , 59–66.

Pusch, A., Martin, O., and Coquillart, S. (2009). HEMP-hand-displacement-based pseudo-haptics: a study of a force field application and a behavioural analysis. Int. J. Hum. Comput. Stud. 67, 256–268. doi: 10.1016/j.ijhcs.2008.09.015

Rankohi, S., and Waugh, L. (2013). Review and analysis of augmented reality literature for construction industry. Visual. Eng. 1, 1–18. doi: 10.1186/2213-7459-1-9

Rauhala, M., Gunnarsson, A.-S., Henrysson, A., and Ynnerman, A. (2006). “A novel interface to sensor networks using handheld augmented reality,” in ACM International Conference Proceeding Series , Vol. 159, 145–148.

Regenbrecht, H., McGregor, G., Ott, C., Hoermann, S., Schubert, T., Hale, L., et al. (2011). “Out of reach? - A novel AR interface approach for motor rehabilitation,” in 2011 10th IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2011 , 219–228.

Regenbrecht, H., Hoermann, S., McGregor, G., Dixon, B., Franz, E., Ott, C., et al. (2012). Visual manipulations for motor rehabilitation. Comput. Graph. (Pergamon) 36, 819–834.

Regenbrecht, H., Hoermann, S., Ott, C., Muller, L., and Franz, E. (2014). Manipulating the experience of reality for rehabilitation applications. Proc. IEEE 102, 170–184. doi: 10.1109/JPROC.2013.2294178

Reif, R., and Günthner, W. A. (2009). Pick-by-vision: augmented reality supported order picking. Vis. Comput. 25, 461–467. doi: 10.1007/s00371-009-0348-y

Ritter, E., Kindelan, T., Michael, C., Pimentel, E., and Bowyer, M. (2007). Concurrent validity of augmented reality metrics applied to the fundamentals of laparoscopic surgery (FLS). Surg. Endosc. Other Intervent. Techniq. 21, 1441–1445. doi: 10.1007/s00464-007-9261-5

Robertson, C., MacIntyre, B., and Walker, B. N. (2007). An evaluation of graphical context as a means for ameliorating the effects of registration error. IEEE Trans. Visual. Comput. Graph. 15, 179–192.

Robertson, C., MacIntyre, B., and Walker, B. (2008). “An evaluation of graphical context when the graphics are outside of the task area,” in Proceedings - 7th IEEE International Symposium on Mixed and Augmented Reality 2008, ISMAR 2008 , 73–76.

Rohs, M., Schöning, J., Raubal, M., Essl, G., and Krüger, A. (2007). “Map navigation with mobile devices: virtual versus physical movement with and without visual context,” in Proceedings of the 9th International Conference on Multimodal Interfaces, ICMI'07 , 146–153.

Rohs, M., Schleicher, R., Schöning, J., Essl, G., Naumann, A., and Krüger, A. (2009a). Impact of item density on the utility of visual context in magic lens interactions. Pers. Ubiquit. Comput. 13, 633–646. doi: 10.1007/s00779-009-0247-2

Rohs, M., Schöning, J., Schleicher, R., Essl, G., Naumann, A., and Krüger, A. (2009b). “Impact of item density on magic lens interactions,” in MobileHCI09 - The 11th International Conference on Human-Computer Interaction with Mobile Devices and Services .

Rohs, M., Oulasvirta, A., and Suomalainen, T. (2011). “Interaction with magic lenses: Real-world validation of a Fitts' law model,” in Conference on Human Factors in Computing Systems - Proceedings , 2725–2728.

Rosenthal, S., Kane, S., Wobbrock, J., and Avrahami, D. (2010). “Augmenting on-screen instructions with micro-projected guides: when it works, and when it fails,” in UbiComp'10 - Proceedings of the 2010 ACM Conference on Ubiquitous Computing , 203–212.

Rusch, M., Schall, M. Jr., Gavin, P., Lee, J., Dawson, J., Vecera, S., et al. (2013). Directing driver attention with augmented reality cues. Transport. Res. Part F Traf. Psychol. Behav. 16, 127–137. doi: 10.1016/j.trf.2012.08.007

Salamin, P., Thalmann, D., and Vexo, F. (2006). “The benefits of third-person perspective in virtual and augmented reality,” in Proceedings of the ACM Symposium on Virtual Reality Software and Technology, VRST , 27–30.

Salvador-Herranz, G., Pérez-López, D., Ortega, M., Soto, E., Alcañiz, M., and Contero, M. (2013). “Manipulating virtual objects with your hands: a case study on applying desktop Augmented Reality at the primary school,” in Proceedings of the Annual Hawaii International Conference on System Sciences , 31–39.

Sandor, C., Cunningham, A., Dey, A., and Mattila, V.-V. (2010). “An augmented reality X-ray system based on visual saliency,” in 9th IEEE International Symposium on Mixed and Augmented Reality 2010: Science and Technology, ISMAR 2010 - Proceedings , 27–36.

Santos, M. E. C., Chen, A., Terawaki, M., Yamamoto, G., Taketomi, T., Miyazaki, J., et al. (2013). “Augmented reality x-ray interaction in k-12 education: Theory, student perception and teacher evaluation,” in Proceedings - 2013 IEEE 13th International Conference on Advanced Learning Technologies, ICALT 2013 , 141–145.

Schall, G., Zollmann, S., and Reitmayr, G. (2013a). Smart Vidente: advances in mobile augmented reality for interactive visualization of underground infrastructure. Pers. Ubiquit. Comput. 17, 1533–1549. doi: 10.1007/s00779-012-0599-x

Schall, M., Rusch, M., Lee, J., Dawson, J., Thomas, G., Aksan, N., et al. (2013b). Augmented reality cues and elderly driver hazard perception. Hum. Fact. 55, 643–658. doi: 10.1177/0018720812462029

Schinke, T., Henze, N., and Boll, S. (2010). “Visualization of off-screen objects in mobile augmented reality,” in ACM International Conference Proceeding Series , 313–316.

Schoenfelder, R., and Schmalstieg, D. (2008). “Augmented reality for industrial building acceptance,” in Proceedings - IEEE Virtual Reality , 83–90.


Keywords: augmented reality, systematic review, user studies, usability, experimentation, classifications

Citation: Dey A, Billinghurst M, Lindeman RW and Swan JE II (2018) A Systematic Review of 10 Years of Augmented Reality Usability Studies: 2005 to 2014. Front. Robot. AI 5:37. doi: 10.3389/frobt.2018.00037

Received: 19 December 2017; Accepted: 19 March 2018; Published: 17 April 2018.


Copyright © 2018 Dey, Billinghurst, Lindeman and Swan. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Arindam Dey, [email protected]


Computer Science > Computer Vision and Pattern Recognition

Title: Modern Augmented Reality: Applications, Trends, and Future Directions

Abstract: Augmented reality (AR) is one of the older yet still trending areas at the intersection of computer vision and computer graphics, with numerous applications in fields ranging from gaming and entertainment to education and healthcare. Although it has been around for nearly fifty years, it has attracted substantial research interest in recent years, mainly because of the success of deep learning models for various computer vision and AR tasks, which has made new generations of AR technologies possible. This work provides an overview of modern augmented reality from both an application-level and a technical perspective. We first give an overview of the main AR applications, grouped into more than ten categories. We then survey around 100 recent promising machine-learning-based works developed for AR systems, such as deep learning works for AR shopping (clothing, makeup), AR-based image filters (such as Snapchat's lenses), AR animations, and more. Finally, we discuss some of the current challenges in the AR domain and future directions in this area.


Augmented Reality: A Comprehensive Review

  • Review article
  • Published: 20 October 2022
  • Volume 30, pages 1057–1080 (2023)


  • Shaveta Dargan 1 ,
  • Shally Bansal 2 ,
  • Munish Kumar   ORCID: orcid.org/0000-0003-0115-1620 1 ,
  • Ajay Mittal 3 &
  • Krishan Kumar 4  

4134 Accesses

16 Citations

3 Altmetric


Augmented Reality (AR) modifies the perception of real-world scenes by overlaying digital data on them. It is an enlightening and engaging technology that continually spawns new techniques in every sphere, augmenting the real world with information in real time. A wide array of fields now display real-time computer-generated content, including education, medicine, robotics, manufacturing, and entertainment. Augmented reality is considered a subtype of mixed reality and can be viewed as a variation of virtual reality. This article highlights the digital technology that emerged after the success of Virtual Reality and now has a wide range of applications in the digital age. The fundamental requirements for understanding AR, such as the nature of the technology, its architecture, the devices required, the types of AR, its benefits and limitations, and its differences from VR, are discussed in a simplified way. The article also provides a year-by-year tabular overview of the research papers published on augmented reality-based applications, aiming to give a comprehensive overview of the field. It is hard to find a field that does not make use of AR's capabilities. The article concludes with a discussion, conclusions, and future directions for AR.





No funding was received.

Author information

Authors and Affiliations

Department of Computational Sciences, Maharaja Ranjit Singh Punjab Technical University, Bathinda, Punjab, India

Shaveta Dargan & Munish Kumar

Arden University, Berlin, Germany

Shally Bansal

Department of Computer Science and Engineering, University Institute of Engineering and Technology, Panjab University, Chandigarh, India

Ajay Mittal

Department of Information Technology, University Institute of Engineering and Technology, Panjab University, Chandigarh, India

Krishan Kumar


Corresponding author

Correspondence to Munish Kumar .

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Ethical Approval

No human or animal participants were involved.

Consent to Participate

All authors agreed to participate.

Consent for Publication

All authors agreed to the publication of this work.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Dargan, S., Bansal, S., Kumar, M. et al. Augmented Reality: A Comprehensive Review. Arch Computat Methods Eng 30 , 1057–1080 (2023). https://doi.org/10.1007/s11831-022-09831-7

Download citation

Received : 11 September 2022

Accepted : 05 October 2022

Published : 20 October 2022

Issue Date : March 2023

DOI : https://doi.org/10.1007/s11831-022-09831-7


  • Open access
  • Published: 09 January 2024

Revealing the true potential and prospects of augmented reality in education

  • Yiannis Koumpouros   ORCID: orcid.org/0000-0001-6912-5475 1  

Smart Learning Environments volume 11, Article number: 2 (2024)


Augmented Reality (AR) technology is one of the latest developments and is receiving ever-increasing attention. Many studies have been conducted internationally to examine the effectiveness of its use in education. The purpose of this work was to record the characteristics of AR applications in order to determine the extent to which they can be used effectively for educational purposes and to reveal valuable insights. A systematic bibliographic review was carried out on 73 articles, structured according to the PRISMA review protocol. Eight questions were formulated and examined in order to gather information about the characteristics of the applications. From 2016 to 2020, the number of publications studying AR applications doubled. The majority of them targeted university students, while a very limited number included special education. Physics and foreign-language learning were the fields most often chosen for developing an app. Most of the applications (68.49%) were designed using marker-detection technology for the Android operating system (45.21%) and were created with the Unity (47.95%) and Vuforia (42.47%) tools. The majority of studies evaluated the effectiveness of the application subjectively, using custom-made instruments that were neither validated nor tested for reliability, making the results incomparable. The limited number of participants and the short duration of pilot testing inhibit the generalization of their results. Technical problems and limitations of the equipment used are mentioned as the most frequent obstacles. Not all key actors were involved in the design and development process of the applications. This suggests that further research is needed to fully understand the potential of AR applications in education and to develop effective evaluation methods. Key aspects for future research studies are proposed.

Introduction

The current epoch is marked by swift advances in Information Technology (IT) and its pervasive applications across all industries. The most prominent technological terms are Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR), which have gained popularity for professional training and specialization. AR has been defined variously by researchers in the fields of computer science and educational technology. Generally, AR is defined as the viewing of the real physical environment, either directly or indirectly, which has been enriched through the addition of computer-generated virtual information (Carmigniani & Furht, 2011 ). Azuma ( 1997 ) described AR as a technology that combines the real with the virtual world, specifically by adding virtual-digital elements to the existing real data. This interactive and three-dimensional information supplements and shapes the user's environment. Azuma ( 1997 ) proposed that AR systems should exhibit three characteristics: (i) the ability to merge virtual and real objects in a real environment, (ii) support real-time interaction, and (iii) incorporate 3D virtual objects. Milgram and Kishino ( 1994 ), to avoid confusion among the terms AR, VR, and MR, presented the reality-virtuality continuum (see Fig.  1 ).

figure 1

Reality-Virtuality Continuum [adapted from Milgram and Kishino ( 1994 )]

Figure  1 illustrates that Mixed Reality (MR) lies between the real and virtual environments and includes Augmented Reality (AR) as well as Augmented Virtuality (AV). AR refers to any situation where the real environment is supplemented with computer-generated graphics and digital objects. In contrast, AV, which is closer to the virtual world, augments the virtual environment with real elements (Milgram & Kishino, 1994 ). Unlike VR, AR aims to mitigate the risk of social isolation and lack of social skills among users (Kiryakova et al., 2018 ).

AR is recognized as a novel form of interactive interface that replaces the conventional screens of devices such as laptops, smartphones, and tablets with a more natural interface, enabling interaction with a virtual reality that feels completely natural (Azuma, 1997 ). AR can be classified into four main categories based on its means and objectives:

Marker-based AR : Marker tracking technology uses optical markers (flat structures with long edges and sharp corners, also known as triggers or tags), captures the video input from the camera, and adds 3D effects to the scene. This type of augmented reality is mainly used to collect more information about the object and is widely used in department stores and industries (Schall et al., 2009 ).

Markerless or location-based AR : This technology gets its name from the features readily available on smartphones that provide location detection, positioning, speed, acceleration, and orientation. In this type of AR, the device's camera and sensors use GPS, the accelerometer, the compass, or other location-based information to recognize the user's location and augment the environment with virtual information (Kuikkaniemi et al., 2014 ).

Projection-based AR : This type of AR typically uses advanced projectors or smart glasses to project digital images onto real-world surfaces, creating a mixed reality experience. Changing the movement on the surface of the object activates the display of images. Projection-based AR is used to project digital keyboards onto a desk surface. In some cases, the image produced by projection may not be interactive (Billinghurst & Kato, 2002 ).

Superimposition-based AR : In this type of AR, overlay technology replaces an object with a virtual one using visual object recognition. This usually occurs by partially or completely replacing the view of an object with an augmented view. First Person Shooter (FPS) games are the best example of superimposition-based augmented reality (Billinghurst & Kato, 2002 ).

It's important to note that these categories are not mutually exclusive, and some AR applications may use a combination of these types.
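As a minimal illustration of the marker-based workflow described above, the sketch below shows only the step that follows marker detection: a computer-vision layer (e.g., an ArUco or QR detector) is assumed to have already reported which marker IDs are visible in the current camera frame, and the app maps each ID to the virtual content to overlay. All marker IDs and asset names here are hypothetical.

```python
# Conceptual sketch (hypothetical IDs and assets) of the lookup step in
# marker-based AR: detected marker IDs are mapped to overlay descriptors.

OVERLAY_REGISTRY = {
    7: {"asset": "solar_system.glb", "scale": 0.5},    # e.g., a physics-class marker
    12: {"asset": "heart_anatomy.glb", "scale": 1.0},  # e.g., a biology-class marker
}

def overlays_for_frame(detected_marker_ids):
    """Return overlay descriptors for the markers detected in one frame,
    ignoring unregistered markers (e.g., stray QR codes)."""
    return [OVERLAY_REGISTRY[m] for m in detected_marker_ids if m in OVERLAY_REGISTRY]

# A frame containing registered marker 7 and an unknown marker 99
# yields a single overlay to render, anchored to marker 7's pose.
print(overlays_for_frame([7, 99]))
```

In a real application the rendering engine would then draw each asset at the pose estimated for its marker; the registry lookup itself is what lets one detector serve many lesson-specific overlays.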

Mobile augmented reality has gained popularity in recent years, thanks to advancements in smartphones and more powerful mobile processors. It has opened up new possibilities for augmented reality experiences on mobile devices (Tang et al., 2015 ). Mobile AR is a technology that allows digital information to be overlaid on the real-world environment through a mobile device, such as a smartphone or tablet. This technology uses the camera and sensors of the mobile device to track the user's surroundings and overlay digital content in real-time. Mobile augmented reality applications can range from simple experiences, such as adding filters to a camera app, to more complex ones, such as interactive games or educational tools that allow users to explore and learn about their environment in a new way. Mobile AR app downloads have been increasing worldwide since 2016 (Fig.  2 ). The global AR market size is projected to reach USD 88.4 billion by 2026 (Markets & Markets, 2023 ).

figure 2

Consumer mobile device augmented reality applications (embedded/standalone) worldwide from 2016 to 2022 (in millions) [Source: Statista, 2023a , 2023b ]

Technological developments have brought about rapid changes in the educational world, providing opportunities for new learning experiences and quality teaching (Voogt & Knezek, 2018 ). It is no surprise that the field of education is increasingly gaining popularity for the suitability of Augmented Reality applications (Dunleavy et al., 2009 ; Radu, 2014 ). In recent years, many researches have been published that highlight the use and effect of AR in various aspects of the educational process, enhancing the pedagogical value of this technology (Dede, 2009 ).

It is worth mentioning the interest observed in recent years by Internet users in the Google search engine, regarding the term "augmented reality in education". According to the Google tool (Google Trends), the chart below shows the number of searches on the Google search engine for Augmented Reality in education from 2015 to the present.

Compared to the past, the use of AR has become considerably more accessible, enabling its application across all levels of education, from preschool to university (Bacca et al., 2014 ; Ferrer-Torregrosa et al., 2015 ). AR has greatly improved the user's perception of space and time, and allows for the simultaneous visualization of the relationship between the real and virtual world (Dunleavy & Dede, 2014 ; Sin & Zaman, 2010 ). Cheng and Tsai ( 2014 ) also noted that AR applications facilitate a deeper understanding of abstract concepts and their interrelationships. Klopfer and Squire ( 2008 ) highlighted the novel digital opportunities offered to students to explore phenomena that may be difficult to access in real-life situations. Consequently, AR applications have become a powerful tool in the hands of educators (Martin et al., 2011 ).

Augmented reality applications provide numerous opportunities for individuals of all ages to interact with both the real and augmented environment in real-time, thereby creating an engaging and interesting learning environment for students (Akçayır & Akçayır, 2017 ). AR apps are received positively by students, as they introduce educational content in playful ways, enabling them to relate what they have learned to reality and encouraging them to take initiatives for their own applications (Jerry & Aaron, 2010 ). The international educational literature highlights several uses of AR, which have been designed and implemented in the teaching of various subjects, including Mathematics, Natural Sciences, Biology, Astronomy, Environmental Education, language skills (Billinghurst et al., 2001 ; Klopfer & Squire, 2008 ; Wang & Wang, 2021 ), and even the development of a virtual perspective of poetry or "visual poetry" (Bower et al., 2014 ).

The increasing interest in augmented reality and creating effective learning experiences has led to the exploration of various learning theories that can serve as a guide and advisor for educators considering implementing AR technologies in their classrooms (Klopfer & Squire, 2019 ; Li et al., 2020 ). The pedagogical approaches recorded through the use of appropriate AR educational applications include game-based learning, situated learning, constructivism, and investigative learning, as reported in the literature (Lee, 2012 ; Yuen & Yaoyuneyong, 2020 ).

By examining relevant literature and synthesizing research findings, a systematic review can provide valuable insights into the current state of AR applications in education, their characteristics, and the challenges associated with their implementation in several axes:

Identifying trends and characteristics : It can explore the different types of AR technologies used, their educational purposes, and the target subjects or disciplines. This can provide an overview of the current landscape and inform educators, researchers, and developers about the range of possibilities and potential benefits of AR in education (Liu et al., 2019 ).

Assessing effectiveness : A systematic review can evaluate the effectiveness of AR applications in enhancing learning outcomes. By analyzing empirical studies, it can identify the impact of AR on student engagement, motivation, knowledge acquisition, and retention. This evidence-based assessment can guide educators in making informed decisions about incorporating AR technologies into their teaching practices (Chen et al., 2020 ; Radu, 2014 ).

Examining implementation challenges : AR implementation in educational settings may pose various challenges. These challenges can include technical issues, teacher training, cost considerations, and pedagogical integration. A systematic review can highlight these challenges, providing insights into the barriers and facilitating factors for successful implementation (Bacca et al., 2014 ; Cao et al., 2019 ).

Informing design and development : Understanding the characteristics and challenges of AR applications in education can inform the design and development of new AR tools and instructional strategies. It can help developers and instructional designers address the identified challenges and create more effective and user-friendly AR applications tailored to the specific needs of educational contexts (Kaufmann & Schmalstieg, 2018 ; Klopfer et al., 2008 ).

This paper concludes by offering researchers guidance in the examined domain, presenting the latest trends, future perspectives, and potential gaps or challenges associated with the utilization of augmented reality (AR) in education. Supported by a series of research questions, the paper delves into diverse facets of AR applications, encompassing target audience, educational focus, assessment methods, outcomes, limitations, technological approaches, publication channels, and the evolving landscape of research studies over time. By addressing these questions, the study endeavors to provide a comprehensive understanding of the unique characteristics and trends surrounding AR applications in the educational context.

The paper is structured for easy readability, with the following organization: The "Material and Methods" section outlines the systematic review's methodology, inclusion/exclusion criteria, research questions guiding the analysis, and a list of quality criteria for chosen articles. In the subsequent "Results" section, the selection process results are detailed, aligning with the prior research questions. This section specifically delves into the technological approach, assessment methodology, quality outcomes, and key findings (including scope, outcomes, limitations, and future plans) of each study. Following this, the "Discussion" section offers a thorough analysis of the findings, unveiling opportunities, gaps, obstacles, and trends in AR in education. Lastly, the "Conclusion" section summarizes the systematic review's major findings and offers guidance to researchers pursuing further work in the field.

Materials and methods

In this paper, a systematic literature review covering the period 2016–2020 was conducted to determine the characteristics of augmented reality educational applications and whether they can be effectively utilized in various aspects of the educational process. The study followed a Systematic Literature Review (SLR) protocol, which involves identifying, evaluating, and interpreting all available research related to a specific research question, topic, or phenomenon of interest (Kitchenham, 2004 ). The review is structured according to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist (Moher et al., 2009 ), which outlines the stages of a systematic literature review and is widely applied in research that aims to study a topic in depth by examining the research that has already been published (Grant & Booth, 2009 ).

The electronic databases Science Direct, Scopus, Google Scholar, Web of Science, MDPI, PubMed, IEEE Xplore, and ACM Digital Library were searched for scientific articles using keywords (employing Boolean phrases) such as augmented reality, AR, application, education, training, learning, mobile, app, etc., according to PICO (Stone, 2002 ). The keywords used in the queries were as follows: (AR OR “augmented reality”) AND (application OR education OR educational OR teaching OR app OR training OR learning OR mobile OR ICT OR “Information and Communication Technologies” OR tablet OR desktop OR curriculum). The selection of the aforementioned databases was based on considerations of comprehensiveness, interdisciplinarity, quality, international coverage, and accessibility. These databases collectively offer access to peer-reviewed journals and conference proceedings from diverse academic disciplines, ensuring broad and reliable coverage of AR-in-education research. Additionally, the inclusion of Google Scholar allows for the identification of open access literature. Their reputation, interdisciplinary nature, and search capabilities further support a comprehensive and credible examination of the topic. The selected databases are known for their frequent updates, enabling the review to capture the latest research and stay up-to-date with the rapidly evolving field of AR in education. Data collection began in January 2021, and the inclusion and exclusion criteria for the study are presented below.

Inclusion criteria

Articles involving the use of Augmented Reality applications for educational purpose

Studies published in English

Scientific research from peer-reviewed journals and conferences

Articles published between 2016 and 2020

Exclusion criteria

Research studies that were excluded from this review include theses, theoretical papers, reviews, and summaries that do not provide the entire articles. Additionally, studies that are "locked" and require a subscription or on-site payment for access were also excluded.
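The Boolean search strategy described above can be approximated locally, for illustration, by a naive keyword matcher. The databases themselves parse the Boolean syntax; this sketch merely mimics the (population) AND (context) structure of the query against a record's title or abstract, and the matching rules are an assumption rather than any database's actual behavior.

```python
import re

# Terms from the review's query: (AR OR "augmented reality") AND (application OR ...)
POPULATION_TERMS = ["ar", "augmented reality"]
CONTEXT_TERMS = ["application", "education", "educational", "teaching", "app",
                 "training", "learning", "mobile", "ict",
                 "information and communication technologies",
                 "tablet", "desktop", "curriculum"]

def _has_term(text, term):
    # Match whole words/phrases so that "AR" does not match inside "market".
    return re.search(r"\b" + re.escape(term) + r"\b", text) is not None

def matches_query(record_text):
    """Naive stand-in for the Boolean query: at least one population term
    AND at least one context term must appear in the record text."""
    t = record_text.lower()
    return (any(_has_term(t, p) for p in POPULATION_TERMS)
            and any(_has_term(t, c) for c in CONTEXT_TERMS))

print(matches_query("An AR application for physics teaching"))   # both groups hit
print(matches_query("Market trends in smartphones"))             # no population term
```

The AND between the two groups is what keeps purely technical AR papers (no educational context term) and purely educational papers (no AR term) out of the initial result set.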

At the beginning of the data extraction process, a set of eight research questions was identified to guide the analysis:

RQ1. What is the target audience of the AR application?

RQ2. What educational areas or subjects are being targeted by the application?

RQ3. What type of assessment methods were utilized for the final solution?

RQ4. What were the outcomes achieved through the application of the proposed solution?

RQ5. What limitations or obstacles were noted in relation to the use of the application?

RQ6. What technological approaches were employed in the application's development?

RQ7. What are the primary channels for publishing research articles on AR educational interventions?

RQ8. How has the frequency of research studies on this topic changed over time?

The quality of the finally processed articles was assessed according to a series of criteria (Table  1 ). The CORE Conference Ranking (CORE Rankings Portal—Computing Research and Education, n.d.) and the Journal Citation Reports (JCR) (ipscience-help.thomsonreuters.com, 2022) were used for ranking conferences and journals, respectively. The maximum score an article could receive was 10 points.

Initially, a total of 3416 articles were retrieved from the searches. A screening stage was then conducted, consisting of several steps. First, duplicates and non-English articles were removed, leaving 2731 articles. Second, titles and abstracts were screened, yielding 1363 potentially relevant studies. Third, articles that were not available, as well as reviews and theoretical papers not related to the topic, were eliminated. Finally, the studies that met the inclusion criteria were isolated, resulting in a total of 73 articles. The entire process is illustrated in Fig.  3 . Figure  4 illustrates the number of Google searches conducted for the phrase “Augmented reality in education.”
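The staged screening reported above can be expressed as a simple PRISMA-style accounting check, deriving the number of records removed at each stage from consecutive counts. The counts come from the text; the stage labels paraphrase Fig. 3, which is not reproduced here.

```python
# PRISMA-style accounting for the review's screening stages:
# each entry records how many articles survived that stage.
PRISMA_STAGES = [
    ("identified via database search",             3416),
    ("after removing duplicates and non-English",  2731),
    ("after title and abstract screening",         1363),
    ("included after full-text eligibility check",   73),
]

def removals_per_stage(stages):
    """Derive (stage label, records removed) from consecutive survivor counts."""
    return [(stages[i + 1][0], stages[i][1] - stages[i + 1][1])
            for i in range(len(stages) - 1)]

for stage, removed in removals_per_stage(PRISMA_STAGES):
    print(f"{removed:4d} records removed -> {stage}")
```

Summing the per-stage removals (685 + 1368 + 1290 = 3343) reconciles exactly with the drop from 3416 identified records to the 73 included articles, which is the consistency check a PRISMA flowchart makes visible.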

figure 3

PRISMA flowchart

figure 4

Number of Google searches for the term "Augmented reality in education"

Results

Table 2 illustrates the outcomes of the review process of the selected papers in terms of the technological methodology utilized and the characteristics of the assessment phase for the final solution. The analysis of the quality assurance results of the selected papers is presented in Table 4 (see Annex). According to the quality assurance criteria, 52.05% of the selected papers received a score above half of the total score, with a significant number of them (23.29%) scoring above 7.5. One paper achieved the maximum score, three papers scored 9.5, and one paper scored 9. Notably, 6.85% of the examined articles scored within the top 10% (total score = 9 to 10) of the rating scale.

Most studies employed a combination of methodologies to evaluate the final solution: 83.56% used a questionnaire, 16.44% used observation techniques, 16.44% interviewed the participants, and only 4.11% utilized focus groups for subjective assessment. Objective assessments were developed in only 6.85% of the studies (Andriyandi et al., 2020 ; Bauer et al., 2017 ; Karambakhsh et al., 2019 ; Mendes et al., 2020 ), with two studies utilizing automatic detection of correct results (Andriyandi et al., 2020 ; Karambakhsh et al., 2019 ) and one study using task completion time (Mendes et al., 2020 ). Approximately one third (31.51%) used achievement tests before and after the study to evaluate users' performance following the applied intervention. One study used an achievement test solely in the initial phase (Aladin et al., 2020 ), and another (Scaravetti & Doroszewski, 2019 ) only at the end. Concerning subjective assessment, each study employed various instruments depending on the application's characteristics, with custom-made questionnaires being used in almost two-thirds (61.90%) of the articles. The System Usability Scale (SUS) was the most widely used standardized instrument (n = 7, 11.11%), followed by the IMMS (n = 4, 6.35%) and the QUIS (n = 3, 4.76%). The UES, TAM, SoASSES, QLIS, PEURA-E, NASA-TLX, MSAR, IMI, HARUS, and CLS were each used in one study.

Scientific journals were the primary source of publication (98.6%, n = 72), with only one paper (1.4%) presented at a conference. A significant proportion (38.82%) of the articles was published in computer-related sources. The publishing focus was almost equally divided between the education and learning field (18.82%) and engineering (16.47%). The health domain was slightly addressed, with only eight journals (9.41%), followed by sources representing the environment (2.35%). Procedia Computer Science dominated the publishing sector, with 16 articles (21.92%), followed by Computers in Human Behavior (6.85%), the International Journal of Emerging Technologies in Learning (5.48%), and the IOP Conference Series: Materials Science and Engineering (4.11%). The remaining articles (n = 45) were distributed across 39 journals. Notably, over one-third (n = 28, 38.36%) of the studies lacked a JCR ranking. More than half (52.1%) of the reviewed papers were published after 2019 (see Fig.  5 ).

figure 5

Frequency of papers per year

Table 3 provides a taxonomy for the classification and analysis of the included studies, which aids the synthesis of findings and the detection of research patterns and gaps. This taxonomy can also function as a structured framework, helping educators and researchers categorize, arrange, and comprehend the diverse aspects of applying AR technology in educational contexts. Tables 5 , 6 , and 7 (see Annex) present the outcomes of the present study, built upon this taxonomy. The “Article id” in Tables 5 , 6 , and 7 corresponds to the one presented in Table  2 .

Table 2 presents the technological approach followed by each project. Almost two thirds (68.49%) of the published studies exploited marker-based AR, superimposition-based AR was found in 9.59% of the articles, and 5.48% followed the location-based approach. As far as devices are concerned, the majority used a smartphone (n = 37, 50.68%) or a tablet (n = 35, 47.95%), while 13.70% (n = 10) exploited a head-mounted display. Two studies (2.35%) used an interactive board, one a smart TV, and two a Kinect camera. Almost half of the papers (45.21%) targeted the Android operating system, 28.77% used iOS, and only 9.59% Windows. A considerable percentage (32.88%) did not report the operating platform used. Note that a study may have used more than one of the mentioned devices or operating systems during the experiments. Regarding the platforms and tools used for developing the final solution, Unity was the most common (n = 35, 47.95%), followed by Vuforia (n = 31, 42.47%), Aurasma (n = 5, 6.85%), ARKit (formerly Metaio) (n = 5, 6.85%), and Blippar (n = 2, 2.74%). A considerable percentage (n = 11, 15.07%) did not provide any details on the platform and tools used. As seen in Table 8 (see Annex), the topics covered by the reviewed articles were widely dispersed.

The majority of the reviewed studies (n = 31, 42.47%) focused on the university level, followed by 26.03% (n = 19) that targeted secondary education, 21.92% (n = 16) primary education, 6.85% (n = 5) early childhood education, 1.37% (n = 1) nursery school, and 1.37% (n = 1) health professionals. Special education was addressed in only six papers (8.22%), while 6.85% (n = 5) did not specify the target population.

For a comprehensive overview, Table 9 (see Annex) outlines the primary outcomes, limitations, and future steps of the reviewed studies concerning the utilized applications.

Discussion

The present study involved the analysis of both qualitative and quantitative data obtained from the selected articles. The qualitative data allowed for the identification of the decisions and actions taken by authors in designing and developing educational AR applications, as well as the extent to which these applications have been utilized. Notably, the study's analysis of educational AR applications was not restricted to any specific age group, subject area, or educational context. Rather, the study aimed to examine the full spectrum of educational AR applications, within both formal and informal education settings. Unlike prior investigations, the current study provides a comprehensive overview of research conducted between 2016 and 2020, exploring a diverse range of study designs and methodologies.

Based on the findings, it was discovered that almost all research studies pertaining to the topic at hand were published in scientific journals. Nonetheless, upon closer examination and analysis of the publications, it was noted that 25 of the studies that were published in journals were, in fact, conference proceedings that were later categorized as journals (e.g., Procedia Computer Science, Procedia CIRP, etc.) with no ranking, making up 38.89% of the total. Roughly 43.03% of the journals that were included in the review were of top-quality and ranked Q1. Collectively, 61.11% of the journals had a ranking score (Q1–Q4), and were thus considered as reputable sources. The wide variety of publishing sources (43 in total for the 73 papers examined) suggests that there is no specialized journal or conference dedicated to the area of interest. Additionally, it signifies that there are various ways in which AR can be employed in educational settings, ranging from simple applications such as labeling objects in a classroom to more intricate applications such as simulations. The following examples illustrate the diverse range of AR applications in education:

Visualizing Concepts : AR can be used to visualize abstract concepts such as the solar system, anatomy, and physics. By using AR, learners can see these concepts in 3D, making it easier to understand and remember.

Gamification : AR can be used to create interactive games that teach learners various skills such as problem-solving, critical thinking, and collaboration. These games can be used to make learning more fun and engaging.

Virtual Field Trips : AR can be used to take learners on virtual field trips, allowing them to explore various places and learn about different cultures, history, and geography.

Simulations : AR can be used to create simulations that allow learners to practice real-world scenarios and develop skills such as decision-making and problem-solving. For example, medical students can use AR to simulate surgeries and practice various procedures or to operate a microscope. Engineers also use AR to simulate experiments in mechanical engineering, electronics, electrical engineering and constructions.

The advent of emerging technologies and the development of low-cost devices and mobile phones with high computing power have created opportunities for innovative AR solutions in education. Researchers tend to prefer publishing their studies in journals, which are considered the most prestigious and impactful sources, even though it may take years to publish compared to only a few months in a conference.

The distribution of published articles per year (Fig.  5 ) can be attributed to the appearance of the first commercially available AR glasses (Google Glass) in 2014, followed by the release of Microsoft's HoloLens AR headset in 2016. As a result, a greater number of AR applications in retail emerged after 2017, and the AR industry has continued to develop as the cost of the required devices has become more affordable. Based on the results, research related to the use of AR and mobile technology for educational purposes is expected to increase significantly in the coming years. According to a recent report by ResearchAndMarkets.com, the global market for augmented reality in education and training is projected to grow from 10.37 billion USD in 2022 to 68.71 billion USD in 2026, at a CAGR of 60.4% (Research & Markets, 2023 ).

In terms of the technological background of the provided solutions, the Android operating system dominated the market in the second quarter of 2018, accounting for 88% of all smartphone sales (Statista, 2023a , 2023b ). This finding is consistent with the research results, which indicated that almost half of the studies developed the application for the Android system. This can be attributed in part to the fact that Android is widely adopted, particularly among children and teachers in most countries, who tend to own cheaper Android smartphones rather than iPhones. However, it is now becoming the norm for any commercial application to target both iOS and Android phones, which explains the 28.77% of apps developed for the iOS operating system. Only a small percentage of the studies (9.59%, n = 7) worked with Windows, indicating a strong trend towards mobile AR technologies. One third of the studies (32.88%) did not specify any operating system.

The augmented reality industry is experiencing significant growth, which can be attributed to the increasing number of mobile users who are adopting this technology. Snap Inc. predicts that by 2025, around 75% of the world's population will be active users of AR technology. In addition, Deloitte Digital x Snap Inc. has reported that 200 million users actively engage with augmented reality on Snapchat on a daily basis, primarily through mobile applications. This trend is supported by the modern citizen profile, which is characterized by continuous mobility, limited free time, and greater reliance on mobile phones than PCs or laptops. According to a Statcounter study ("Desktop vs mobile", 2023 ), 50.48% of web traffic comes from mobile devices. Furthermore, mobile learning is increasingly popular, as evidenced by various studies (Ferriman, 2023 ).

With respect to development platforms and tools, the market is dominated by Unity (47.95%) and Vuforia (42.47%). This can be attributed to the fact that Unity's AR Foundation is a cross-platform framework that allows developers to create AR experiences and then build cross-platform applications for both Android and iOS devices without additional effort. Additionally, Unity is a leading platform for creating real-time 3D content. Vuforia is a software development kit (SDK) that facilitates the creation of AR applications by enabling the addition of computer vision functionalities, which allow the application to recognize objects, images, and spaces.

Marker-based AR was utilized in 68.49% of the studies, as it is simple and effective in providing a seamless user experience. This technology involves using a camera to detect a specific visual marker, such as a QR code, and overlaying digital content onto the marker in real-time. This allows users to interact with the digital content in a more intuitive way, as they can physically move the marker and see the digital content move along with it. Furthermore, marker-based AR has been in use for longer than other forms of AR and has a more established user base. Its popularity has been further enhanced by many companies and brands integrating it into their marketing campaigns and products. Additionally, its accessibility is a contributing factor, as it requires less processing power and hardware compared to other forms of AR, making it easier for users to access and experience on their mobile devices. Markerless AR, which uses GPS and other location data to place virtual content in the real world based on the user's location, is gaining popularity, but only 2.74% of the examined studies used it. There are also markerless AR systems that use machine learning and computer vision to track and overlay digital content onto real-world objects without the need for markers. While marker-based AR is currently the most common type of AR, other forms of AR are rapidly evolving and gaining traction. Nonetheless, the review indicates that markerless AR applications are still in the early stages of development. As AI, machine learning, and computer vision techniques continue to advance, researchers will need to adopt them to improve AR applications in several ways:

Object recognition and tracking: AI algorithms can be used to improve the accuracy of object recognition and tracking in AR applications. Machine learning can be used to train algorithms to recognize specific objects and track their movements in real time. This can improve the stability of AR overlays and create a more immersive user experience.

Content generation and personalization: Machine learning can be used to generate and personalize AR content for individual users. Algorithms can analyze user behavior and preferences to generate relevant and engaging content in real time.

Real-time language translation: AI-powered language translation can be integrated into AR applications to enable real-time translation of text and speech.

Spatial mapping: Machine learning algorithms can be used to create detailed 3D maps of the user's environment. This can improve the accuracy and stability of AR overlays and enable more sophisticated AR applications, such as indoor navigation.

Predictive analytics: Machine learning algorithms can be used to provide users with contextual information based on their location, time of day, and other factors, while AI can predict user behavior. This can be used to create a more personalized and relevant AR experience.
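To make the marker-anchored overlay idea concrete, the sketch below computes the 2D affine transform that maps content coordinates onto three detected marker corners, which is what makes virtual content follow a marker as it moves between frames. This is a minimal illustration with hypothetical corner coordinates; production SDKs such as Vuforia estimate a full homography and camera pose instead.

```python
def solve3(M, v):
    """Cramer's rule for a 3x3 linear system M x = v."""
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(M)
    solution = []
    for col in range(3):
        Mc = [row[:] for row in M]
        for r in range(3):
            Mc[r][col] = v[r]
        solution.append(det3(Mc) / d)
    return solution

def affine_from_points(src, dst):
    """Return [a, b, c, d, e, f] with x' = a*x + b*y + c, y' = d*x + e*y + f,
    mapping each src[i] onto dst[i] (three non-collinear point pairs)."""
    M = [[x, y, 1.0] for (x, y) in src]
    return (solve3(M, [x for (x, _) in dst])
            + solve3(M, [y for (_, y) in dst]))

def apply_affine(A, p):
    a, b, c, d, e, f = A
    x, y = p
    return (a * x + b * y + c, d * x + e * y + f)

# Hypothetical data: three anchor points in content space and the pixel
# positions where the marker detector found the matching corners this frame.
content = [(0, 0), (1, 0), (0, 1)]
marker = [(10, 20), (30, 20), (10, 50)]
A = affine_from_points(content, marker)
print(apply_affine(A, (1, 1)))  # far corner of the overlay → (30.0, 50.0)
```

Each time the detector reports new corner positions, recomputing the transform and re-warping the overlay is what makes the content appear to "stick" to the physical marker.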
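As a toy illustration of the object-tracking point above, the following sketch keeps stable IDs for detections across frames by nearest-centroid matching, which is what lets an overlay stay attached to "its" object. The detection coordinates are made up, and real AR stacks pair learned detectors with motion models such as Kalman filters; this shows only the ID-assignment step.

```python
from math import dist

class CentroidTracker:
    """Assign stable IDs to detections across frames by nearest-centroid
    matching; tracks with no matching detection are simply dropped here."""

    def __init__(self, max_distance=50.0):
        self.next_id = 0
        self.objects = {}  # id -> last known (x, y) centroid
        self.max_distance = max_distance

    def update(self, detections):
        assigned = {}
        unmatched = dict(self.objects)
        for point in detections:
            if unmatched:
                # Closest surviving track to this detection.
                oid, centroid = min(unmatched.items(),
                                    key=lambda kv: dist(kv[1], point))
                if dist(centroid, point) <= self.max_distance:
                    assigned[oid] = point
                    del unmatched[oid]
                    continue
            # Too far from every existing track: start a new one.
            assigned[self.next_id] = point
            self.next_id += 1
        self.objects = assigned
        return assigned

tracker = CentroidTracker()
tracker.update([(0, 0), (100, 100)])        # frame 1: objects get IDs 0 and 1
print(tracker.update([(5, 3), (98, 104)]))  # frame 2: {0: (5, 3), 1: (98, 104)}
```

Because the IDs survive small frame-to-frame motion, any digital content keyed to an ID keeps following the same physical object, which is the stability property the review attributes to ML-assisted tracking.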

The aforementioned aspects can potentially lead to new opportunities for innovation in the field of AR educational applications. These opportunities can be expanded by developing and utilizing virtual assistants and digital avatars within the educational context. Digital avatars and characters created by artificial intelligence can be designed to respond more naturally to users' behavior and emotions, thereby enhancing engagement and interactions and improving the user experience. AI-powered avatars can also facilitate realistic interactions, leading to more immersive and enjoyable learning experiences. Additionally, AI-powered platforms can be used to create interactive training sessions that provide stimulating and engaging learning experiences. For example, a virtual environment can simulate real-life job situations to aid in employee training. Likewise, AI-powered tools can create interactive experiences in which students can explore virtual objects and concepts in real-time.

Based on the research findings, the process of technology assessment is a challenging and time-consuming task, but it is necessary in any research endeavor. However, there is no established gold standard for the subjective evaluation of augmented reality applications, which creates a vague landscape that forces most researchers (61.90%) to use custom-made scales. Consequently, research results are not comparable. Moreover, many studies do not utilize reliable and valid instruments, making their findings questionable and not generalizable. Out of the examined pool, 35 cases used non-valid scales, 33 cases used non-reliable scales, and 33 cases used scales that were neither reliable nor valid. The System Usability Scale (SUS) was used seven times, the Intrinsic Motivation Measurement Scale (IMMS) four times, the Questionnaire for User Interaction Satisfaction (QUIS) three times, and all other scales (Unified Theory of Acceptance and Use of Technology—UTAUT, User Engagement Scale—UES, Technology Acceptance Model—TAM, Socially Adaptive Systems Evaluation Scale—SoASSES, Quality of Life Impact Scale—QLIS, Perceived Usability and User Experience of Augmented Reality Environments—PEURA-E, National Aeronautics and Space Administration Task Load Index—NASA-TLX, Mixed Reality Simulator Sickness Assessment Questionnaire—MSAR, Intrinsic Motivation Inventory—IMI, Holistic Acceptance Readiness for Use Scale—HARUS, and Collaborative Learning Scale—CLS) were used only once each. In two studies (Conley et al., 2020; Saundarajan et al., 2020), even though the researchers tested the reliability of the questionnaires used, they did not assess their validity or use any established methodology to evaluate them. Based on these results, the subjective satisfaction and assessment of AR solutions appear to be a daunting task.
Therefore, there is a pressing need for the development of instruments that can capture the different aspects of a user's satisfaction (Koumpouros, 2016). In addition, it is essential to report users' experiences with the technologies used to enhance the completeness of research papers. Privacy protection and confidentiality, ethics approval and informed consent, and transparency of data collection and management are also essential. Legal and policy attention is required to ensure proper protection of user data and to prevent unwanted sharing of sensitive information with third parties (Bielecki, 2012). Conducting research involving children or other special categories (such as pupils with disabilities) requires great attention to the aforementioned issues and should follow all recent legislation and regulations, such as the General Data Protection Regulation (European Commission, 2012), Directive 95/46/EC (European Parliament, 1995), Directive 2002/58/EC (European Parliament, 2002), and the Charter of Fundamental Rights (European Parliament, 2000). The study also found that the number of end users participating in the assessment of the final solution is critical in obtaining valid results (Riihiaho, 2000). However, this remains a challenge: only 19.18% of studies used 1 to 20 end users to evaluate the application, 20.55% used 21 to 40, 16.44% used 41 to 60, 9.59% used 61 to 80, and 21.92% used more than 80 end users. Only in four studies did both teachers and students evaluate the provided solution, although it is crucial for both parties to assess it, particularly in the educational context, as they observe the same thing from different perspectives.

In the examined projects, insufficient attention was given to primary and secondary education, with only 21.92% and 26.03% of the analyzed efforts targeting these levels, respectively. Additionally, researchers should focus on subjects that are typically information-intensive and rely on rote memorization. The examined projects encountered several issues and limitations, including:

small sample sizes,

short evaluation phases,

lack of generalizable results,

need for end-user training,

absence of control groups and random sampling,

difficulty in determining if the solution has ultimately helped,

considerations of technology-related factors (e.g., cost, size, weight, battery life, compatibility issues, limited field of view from the headset, difficulty in wearing the head-mounted displays, accuracy, internet connection, etc.),

limited number of choices and scenarios offered to end users,

subjective assessment difficulty,

heterogeneity in the evaluation (e.g., different knowledge levels of the end users),

poor quality of graphics,

environmental factors affecting the quality of the application (e.g., light and sound),

quick movements affecting the quality and accuracy of the provided solution,

image and marker detection issues, and

lack of examination of long-term retention of the studied subjects.

In terms of future steps, it is essential to obtain statistically accepted results, which requires a significant number of end users in any research effort. Additionally, it is crucial to carefully examine user subjective and objective satisfaction using existing valid and reliable scales that can capture users' satisfaction in an early research stage (Koumpouros, 2016 ). Researchers should aim to simulate an environment that closely resembles the real one to enable students to generalize and apply their acquired skills and knowledge easily. Other key findings from the examined studies include the need for:

experiments with wider cohorts of participants and subjects,

examination of different age groups and levels,

integration of speech recognition techniques,

examination of reproducibility of results,

use of markerless techniques,

enrichment of AR applications with more multimedia content,

consideration of more factors during evaluation (e.g., collaboration and personal features),

implementation of human avatars in AR experiences,

integration of gesture recognition and brain activity detection,

implementation of eye tracking techniques,

use of smart glasses instead of tablets or smartphones, and

further investigation of the relationship between learning gains, embodiment, and collaboration.

In addition, achieving an advanced Technology Readiness Level (TRL) (European Commission, 2014 ) is always desirable. An interdisciplinary team is considered to be extremely important in effectively meeting the needs of various end users, which can be supported by an iterative strategy of design, evaluation, and redesign (Nielsen, 1993 ). Usability testing and subjective evaluation are challenging but critical tasks in any research project (Koumpouros, 2016 ; Koumpouros et al., 2016 ). The user-friendliness of the provided solution is also a significant concern. Additionally, the involvement of behavioral sciences could greatly assist in the development of a successful project in the field with better adoption rates by end users (Spruijt-Metz et al., 2015 ).

Table 9 (see Annex) shows that AR technologies have been utilized in a variety of disciplines, educational levels, and target groups, including for supporting and enhancing social and communication skills in special education settings. Preliminary results suggest that AR may be beneficial for these target groups, although the limited number of participants, short intervention duration, and non-random selection of participants make generalization of the results challenging. Furthermore, the long-term retention of learning gains remains unclear. Nevertheless, students appear to enjoy using AR for learning and engaging with course material, and AR supports experiential learning, which emphasizes learning through experience, activity, and reflection. This approach to teaching can lead to increased engagement and motivation, improved retention and understanding, development of practical skills, and enhanced critical thinking and problem-solving abilities. In summary, AR has the potential to be a valuable tool for developing a range of skills and knowledge in learners.

An area of interest that warrants further investigation is the amount of time learners spend on each topic when utilizing augmented reality tools as opposed to conventional learning methods. This inquiry may yield valuable insights regarding the efficacy of AR-based interventions. Researchers ought to explore the following five key issues when providing AR-based educational solutions:

The ease with which students learn the material delivered through AR.

The amount of time required to learn the material when compared to conventional education.

Whether the use of AR enhances students' interest in the topic.

Whether students enjoy studying with AR more than they do with traditional methods.

Whether AR amplifies students' motivation to learn.

It is evident that the aforementioned parameters require at least a control group in order to compare the outcomes of the intervention with those of conventional learning. Additionally, it is essential to consider the duration of the initial intervention and the retesting interval to assess the retention of learning gains. Finally, it is crucial to expand research into the realm of special education and other domains. For example, innovative IT interventions could greatly benefit individuals with autism spectrum disorders and students with intellectual disabilities (Koumpouros & Kafazis, 2019). Augmented reality could prove valuable in minimizing attention deficits during training and in improving learning for these specific target groups (Goharinejad et al., 2022; Nor Azlina & Kamarulzaman, 2020; Tosto et al., 2021).

As far as the educational advantages and benefits of AR in education are concerned, AR holds immense potential for enhancing educational outcomes across various educational levels and subject areas:

Enhanced Engagement: AR creates highly interactive and engaging learning experiences. Learners are actively involved in the educational content, which can lead to increased motivation and interest in the subject matter.

Visualization of Complex Concepts: AR enables the visualization of abstract and complex concepts, making them more tangible and understandable. Learners can explore 3D models of objects, organisms, and phenomena, facilitating deeper comprehension.

Experiential Learning: AR supports experiential learning by allowing students to engage with virtual objects, conduct experiments, and simulate real-world scenarios. This hands-on approach enhances practical skills and problem-solving abilities.

Gamification and Game-Based Learning: AR can be used to gamify educational content, turning lessons into interactive games. This approach fosters critical thinking, decision-making, and collaborative skills while making learning enjoyable.

Virtual Field Trips: AR-based virtual field trips transport students to different places and historical eras, providing immersive cultural, historical, and geographical learning experiences.

Simulation-Based Training: Medical and engineering students can benefit from AR simulations that allow them to practice surgeries, experiments, and procedures in a risk-free environment, leading to better skill development.

Personalization of Learning: AR applications can personalize learning experiences based on individual student needs, adapting content and pacing to optimize comprehension and retention.

Enhanced Accessibility: AR can assist learners with disabilities by providing tailored support, such as audio descriptions, text-to-speech functionality, and interactive adaptations to suit various learning styles.

To provide a more comprehensive understanding of AR in education, it is essential to connect it with related research areas:

Gamification and Game-Based Learning: Drawing parallels between AR and gamification/game-based learning can shed light on how game elements, such as challenges and rewards, can be integrated into AR applications to enhance learning experiences.

Virtual Reality (VR) in Education: Contrasting AR with VR can elucidate the strengths and weaknesses of both technologies in educational contexts, helping educators make informed decisions about their integration.

Cross-Disciplinary Approaches: Collaborative research involving experts in AR, gamification, game-based learning, VR, and educational psychology can yield innovative approaches to educational technology, benefiting both learners and educators.

Learning Outcomes and Age-Level Effects: Future studies should delve into the specific learning outcomes facilitated by AR applications in different age groups and educational settings. Understanding the nuanced impact of AR on various learner demographics is crucial.

Subject-Specific Applications: Exploring subject-specific AR applications and their effectiveness can reveal how AR can be tailored to the unique requirements of diverse academic disciplines.

In conclusion, AR in education offers a myriad of educational advantages, including enhanced engagement, visualization of complex concepts, experiential learning, gamification, virtual field trips, and personalized learning. By linking AR research with related fields and investigating its impact on learning outcomes, age-level effects, and subject-specific applications, we can harness the full potential of AR technology to revolutionize education.

In summary, AR shows positive indications and could significantly support the educational process across different levels and target groups. The innovation of many AR applications lies in the 3D visualization of objects and models. In education, 3D visualization can support students' in-depth understanding of phenomena, helping knowledge leave a more lasting impression (Lamanauskas et al., 2007). Game-based learning, the Kinect camera and similar tools, and markerless AR should be further exploited in the future. Finally, to design an effective educational AR application, it is necessary to take into account the learning environment, the particularities of each student, the axioms of learner psychology, and the established theories of learning (Cuendet et al., 2013). In simpler terms, AR applications make learning experiential for learners, aiming mainly to bridge the gap between the classroom and the external environment and to increase students' ability to perceive reality.

Research limitations

Our systematic literature review on AR in education, while comprehensive within its defined scope, has certain limitations that must be acknowledged. Firstly, the review was confined to articles published between 2016 and 2020, which may have excluded some recent developments in the field. Additionally, our focus on English-language publications introduces a potential bias, as valuable research in other languages may have been omitted. These limitations, though recognized, were necessary to streamline the study's scope and maintain a manageable dataset. We acknowledge the significance of incorporating more recent data and are already working to expand our research in future endeavors to encompass the latest developments, ensuring the timeliness and relevance of our findings. However, we believe that the period we examined is crucial, particularly due to the emergence of COVID-19, which significantly accelerated the proliferation of educational apps across various contexts. Hence, we consider this timeframe a distinct era that warrants separate investigation.

The use of AR interventions shows promise for improving educational outcomes. However, to maximize its practical application, several aspects require further scrutiny. Drawing from an analysis of qualitative and quantitative data on educational AR applications, several recommendations for future research and implementation can be proposed. Firstly, there is a need to explore the impact of AR in special education, considering specific age groups, subject areas, and educational contexts. Additionally, studying the effectiveness of different methodologies and study designs in AR education is crucial. It is important to identify areas where AR can have the greatest impact and design targeted applications accordingly. Investigating the long-term effects of AR in education is essential, including how it influences learning outcomes, knowledge retention, and student engagement over an extended period. Understanding how AR can support students with diverse learning needs and disabilities and developing tailored AR applications for special education settings is also vital.

Researchers should adopt appropriate methodologies for studying the impact of AR in education. This includes conducting comparative studies to evaluate the effectiveness of AR applications compared to traditional teaching methods or other educational technologies. Longitudinal studies should be conducted to examine the sustained impact of AR on learning outcomes and engagement by following students over an extended period. Mixed-methods research combining qualitative and quantitative approaches should be employed to gain a deeper understanding of the experiences and perceptions of students and educators using AR in educational settings, using interviews, observations, surveys, and performance assessments to gather comprehensive data.

Integration strategies for incorporating AR into existing educational frameworks should be investigated to ensure seamless implementation.
This involves exploring strategies for integrating AR into existing curriculum frameworks and enhancing traditional teaching methods and learning activities across various subjects. Providing teacher training and professional development programs to support educators in effectively integrating AR into their teaching practices is important. Additionally, exploring pedagogical approaches that leverage the unique affordances of AR can facilitate active learning, problem-solving, collaboration, and critical thinking skills development. The lack of specialized journals or conferences dedicated to educational AR suggests the need for a platform specifically focused on this area. The diverse range of AR applications in education, such as visualizing concepts, gamification, virtual field trips, and simulations, should be further explored and expanded. With the projected growth of the AR market in education, more research is expected in the coming years. Technological advancements should be leveraged, considering the dominance of the Android operating system, to develop applications that cater to both Android and iOS platforms. Furthermore, leveraging advancements in AI, machine learning, and computer vision can enhance object recognition and tracking, content generation and personalization, real-time language translation, spatial mapping, and predictive analytics in AR applications. Integrating virtual assistants, digital avatars, and AI-powered platforms can provide innovative and engaging learning experiences. Improving AR technology and applications can be achieved by investigating compatibility with different mobile devices and operating systems, exploring emerging AR technologies, and developing reliable evaluation instruments and methodologies to assess user experience and satisfaction. These recommendations aim to address research gaps, enhance the effectiveness of AR in education, and guide future developments and implementations in the field. 
By focusing on specific areas of investigation and considering the integration of AR within educational frameworks, researchers and practitioners can advance the understanding and application of AR in educational settings.

In conclusion, the utilization of AR interventions in education holds significant practical implications for enhancing teaching and learning processes. The adoption of AR has the potential to transform traditional educational approaches by offering interactive and personalized learning experiences. By incorporating AR technology, educators can engage students in immersive and dynamic learning environments, promoting their active participation and motivation. AR can facilitate the visualization of complex concepts, making abstract ideas more tangible and accessible. Moreover, AR applications can provide real-world simulations, virtual field trips, and gamified experiences, enabling students to explore and interact with subject matter in a way that traditional methods cannot replicate. These practical benefits of AR in education indicate its potential to revolutionize the learning landscape.

However, it is important to acknowledge and address the limitations and challenges associated with AR interventions in education. Technical constraints, such as the need for compatible devices and stable connectivity, may hinder the widespread implementation of AR. Moreover, ethical considerations surrounding data privacy and security must be carefully addressed to ensure the responsible use of AR technology in educational settings. Additionally, potential barriers, such as the cost of AR devices and the need for appropriate training for educators, may pose challenges to the seamless integration of AR in classrooms. Understanding and mitigating these limitations and challenges are essential for effectively harnessing the benefits of AR interventions in education.

While AR interventions offer tremendous potential to enhance education by promoting engagement, personalization, and interactive learning experiences, it is crucial to navigate the associated limitations and challenges in order to fully realize their practical benefits.
By addressing these concerns and continuing to explore innovative ways to integrate AR into educational contexts, we can pave the way for a more immersive, effective, and inclusive educational landscape. Our systematic review highlights the substantial potential of AR in reshaping educational practices and outcomes. By harnessing the educational advantages of AR and forging connections with related research areas such as gamification, game-based learning, and virtual reality in education, educators and researchers can collaboratively pave the way for more engaging, interactive, and personalized learning experiences. As the educational landscape continues to evolve, embracing AR technology represents a promising avenue for enhancing the quality and effectiveness of education across diverse domains.

Availability of data and materials

All data generated or analysed during this study are included in this published article.

Abbreviations

Artificial Intelligence

Augmented Reality

Augmented reality-based video modeling storybook

Augmented Virtuality

Autism Spectrum Disorder

Collaborative Learning Scale

Computing Research and Education

Custom Made

Degrees of Freedom

Educational Magic Toys

Field of view

First Person Shooter

Focus group

Head-mounted display

Holistic Acceptance Readiness for Use Scale

Information and Communication Technologies

Information Technology

Intrinsic Motivation Inventory

Intrinsic Motivation Measurement Scale

Journal Citation Reports

Mixed Reality

Mixed Reality Simulator Sickness Assessment Questionnaire

National Aeronautics and Space Administration Task Load Index

Perceived Usability User Experience of Augmented Reality Environments

Problem-based Learning

Quality of Life Impact Scale

Questionnaire for User Interaction Satisfaction

Smart Learning Companion

Socially Adaptive Systems Evaluation Scale

Socioeconomic status

Software development kit

System Usability Scale

Systematic Literature Review

Technology Acceptance Model

Technology Acceptance Model survey

Technology Readiness Level

Unified Theory of Acceptance and Use of Technology

User Engagement Scale

Virtual Reality

Akçayır, M., & Akçayır, G. (2017). Advantages and challenges associated with augmented reality for education: A systematic review of the literature. Educational Research Review, 20 , 1–11.

Akçayır, M., Akçayır, G., Pektaş, H. M., & Ocak, M. A. (2016). Augmented reality in science laboratories: The effects of augmented reality on university students’ laboratory skills and attitudes toward science laboratories. Computers in Human Behavior, 57 , 334–342.

Abd Majid, N. A., & Abd Majid, N. (2018). Augmented reality to promote guided discovery learning for STEM learning. International Journal on Advanced Science, Engineering and Information Technology, 8 (4–2), 1494–1500.

Aebersold, M., Voepel-Lewis, T., Cherara, L., Weber, M., Khouri, C., Levine, R., & Tait, A. R. (2018). Interactive anatomy-augmented virtual simulation training. Clinical Simulation in Nursing, 15 , 34–41.

Aladin, M. Y. F., Ismail, A. W., Salam, M. S. H., Kumoi, R., & Ali, A. F. (2020). AR-TO-KID: A speech-enabled augmented reality to engage preschool children in pronunciation learning. In IOP conference series: Materials science and engineering (Vol. 979, No. 1, p. 012011). IOP Publishing.

Alhumaidan, H., Lo, K. P. Y., & Selby, A. (2018). Co-designing with children a collaborative augmented reality book based on a primary school textbook. International Journal of Child-Computer Interaction, 15 , 24–36.

Aljojo, N., Munshi, A., Zainol, A., Al-Amri, R., Al-Aqeel, A., Al-khaldi, M., & Qadah, J. (2020). Lens application: Mobile application using augmented reality.

Altmeyer, K., Kapp, S., Thees, M., Malone, S., Kuhn, J., & Brünken, R. (2020). The use of augmented reality to foster conceptual knowledge acquisition in STEM laboratory courses: Theoretical background and empirical results. British Journal of Educational Technology, 51 (3), 611–628.

Andriyandi, A. P., Darmalaksana, W., Adillah Maylawati, D. S., Irwansyah, F. S., Mantoro, T., & Ramdhani, M. A. (2020). Augmented reality using features accelerated segment test for learning tajweed. TELKOMNIKA (telecommunication Computing Electronics and Control), 18 (1), 208–216. https://doi.org/10.12928/TELKOMNIKA.V18I1.14750

Ayer, S. K., Messner, J. I., & Anumba, C. J. (2016). Augmented reality gaming in sustainable design education. Journal of Architectural Engineering, 22 (1), 04015012.

Azuma, R. T. (1997). A survey of augmented reality. Presence: Teleoperators & Virtual Environments, 6 (4), 355–385.

Bacca, J., Baldiris, S., Fabregat, R., Graf, S., & Kinshuk. (2014). Augmented reality trends in education: A systematic review of research and applications. Educational Technology & Society, 17 (4), 133–149.

Badilla-Quintana, M. G., Sepulveda-Valenzuela, E., & Salazar Arias, M. (2020). Augmented reality as a sustainable technology to improve academic achievement in students with and without special educational needs. Sustainability, 12 (19), 8116.

Bal, E., & Bicen, H. (2016). Computer hardware course application through augmented reality and QR code integration: Achievement levels and views of students. Procedia Computer Science, 102 , 267–272.

Bauer, A., Neog, D. R., Dicko, A. H., Pai, D. K., Faure, F., Palombi, O., & Troccaz, J. (2017). Anatomical augmented reality with 3D commodity tracking and image-space alignment. Computers & Graphics, 69 , 140–153.

Bazarov, S. E., Kholodilin, I. Y., Nesterov, A. S., & Sokhina, A. V. (2017). Applying augmented reality in practical classes for engineering students. In IOP conference series: Earth and environmental science (Vol. 87, No. 3, p. 032004). IOP Publishing.

Bibi, S., Munaf, R., Bawany, N., Shamim, A., & Saleem, Z. (2020). Smart learning companion (SLAC). International Journal of Emerging Technologies in Learning (iJET), 15 (16), 200–211.


Acknowledgements

I would like to thank Ms Vasiliki Tsirogianni for helping in the collection of the initial pool of papers.

Funding

Not applicable.

Author information

Authors and Affiliations

Department of Public and Community Health, University of West Attica, Athens, Greece

Yiannis Koumpouros


Contributions

YK had the idea for the article, performed the literature search and data analysis, and drafted and critically revised the work.

Corresponding author

Correspondence to Yiannis Koumpouros .

Ethics declarations

Competing interests

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Koumpouros, Y. Revealing the true potential and prospects of augmented reality in education. Smart Learn. Environ. 11 , 2 (2024). https://doi.org/10.1186/s40561-023-00288-0


Received : 20 June 2023

Accepted : 18 December 2023

Published : 09 January 2024

DOI : https://doi.org/10.1186/s40561-023-00288-0


Keywords

  • Augmented reality
  • Mixed reality


  • Open access
  • Published: 16 September 2020

Systematic review and meta-analysis of augmented reality in medicine, retail, and games

  • Pranav Parekh 1 ,
  • Shireen Patel 1 ,
  • Nivedita Patel 1 &
  • Manan Shah 2  

Visual Computing for Industry, Biomedicine, and Art, volume 3, Article number: 21 (2020)

26k Accesses

80 Citations

10 Altmetric


This paper presents a detailed review of the applications of augmented reality (AR) in three important fields where AR use is currently increasing. The objective of this study is to highlight how AR improves and enhances the user experience in entertainment, medicine, and retail. The authors briefly introduce the topic of AR and discuss its differences from virtual reality. They also explain the software and hardware technologies required for implementing an AR system and the different types of displays required for enhancing the user experience. The growth of AR markets is also briefly discussed. The three main sections of the paper discuss the applications of AR. The use of AR in multiplayer gaming, computer games, broadcasting, and multimedia videos is highlighted as an aspect of entertainment and gaming. AR in medicine involves the use of AR in medical healing, medical training, medical teaching, surgery, and post-medical treatment. AR in retail is discussed in terms of its uses in advertisement, marketing, fashion retail, and online shopping. The authors conclude the paper by detailing the future use of AR and its advantages and disadvantages in the current scenario.

Introduction

Significant advances in technology in contemporary times have made many things possible, such as creating virtual worlds or enhancing existing real-world objects and scenarios through multiple sensory modes [ 1 ]. Augmented reality (AR) and virtual reality (VR) have the capability to alter the way entertainment, shopping, health activities, recreation, and other experiences are perceived [ 2 ]. Although VR and AR are often assumed to be the same, they are considerably different. AR, also termed mixed reality [ 3 , 4 ], maps virtual objects onto the real world, whose elements are augmented using sensory inputs. VR is a complete immersion in an artificial environment created using software [ 5 ]; this environment is presented to, and accepted by, the user as a real environment. This difference underlies how each technology functions. AR and VR are often combined to attain specific goals [ 6 ].

AR, which was commercialized long ago, has played a major role in reshaping how many activities are performed. However, owing to certain challenges, the technology did not achieve the expected results in its early days [ 7 ]. Investors were hesitant to invest heavily in this field because they believed that the augmented world was not yet sufficiently developed to yield the desired outputs [ 8 ]. However, many industries are gradually recognizing the need to invest in AR to stay ahead and expand their brands by attracting more customers with something new and innovative, as mentioned in ref. [ 9 ]. Since the introductory stage of AR, gaming has been its primary application. However, according to a report drafted by Goldman Sachs in 2016, AR is expected to improve the retail, healthcare, and real estate markets in the coming years [ 10 ]. AR is used by various industries for product design; according to ref. [ 11 ], immersive service prototyping is in significant demand in the service design sector. AR has also been used in academics [ 12 ], aeronautics [ 13 ], and the military [ 14 ]. It has substantial potential to make every aspect of living more enjoyable, easier, and more creative [ 15 ].

AR technologies are broadly classified into hardware, mainly consisting of various displays and sensors, and the software algorithms required for integrating the augmentations with the real world. These technologies are used in several fields, such as tourism and hospitality [ 16 ], education, medicine, retail, and gaming and entertainment. Hardware and software were integrated in the field of AR-based prototyping methods [ 17 ]; integration is accomplished by accurately mapping a functional hardware prototype onto a virtual display. AR displays include optical projection systems, monitors, handheld devices, head-mounted displays (HMDs) or head-up displays (HUDs), and the EyeTap. A handheld AR system was created to track optical markers in real time [ 18 ]. An optical projection system was generated via a mouse [ 19 ], enabling the configuration of input devices along with AR displays. HMDs are described in ref. [ 20 ] as real-time three-dimensional interactive displays that allow free head motion and full body mobility; according to ref. [ 21 ], they are widely used as modelers. A usage method for HUDs, incorporating one into a laminated windshield, was provided and patented [ 22 ]. Spatial AR, the branch of AR that does not require displays to function, was studied in detail in ref. [ 23 ]; the authors of that study provided examples such as shader lamps, iLamps and mobile projectors, Being There, HoloStations, and smart projectors.

This paper presents a review of the use of AR in three applications: gaming, medicine, and retail. Gaming has been the leading sector in the use of AR, as a result of which gamers have experienced immense creativity, innovation, and unforgettable experiences. Gamers find AR-enhanced games better and more thrilling because of the engaging experience provided by the technology.

The use of AR in the medical industry has grown over the years. It has proven to be helpful to both doctors and patients. Patients can be educated about their diseases through AR, and the technology can also be used for complex surgeries, helping doctors to perform them with high accuracy. AR has also been used in the retail industry, and several companies have started investing in AR to create apps and amazing experiences to promote and sell their products. In-store technology, as well as online AR technology, has changed the way people shop. Different sectors of fashion that have been affected by AR and experienced retail change are discussed in this paper. AR has impacted our lives in previously unimaginable ways. Thus, it could be said that AR is the future of gaming, retail, and medicine. The expansion of the AR technology in these three sectors and its acceptance by the public was analyzed in this study. Surveys were performed and feedback from various customers was scrutinized to understand their perception of the new technology.

AR in entertainment

The future of entertainment is likely to be influenced by advanced technologies such as AR [ 24 , 25 ]. Mobile technological devices have made it possible for the entertainment industry to change the way people interact and engage with games, sports, tours, performances, among other activities. AR combines real and virtual worlds in 3D while being interactive [ 26 , 27 , 28 ].

In addition to redefining traditional gaming, AR is already being used to increase the effectiveness of multimedia presentations and videos. However, it can be extended to a considerably wider array of entertainment fields, such as the way we listen to music and the way we travel. Interface and visualization technology, along with some basic enabling technologies, is being incorporated to achieve heterogeneous and tangible interfaces [ 29 ]. AR may also be used collaboratively to display personalized information to each user. Further, it enhances broadcasting of sports events, concerts, and other events by highlighting or inserting information.

Ivan Sutherland created the first complete AR system, which featured simple graphics [ 30 ] and a very heavy HMD. Since then, AR use in the entertainment industry has made tremendous advances, as illustrated by the latest well-known hit, Pokémon Go, an example of location-based gaming. AR completely changes how users interact: whereas non-AR experiences confine users to a screen, AR encourages people to walk outside, and can even transform their books into an AR play space that encourages reading.

Most AR entertainment systems have software components that run on the device, such as local game control and user tracking; a server connection is also often necessary, for example where resources are shared, games are location-driven, or constant synchronization is required [ 31 ]. Although every system has its unique architecture, real-time performance can be achieved using cloud computing [ 19 ]. The data and workflow for mobile AR systems are depicted in Fig. 1. As shown in Fig. 2, every architecture mainly comprises three parts: layers that allow the integration of diverse hardware; an application container, a run-time context that holds the application logic, including navigation and assembly; and a workflow abstraction layer, where the computational tasks occur, whether on the device or in the cloud. The results of these tasks are integrated with real content and presented on the displays with which users interact.

figure 1

Data and workflow for mobile AR

figure 2

Framework for AR in mobile games
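The three-part architecture described above can be sketched in miniature. The class and method names below are hypothetical stand-ins, not the framework of ref. [ 31 ]: a hardware layer yields sensor data, a workflow layer runs tracking (on-device or offloaded to a cloud backend), and an application container holds the game logic and composes the augmented output for display.

```python
# Toy sketch of the three-layer AR architecture; all names are illustrative.

class HardwareLayer:
    """Integration layer over diverse hardware (camera, GPS, IMU)."""
    def capture(self):
        # stand-in for real sensor input
        return {"frame": "camera_frame", "gps": (12.97, 77.59)}

class WorkflowLayer:
    """Workflow abstraction layer: computational tasks, device or cloud."""
    def __init__(self, use_cloud=False):
        self.use_cloud = use_cloud   # heavy tasks may be offloaded

    def track(self, sensors):
        where = "cloud" if self.use_cloud else "device"
        return {"pose": "user_pose", "computed_on": where}

class ApplicationContainer:
    """Run-time context holding the application logic."""
    def __init__(self, hardware, workflow):
        self.hardware, self.workflow = hardware, workflow

    def render_step(self):
        sensors = self.hardware.capture()
        result = self.workflow.track(sensors)
        # integrate virtual content with the real frame for display
        return {"display": ("virtual_overlay", sensors["frame"]), **result}
```

A game loop would call `render_step` each frame; switching `use_cloud` moves the tracking workload without touching the application logic, which is the point of separating the layers.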

There are mainly two types of AR systems used for entertainment. The first is marker-based applications, which rely on image recognition. This technology uses black-and-white markers to anchor the augmented object: the phone camera is pointed at a marker, and once the marker is identified, the required digital content is superimposed on it, on top of the real object. The marker images are coded into the system beforehand, making them easy to detect. Most AR apps on the market are marker-based. One of the most popular is Snapchat, which has attracted a massive user base and is especially popular among the youth.
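The core of marker-based AR is decoding a marker's identity regardless of camera orientation. The sketch below assumes a hypothetical 4×4 binary payload already segmented from the camera frame; real toolkits such as ARToolKit or OpenCV's ArUco add perspective rectification and error-correcting bits on top of this idea.

```python
# Rotation-invariant decoding of a hypothetical 4x4 binary marker payload.

def rotate(grid):
    """Rotate a square bit grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def grid_to_id(grid):
    """Read the bits row by row as one binary number."""
    bits = [b for row in grid for b in row]
    return int("".join(str(b) for b in bits), 2)

def decode_marker(grid):
    """Return the canonical (smallest) ID over all four rotations,
    so the same marker is recognized however the camera is held."""
    ids, g = [], grid
    for _ in range(4):
        ids.append(grid_to_id(g))
        g = rotate(g)
    return min(ids)
```

Once `decode_marker` returns an ID, the application looks up the digital content registered for that ID and renders it over the marker's pose in the frame.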

The second is location-based applications, which work without markers. This technology uses the global positioning system (GPS) or a digital compass to detect the user's position, after which real-world physical objects are replaced with, or combined with, augmented objects. Such applications enable users to find the best restaurants nearby or locate their cars in parking lots. They can also be used in games that require the player's location (Fig. 3).
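A minimal sketch of the positioning step: a location-based app typically computes the great-circle (haversine) distance between the user's GPS fix and each point of interest to decide what to augment. The coordinates and the 5 km query radius below are illustrative values.

```python
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def nearby(user, pois, radius_km=5.0):
    """Names of POIs within radius_km of the user's GPS fix."""
    return [name for name, lat, lon in pois
            if haversine_km(user[0], user[1], lat, lon) <= radius_km]
```

A game or restaurant finder would call `nearby` on each location update and overlay the returned items on the camera view, oriented using the compass heading.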

figure 3

a Location-based game: Pokémon Go (Source: Forbes.com ); b Marker-based application: Snapchat (Source: Vox.com )

Some popular software tools for creating AR applications are Unity, Vuforia, ARToolKit, Google ARCore, Android Studio, and Spark AR Studio. Unity is the most popular game engine for developing games and AR apps. These tools are generally used by professionals and regular programmers. Figure 4 illustrates a simple entertainment application, implemented in Android Studio, that allows the user to project 3D animals into reality.

figure 4

Implementation of a simple application using Android Studio, which allows users to project 3D animals into reality

The biggest entertainment application is gaming. Although AR games may be limited in physical aspects and face-to-face communication, remote multiplayer games increase collaborative gaming and relationship building [ 32 ]. AR also holds significant potential for the emotional and mental aspects of a game: it makes it possible to create all types of scenarios and supports highly complex puzzles, models, and virtual opponents.

Despite the popularity of computer games, pervasive gaming, defined as gaming that increases physical movement and social interaction, enhances the gamer experience [ 33 ]. This type of gaming focuses on bringing virtual gaming back to the real world. One of the main goals of pervasive computing is to develop context-aware applications that collect and analyze information from the environment and adapt their behavior accordingly [ 34 ]. This is achieved by combining pervasive computing with technologies such as smart toys, and by creating location-aware games that use the architecture we live in as a game board.

A study of a local collaborative environment, shown in Fig. 5, was conducted in which multiple users could interact with the environment and communicate with each other simultaneously using see-through HMDs and face-snapping, a technique that allows fast and precise direct object manipulation [ 35 ]. The gaming space was subdivided into spatial regions, and a layering concept was introduced for individual views and privacy management. Numerous board games and console games were observed to fit this model, gaining additional benefits while protecting individualism and privacy.

figure 5

AR-based gaming

Further, an emerging trend of serious games, computer games meant for non-leisure and educational purposes, has also been observed. They differ from traditional games in that they can be used for simulations in areas such as medicine, military operations, and education, thereby linking entertainment and work [ 36 ]. As a case study, two AR games were used: AR Puzzle, a puzzle game based on the City University campus in London, and AR Breakout, an old arcade game moved to a tangible environment. The results indicate that AR games are easier to adapt to than video games, and AR Puzzle proved to be a very interesting and effective learning tool. In general, tangible AR interactions were preferred over traditional ways of playing games.

AR games are also gaining momentum as learning guides, considering the younger generations' immense use of media, and they impact motivation and knowledge acquisition. As mentioned in ref. [ 37 ], real-world games, based on real and virtual elements along with highly augmented computing functionality, create exciting and fun gaming experiences, potentially leading to high learning motivation. The ability of games to promote teamwork, collaboration, social interaction, and cooperation in a learning environment is frequently emphasized [ 38 ]. According to nine studies, the use of AR games in learning boosts learning performance, and increases student motivation and enjoyment by 58% and 10%, respectively. However, limitations persist, such as a lack of interdisciplinary programs and students being distracted by virtual novelty factors [ 39 ]. AR-based learning has substantial potential if proper approaches are developed through study and analysis.

AR broadcasting is divided into two crucial elements: AR tracking and AR display. Although AR display techniques for broadcasting are still nascent, they are used to project content into three-dimensional space and take three forms: head-mounted, monitor-based, and projection-based. Current AR tracking approaches are classified into three types: model-based, marker-based, and tracking without prior knowledge. Technologies such as cameras, infrared sensors, hybrid sensors, and 2D and 3D markers can all be used to identify a pattern and track its position in the real world. Robotic cameraman systems have been proposed to increase the quality of broadcasting systems and replace human operators [ 40 ]. It has been shown that robotic cameramen facilitate more precise and advanced interaction with virtual elements and, through zooming and multiple-angle views, improve the performance of AR broadcasting in all sorts of environments.

An enhanced AR system displays statistical player information on captured images of a sports game [ 41 ]. The system applies an image enhancement technique based on multi-scale retinex, designed to improve the accuracy of player detection under adverse conditions such as intense sunlight. Face detection is then performed using adaptive boosting, with Haar features for feature extraction; discriminant analysis and a nearest-neighbor classifier are used for classification. The system can also display player statistics. The model was tested on several images under immensely diverse conditions, and it was concluded that it could be extended to all sports where the inputs are images and the desired output is information displayed around recognized players.
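The multi-scale retinex (MSR) enhancement mentioned above can be sketched as follows: at each scale, the illumination is estimated as a Gaussian blur of the image, and the retinex output is the log ratio of image to illumination, averaged over the scales. This is a minimal numpy sketch, not the authors' implementation; the default scale values are the illustrative ones commonly quoted for MSR.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur of a 2D float image (reflect padding)."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode="reflect")
    # blur along rows, then along columns
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def multi_scale_retinex(img, sigmas=(15, 80, 250)):
    """Average the single-scale retinex log(I) - log(blur(I)) over scales."""
    img = img.astype(float) + 1.0              # offset avoids log(0)
    out = np.zeros_like(img)
    for s in sigmas:
        out += np.log(img) - np.log(gaussian_blur(img, s))
    return out / len(sigmas)
```

On a uniformly lit region the output is near zero, while shadowed or glaring regions are pulled toward the local mean, which is what makes subsequent player or face detection more robust in harsh sunlight.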

A haptically enhanced broadcasting system was also implemented, using AR techniques to synthesize videos in real time, a multimedia streaming technology, and haptics. Haptic interaction refers to technology that creates an experience of touch through vibrations, motions, and applied forces [ 42 ]. The system operates in four stages: scene capture, haptic editing, data transmission, and display with haptic interaction. It can be used to create haptic effects for cartoons and in live sports broadcasts. The most notable feature of haptics is the sense of social presence at the remotely displayed location. In live broadcasting, haptic interaction can enable an audience to take part in communication and discussions with those viewing the same program.

AR makes spectator sports more entertaining because of the additional information provided to viewers. An AR-based sports system involves two major steps: homography estimation and automatic player detection, as described in ref. [ 43 ]. A marker-based approach using image patterns was designed for homography estimation, and a markerless approach that works on natural images with distinctive local patterns was designed for automatic player detection. For baseball fields, contours must be extracted and geometric primitives estimated. For player detection, an algorithm based on AdaBoost learning was used, which is both fast and robust, although it sometimes failed to detect players. The system, which operates on still images captured with mobile phones, was implemented on mobile platforms. It accepted images taken from different angles, with large variations in player size and pose and in the lighting conditions on the field. Photos were taken with an Apple iPhone 3GS, and a PC with an Intel 2.67 GHz Core i7 CPU was used to test the algorithm. Table 1 further discusses AR games, their advantages, and the technology used to make them.
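The homography-estimation step can be illustrated with a standard direct linear transform (DLT) solved by SVD from four or more point correspondences between the camera image and the field model. This is a generic textbook sketch, not the marker-based method of ref. [ 43 ]; the point values are illustrative.

```python
import numpy as np

def estimate_homography(src, dst):
    """Solve for the 3x3 matrix H with dst ~ H @ src (homogeneous),
    from four or more (x, y) -> (u, v) correspondences, via DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the null vector of A (last right-singular vector) is the flattened H
    _, _, vt = np.linalg.svd(np.array(A, float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pt):
    """Map an image point through H, with perspective division."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```

With the homography in hand, detected player positions in the image can be mapped onto the field model (or vice versa) so that statistics are drawn at the correct spots regardless of camera angle.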

AR in medicine

As mentioned previously, the use of AR to enhance natural environments and alter perceptions of reality is being exploited in fields such as entertainment, education, retail, and marketing [ 52 , 53 , 54 ], and it is also being applied to medicine. AR has been defined as a real-time indirect or direct view of the surrounding world that has been augmented with computer-generated virtual information [ 55 ]. AR is indeed highly beneficial to the medical field; however, considerable effort and care must be taken to reap its benefits. The use and function of AR in medicine depend on the skill of the technician, as well as that of the doctors and medical teachers involved. AR systems are also extremely costly compared with conventional medical methodologies. Hence, to reap maximum benefits, AR systems must be deployed with significant care and accuracy [ 56 , 57 ].

Ref. [ 58 ] discusses the importance of AR and VR in medical anatomy and the health sciences. The purpose of this research was to assess whether medical students who used VR and AR learned more effectively than those who used other mobile applications. Fifty-nine participants were randomly assigned to three learning modes: VR, tablet-based applications, and AR. In VR, the user's senses are fully immersed in a virtual environment that mimics the properties of the real world through HMDs, stereo headphones, and high-resolution motion tracking systems. AR, by contrast, superimposes digital models on the real world, while 3D tablet displays are used mainly for user interaction. Using these teaching modes, a lesson on skull anatomy was conducted, and the students' anatomical knowledge was assessed through repeated experiments with different lessons. Both AR and VR proved more beneficial, as they promoted increased engagement of the medical students.

On the other hand, ref. [ 59 ] presented a review evaluating the past, present, and future use of computer-aided AR in surgery. Computer-aided AR, also known as computer-aided drawing, is a tool that allows the user to make accurate data models using AR. The review centered on the different types of surgeries where AR can be used as a display or a model. A systematic review of the effectiveness of AR applications in medical training also yields a promising outlook [ 60 ]. The training applications were assigned to three categories: echocardiography training, laparoscopic surgery, and AR and VR training for neurosurgical procedures. This literature suggests that although AR has gained scientific interest, there is no recorded evidence yet that AR can transfer information to the user seamlessly and reliably.

Medical displays and accurate medical imaging technology are significant because they enable physicians to fully exploit rich sources of heterogeneous intraoperative and preoperative data (Fig. 6 depicts an intraoperative brain imaging system). Ref. [ 61 ] discussed these advanced medical displays and established relations among subsets of this body of work, to give an idea of the challenges that may arise when deploying such displays. The authors discussed AR technologies such as HMD-based AR systems, augmented optics, augmented windows, monitors, and endoscopes, and their specific medical applications. The study acknowledged the solutions AR can provide and encouraged its use in the clinical workflow. HMD-based AR headsets consist of OLED microdisplays on which AR systems, such as augmented optics and windows, can run.

figure 6

Brain imaging and brain surgery using AR

Surgeons are often the earliest adopters of technical tools that can enhance the surgical and patient experience. Ref. [ 62 ] discussed the applications and limitations of a digital surgical environment that uses AR and VR. The applications include operative benefits, broadcasting and recording of surgery, anatomical evaluation, telementoring, and medical education. Limited battery life, bulky devices, and cumbersome cables are the technology's current limitations. However, significant progress is expected in coming generations of these tools, potentially leading to their increased use as surgical loupes.

An ophthalmic AR environment was developed to allow more accurate laser treatment of ophthalmic diseases, telemedicine, and real-time image analysis, measurement, and comparison [ 63 ]. The system was designed around a standard slit-lamp biomicroscope. A camera interfaced with the biomicroscope captured the images, which were then sent to a video capture board. The images were processed on a single computer workstation, with fast registration algorithms applied. The computer output was a VGA-resolution video display with adjustable contrast and brightness, attached to the oculars of the slit-lamp microscope.

A medical AR system performs three tasks: camera or instrument tracking, patient registration, and creation of preoperative planning data. A video see-through system for medical AR, based on the VectorVision image-guided surgery (IGS) device, was described in ref. [ 64 ], and the authors demonstrated that their system could perform all three tasks. VectorVision is an optical tracking IGS platform consisting of two infrared cameras, a PC, and a touch-screen display; VectorVision Link is a TCP/IP-based interface integrated with the VectorVision cranial system. In tests, the system generated an augmented video stream at an average frame rate of 10 fps from a 640 × 480 pixel webcam, with a latency of approximately 80 ms, and the camera tracking method exhibited good accuracy. The authors thus provided a novel approach for realizing AR applications in the medical field.

A specific technology used extensively for visualization is the HMD. Ref. [ 65 ] discussed AR visualization performed with a head-mounted operating binocular (varioscope) required in the field of medicine; the head-mounted operating binocular is a somewhat modified version of the HMD. The varioscope was adopted because it is a miniature and cost-effective system that can be conveniently deployed for visualization. The study presented a basic design of the modified HMD, along with the results of a detailed laboratory study on photogrammetric calibration of the varioscope's computer display to a real-world scene. The location of a position measurement probe of an optical tracking system was transformed to the binocular display with a real-world error of less than 1 mm in 56% of all cases; in the remaining cases, the error was less than 2 mm. Sufficient accuracy was thus achieved for a wide range of CAS applications.

A haptic AR environment was used to design cranial implants, as described in ref. [ 66 ]. A haptic AR environment conveys the sense of touch to the user; 'haptic', in general, refers to any technology that provides the experience of touch through motions, vibrations, and forces. Data obtained from patient CT scans were used to create virtual 3D cranial models that were superimposed over the users' hands. Through such an environment, the medical cranial sculptor could both feel and view the model. The personal augmented reality immersive system (PARIS), a new prototype display system, was used alongside the models; PARIS creates the illusion of a 3D tool that the sculptor can hold. Neurosurgeons, paleontologists, and radiologists have expressed interest in using the system.

Ref. [ 67 ] presented an AR system for thermal ablation of the liver (Fig. 7). The system was first evaluated on an abdominal phantom and subsequently on patients in an operating room. The preoperative image of the patient and the position of the needle manipulated by the practitioner were registered in a common coordinate system; feature points were extracted and processed through validated algorithms. The experiment showed that a predefined target could be reached with an accuracy of 2 mm in an insertion time of under a minute. These results inspired confidence that the system provides accurate guidance and information throughout the patient's complete breathing cycle.

figure 7

Hepatic surgery using AR

A rehabilitation system for hand and arm movement was implemented through a spatial AR system, as described in ref. [ 68 ]. The system created a virtual audio and visual experience by tracking the subject's hand during rehabilitation tasks involving elbow, shoulder, and wrist movements. Real-time data and photos were sent to the clinic for further evaluation and assessment. Laboratory application of the technology demonstrated that the system was functional. The system made it possible to incorporate real objects into tasks as desired, controlled for external objects, and ensured the safety and comfort of patients. A further advantage was that a therapist could modify the tasks based on a patient's needs. The system outlined a performance-driven exercise program for stroke rehabilitation.

Apart from medical teaching and anatomy, medical surgery is also an essential use case of AR in medicine. Table 2 shows essential examples of AR in surgery.

AR in retail

AR significantly impacts how companies compete in a technologically advancing environment. As its acceptance has grown over the years, AR has heavily influenced brand awareness and expansion. The concept of AR in retail is anything but new. According to ref. [ 77 ], some of the largest firms, such as Coca-Cola, McDonald's, and General Electric, have invested in AR for better retail experiences and more innovative ways of marketing their products. Coca-Cola's sales department collaborated with Augment to build AR software that helps visualize how coolers would look in retail stores, helping B2B customers make better product choices. Trigger developed an AR app for McDonald's, built on the Vuforia platform, that brings a select few animated figures and characters to life for an interactive children's experience. The app's main aim was to feature characters from DreamWorks movies, such as How to Train Your Dragon and Mr. Peabody and Sherman, on an AR platform so that kids could experience healthy fun: the surface around the Happy Meal box would come to life, and a garden filled with cherries, apples, tomatoes, and carrots would emerge.

The best way to compete healthily in the market is to build strong customer relations and gain loyalty by enhancing engagement with products. Ref. [ 77 ] discusses three types of consumer engagement facilitated by AR. User-brand engagement occurs between a customer and the product he or she wishes to buy; this engagement can be made highly immersive, allowing users to manipulate and interact with the technology. User-user engagement helps customers interact with each other through AR content; they can modify each other's digital data, strengthening their bond as well as their individual relations with the company. User-bystander engagement enables customers to create artifacts of their AR experience and share them on social platforms, thereby advertising the product to the company's benefit.

As mentioned in ref. [ 78 ], AR has expanded into various forms such as HMDs, mobile applications, contact lenses, and smart devices. One such smart device is the Memory Mirror, set up by Neiman Marcus, which lets customers view outfits from different angles and compare selected outfits side by side (Fig. 8).

figure 8

a AR Use by Coca-Cola (source: augment.com ); b AR Applied to McDonald’s Happy Meal box (source: triggerglobal.com )

It has been demonstrated that AR can help build customer relations and boost sales by reducing the risks customers face while purchasing a product. Ref. [ 79 ] discusses how AR can improve customer insight, make the shopping experience enjoyable, and reduce customer-perceived risk; perceived risk is the uncertainty customers face, or the negative outcomes they might incur, from the purchase of a product. A research model was proposed indicating that AR can indeed help reduce such risks, across six dimensions: social, financial, psychological, performance, physical, and time. The paper thus assumes that AR can reduce perceived risks, though empirical proof has yet to be provided.

According to ref. [ 80 ], retailers have lost sales to online shopping over the years. However, with the introduction of AR, retailers can reinvent the customer experience and make it far more interesting than traditional shopping. The study also centered on price optimization: loyalty programs help retailers track customer identities and provide customers with discounts in return for their data, so integrating AR with loyalty program data could help retailers optimize product prices for a specific customer. Such personalized shopping experiences could improve the customer experience, and such AR systems could help customers navigate easily to products they can afford. Thus, with the ease brought to the shopping experience, customers might prefer going to stores over shopping online. At the same time, the use of AR in online shopping apps has also been gaining momentum, for example in a jewellery app, as shown in Fig. 9.

figure 9

AR-based mobile app for online shopping

Ref. [ 81 ] studied the impact of an AR-based smartphone application on customers' purchase intentions, using an app launched by Synsam, a Swedish eyewear retailer, as shown in Fig. 10. The app let customers try on different eyeglasses without physically putting them on. The survey investigated whether the digital experience positively affected the decision to purchase and the determinants that led to it. Many people reported the experience as very helpful and fun: female participants enjoyed trying different pieces of eyewear using the selfie feature, whereas male participants were more fascinated by the technological side of AR. Some felt that going to a store and physically trying on eyewear before buying was better; nevertheless, a significant percentage agreed that AR is a useful technology for buying products.

figure 10

Virtually trying on glasses using the Synsam AR application

The introduction of virtual fitting rooms (VFRs) has taken AR to new heights [ 82 ]. VFRs enable a person to try on outfits without being present in the store; the concept can also be used for in-store shopping, making the customer experience fun and easy. A combination of technologies, such as natural interaction (NI), 3D scanning, 3D models, and omnipresent social networking features, has made the idea of VFRs a considerable success. NI enables users to interact with the augmented environment using hand gestures, speech, and body language. 3D sensors scan a user's body to create a 3D avatar-like model, which is then integrated with other data, such as gender and different retailers, and customers can be granted access to a variety of clothing, creating a real-time shopping experience. As shopping can be time-consuming and exhausting, such innovative approaches can make the customer experience more interactive and fun and less tiring. Furthermore, the biggest obstacle to online shopping, uncertainty about whether a garment will fit, can be eliminated by such future VFRs. Figure 11 depicts the possible look of a VFR.

figure 11

VFR (source: ref. [ 82 ])

Ref. [ 83 ] presented an AR-based virtual trial room that allowed the user to try on clothes virtually. The study aimed to enhance the online shopping experience and reduce time spent on in-store shopping by decreasing queuing time. In this method, a human was detected from the background using light variations. The desired system comprised the following steps: frame extraction, blurring, red-green-blue (RGB) to hue-saturation-value (HSV) conversion, current-frame subtraction, thresholding, binary large object (blob) detection, gesture estimation, and post-processing (Fig. 12). Relevant frames and data were extracted from the camera input, after which a Gaussian blur was applied to remove unnecessary image noise. This was followed by RGB-to-HSV conversion, for greater accuracy, and image registration, where different sets of data were transformed into one coordinate system. Frame subtraction was then performed to reduce background noise and emphasize foreground details. The gesture estimation step familiarized the system with general gesture functionalities such as 'try next cloth', 'like', or 'dislike'. The final post-processing step added finishing touches to the output. The model could be further enhanced with social features enabling users to take pictures and share them with friends or family.

figure 12

Working of virtual trial room based on steps mentioned in ref. [ 83 ]
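The frame-subtraction and thresholding steps of the trial-room pipeline can be sketched as follows. This is a minimal numpy illustration, not the implementation of ref. [ 83 ]; the threshold value is an illustrative assumption.

```python
import numpy as np

def segment_foreground(frame, background, thresh=30):
    """Binary foreground mask from the absolute frame difference:
    pixels that changed by more than `thresh` are taken as the person."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return (diff > thresh).astype(np.uint8)

def bounding_box(mask):
    """(top, left, bottom, right) box around foreground pixels, or None,
    standing in for the blob-detection step that localizes the person."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return ys.min(), xs.min(), ys.max(), xs.max()
```

In the full pipeline, the mask would be computed on the blurred HSV frame, and the bounding box (or a labelled blob) would anchor the virtual garment over the detected person before gesture estimation and post-processing.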

Ref. [ 84 ] discussed how AR has impacted the customer experience; with the aim of integrating psychological, technological, and behavioral perspectives, an embodiment-presence-interactivity cube was proposed based on a variety of existing technologies. AR plays an important role at every stage of the customer experience. In the pre-experience stage, customers obtain detailed product information, which enhances their decision-making. In the experience stage, shopping is made more immersive and enjoyable. The post-experience stage enables customers to evaluate their experience, create content, and share it with others, leading to customer loyalty and brand awareness.

The effects of AR on customer behavior when purchasing a fashion product are discussed in ref. [ 85 ]. An experiment involving 162 participants aged between 18 and 35 was conducted using the AR app of a makeup retail brand, which enabled participants to apply different makeup to their faces using a virtual mirror. Each session included interaction with the app, after which customers were asked about their experience and purchase intention. Participants who experienced augmentation shared positive feedback on both, and hedonically motivated customers showed more positive emotional responses. The model could be extended to include more features and yield outcomes beyond purchase intention, such as customer satisfaction and loyalty.

Ref. [ 86 ] discussed how fashion retail has evolved and how the emergence and growth of technology is expanding the fashion retail market. In the past, retailers had to create large portfolios and maintain big store spaces to gain users' attention; the introduction of online shopping was then a revolutionary step that changed the shopping experience. At present, with the advancement and acceptance of AR, brands are including AR in their strategies to remain at the top. The authors also discussed omni-channel retailing, a cross-channel customer experience through which a user can access multiple retail channels on multiple devices. Ref. [ 87 ] likewise discussed AR's penetration of the fashion industry and its resulting technological growth, and reviewed the acceptance of AR technologies. The technology acceptance model (TAM) was the first model to focus on why customers may accept or reject a new technology, and it has been used repeatedly in research because it is very helpful in determining user acceptance. Ref. [ 88 ] also reviewed retailers' implementation of AR, its applications, and consumer acceptance; the TAM was used to determine user acceptance and to highlight the need for efficient, consumer-friendly devices for future retail growth.

Table 3 lists AR applications and devices created for the retail experience, surveyed by their respective developers, along with the response each technology received.

Applying AR in the fight against the COVID-19 crisis

The COVID-19 virus has spread across the entire world, causing a significant number of deaths and drastically changing the lives of those affected. Many countries have gone into lockdown to prevent the spread of the virus, resulting in severe economic damage: many businesses have shut down, and schools have closed. Numerous measures have been taken to reduce the effects of the virus and, expectedly, the scientific community is developing technological methods that can benefit society during these trying times. For example, a framework for change was proposed for medical education [ 96 ], and ref. [ 97 ] discussed the monitoring of hospitals and clinics through technological methods. AR can be very useful for navigating life during the crisis. Sodar, an AR application launched by Google, supports social distancing by helping individuals maintain a distance of 2 m from other people (Fig. 13). Such an application will prove very useful once lockdowns end and people start going outside again.

figure 13

a Launch of the Sodar application; b 2-m radius displayed after the camera is pointed toward an area
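As a back-of-the-envelope illustration of how such a distance ring could be rendered (this is a hedged sketch under a simple pinhole-camera model, not Sodar's actual method), the pixel coordinates of a 2-m circle drawn on the ground plane can be computed from the camera height and focal length. The camera height, focal length, and ring placement below are hypothetical values:

```python
import math

def project_ground_point(x, z, cam_height, focal_px):
    """Project a ground-plane point (x meters to the right, z meters ahead)
    into pixel offsets from the image center, for a horizontal camera held
    cam_height meters above the ground (ideal pinhole model)."""
    if z <= 0:
        return None  # point is behind the camera, not visible
    u = focal_px * x / z           # horizontal pixel offset from center
    v = focal_px * cam_height / z  # vertical pixel offset below center
    return u, v

def ring_pixels(radius_m=2.0, cam_height=1.5, focal_px=800, n=16):
    """Sample n points of a radius_m circle on the ground, centered 3 m
    ahead of the camera, and return their projected pixel coordinates."""
    cx, cz = 0.0, 3.0  # hypothetical ring center on the ground plane
    pts = []
    for i in range(n):
        a = 2 * math.pi * i / n
        p = project_ground_point(cx + radius_m * math.cos(a),
                                 cz + radius_m * math.sin(a),
                                 cam_height, focal_px)
        if p is not None:
            pts.append(p)
    return pts
```

Connecting the sampled points with line segments in the camera image would draw the perspective-correct ellipse that a 2-m ground circle appears as; a production app would instead obtain the ground plane and camera pose from an AR framework's tracking.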

Case Western Reserve University, in collaboration with Cleveland Clinic, developed an AR app called HoloAnatomy, which helps medical students learn about the human body in 3D using Microsoft HoloLens (Fig. 14). Students can study the smallest details of the human body without having to dissect cadavers. Such online educational AR apps can be extremely useful in the current COVID-19 crisis.

figure 14

HoloAnatomy AR system (source: CWRU website)

AR can also be used to develop long-distance healthcare systems for managing the pain and wellbeing of patients suffering from chronic pain and other health issues arising from the COVID-19 outbreak. Telemedicine and web-based systems are the prevalent existing approaches: telemedicine covers short message services, video conferencing, and telephone consultation, while web-based systems, such as PAIN OUT in Europe and CHOIR in the United States [ 98 ], make it possible to review patients before appointments. However, these depend on customer input and lack functionality. AR instead maps a projection onto the physical world to improve perception and give doctors a clearer view, facilitating the speedy recovery of patients [ 99 ].

AR can be beneficial in the surgical field as well. Virtual technology is helping save lives and safeguard surgical practices during the pandemic; the Proximie platform is one such example [ 100 ]. Proximie connects surgeons to a live environment through which experts can support their colleagues and supervise procedures, and its AR telehealth solution is used for conducting multidisciplinary meetings to assess patients. The platform also offers a surgical library with useful information on surgeries. Hence, it is an essential platform and a fitting example of the usefulness of AR in the current pandemic.

However, a more significant challenge awaits society at the end of the pandemic: the road to recovery will be extremely difficult. AR software and hardware can be used to mitigate these effects even after the pandemic is long over. AR can impart practical knowledge by combining the processes of learning and implementation. Using an AR headset, a skilled technician can seamlessly guide fellow workers and teach students. Companies can also train their workforce using AR, thereby improving their workflow and the economy. For example, the Microsoft HoloLens 2 AR headset can be used by companies to guide their employees (Fig. 15). It provides hands-free visual assistance and data, along with robust security and integration with other Microsoft apps. Companies that depend on on-site technical maintenance for their cash flow need AR solutions as well: AR-assisted service prevents physical contact and encourages social distancing, thereby satisfying the requirements of the present and the near future.

figure 15

Dynamics 365 Remote Assist on the Microsoft HoloLens 2 AR headset

Another important area where AR can prove necessary is retail, from the customer's perspective. Whether online or in-store, people rarely buy products without being sure that a particular product suits them. AR-based technologies, such as VFRs and mobile apps, enable customers to try out clothes, jewelry, makeup, sunglasses, or shoes without physically handling any of these products. Such AR-based solutions help people practice social distancing; digital and safe shopping experiences are among the current needs of customers. As explained previously regarding the various AR systems used in retail, such applications can prove very useful in the coming days, thus boosting the AR market.

Challenges and future scope

Before AR can be accepted on a large scale, it is important to note that it faces a large number of challenges that must be overcome for it to thrive. Every mature technology rests on a well-defined business model on which investments are based; for AR, however, no well-defined business model exists that can work long-term. It is also too early to evaluate the profitability of an AR-based business because the technology is still in its development stages. Further, because of the lack of AR development and application design standards, the technology faces compatibility problems across platforms and devices. Security and privacy are also major concerns in the AR industry. Poor content quality, in addition to technical software and hardware limitations in each game design, is an ongoing challenge in AR gaming. In the medical field, and for specific surgeries in particular, accuracy is of prime importance because it is essential for surgeons to have tangible information on how and when the technology is used [ 101 ]. For fashion retail, scant research exists on AR, and its impact on the industry has not yet been realized significantly; hence, many brands still hesitate to invest in AR.

Despite the numerous challenges, AR has enormous scope in the near future to transform many industries. If the above-mentioned obstacles are overcome, AR could revolutionize the entire market in every aspect. It has tremendous potential in areas such as education, medicine, military, construction, automobiles, travel, retail, art, and architecture [ 102 ]. AR is a futuristic technology that will change and reshape a number of business strategies developed by organizations. With increasing market competition, customers trust only companies that offer good-quality products and extraordinary service. This means that many companies will prioritize incorporating AR, as it promises a personalized experience with products, which would attract more customers. It is also conjectured that mobile AR technology, which will rise in the coming years, will lead to greater social acceptance: as many people are familiar with operating mobile phones, it will be easier for them to adapt to the new technology. Further, as mentioned in the previous section, AR can be very helpful in the current COVID-19 crisis, as it would be in similar situations that may arise in the future. Figure 16 presents an estimate of the projected AR/VR market in different sectors in 2025; however, this report was produced by Goldman Sachs in 2016. Considering the present COVID-19 situation and the likely post-lockdown scenario, it appears that people will still hesitate to use entertainment, retail, or medical facilities freely. This necessitates the use of AR to provide a fully immersive experience to customers in almost every field. Hence, it would be fair to say that the actual AR/VR figures for 2025 could exceed Goldman Sachs' 2016 prediction.

figure 16

Estimated scenario of AR/VR in different sectors in 2025 (source: Goldman Sachs Global Investment Research, 2016)

AR provides unique entertainment options that are unavailable with common types of digital media. With new research, future AR systems are bound to be significantly more advanced than those currently available. With AR, interactivity and content quality are noticeably different, and personalization is possible. Although the technology has been around for a considerable amount of time, it has not been fully and functionally incorporated into day-to-day activities such as retail and medicine, owing to concerns about technology, social acceptance, and usability. Upon overcoming these challenges, however, AR has the ability to redefine gaming through enhanced real-time content. The use of AR in medicine may change the way surgeries are performed, and medical training and post-surgical treatments can be performed with ease using AR displays. As consumers desire innovations that simplify shopping experiences and make them more comfortable, they are likely to welcome AR with excitement. We have also studied the existing AR solutions being implemented and have discussed their importance to recovery from the pandemic. Hence, AR is playing a very important role in providing users with a technology experience like never before in almost all areas.

The most recent inventions are proof of the growing improvements in AR. AR in gaming can be seen in Pokémon Go, which also makes use of GPS and is therefore a location-based application. Snapchat, on the other hand, is an example of a marker-based application, which uses image recognition in addition to AR. Many AR software development kits (SDKs) exist, and the factors determining the choice of an appropriate SDK include cost, supported platforms, image recognition technology, and the possibility of 3D tracking and recognition. Unity and ARToolKit are among the engines that can be used to create AR apps; Google provides ARCore, and Facebook offers Spark AR Studio. The instances discussed here show the growing market base of AR systems and their importance in the market. Hence, the importance of a review that provides insight into three major fields where AR systems are being used cannot be overemphasized.

Availability of data and materials

All relevant data and material are presented in the main paper.

Abbreviations

  • AR: Augmented reality

  • VR: Virtual reality

  • HMD: Head-mounted displays

  • HUD: Head-up display

  • GPS: Global positioning system

  • EAR: Enhanced augmented reality

  • PARIS: Personal augmented reality immersive system

  • VFR: Virtual fitting rooms

  • IV: Integral video

  • NOTES: Natural orifice transluminal endoscopic surgery

  • AVM: Arteriovenous malformations

  • NI: Natural interaction

  • RGB: Red, green, blue

  • HSV: Hue, saturation, value

  • TAM: Technology acceptance model

Shah G, Shah A, Shah M (2019) Panacea of challenges in real-world application of big data analytics in the healthcare sector. J Data Inf Manage 1(3):107–116. https://doi.org/10.1007/s42488-019-00010-1


Pandya R, Nadiadwala S, Shah R, Shah M (2020) Buildout of methodology for meticulous diagnosis of K-complex in EEG for aiding the detection of Alzheimer's by artificial intelligence. Augment Hum Res 5(1):3. https://doi.org/10.1007/s41133-019-0021-6

Silva R, Oliveira JC, Giraldi GA (2003) Introduction to augmented reality. National Laboratory for Scientific Computation, Av Getulio Vargas, Petropolis

Kundalia K, Patel Y, Shah M (2020) Multi-label movie genre detection from a movie poster using knowledge transfer learning. Augment Hum Res 5(1):11. https://doi.org/10.1007/s41133-019-0029-y

Chavan SR (2016) Augmented reality vs. virtual reality: differences and similarities. https://www.semanticscholar.org/paper/Augmented-Reality-vs.-Virtual-Reality%3A-Differences-Chavan/7dda32ae482e926941c872990840d654f9e761ba . Accessed 18 Feb 2020.


Gandhi M, Kamdar J, Shah M (2020) Preprocessing of non-symmetrical images for edge detection. Augment Hum Res 5(1):10. https://doi.org/10.1007/s41133-019-0030-5

Patel D, Shah Y, Thakkar N, Shah K, Shah M (2020) Implementation of artificial intelligence techniques for cancer detection. Augment Hum Res 5(1):6. https://doi.org/10.1007/s41133-019-0024-3

Ahir K, Govani K, Gajera R, Shah M (2020) Application on virtual reality for enhanced education learning, military training and sports. Augment Hum Res 5(1):7. https://doi.org/10.1007/s41133-019-0025-2

Leena Lakra P, Verma P (2017) Augmented reality in marketing: role and applications. 8(11):74–81. https://www.academia.edu/36282179/Augmented_Reality_in_Marketing_Role_and_Applications . Accessed 18 Feb 2020.

Parekh V, Shah D, Shah M (2020) Fatigue detection using artificial intelligence framework. Augment Hum Res 5(1):5. https://doi.org/10.1007/s41133-019-0023-4

Razek ARA, van Husen C, Pallot M, Richir S (2018) A comparative study on conventional versus immersive service prototyping (VR, AR, MR). Paper presented at the virtual reality international conference - Laval virtual. ACM, Laval, April 2018. https://doi.org/10.1145/3234253.3234296

Lee K (2012) Augmented reality in education and training. TechTrends 56(2):13–21. https://doi.org/10.1007/s11528-012-0559-3

Hincapié M, Caponio A, Rios H, Mendívil EG (2011) An introduction to augmented reality with applications in aeronautical maintenance. Paper presented at the 13th international conference on transparent optical networks. IEEE, Stockholm, 26-30 June 2011. https://doi.org/10.1109/ICTON.2011.5970856

Livingston MA, Rosenblum LJ, Brown DG, Schmidt GS, Julier SJ, Baillot Y et al (2011) Military applications of augmented reality. In: Furht B (ed) Handbook of augmented reality. Springer, New York, pp 671–706. https://doi.org/10.1007/978-1-4614-0064-6_31


Jani K, Chaudhuri M, Patel H, Shah M (2020) Machine learning in films: an approach towards automation in film censoring. J Data, Inf Manage 2(1):55–64. https://doi.org/10.1007/s42488-019-00016-9

Nayyar A, Mahapatra B, Le DN, Suseendran G (2018) Virtual reality (VR) & augmented reality (AR) technologies for tourism and hospitality industry. Int J Eng Technol 7(2):156–160. https://doi.org/10.14419/ijet.v7i2.21.11858

Nam TJ, Lee W (2003) Integrating hardware and software: augmented reality based prototyping method for digital products. In: Extended abstracts on human factors in computing systems. ACM, Lauderdale, pp 956–957

Wagner D, Schmalstieg D (2003) First steps towards handheld augmented reality. Paper presented at the 7th IEEE international symposium on wearable computers. IEEE, White Plains, 21-23 October 2003. https://doi.org/10.1109/ISWC.2003.1241402

Huang ZP, Li WK, Hui P, Peylo C (2014) CloudRidAR: a cloud-based architecture for mobile augmented reality. Paper presented at 2014 workshop on mobile augmented reality and robotic technology-based systems. ACM, Bretton Woods, June 16 2014. https://doi.org/10.1145/2609829.2609832

Rolland J, Hua H (2005) Head-mounted display systems. In: Johnson RB, Driggers RG (eds) Encyclopedia of optical engineering. Marcel Dekker, New York, pp 1–14

Butterworth J, Davidson A, Hench S, Olano MT (1992) 3DM: a three dimensional modeler using a head-mounted display. Paper presented at the 1992 symposium on interactive 3D graphics. ACM, Cambridge, June 1992. https://doi.org/10.1145/147156.147182

Harada T, Furuya Y (2013) Head-up display device. US Patent 7,528,798, 16 May 2005

Bimber O, Raskar R (2005) Spatial augmented reality. Paper presented at the 3rd IEEE/ACM international symposium on mixed and augmented reality, IEEE, Arlington, 5 November 2004. http://pages.cs.wisc.edu/~dyer/cs534/papers/SAR.pdf

Talaviya T, Shah D, Patel N, Yagnik H, Shah M (2020) Implementation of artificial intelligence in agriculture for optimisation of irrigation and application of pesticides and herbicides. Artif Intell Agric 4:58–73. https://doi.org/10.1016/j.aiia.2020.04.002

Jha K, Doshi A, Patel P, Shah M (2019) A comprehensive review on automation in agriculture using artificial intelligence. Artif Intell Agric 2:1–12. https://doi.org/10.1016/j.aiia.2019.05.004

Von Itzstein GS, Billinghurst M, Smith RT, Thomas BH (2017) Augmented reality entertainment: taking gaming out of the box. In: Lee N (ed) Encyclopedia of computer graphics and games. Springer, Cham, pp 1–9. https://doi.org/10.1007/978-3-319-08234-9_81-1

Kakkad V, Patel M, Shah M (2019) Biometric authentication and image encryption for image security in cloud framework. Multiscale Multidiscip Model Exp Des 2(4):233–248. https://doi.org/10.1007/s41939-019-00049-y

Patel H, Prajapati D, Mahida D, Shah M (2020) Transforming petroleum downstream sector through big data: a holistic review. J Petrol Explor Prod Technol 10(6):2601–2611. https://doi.org/10.1007/s13202-020-00889-2

Azuma R, Baillot Y, Behringer R, Feiner S, Julier S, MacIntyre B (2001) Recent advances in augmented reality. IEEE Comput Graph Appl 21(6):34–47. https://doi.org/10.1109/38.963459

Sutherland IE (1968) A head-mounted three dimensional display. Paper presented at the fall joint computer conference. ACM, San Francisco December 9-11, 1968. https://doi.org/10.1145/1476589.1476686

Herbst I, Braun AK, McCall R, Broll W (2008) TimeWarp: interactive time travel with a mobile mixed reality game. Paper presented at the 10th international conference on human computer interaction with mobile devices and services. ACM, Amsterdam, 2-5 September 2008. https://doi.org/10.1145/1409240.1409266

Nilsen T, Linton S, Looser J (2004) Motivations for AR gaming. In: Proc fuse new Zealand game developers conf, Dunedin, New Zealand, 26-29 June 2004, pp 86–93 https://dl.acm.org/doi/10.1109/ISMAR.2005.22

Magerkurth C, Cheok AD, Mandryk RL, Nilsen T (2005) Pervasive games: bringing computer entertainment back to the real world. Comput Entertain 3(3):4. https://doi.org/10.1145/1077246.1077257

Abowd GD, Mynatt ED (2000) Charting past, present, and future research in ubiquitous computing. ACM Trans Comput-Hum Int 7(1):29–58. https://doi.org/10.1145/344949.344988

Szalavári Z, Schmalstieg D, Fuhrmann A, Gervautz M (1998) “Studierstube”: an environment for collaboration in augmented reality. Virtual Real 3(1):37–48. https://doi.org/10.1007/BF01409796

Liarokapis F, Macan L, Malone G, Rebolledo-Mendez G, de Freitas S (2009) Multimodal augmented reality tangible gaming. Vis Comput 25(12):1109–1120. https://doi.org/10.1007/s00371-009-0388-3

Winkler T, Ide M, Herczeg M (2008) Mobile co-operative game-based learning with moles. Paper presented at SITE 2008, society for information technology & teacher education international conference. AACE, Chesapeak, March 2008

Schmitz B, Klemke R, Specht M (2012) An analysis of the educational potential of augmented reality games for learning. Paper presented at the 11th world conference on mobile and contextual learning. mLearn 2012, Helsinki, October 16-18 2012

Fotaris P, Pellas N, Kazanidis I, Smith P (2017) A systematic review of augmented reality game-based applications in primary education. Paper presented at the 11th European conference on games based learning. The FH JOANNEUM University of Applied Science, Graz, 4-5 October 2017

Yan DT, Hu HS (2017) Application of augmented reality and robotic technology in broadcasting: a survey. Robotics 6(3):18. https://doi.org/10.3390/robotics6030018

Mahmood Z, Ali T, Muhammad N, Bibi N, Shahzad I, Azmat S (2017) EAR: enhanced augmented reality system for sports entertainment applications. KSII Trans Int Inform Syst 11(12):6069–6091. https://doi.org/10.3837/tiis.2017.12.021

Cha J, Ryu J, Kim S, Eom S, Ahn B (2004) Haptic interaction in realistic multimedia broadcasting. Paper presented at the 5th pacific rim conference on advances in multimedia information processing. ACM, Tokyo, November 30-December 3 2004. https://doi.org/10.1007/978-3-540-30543-9_61

Lee SO, Ahn SC, Hwang JI, Kim HG (2011) A vision-based mobile augmented reality system for baseball games. In: Shumaker R (ed) Virtual and mixed reality - new trends. International conference, virtual and mixed reality 2011, held as part of HCI international 2011. Lecture notes in computer science, vol 6773. Springer, Berlin, Heidelberg

Ulbricht C, Schmalstieg D (2003) Tangible augmented reality for computer games. Paper presented at the 3rd IASTED international conference on visualization, imaging and image processing. ACTA, Benalmádena, 8-10 September 2003

Rohs M (2007) Marker-based embodied interaction for handheld augmented reality games. J Virtual Real Broadcast 4(5):618–639

Lyons K, Gandy M, Starner T (2000) Guided by voices: an audio augmented reality system. Paper presented at the International conference on auditory display. Atlanta, April 2-5 2000

Linaza MT, Gutierrez A, García A (2013) Pervasive augmented reality games to experience tourism destinations. In: Xiang Z, Tussyadiah I (eds) Information and communication technologies in tourism 2014. Springer, Cham, pp 497–509. https://doi.org/10.1007/978-3-319-03973-2_36

Chandaria J, Thomas GA, Stricker D (2007) The MATRIS project: real-time markerless camera tracking for augmented reality and broadcast applications. J Real-Time Image Process 2(2–3):69–79. https://doi.org/10.1007/s11554-007-0043-z

Han JG, Farin D, de With PHN (2007) A real-time augmented-reality system for sports broadcast video enhancement. Paper presented at the 15th international conference on multimedia. ACM, Augsburg, September 2007. https://doi.org/10.1145/1291233.1291306

Boyle E, Curran T, Demiris A, Klein K, Garcia C, Malerczyk C et al (2002) The creation of mpeg-4 content and its delivery over DVB infrastructure, pp 1–3

Kim S, Choi B, Jeong Y, Hong J, Chung J (2012) An architecture of augmented broadcasting service for next generation smart TV. Paper presented at the IEEE international symposium on broadband multimedia systems and broadcasting. IEEE, Seoul, June 27-29 2012. https://doi.org/10.1109/BMSB.2012.6264289

Sukhadia A, Upadhyay K, Gundeti M, Shah S, Shah M (2020) Optimization of smart traffic governance system using artificial intelligence. Augment Hum Res 5(1):13. https://doi.org/10.1007/s41133-020-00035-x

Shah K, Patel H, Sanghvi D, Shah M (2020) A comparative analysis of logistic regression, random forest and KNN models for the text classification. Augment Hum Res 5(1):12. https://doi.org/10.1007/s41133-020-00032-0

Shah D, Dixit R, Shah A, Shah P, Shah M (2020) A comprehensive analysis regarding several breakthroughs based on computer intelligence targeting various syndromes. Augment Hum Res 5(1):14. https://doi.org/10.1007/s41133-020-00033-z

Carmigniani J, Furht B (2011) Augmented reality: an overview. In: Furht B (ed) Handbook of augmented reality. Springer, New York. https://doi.org/10.1007/978-1-4614-0064-6_1

Patel D, Shah D, Shah M (2020) The intertwine of brain and body: a quantitative analysis on how big data influences the system of sports. Ann Data Sci 7(1):1–16. https://doi.org/10.1007/s40745-019-00239-y

Panchiwala S, Shah M (2020) Comprehensive study on critical security issues and challenges of the IoT world. J Data, Inf Manage. https://doi.org/10.1007/s42488-020-00030-2

Moro C, Štromberga Z, Raikos A, Stirling A (2017) The effectiveness of virtual and augmented reality in health sciences and medical anatomy. Anat Sci Educ 10(6):549–559. https://doi.org/10.1002/ase.1696

Shuhaiber JH (2004) Augmented reality in surgery. Arch Surg 139(2):170–174. https://doi.org/10.1001/archsurg.139.2.170

Barsom EZ, Graafland M, Schijven MP (2016) Systematic review on the effectiveness of augmented reality applications in medical training. Surg Endosc 30(10):4174–4183. https://doi.org/10.1007/s00464-016-4800-6

Sielhorst T, Feuerstein M, Navab N (2008) Advanced medical displays: a literature review of augmented reality. J Disp Technol 4(4):451–467. https://doi.org/10.1109/JDT.2008.2001575

Khor WS, Baker B, Amin K, Chan A, Patel K, Wong J (2016) Augmented and virtual reality in surgery-the digital surgical environment: applications, limitations and legal pitfalls. Ann Transl Med 4(23):454. https://doi.org/10.21037/atm.2016.12.23

Berger JW, Leventon ME, Kikinis R (1999) Technique for creating an ophthalmic augmented reality environment. US patent 5,912,720, 2 December 1998

Fischer J, Neff M, Freudenstein D, Bartz D (2004) Medical augmented reality based on commercial image guided surgery. Paper presented at the 10th Eurographics conference on virtual environments (EGVE). ACM, Grenoble, June 2004

Birkfellner W, Figl M, Huber K, Watzinger F, Wanschitz F, Hummel J et al (2002) A head-mounted operating binocular for augmented reality visualization in medicine - design and initial evaluation. IEEE Trans Med Imaging 21(8):991–997. https://doi.org/10.1109/TMI.2002.803099


Scharver C, Evenhouse R, Johnson A, Leigh J (2004) Designing cranial implants in a haptic augmented reality environment. Commun ACM 47(8):32–38. https://doi.org/10.1145/1012037.1012059

Nicolau SA, Pennec X, Soler L, Buy X, Gangi A, Ayache N et al (2009) An augmented reality system for liver thermal ablation: design and evaluation on clinical cases. Med Image Anal 13(3):494–506. https://doi.org/10.1016/j.media.2009.02.003

Hondori HM, Khademi M, Dodakian L, Cramer SC, Lopes CV (2013) A spatial augmented reality rehab system for post-stroke hand rehabilitation. Stud Health Technol Inform 184:279–285

Nijmeh AD, Goodger NM, Hawkes D, Edwards PJ, McGurk M (2005) Image-guided navigation in oral and maxillofacial surgery. Br J Oral Maxillofac Surg 43(4):294–302. https://doi.org/10.1016/j.bjoms.2004.11.018

Tran HH, Suenaga H, Kuwana K, Masamune K, Dohi T, Nakajima S et al (2011) Augmented reality system for oral surgery using 3D auto stereoscopic visualization. Paper presented at the 14th international conference on medical image computing and computer-assisted intervention. ACM, Toronto, September 2011. https://doi.org/10.1007/978-3-642-23623-5_11

Kersten-Oertel M, Gerard I, Drouin S, Mok K, Sirhan D, Sinclair DS et al (2015) Augmented reality in neurovascular surgery: feasibility and first uses in the operating room. Int J Comput Assist Radiol Surg 10(11):1823–1836. https://doi.org/10.1007/s11548-015-1163-8

Bornik A, Beichel R, Reitinger B, Gotschuli G, Sorantin E, Leberl FW et al (2003) Computer-aided liver surgery planning: an augmented reality approach. Paper presented at the SPIE 5029, medical imaging 2003: visualization, image-guided procedures, and display. SPIE, San Diego, 15 February 2003. https://doi.org/10.1117/12.479743

Tonet O, Megali G, D'Attanasio S, Dario P, Carrozza MC, Marcacci M et al (2000) An augmented reality navigation system for computer assisted arthroscopic surgery of the knee. In: Delp SL, AM DG, Jaramaz B (eds) Medical image computing and computer-assisted intervention - MICCAI 2000. 3rd international conference, Pittsburgh, PA, USA, October 11–14 2000. Lecture notes in computer science (Lecture notes in computer science), vol 1935. Springer, Berlin. https://doi.org/10.1007/978-3-540-40899-4_121

Vosburgh KG, Estépar RSJ (2007) Natural orifice Transluminal endoscopic surgery (NOTES): an opportunity for augmented reality guidance. Stud Health Technol Inform 125:485–490

Soler L, Nicolau S, Schmid J, Koehl C, Marescaux J, Pennec X et al (2004) Virtual reality and augmented reality in digestive surgery. Paper presented at the 3rd IEEE and ACM international symposium on mixed and augmented reality. IEEE, Arlington, 24 January 2005. https://doi.org/10.1109/ISMAR.2004.64

Cabrilo I, Bijlenga P, Schaller K (2014) Augmented reality in the surgery of cerebral arteriovenous malformations: technique assessment and considerations. Acta Neurochir 156(9):1769–1774. https://doi.org/10.1007/s00701-014-2183-9

Scholz J, Smith AN (2016) Augmented reality: designing immersive experiences that maximize consumer engagement. Bus Horiz 59(2):149–161. https://doi.org/10.1016/j.bushor.2015.10.003

Poushneh A (2018) Augmented reality in retail: a trade-off between user's control of access to personal information and augmentation quality. J Retailing Consum Serv 41:169–176. https://doi.org/10.1016/j.jretconser.2017.12.010

Alimamy S, Deans KR, Gnoth J (2017) Augmented reality: uses and future considerations in marketing. In: Benlamri R, Sparer M (eds) Leadership, innovation and entrepreneurship as driving forces of the global economy. Springer, Cham. https://doi.org/10.1007/978-3-319-43434-6_62

Daiker M, Ariyachandra T, Frolick M (2017) The influence of augmented reality on retail pricing. Iss Inform Syst 18(4):116–123 https://iacis.org/iis/2017/4_iis_2017_116-123.pdf

Wakim RS, Drak Al Sebai L, Miladinovic M, Ozturkcan S (2018) A study of Swedish eyewear retailer's smartphone-based augmented reality application. In: Çebi F, Bozbura FT, Gözlü S (eds) Engineering and technology management summit 2018: engineering and technology management in the smart age. Istanbul Technical University, Istanbul

Pachoulakis I, Kapetanakis K (2012) Augmented reality platforms for virtual fitting rooms. Int J Multimed Appl 4(4):35–46. https://doi.org/10.5121/ijma.2012.4404

Gunjal K, Patil P, Phalle A, Kanade AV (2017) A survey on virtual changing room using augmented reality. Int J Adv Res Comput Eng Technol 6(10):1545–1547

Flavián C, Ibáñez-Sánchez S, Orús C (2019) The impact of virtual, augmented and mixed reality technologies on the customer experience. J Bus Res 100:547–560. https://doi.org/10.1016/j.jbusres.2018.10.050

Watson A, Alexander B, Salavati L (2018) The impact of experiential augmented reality applications on fashion purchase intention. Int J Retail Distrib Manage 48(5):433–451. https://doi.org/10.1108/IJRDM-06-2017-0117

McCormick H, Cartwright J, Perry P, Barnes L, Lynch S, Ball G (2014) Fashion retailing - past, present and future. Textile Prog 46(3):227–321. https://doi.org/10.1080/00405167.2014.973247




Virtual reality and augmented reality displays: advances and future perspectives

Kun Yin 1 , Ziqian He 1 , Jianghao Xiong 1 , Junyu Zou 1 , Kun Li 2 and Shin-Tson Wu 3,1

Published 8 April 2021 • © 2021 The Author(s). Published by IOP Publishing Ltd
Journal of Physics: Photonics, Volume 3, Number 2
Citation: Kun Yin et al 2021 J. Phys. Photonics 3 022010
DOI: 10.1088/2515-7647/abf02e


Author affiliations

1 College of Optics and Photonics, University of Central Florida, Orlando, FL 32816, United States of America

2 Goertek Electronics, 5451 Great America Parkway, Suite 301, Santa Clara, CA 95054, United States of America

Author notes

3 Author to whom any correspondence should be addressed.

Shin-Tson Wu https://orcid.org/0000-0002-0943-0440

  • Received 10 December 2020
  • Accepted 18 March 2021
  • Published 8 April 2021

Peer review information

Method: Single-anonymous. Revisions: 1. Screened for originality: Yes.


Virtual reality (VR) and augmented reality (AR) are revolutionizing the ways we perceive and interact with various types of digital information. These near-eye displays have attracted significant attention and effort owing to their ability to reconstruct the interactions between computer-generated images and the real world. With rapid advances in optical elements, display technologies, and digital processing, several VR and AR products are emerging. In this review paper, we start with a brief development history and then define the system requirements based on visual and wearable comfort. Afterward, various VR and AR display architectures are analyzed and evaluated case by case, including some of the latest research progress and future perspectives.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 license . Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

As promising next-generation displays, virtual reality (VR) and augmented reality (AR) offer an attractive new way for people to perceive the world. Unlike conventional display technologies, such as TVs, computers, and smartphones, which place a panel in front of the viewer, VR and AR displays are designed to revolutionize the interactions between the viewer, the display, and the surrounding environment. As information acquisition media, VR and AR displays bridge the gap between computer-generated (CG) images and the real world. On the one hand, VR displays generate a fully immersive virtual environment from CG images, with a field of view (FOV) large enough to provide a refreshing virtual experience without relying on the viewer's real surroundings. On the other hand, AR displays offer see-through capability that enriches the surrounding environment. By overlaying virtual images on the real world, viewers can immerse themselves in an imaginative world that blends fiction and reality.

Although several commercial VR and AR displays have emerged in recent years, the origin of this technology can be traced back to the last century [ 1 ]. With the introduction of the head-mounted display (HMD) and the virtual environment in the 1960s [ 2 , 3 ], this novel display concept was once considered state-of-the-art. However, lacking flat-panel displays, image-rendering capability, suitable sensors, wireless data transfer, and well-designed optical components, the technology, which was ahead of its time, stalled. Fortunately, with the rapid development of optics [ 4 – 6 ], high-resolution displays [ 7 ], and information technologies [ 8 ] in recent years, VR and AR are blooming again. Because of the impressive visual experience and the high degree of interaction between viewers and CG images, VR and AR are promising for widespread applications, including, but not limited to, healthcare, education, engineering design, manufacturing, and entertainment.

The goals of VR and AR displays are to provide reality-like, clear images that can simulate, merge into, or rebuild the surrounding environment without wearer discomfort [ 9 , 10 ]. Specifically, visual comfort has to meet the requirements of the human visual system based on the eye-to-brain imaging process; otherwise the viewer will perceive the scene as unreal or unclear, or even feel dizzy and nauseous. The human eye has a large FOV: about 160° horizontally and 130° vertically for each eye (monocular vision), and the overlapped binocular vision still spans 120° horizontally [ 11 ]. In parallel, the accommodation of the eye lens and the rotation of the eyeballs work together to focus on different parts of a real object with the correct depth of field while blurring the rest [ 12 ]. Therefore, to achieve visual comfort, the optical system should provide an adequate FOV, generate 3D images with matched depth and high resolution, and offer sufficient contrast and brightness, to name a few requirements. Regarding wearer comfort, a compact and lightweight structure is desired for long-term use. At present, owing to the trade-offs among different optical components and system designs, it remains challenging for VR and AR to meet all these goals. Therefore, in this paper we focus on advanced VR and AR architectures aimed at visual and wearer comfort, and on a more comprehensive understanding of the status quo.

2. Advanced architectures for VR displays

Figure 1 (a) depicts a schematic diagram of a VR optical system. For visual comfort, a broad FOV covering the human vision range can be achieved by designing a compact eyepiece with a low f-number ( f /#) [ 13 ]. However, given the fully immersive virtual environment, the main issue lies in CG-3D image generation. When evaluating the capability of generating 3D images in VR, an important aspect of the human visual system is stereo perception. Observing a real 3D object induces an accommodation cue (the focus of the eyes) and a vergence cue (the relative rotation of the eyes) that match each other (figure 1 (b): left) [ 14 , 15 ]. However, most current VR systems have only one fixed display plane with different rendered contents. To capture the image information, the viewer's eyes focus on the display plane, but the position of the CG-3D object is usually not in that plane. As a result, the viewer's brain drives the two eyes to converge on the virtual 3D object while each eye lens still focuses on the display plane, leading to mismatched accommodation and vergence distances (figure 1 (b): right). This phenomenon is called the vergence–accommodation conflict (VAC) [ 16 ], which causes dizziness and nausea. Besides visual comfort, the overall weight and volume of the system also limit usage time and applications. To achieve wearer comfort, the system should be as light as possible while keeping a broad FOV in the virtual space. In this section, we focus on advanced VR architectures that address 3D image generation to mitigate VAC and reduce headset volume.
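As a quick numeric illustration of the conflict, the two cues can be compared in dioptres (reciprocal metres). The display-plane distance and object depth below are arbitrary example values, not taken from any specific headset.

```python
# Illustrative vergence-accommodation mismatch, in dioptres.
# Distances are example values only.

def dioptres(distance_m):
    """Optical demand (D) for a viewing distance in metres."""
    return 1.0 / distance_m

display_plane_m = 2.0   # fixed virtual image plane of the headset (assumed)
object_depth_m = 0.5    # rendered depth of the CG-3D object (assumed)

accommodation = dioptres(display_plane_m)  # eye lens focuses on the display plane
vergence = dioptres(object_depth_m)        # eyes converge on the rendered object

conflict = abs(vergence - accommodation)
print(f"VAC mismatch: {conflict:.2f} D")   # VAC mismatch: 1.50 D
```

The multi-focal and light field architectures discussed below aim to drive this mismatch toward zero.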

Figure 1.

Figure 1.  (a) The layout of a VR optical system. (b) The root cause of the VAC issue. The accommodation cue coincides with the vergence cue when viewing a real object (left). Mismatch occurs when viewing a virtual object displayed in a fixed plane (right).


2.1. VAC mitigation

2.1.1. Multi-focal system

The multi-focal display was proposed to solve the VAC problem of HMDs in the late 1990s [ 17 ]. The basic principle of a multi-focal system is to generate multiple image planes or shift the position of image planes to match the vergence distance and accommodation distance, thereby overcoming the VAC issue. Based on different architectures and principles, multi-focal VR systems can be categorized into space multiplexing, time multiplexing, and polarization multiplexing systems.

Space multiplexing can simultaneously generate multiple image planes with different depths. To achieve this goal, Rolland et al [ 18 ] proposed a very straightforward method to physically place multiple screens based on transparent panels, as illustrated in figure 2 (a). However, the transparent panels will not only increase the cost but also exhibit obvious moiré patterns after stacking multiple panels together [ 19 ]. To avoid this problem, beam splitters (BSs) can be utilized to help establish the space multiplexing system, as figure 2 (b) shows [ 20 ]. In this design, the display panel is placed on one side, while the BSs reflect different parts of the display. Since the distance between each BS and the human eye is different, the image is displayed at different depths. Space multiplexing provides a direct solution to address VAC in VR displays and maintains image quality and frame rate. However, this architecture requires multiple display panels or BSs, which leads to dramatically increased weight and volume. Recently, a focal plane display with a phase-only spatial light modulator (SLM) has been demonstrated [ 21 ]. This architecture can achieve multi-focal planes with reduced system size and weight, but it requires an expensive SLM, and the image quality is not ready for commercial products yet.

Figure 2.

Figure 2.  A schematic diagram of space multiplexing using multiple (a) transparent panels and (b) BSs. Time multiplexing by (c) shifting the display and (d) applying a tunable lens. Polarization multiplexing by (e) applying PPML.

The time multiplexing method relies on dynamic components that rapidly change either the panel distance (figure 2 (c)) or the effective focal length (figure 2 (d)) [ 22 , 23 ]. The panel distance is usually changed by a mechanical motor, which limits stability and modulation rate. For time multiplexing, the modulation rate of the dynamic components should be at least N times the display frame rate (where N is the number of image planes) to avoid motion blur. Therefore, compared with mechanically tuning the panel position, tuning the effective focal length through an electrically driven eyepiece is more favourable. Although it is still challenging to fabricate an adaptive lens with a wide tuning range and fast response time, this method reduces the number of physical elements, so the system volume is much more compact than that of space multiplexing.

Polarization multiplexing generates multiple image planes based on different polarization states. To distinguish the polarization states, the most critical optical component is a polarization-dependent lens whose focal length differs for two orthogonal polarization states. Two such examples are: (a) the Pancharatnam–Berry phase lens, based on left-handed circularly polarized (LCP)/right-handed circularly polarized (RCP) light, and (b) the birefringent lens, based on horizontally/vertically linearly polarized light [ 24 , 25 ]. Figure 2 (e) depicts the basic polarization multiplexing system. The light emitted from the display panel passes through a pixelated polarization modulation layer (PPML), which modulates the ratio of the two orthogonal polarization states, so the light intensity of each pixel in the corresponding focal plane can be adjusted independently. The PPML can be a polarization rotator for a linearly polarized system [ 26 ] or an integrated polarization rotator and quarter-wave plate for a circularly polarized system [ 27 ]. The advantage of polarization multiplexing is that it generates multiple image planes without sacrificing the frame rate or enlarging the system volume. Its major limitation is that only two orthogonal polarization states can be utilized. It should be mentioned that these multiplexing approaches can be combined: for example, time or space multiplexing can be combined with polarization multiplexing to increase the number of focal planes [ 27 , 28 ].
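For the linear-polarization case, the per-pixel power split between the two focal planes follows Malus's law. This short sketch illustrates the modulation; the rotator angles are free parameters, not values from the cited works.

```python
# Power routing in polarization multiplexing: a pixelated rotator turns
# linearly polarized light by some angle, and a polarization-dependent
# lens sends the two orthogonal components to different focal planes.
import math

def plane_intensities(rot_deg):
    """Fractions of a pixel's power routed to (plane 1, plane 2)
    after a rotation of rot_deg, by Malus's law."""
    c2 = math.cos(math.radians(rot_deg)) ** 2
    return (c2, 1.0 - c2)

print(plane_intensities(0))        # (1.0, 0.0): all power to plane 1
i1, i2 = plane_intensities(45)     # even split between the two planes
print(round(i1, 3), round(i2, 3))  # 0.5 0.5
```

Varying the rotation per pixel is exactly what lets the PPML set each pixel's brightness independently on each focal plane.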

2.1.2. Micro-lens array system

Unlike using a single large lens as the eyepiece, another advanced architecture adds a micro-lens array (MLA) in front of the display panel to globally or individually change the position of virtual images in a VR system [ 10 ]. When the MLA is precisely aligned with the display panel, a small movement of the MLA leads to a large focus change for the virtual image. As a result, instead of moving a thicker display panel or a bulkier lens over a longer range, pushing or pulling the MLA plate a small distance can significantly mitigate the VAC. It is worth mentioning that the focus of an MLA based on liquid crystal materials can be switched dynamically over several microns, which means the virtual image can be moved without any mechanical motion, as shown in figure 3 (a). Furthermore, as figure 3 (b) shows, if each MLA element can be controlled independently and precisely, a specific focus can be produced for each lenslet to generate pixelated depth. These techniques are suitable for VR displays as well as for free-space couplers in AR displays. In the MLA system, however, resolution is usually a limiting issue and needs further improvement.

Figure 3.

Figure 3.  A schematic diagram of focus tuning systems based on (a) an electronically addressable MLA and (b) an individually tunable MLA. A schematic diagram of (c) a real object and (d) light field with an MLA.

2.1.3. Light field system

To mitigate VAC, both temporally and spatially multiplexed displays have been proposed. However, owing to a limited or discrete tuning range, these methods can only partially recreate a 3D object with the correct depth. Rather than changing the image focus, light field displays ideally recreate a physical wavefront similar to that created by a real object. The light field (e.g. in integral imaging) [ 29 – 31 ] can be reconstructed by a lens array that converts the light from display pixels into rays with arbitrary spatial angles. As depicted in figure 3 (d), the spatial points correspond to pixels on the display panel. To display a virtual 3D object, we trace the points on the object and light the corresponding pixels on the display panel; the light field at those points can then be approximated with discrete emitted rays. Although this method provides correct depth information and retinal blur, resolution is sacrificed. Considering the amount of information involved, it is not surprising that approaches aiming to show true 3D information cannot offer sufficient resolution within the limited bandwidth of current devices. Generally, the resolution is limited by both the display and the individual lenslets. Although high-resolution displays have been proposed, the pixel pitch is still bounded by the diffraction limit of the employed lens [ 29 ]. These approaches should gradually mature in the long run and eventually reach a satisfactory level for viewers, but at the current stage the main drawbacks of this architecture are resolution loss, increased refresh-rate demands, and/or redundant display panels.
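The resolution cost can be seen with simple bookkeeping: panel pixels are shared among the angular views under each lenslet. The panel resolution and view count below are illustrative assumptions, not figures from the cited systems.

```python
# Spatial-resolution budget of an integral-imaging light field display:
# each lenslet spends panel pixels on angular views, reducing the
# effective spatial resolution. All numbers are illustrative.

panel_px = (3840, 2160)  # assumed panel resolution (H, V)
views = (8, 8)           # assumed angular views per lenslet (H, V)

spatial_px = (panel_px[0] // views[0], panel_px[1] // views[1])
print(spatial_px)        # (480, 270): a 64x drop in pixel count
```

This trade between angular and spatial sampling is why true-3D approaches struggle to reach retinal resolution with today's panels.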

2.2. Pancake VR

As discussed above, aside from visual comfort, wearable comfort is another important consideration. To reduce the volume and weight of a VR system, and thereby improve its wearable comfort, a compact optical design that also accounts for the headset's centre of gravity is urgently needed. Recently, polarization-based folded optics (or pancake optics) with further reduced form factors have attracted increasing attention. The system was originally proposed for flight simulators [ 32 ] and has gained renewed interest with the rapid development of VR [ 33 , 34 ]. The basic concept is to create a cavity that folds the optical path into a smaller space. The working mechanism is illustrated in figure 4 (a). The cavity lies between a BS and a reflective polarizer. The BS (a metallic or dielectric half mirror) has 50% transmittance and flips the handedness of circularly polarized light upon reflection. The reflective polarizer transmits light of one polarization state and reflects the orthogonal one; it can be realized with a wire-grid polarizer, a birefringent multi-layer film, or a cholesteric liquid crystal (CLC). The former two respond to linear polarization, while the latter responds to circular polarization. To explain the working principle, we take circularly polarized light as an example. As shown in figure 4 (a), the incoming RCP light in region A first passes through the BS (50%) and is reflected by the reflective polarizer. It is then reflected by the BS again (25%) and thereby flipped to the LCP state. Finally, the LCP light passes the reflective polarizer and enters region C. Because of the BS, only 25% of the total energy is delivered to the viewer's side; system efficiency is therefore an important issue in pancake VR systems. Practical systems often include one or more refractive elements, which can be placed in any of regions A, B, or C. The surfaces of the reflective polarizer and BS can also be curved according to design requirements. An example with a refractive lens placed in region B is plotted in figure 4 (b); here the BS (half reflector) is coated on the curved surface of the lens.
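The 25% figure can be tracked step by step. The sketch below follows the RCP ray of figure 4(a) through an ideal, loss-free cavity, using the conventions stated above: the half mirror flips circular handedness on reflection, while the reflective polarizer reflects RCP (handedness preserved, a CLC property) and transmits LCP.

```python
# Energy bookkeeping along the folded ("pancake") optical path.

BS_T, BS_R = 0.5, 0.5   # ideal half-mirror transmittance / reflectance

intensity, pol = 1.0, "RCP"      # incoming light in region A

intensity *= BS_T                # 1) transmit through the half mirror
                                 # 2) reflect at the reflective polarizer
                                 #    (RCP reflected, handedness preserved)
intensity *= BS_R                # 3) reflect at the half mirror again,
pol = "LCP"                      #    which flips RCP to LCP
                                 # 4) LCP transmits toward the eye (region C)

print(pol, intensity)            # LCP 0.25
```

The two 50% encounters with the half mirror account for the entire loss, which is why replacing the BS with an angularly selective hologram can, in principle, recover the efficiency.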

Figure 4.

Figure 4.  Polarization-based folded optics. (a) An illustration of the working principle. (b) An example of folded optics with a refractive lens. (c) A CLC-based reflective polarizer with optical power. (d) A reflective hologram with angular selectivity.

All the above discussions consider only traditional geometric optics, where the optical power is provided by reflection or refraction at curved surfaces. Recent advances in holographic optics, however, offer an even wider range of choices for optical elements. Both the reflective polarizer and the BS can be flat holographic films [ 35 ]. As figure 4 (c) shows, the reflective polarizer can gain focusing power by patterning the CLC molecules: the polarization selectivity of the CLC provides optical power for one circular polarization and total transparency for the other. The BS can also be replaced by a phase hologram, often fabricated by holographic exposure of a photopolymer [ 36 ]. Its index modulation is usually small, resulting in narrow angular and spectral responses. This angular selectivity can be exploited to boost the overall system efficiency. As depicted in figure 4 (d), for a given reflective hologram, light within the angular response is reflected with flipped handedness, while incident light that does not satisfy the Bragg condition traverses the hologram. With this feature, the BS efficiency can potentially reach 100%, because both the transmission and reflection efficiencies can reach 100%; the overall system efficiency can thus be improved from 25% to nearly 100%. However, the narrow angular and spectral selectivity also implies the need for a directional backlight with a narrow spectral linewidth, which could be challenging in practice.

3. Advanced architectures for AR displays

In contrast to the immersive experience provided by VR displays, AR displays aim at see-through systems that overlay CG images on the physical environment. To deliver this unique visual experience with wearable comfort, the near-eye system needs high transmittance, a sufficient FOV, and a compact form factor. Freeform optics can provide the required broad FOV and high transmittance; however, owing to their prism shape, such architectures are relatively large and heavy. To reduce the system size while keeping a sufficient FOV, lightguide-based structures and free-space couplers are commonly used to strike a delicate balance between visual comfort and wearable comfort.

3.1. Freeform prisms and BS architectures

Freeform prisms have been extensively investigated thanks to the development of diamond-turning machines. Typically, a freeform prism used in an AR system needs a partially reflective surface and a total internal reflection (TIR) surface to overlay the CG images while transmitting the surrounding environment. As shown in figure 5 (a), this configuration incorporates two refractive surfaces, a TIR surface, and a partially reflective surface into one element, allowing extra design freedom [ 37 , 38 ]. It provides high-quality images with a wide FOV, but the prism volume makes the entire system bulky and heavy. Another common freeform-based AR device uses a specially designed BS cube as the coupler. In figure 5 (b), the magnifying optics is a reflective concave mirror disposed directly on the BS cube, which leaves additional freedom for optimization. This architecture offers the simplest solution to AR display with a broad FOV, but at a larger form factor. Moreover, there is a trade-off between the FOV and the eyebox (or exit pupil) due to the conservation of étendue, which scales with the product of the FOV and the eyebox: the larger the FOV, the smaller the eyebox [ 39 ].
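The trade-off can be sketched in one dimension by taking the étendue as the eyebox width times 2 sin(FOV/2), a common 1D approximation. The 10 mm eyebox at 40° FOV used to fix the constant is an arbitrary example, not a measured design.

```python
# 1D etendue trade-off between FOV and eyebox for a fixed optical system.
import math

# Fix the (conserved) etendue with an example design point:
# a 10 mm eyebox at 40 deg full FOV.
ETENDUE = 10.0 * 2 * math.sin(math.radians(40 / 2))

def eyebox_mm(fov_deg):
    """Eyebox width (mm) available at a given full FOV (deg)."""
    return ETENDUE / (2 * math.sin(math.radians(fov_deg / 2)))

for fov in (40, 60, 100):
    print(f"{fov:>3} deg FOV -> {eyebox_mm(fov):.1f} mm eyebox")
```

Widening the FOV from 40° to 100° shrinks the eyebox by more than half in this model, which is why large-FOV free-space combiners tend to have tight eye placement tolerances.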

Figure 5.

Figure 5.  Schematic AR diagrams with (a) a freeform prism and (b) a specially designed BS cube.

3.2. Lightguide-based architectures

Compared with the freeform design, the lightguide-based structure offers a more balanced performance between visual comfort and wearable comfort, especially given its compact, thin form factor [ 40 , 41 ]. Over the past decade, the lightguide-based near-eye display (LNED) has become one of the most widely used architectures for AR displays and is applied in many commercial products, such as HoloLens 2, Magic Leap 1, and Lumus. For an LNED, the input and output couplers are the pivotal optical elements affecting system performance. Typically, the input coupler has a high efficiency so that the light emitted from the optical engine is fully utilized. In contrast, the output coupler has a low, graded efficiency across the exit pupil to ensure an expanded and uniform eyebox. According to the coupler design, LNEDs can be categorized into grating-based lightguides (figure 6 (a)) and geometrical lightguides (figure 6 (b)).

Figure 6.

Figure 6.  Schematic diagrams of (a) grating-based and (b) geometrical lightguide-based AR architectures.

3.2.1. Grating-based lightguide

As shown in figure 6 (a), the display light is coupled into the lightguide by an input grating and then propagates inside the lightguide through TIR. When it encounters the output grating, the light is replicated and diffracted into the viewer's eyes. To provide a comprehensive understanding, we first analyze the FOV limit theoretically and then discuss the commonly used grating couplers. For a diffractive grating, the first-order grating equation can be stated as:

n_out sin θ_out = n_in sin θ_in + λ/Λ,  (1)

where θ_in and θ_out represent the incident angle and diffracted angle, respectively, n_in and n_out are the refractive indices of the incident and output media, λ is the wavelength in vacuum, and Λ is the grating period. With this simple grating equation, the maximum system FOV can be calculated. If we assume the FOV in air is centrosymmetric, then the viewing angle in air (θ_air) is related to the minimum/maximum guiding angles (θ_min/θ_max) in the lightguide as:

n_air sin θ_air = n_g sin θ_min/max − λ/Λ,  (2)

where n_g is the refractive index of the lightguide, n_air is the refractive index of air, θ_min can be set to the TIR angle in the lightguide (so that n_g sin θ_min = n_air), and θ_max should be less than 90°. Thus, the maximum horizontal FOV is [ 42 ]:

FOV_max = 2 arcsin[(n_g sin θ_max − n_air)/(2 n_air)].  (3)

Figure 7 (a) shows the FOV as a function of n_g and θ_max. In the ideal case where θ_max = 90° and n_g = 2, the maximum system FOV is only 60°. In practical designs, such a high-index lightguide substrate is still challenging to achieve, and θ_max cannot approach 90° owing to image-quality considerations. This FOV limit holds for most grating-based lightguide AR systems, although some methods can circumvent it: with a different system configuration, the FOV can be expanded to 100° [ 42 ], and by leveraging polarization-dependent optical elements it can be nearly doubled [ 43 ].

In equation ( 3 ), the FOV appears to be independent of wavelength, but the wavelength dependency is implicitly embedded in equation ( 2 ). For the extreme case with θ_max = 90° and n_g = 2, if the waveguide is designed at 535 nm, the grating period is calculated to be 357 nm and the horizontal FOV is [−30°, 30°]. Using the same grating period for blue (e.g. 450 nm) and red (e.g. 630 nm), with the assumption that the angle ranges in the lightguide are unchanged, leads to FOVs of [−15°, 48°] and [−50°, 14°], respectively. Thus, more than one grating is needed to obtain the same FOV for RGB colours. Although implementing three gratings with narrow spectral bandwidths for R, G, and B in one lightguide is possible, it is still hard to eliminate colour crosstalk among the gratings. A more common choice is to use two (e.g. one for R, one for G and B) or three (e.g. R, G, and B) lightguides [ 44 ], slightly sacrificing the system's compactness.

Another important aspect is that the spectral response of most gratings depends on the incident angle. This can be well illustrated using a volume Bragg grating (VBG) as an example. For a VBG, the central wavelength is defined by the Bragg condition as:

λ_C = 2 n_eff Λ_B cos θ,  (4)

Figure 7.

Figure 7.  (a) FOV as a function of lightguide refractive indexes and maximum guiding angles. (b) Angle dependency of a VBG ( n eff = 1.5) designed for 535 nm and a diffraction angle of 50° at normal incidence. The inset shows the definition of θ , which is the angle of incident light relative to the normal direction of Bragg planes. For reflective VBGs: diffraction efficiency as a function of (c) wavelength and (d) incident angle; for transmissive VBGs: diffraction efficiency as a function of (e) wavelength and (f) incident angle. Simulations are based on rigorous coupled wave analysis.

where Λ_B is the Bragg plane spacing, θ represents the incident light angle with respect to the normal direction of the Bragg planes (see the inset in figure 7 (b)), and n_eff is the effective refractive index of the VBG. If a VBG (e.g. n_eff = 1.5) is designed for normally incident green light (λ = 535 nm) with a 50° diffraction angle in a lightguide, the angle-dependent central wavelength can be calculated, as figure 7 (b) depicts. For such a VBG, the central wavelength shifts from green to blue as the incident angle increases. Therefore, when designing a VBG-based lightguide AR display for full-colour operation, such colour crosstalk should be carefully analyzed.
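The numbers quoted in this subsection can be reproduced with a short script. Part (i) evaluates the FOV limit and its wavelength dependence; part (ii) evaluates the VBG's angle-dependent central wavelength. The 25° design angle in part (ii) is a geometric assumption (Bragg planes bisecting the 0° to 50° in-guide deflection) consistent with figure 7(b).

```python
# (i) Grating-equation FOV limit and its wavelength dependence;
# (ii) angle-dependent central wavelength of a reflective VBG.
# Parameter values are those quoted in the text; n_air is taken as 1.
import math

# --- (i) ideal lightguide: n_g = 2, theta_max = 90 deg ----------------
n_g = 2.0
sin_max = 1.0            # sin(90 deg), maximum guiding angle
sin_min = 1.0 / n_g      # TIR condition: sin(theta_TIR) = 1/n_g

fov_max = 2 * math.degrees(math.asin((n_g * sin_max - 1) / 2))
print(round(fov_max))    # 60

period = 357.0           # nm, grating period designed for 535 nm

def fov_range(wavelength_nm):
    """Air-side FOV bounds (deg) from sin(theta_air) = n_g*sin(theta_g)
    - lambda/period, for guided angles between the TIR angle and 90 deg."""
    lo = n_g * sin_min - wavelength_nm / period
    hi = n_g * sin_max - wavelength_nm / period
    return (round(math.degrees(math.asin(lo))),
            round(math.degrees(math.asin(hi))))

print(fov_range(535))    # (-30, 30) green
print(fov_range(450))    # (-15, 48) blue
print(fov_range(630))    # (-50, 14) red

# --- (ii) reflective VBG: n_eff = 1.5, designed for 535 nm ------------
n_eff = 1.5
theta_design = 25.0      # deg, angle to the Bragg-plane normal (assumed)
spacing = 535.0 / (2 * n_eff * math.cos(math.radians(theta_design)))  # nm

def central_wavelength(theta_deg):
    """Bragg condition: lambda = 2 * n_eff * spacing * cos(theta)."""
    return 2 * n_eff * spacing * math.cos(math.radians(theta_deg))

print(round(central_wavelength(25)))  # 535 at the design angle
print(central_wavelength(40) < central_wavelength(25))  # True: blue shift
```

The script confirms both the 60° single-grating FOV ceiling and the green-to-blue shift of the VBG response at larger incident angles.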

In terms of selecting grating couplers, two types of gratings are commonly used in lightguide AR: holographic VBGs and surface relief gratings (SRGs). In holographic VBGs, a sinusoidal refractive index modulation is introduced in the volume by interference exposure of holographic photopolymers. The refractive index modulation can be described by [ 45 ]:

n( r ) = n 0 + Δ n cos( K · r )

where n 0 is the average refractive index, Δ n is the modulation amplitude, and K is the grating vector.

Unlike holographic VBGs, which have refractive index modulation in the bulk, SRGs have specially designed microstructures on the surface, which can be mass-produced by nanoimprinting [ 48 ]. The surface structures offer a large degree of design freedom: the grating profiles can be blazed, slanted, binary, or even analogue, according to different needs [ 10 ]. The spectral and angular responses of SRGs strongly depend on the shape of the surface structures. Owing to the high refractive index contrast between the substrate and air, the structure height can remain submicron while still achieving high diffraction efficiency.

Besides holographic VBGs and SRGs, the CLC-based polarization volume grating (PVG) is also a strong contender [ 49 , 50 ]. Owing to their volume grating nature, PVGs can be treated as a branch of holographic VBGs, and their spectral and angular responses are very similar. However, PVGs exhibit some unique properties. First, PVGs are strongly dependent on circular polarization, a property originating from CLCs [ 51 ], whereas VBGs and SRGs show only weak dependence on linear polarization. For example, a left-handed reflective PVG diffracts only the LCP light within its bandwidth into the first order, while transmitting the RCP light. This feature is useful for designing polarization-dependent optical elements. Second, if we use equation ( 5 ) to approximately describe the behaviour of PVGs (in fact, the refractive indices in equation ( 5 ) should be replaced by dielectric constants), Δ n can be very large. For instance, if the host liquid crystal has a birefringence of 0.2, the effective Δ n of the equivalent VBG can be as large as 0.5 ∼ 0.6. As a result, the spectral and angular bandwidths of PVGs can be much larger than those of holographic VBGs. Moreover, recent studies show that multi-layer or gradient-pitch PVGs can be readily fabricated to further enlarge the angular bandwidth [ 52 , 53 ].
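A rough sense of the bandwidth advantage comes from the planar CLC underlying a reflective PVG: its reflection band spans n_o·p to n_e·p, i.e., a width of Δn·p. A minimal sketch, with assumed ordinary/extraordinary indices (1.5 and 1.7, giving the birefringence of 0.2 mentioned above; these specific values are ours):

```python
n_o, n_e = 1.5, 1.7                    # assumed LC indices (Δn = 0.2)
target = 535                           # centre the band on green, in nm
pitch = target / ((n_o + n_e) / 2)     # helical pitch ≈ 334 nm

# Reflection band of the planar CLC: [n_o * p, n_e * p], width Δn * p.
band_lo, band_hi = n_o * pitch, n_e * pitch
print(f"reflection band: {band_lo:.0f}-{band_hi:.0f} nm "
      f"(width {band_hi - band_lo:.0f} nm = Δn·p)")
```

This yields a band of roughly 502 to 568 nm, i.e., about 67 nm wide, considerably broader than what a photopolymer VBG with Δn of a few hundredths can offer at the same thickness.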

3.2.2. Geometrical lightguide

Compared to grating-based lightguides, geometrical lightguides need more complex designs (e.g. spatially variant coatings) to achieve gradient efficiency, and it is relatively hard to add lens power to the output. However, the working principle is very simple: all the designs are based on surface reflection. Generally, geometrical lightguides use embedded reflective surfaces as the exit pupil expander to reflect and replicate the light [ 54 , 55 ].

As figure 6 (b) shows, a series of cascaded, embedded, partially reflective surfaces can be used as output couplers in the geometrical lightguide architecture. Because the embedded surfaces are reflective, this approach yields good colour uniformity over the entire FOV. However, the cascaded design produces the louver effect [ 10 ], which is unfavourable for see-through devices. Recently, this effect has been reduced through better cutting, polishing, coating, and design, but it remains a limitation. In addition, these complicated fabrication processes place a heavier burden on manufacturers. As an extension, the embedded partially reflective surfaces can be designed as flat surfaces (figure 6 (b)), pin-shaped mirror arrays (figure 8 (a)), microprism arrays (figure 8 (b)), or a curved lightguide with curved surfaces (figure 8 (c)) [ 56 ].

Figure 8.

Figure 8.  A schematic diagram of geometrical lightguide AR: (a) microprism array, (b) pin-shaped mirror array, and (c) curved coupler.

3.3. Free-space coupler-based architectures

Unlike freeform optical devices or LNEDs, free-space couplers allow greater freedom in the architecture, with no special restrictions on volume or TIR. Owing to these large degrees of freedom, numerous architectures based on free-space couplers have been proposed, each with its own pros and cons. These systems can be classified into three categories according to their working principles: reflective, diffusive, and diffractive couplers.

3.3.1. Reflective coupler

A reflective free-space coupler is based on surface reflection from a flat or curved surface. To meet the high-transmittance requirement, these surfaces should be partially reflective, with sufficient reflectance and transmittance. Figure 9 (a) depicts the most straightforward architecture with a flat coupler, which is a tilted, partially reflective surface. The CG images emitted from the display are collimated by the lens and then reflected into the viewer's eye by the flat coupler. To further simplify the system, the flat coupler can be replaced by a partially reflective curved or freeform surface with a specially designed profile, as shown in figure 9 (b). This design is aimed at smartphone displays rather than complex off-axis imaging with a micro-display. This architecture has been successfully applied in Meta 2 by Meta Vision, DreamGlass by Dream World, and NorthStar by Leap Motion. Due to the large display panel and curved reflective surface, such a reflective coupler exhibits a relatively broad FOV but also a large system volume.

Figure 9.

Figure 9.  A schematic diagram of reflective free-space coupler-based AR: (a) a flat coupler, and (b) a curved coupler. Illustrations of diffusive free-space coupler-based AR: (c) a single diffuser, and (d) multiple diffusers.

3.3.2. Diffusive coupler

A diffusive free-space coupler is based on light scattering from optical elements [ 57 ]. In such a system, the displayed images are directly projected onto the coupler, which is usually a diffuser with a flat or curved surface. As illustrated in figure 9 (c), the light is scattered by the coupler and the image is formed on the diffuser surface. Usually, the image source is a liquid-crystal-on-silicon (LCoS) panel or a digital micromirror device, and the image resolution is determined by the display and the projection lens. To preserve the see-through capability, the diffuser should be angularly selective, scattering the off-axis incident image while transmitting the environment light in front of the eye. Consequently, the system can accommodate more than one diffuser and thereby construct a 3D image with multiple planes [ 58 ], similar to the multiplane design in a VR system. As depicted in figure 9 (d), each diffuser scatters only the light arriving at its corresponding incident angle, so the diffusers do not interfere with one another.

3.3.3. Diffractive coupler

A diffractive free-space coupler is based on flat diffractive optical elements with designed phase profiles, such as lenses or freeform optics [ 59 , 60 ]. More specifically, the architectures based on diffractive couplers can be divided into free-space systems, Maxwellian systems, and integral imaging systems. The free-space diffractive couplers, as illustrated in figure 10 (a), are utilized in a pupil-forming system, meaning that relay optics first image the object and then deliver the relayed image to the viewer's eye through the diffractive coupler [ 61 , 62 ]. The image source includes, but is not limited to, a conventional 2D display and a laser light source. However, due to the nature of diffractive flat optics and the off-axis system configuration, aberrations such as coma and astigmatism are large and need to be tackled with sophisticated optical design or image pre-processing. The Maxwellian system adopts the principle of the Maxwellian view [ 63 ], which directly forms a focus-free image on the retina. The diffractive coupler can be a reflective off-axis lens with a designed focal length [ 64 , 65 ]. It is worth mentioning that because the light needs to be focused at the pupil, the eyebox of the Maxwellian system is relatively small. To expand the eyebox, exit pupil shifting can be applied to increase the area covered by the focal point [ 66 ]. Generally, the image light is focused by the coupler and the focal spot is located at the eye lens. As a result, the image on the retina stays in focus no matter how much the optical power of the eye lens changes. Depending on the image source, the system can be realized with an LCoS panel (figure 10 (b)) or, for a simpler design, a laser beam scanner (LBS) (figure 10 (c)). The light field system with an MLA can also be applied to AR, just as in a VR display [ 67 , 68 ].
As depicted in figure 10 (d), a typical configuration uses a projection system to relay the original image from the image source to near the focus of the diffractive coupler, similar to the free-space combiner system. The relayed image then works in the same way as depicted in figure 3 (d) and produces the light field to display 3D virtual objects. Similar to the multiplexing methods in VR displays, these different AR architectures are not independent. On the contrary, they can be combined with each other to balance their respective advantages and trade-offs, and even to enable new features [ 69 ].

Figure 10.

Figure 10.  A schematic diagram of diffractive free-space coupler-based AR: (a) a free-space diffractive coupler, a Maxwellian system with (b) SLM and (c) LBS, and (d) an integral imaging system.

To quantitatively summarize the performance of AR architectures in terms of visual comfort and wearable comfort, table 1 compares the form factor and FOV of the different coupling methods. It should be mentioned that for each architecture, the performance can be improved beyond the listed values, but at the cost of other parameters. Therefore, the contents of table 1 represent general conditions rather than strict restrictions.

Table 1.  Performance comparison of various AR architectures.

a These not only depend on the FOV and eyebox design but also include an optical engine part. b These typical values come from products and prototypes.

4. Conclusions

In this review, we have summarized the advanced architectures built on different optical components in the rapidly evolving VR and AR systems, including the most recent optical research and products, and analyzed the systems case by case in terms of visual and wearable comfort. Because of the various advanced architectures with unique features, such as reducing the VAC through adjustable lenses, achieving compact form factors using polarizing films, and providing a large FOV through freeform optics, VR and AR displays present both scientific significance and broad application prospects. Although, at the current stage, it is still challenging for these architectures to meet all the requirements for visual and wearable comfort, learning about and reviewing advanced systems will certainly help us focus on unresolved issues and inspire more elegant solutions.

Acknowledgments

The authors are indebted to GoerTek Electronics for financial support.

Data availability statement

All data that support the findings of this study are included within the article (and any supplementary files).

Sensors (Basel)

In-Depth Review of Augmented Reality: Tracking Technologies, Development Tools, AR Displays, Collaborative AR, and Security Concerns

Toqeer Ali Syed

1 Faculty of Computer and Information Systems, Islamic University of Madinah, Medina 42351, Saudi Arabia

Muhammad Shoaib Siddiqui

Hurria Binte Abdullah

2 School of Social Sciences and Humanities, National University of Science and Technology (NUST), Islamabad 44000, Pakistan

3 Malaysian Institute of Information Technology, Universiti Kuala Lumpur, Kuala Lumpur 50250, Malaysia

4 Department of Computer Science, Bacha Khan University Charsadda, Charsadda 24420, Pakistan

Abdallah Namoun

Ali Alzahrani, Adnan Nadeem, Ahmad B. Alkhodre

Augmented reality (AR) has gained enormous popularity and acceptance in the past few years. AR combines different immersive experiences and solutions that serve as integrated components, making augmented reality a workable and adaptive solution for many realms. These components include tracking, which maintains a point of reference so that virtual objects can be placed correctly in a real scene. Similarly, display technologies combine the virtual and real world at the user's eye. Authoring tools provide platforms for developing AR applications by exposing low-level libraries, which in turn interact with the hardware of tracking sensors, cameras, and other technologies. In addition, advances in distributed computing and collaborative augmented reality also need stable solutions so that multiple participants can collaborate in an AR setting. The authors have explored many solutions in this regard and present a comprehensive review to support research and business transformation. During the course of this study, however, we identified a lack of security solutions in various areas of collaborative AR (CAR), specifically in distributed trust management in CAR. This study therefore also proposes a trusted CAR architecture, with a tourism use case, that can serve as a model for researchers interested in securing AR-based remote communication sessions.

1. Introduction

Augmented reality (AR) is one of the fastest-growing immersive experiences of the 21st century. AR has brought a revolution to different realms including health and medicine, teaching and learning, tourism, design, manufacturing, and other similar industries, whose acceptance has accelerated the growth of AR in an unprecedented manner [ 1 , 2 , 3 ]. According to a recent report from September 2022, the market size of AR and VR reached USD 27.6 billion in 2021 and is estimated to reach USD 856.2 billion by the end of 2031 [ 4 ]. Large companies make extensive use of AR-based technologies. For instance, Amazon, one of the leading online shopping websites, uses this technology to make it easier for customers to decide which furniture they want to buy. The rise of mobile phone technology has also accelerated the popularization of AR. Earlier mobile phones were not capable enough to run these applications due to their limited graphics performance; nowadays, however, smart devices can easily run AR-based applications. A lot of research has been done on mobile-based AR. Lee et al. [ 5 ] developed a user-centred design interface for educational purposes in mobile AR, evaluated with fourth-grade elementary students.

The adoption of AR in its various forms is backed by a long history. This paper presents an overview of the different integrated components that contribute to the working framework of AR; the latest developments on these components are collected, analyzed, and presented, as developments in smart devices have drastically changed the overall user experience [ 6 ]. Tracking technologies [ 7 ] are the building blocks of AR: they establish a point of reference for movement and create an environment where virtual and real objects are presented together. To achieve a realistic experience with augmented objects, several tracking technologies are presented, including sensor-based [ 8 ], markerless, marker-based [ 9 , 10 ], and hybrid tracking. Among these, hybrid tracking technologies are the most adaptive. In the framework constructed in this study, simultaneous localization and mapping (SLAM) and inertial tracking are combined: SLAM collects points through cameras in real scenes, while the point of reference is created using inertial tracking. The virtual objects are then inserted at the relevant points of reference to create the augmented reality. Moreover, this paper analyzes and discusses the different tracking technologies according to their use in different realms, i.e., education, industry, and medicine. Magnetic tracking is widely used in AR systems for medical, maintenance, and manufacturing applications. Vision-based tracking is mostly used in mobile phones and tablets because they have a screen and a camera, which makes them an ideal platform for AR. In addition, GPS tracking is useful in the military, gaming, and tourism fields. These and other tracking technologies are explained in detail in Section 3 .

Once the points of reference are collected through tracking, another important factor requiring significant accuracy is determining at which particular point the virtual objects have to be mixed with the real environment. This is the role of display technologies, which give users of augmented reality an environment where real and virtual objects are displayed together. Display technologies are therefore one of the key components of AR. This research identifies state-of-the-art display technologies that help provide a quality view of real and virtual objects. Augmented reality displays can be divided into various categories, all with the same task: to deliver the merged image of real and virtual content to the user's eye. The authors have categorized the latest optical display technologies following the advancements in holographic optical elements (HOEs). There are other categories of AR displays, such as video-based, eye-multiplexed, and projection onto a physical surface. Optical see-through displays have two sub-categories: free-space combiners and waveguide combiners [ 11 , 12 ]. The details of display technologies are presented in Section 4 .

Different tools are used to develop these AR applications, depending on the type of application. For example, ARToolKit [ 13 ] is used to develop mobile AR applications for Android or iOS, whereas FLARToolKit [ 14 ] is used to create web-based applications using Flash. Moreover, various plug-ins are available that can be integrated with Unity [ 15 ] to create AR applications. These development tools are reviewed in Section 6 of this paper. Figure 1 provides an overview of the AR topics reviewed in this paper.

Figure 1.

Overview of AR, VR, and collaborative AR applications, tools, and technologies.

After a critical review of collaborative augmented reality, this research identified security flaws and missing trust parameters that need to be addressed to provide a pristine environment to users. Hackers and intruders constantly seek to exploit vulnerabilities in systems and software, yet previous research on collaborative augmented reality shows little effort toward securing such collaboration. To address these flaws and provide secure communication in collaborative augmented reality, this research proposes a security solution and framework that can limit the risks posed by internal and external attacks. To realize this secure platform, this study presents an architecture for secure collaborative AR in the tourism sector of Saudi Arabia as a case study. The focus of the case study is an application that can guide tourists during their visit to any of the famous landmarks in the country. This study proposes a secure and trustworthy mobile application based on collaborative AR for tourists. In this application, the necessary information is rendered on screen, and the user can hire a guide to provide more detailed information; a single guide can serve a group of tourists visiting the same landmark. A blockchain network is used to secure the application and protect the private data of the users [ 16 , 17 ]. For this purpose, we performed a thorough literature review to find an optimized solution for security and tracking, studying the existing tracking technologies and listing them in this paper along with their limitations. In our use case, we used GPS tracking to follow the user's movement and provide the necessary information about the visited landmark through the mobile application.

Observing that AR operates in an integrated fashion, combining tracking technologies, display technologies, AR tools, collaborative AR, and AR applications, encouraged us to explore and present these concepts and technologies in detail. To help researchers navigate these different techniques, the authors have mapped the previously conducted research in a Venn diagram, shown in Figure 2 . Interested investigators can choose their required area of AR research from it. As the diagram shows, most research has been done in the area of tracking technologies, which is further divided into different types of tracking solutions including fiducial tracking, video-based tracking, and inertial tracking. Some papers lie in several categories; for example, papers such as [ 18 , 19 , 20 ] fall into both the fiducial tracking and sensor categories. Similarly, computer vision and display devices share some papers, as do inertial tracking and video-based tracking. In addition, display devices share papers with computer vision, mobile AR, design guidelines, tool-kits, evaluation, AR tags, and the security and privacy of AR. Furthermore, visualization shares papers with business, interior design, and human–robot communication, while education shares papers with gaming, simulation, medicine, heritage, and manufacturing. In short, we have tried to summarize all the papers and elaborate on them further in their respective sections for the convenience of the reader.

Figure 2.

Classification of the reviewed papers with respect to tracking, display, authoring tools, applications, collaboration, and security.

Contribution: This research presents a comprehensive review of AR and its associated technologies. A review of state-of-the-art tracking and display technologies is presented, followed by the essential components and tools that can be used to create effective AR experiences. The study also presents newly emerging technologies such as collaborative augmented reality and how interactions between different applications are carried out. During the review phase, the research identified that AR-based solutions, and particularly collaborative augmented reality solutions, are vulnerable to external intrusion: they lack security, and the interaction could be hijacked, manipulated, and exposed to potential threats. To address these concerns and ensure the integrity of communication, this research utilizes state-of-the-art blockchain infrastructure for the collaborating applications in AR. The paper further proposes a complete security framework wherein different applications working remotely can genuinely trust each other [ 21 ].

Outline : This paper presents an overview of augmented reality and its applications in various realms in Section 2 . Section 3 presents tracking technologies, while Section 4 provides a detailed overview of display technologies. Section 6 covers AR development tools. Section 7 highlights collaborative research on augmented reality, while Section 8 covers AR interaction and input technologies. Section 9 details design guidelines and interface patterns, while Section 10 discusses security and trust issues in collaborative AR. Section 12 highlights future directions for research, and Section 13 concludes this research.

2. Augmented Reality Overview

People have, for many years, used lenses, light sources, and mirrors to create illusions and virtual images in the real world [ 22 , 23 , 24 ]. However, Ivan Sutherland was the first person to truly generate the AR experience. Sketchpad, developed at MIT in 1963 by Ivan Sutherland, is the world's first interactive graphics application [ 25 ]. Figure 3 gives an overview of the development of AR technology from its beginnings to 2022. Bottani et al. [ 26 ] review the AR literature published between 2006 and 2017. Moreover, Sereno et al. [ 27 ] use a systematic survey approach to detail the existing literature at the intersection of computer-supported collaborative work and AR.

Figure 3.

Augmented reality advancement over time for the last 60 years.

2.1. Head-Mounted Display

Ens et al. [ 28 ] review the existing work on design exploration for mixed-scale gestures where the Hololens AR display is used to interweave larger gestures with micro-gestures.

2.2. AR Towards Applications

The ARToolKit tracking library [ 13 ] provides real-time computer-vision tracking of a square marker, which solved two major problems: enabling interaction with real-world objects and tracking the user's viewpoint. Researchers have since conducted studies to develop handheld AR systems. Hettig et al. [ 29 ] present a system called "Augmented Visualization Box" to assess surgical augmented reality visualizations in a virtual environment. Goh et al. [ 30 ] present a critical analysis of 3D interaction techniques in mobile AR. Kollatsch et al. [ 31 ] introduce a system that creates and imports production data and maintenance documentation into AR maintenance apps for machine tools, aiming to reduce the overall cost of the necessary expertise and the planning process of AR technology. Bhattacharyya et al. [ 32 ] introduce a two-player mobile AR game known as Brick, where users can engage in synchronous collaboration while inhabiting a real-time, shared augmented environment. Kim et al. [ 33 ] suggest that this decade is marked by a tremendous technological boom, particularly in rendering and evaluation research, while display and calibration research has declined. Liu et al. [ 34 ] expand the information feedback channel from industrial robots to the human workforce for the development of human–robot collaboration.

2.3. Augmented Reality for the Web

Cortes et al. [ 35 ] introduce new techniques for collaboratively authoring surfaces on the web using mobile AR. Qiao et al. [ 36 ] review the current implementations of mobile AR, the enabling technologies of AR, state-of-the-art approaches for potential web AR provisioning, and the challenges that AR faces in a web-based system.

2.4. AR Application Development

The AR industry grew tremendously in 2015, extending from smartphones and websites to head-worn display systems such as Google Glass. In this regard, Agati et al. [ 18 ] propose design guidelines for the development of an AR manual assembly system, covering ergonomics, usability, corporate-related factors, and cognition.

AR for Tourism and Education: Shukri et al. [ 37 ] introduce design guidelines for mobile AR in tourism, proposing 11 principles for efficient AR design that reduce cognitive overload, support learning, and help users explore content while traveling in Malaysia. In addition, Fallahkhair et al. [ 38 ] introduce guidelines to make AR technologies more satisfying, efficient, and effective for cultural and contextual learning on mobiles, thereby enhancing the tourism experience. Akçayır et al. [ 39 ] show that AR has the advantage of placing a virtual image on a real object in real time, while pedagogical and technical issues should be addressed to make the technology more reliable. Salvia et al. [ 40 ] suggest that AR has a positive impact on learning but requires further advancement.

Sarkar et al. [ 41 ] present an AR app known as ScholAR, which aims to enhance learning skills and inculcate conceptual and logical thinking among seventh-grade students. Soleiman et al. [ 42 ] suggest that the use of AR improves abstract writing compared to VR.

2.5. AR Security and Privacy

Hadar et al. [ 43 ] scrutinize security at all steps of AR application development and identify the need for new strategies for information security and privacy, with the main goal of capturing and mapping these concerns at design time. Moreover, in the industrial arena, Mukhametshin et al. [ 44 ] focus on sensor tag detection, tracking, and recognition in an AR client-side app for Siemens, used to monitor equipment at remote facilities.

3. Tracking Technology of AR

Tracking technologies introduce the sensation of motion into the virtual and augmented reality world and perform a variety of tasks. Once a tracking system is rightly chosen and correctly installed, it allows a person to move within a virtual or augmented environment and to interact with people and objects within it. The selection of tracking technology depends on the environment, the type of data, and the available budget. For AR technology to meet Azuma's definition of an augmented reality system, it must satisfy three main requirements:

  • it combines virtual and real content;
  • it is interactive in real time;
  • it is registered in three dimensions.

The third condition of being "registered in three dimensions" alludes to the capability of an AR system to project virtual content onto physical surroundings in such a way that it seems to be part of the real world. The position and orientation (pose) of the viewer with respect to some anchor in the real world must be determined in order to register the virtual content in the real environment. This real-world anchor may be the dead-reckoning from inertial tracking, a defined location in space determined using GPS, or a physical object such as a paper image marker or a magnetic tracker source. In short, the real-world anchor depends upon the application and the technologies used. With respect to the type of technology used, there are two ways of registering the AR system in 3D:

  • Determining the position and orientation of the viewer relative to the real-world anchor: the registration phase;
  • Updating the viewer's pose with respect to a previously known pose: the tracking phase.

In this document, the word "tracking" is used as a common term for both phases. There are two main types of tracking techniques, explained as follows (and depicted in Figure 4 ).
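The two phases can be sketched with simple 2D pose arithmetic: registration yields an absolute pose relative to the anchor, after which tracking composes incremental updates onto the last known pose. This is a toy illustration of the idea (all names and numbers are ours, not from any AR SDK):

```python
import math

def pose_matrix(x, y, theta_deg):
    """2D homogeneous pose: rotation by theta plus translation (x, y)."""
    t = math.radians(theta_deg)
    return [[math.cos(t), -math.sin(t), x],
            [math.sin(t),  math.cos(t), y],
            [0.0, 0.0, 1.0]]

def compose(a, b):
    """Matrix product a @ b: apply increment b in the frame of pose a."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Registration phase: absolute pose of the viewer relative to the anchor
# (2 m to the right of the anchor, facing 90° to its left).
pose = pose_matrix(2.0, 0.0, 90.0)

# Tracking phase: successive incremental motions relative to the previous pose
# (walk forward 1 m; walk forward 1 m while turning 90° to the right).
for dx, dy, dth in [(1.0, 0.0, 0.0), (1.0, 0.0, -90.0)]:
    pose = compose(pose, pose_matrix(dx, dy, dth))

print(f"x = {pose[0][2]:.2f}, y = {pose[1][2]:.2f}")   # position in anchor frame
```

The point of the sketch is that only the registration step touches the real-world anchor; every subsequent tracking update is expressed relative to the previous pose, which is why drift accumulates in the tracking phase but not in the registration phase.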

Figure 4.

Categorization of augmented reality tracking techniques.

3.1. Markerless Tracking Techniques

Markerless tracking techniques come in two types: sensor-based and vision-based.

3.1.1. Sensor-Based Tracking

Magnetic Tracking Technology: This technology includes a tracking source and two sensors, one for the head and another for the hand. The tracking source creates an electromagnetic field in which the sensors are placed. The computer then calculates the orientation and position of the sensors based on the signal attenuation of the field. This allows a full 360° range of motion, i.e., the user can look all the way around the 3D environment, and it also allows movement in all three degrees of freedom. The hand tracker has control buttons that let the user navigate through the environment, pick things up, and understand the size and shape of objects [ 45 ]. Figure 5 illustrates these tracking techniques to give the reader a better understanding.

Figure 5. Illustration of augmented reality tracking techniques.

Frikha et al. [ 46 ] introduce a new handler for the mutual occlusion problem, which occurs when real objects are in front of virtual objects in the scene. The authors use a 3D positioning approach and surgical instrument tracking in an AR environment. The proposed paradigm is based on monocular image processing. The experimental results suggested that this approach is capable of handling mutual occlusion automatically in real time.

One of the main issues with magnetic tracking is the limited positioning range [ 47 ]. Orientation and position can be determined by attaching the receiver to the viewer [ 48 ]. Receivers are small and lightweight, and magnetic trackers are indifferent to optical disturbances and occlusion; therefore, they have high update rates. However, the resolution of a magnetic tracker declines with the fourth power of the distance, and the strength of the magnetic field declines with the cube of the distance [ 49 ]. Therefore, magnetic trackers have a constrained working volume. Moreover, magnetic trackers are sensitive to surrounding magnetic fields and the type of magnetic material used, and are also susceptible to measurement jitter [ 50 ].
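The distance scaling above can be made concrete with a small sketch (illustrative only; the source strength and distances are hypothetical values, not taken from any cited tracker):

```python
# Relative field strength (~1/r^3) and resolution degradation (~r^4)
# of a magnetic tracker, following the scaling laws cited above.

def field_strength(r, b0=1.0):
    """Relative dipole field strength at distance r (source strength b0)."""
    return b0 / r ** 3

def resolution_factor(r):
    """Relative positional resolution degradation at distance r."""
    return r ** 4

# Doubling the distance costs 8x in signal and 16x in resolution:
for r in (0.5, 1.0, 2.0):
    print(f"r = {r} m: field x{field_strength(r):.3f}, "
          f"resolution error x{resolution_factor(r):.2f}")
```

This steep falloff is why the usable working volume of a magnetic tracker is typically confined to a metre or two around the source.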

Magnetic tracking technology is widely used in the range of AR systems, with applications ranging from maintenance [ 51 ] to medicine [ 52 ] and manufacturing [ 53 ].

Inertial Tracking: Magnetometers, accelerometers, and gyroscopes are examples of inertial measurement units (IMU) used in inertial tracking to evaluate the velocity and orientation of the tracked object. An inertial tracking system is used to find the three rotational degrees of freedom relative to gravity. Moreover, the time period of the trackers’ update and the inertial velocity can be determined by the change in the position of the tracker.

Advantages of Inertial Tracking: It does not require a line of sight and has no range limitations. It is not prone to optical, acoustic, magnetic, or RF interference sources. Furthermore, it provides motion measurement with high bandwidth. Moreover, it has negligible latency and can be processed as fast as one desires.

Disadvantages of Inertial Tracking: Inertial trackers are prone to drift of orientation and position over time, with the major impact on position measurement. The rationale behind this is that position must be derived from the velocity measurements. The usage of a filter can help in resolving this issue; however, the filter can decrease the responsiveness and the update rate of the tracker [ 54 ]. For the ultimate correction of this drift, the inertial sensor should be combined with another kind of sensor, for instance ultrasonic range measurement devices or optical trackers.
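The growth of position drift can be seen in a minimal dead-reckoning sketch (the 0.01 m/s² bias and 100 Hz rate are assumed, hypothetical values for a cheap IMU):

```python
def dead_reckon(accel_samples, dt):
    """Double-integrate acceleration to position (naive dead reckoning)."""
    v = p = 0.0
    for a in accel_samples:
        v += a * dt   # velocity from acceleration
        p += v * dt   # position from velocity
    return p

# A stationary IMU with a constant 0.01 m/s^2 bias, sampled at 100 Hz:
dt, seconds = 0.01, 10
drift = dead_reckon([0.01] * (100 * seconds), dt)
print(f"position drift after {seconds} s: {drift:.2f} m")  # ~0.5 m, though truly at rest
```

Because the bias passes through two integrations, the position error grows quadratically with time, which is why drift hits position far harder than orientation.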

3.1.2. Vision-Based Tracking

Vision-based tracking refers to tracking approaches that ascertain the camera pose for registration using data captured from optical sensors. The optical sensors can be divided into the following three categories:

  • visible light tracking;
  • 3D structure tracking;
  • infrared tracking.

In recent times, vision-based AR tracking has become highly popular due to the improved computational power of consumer devices and the ubiquity of mobile devices such as tablets and smartphones, making them the best platform for AR technologies. Chakrabarty et al. [ 55 ] contribute to the development of autonomous tracking by integrating CMT into IBVS, studying the impact on rigid and deformable targets in indoor settings, and finally integrating the system into the Gazebo simulator. Vision-based tracking is demonstrated by the use of an effective object tracking algorithm [ 56 ] known as the clustering of static-adaptive correspondences for deformable object tracking (CMT). Gupta et al. [ 57 ] detail a comparative analysis of the different types of vision-based tracking systems.

Moreover, Krishna et al. [ 58 ] explore the use of electroencephalogram (EEG) signals in user authentication, which is similar to facial recognition in mobile phones, and also evaluate it in combination with eye-tracking data. This research contributes a novel evaluation paradigm and a biometric authentication system for the integration of these systems. Furthermore, Dzsotjan et al. [ 59 ] delineate the usefulness of eye-tracking data collected during lectures in determining the learning gain of the user. The Walk the Graph app, designed for the Microsoft HoloLens 2, was used to generate the data. Binary classification was performed on the basis of the kinematic graphs users reported of their own movement.

Visible-light cameras are the most commonly used optical sensors, found in devices ranging from smartphones to laptops and even wearable devices. These cameras are particularly important because they can both capture video of the real environment and register virtual content to it, and can thereby be used in video see-through AR systems.

Chen et al. [ 60 ] resolve the shortcomings of the deep learning lighting model (DAM) by combining a method for transferring a regular video to a 3D photo-realistic avatar with a high-quality 3D face tracking algorithm. The evaluation of the proposed system suggests its effectiveness in real-world scenarios with variability in expression, pose, and illumination. Furthermore, Rambach et al. [ 61 ] explore a detailed pipeline for 6DoF object tracking using scanned 3D images of the objects. The scope of the research covers the initialization of frame-to-frame tracking, object registration, and the implementation of these aspects to make the experience more efficient. Moreover, it addresses the challenges of occlusion, illumination changes, and fast motion.

3.1.3. Three-Dimensional Structure Tracking

Three-dimensional structure information has become very affordable because of the development of commercial sensors capable of capturing it, beginning with the development of Microsoft Kinect [ 62 ]. Syahidi et al. [ 63 ] introduce a 3D AR-based learning system for pre-school children. Different types of sensors can be used for determining the three-dimensional points in the scene; the most commonly used are structured light [ 64 ] or time-of-flight [ 65 ] sensors. These technologies work on the principle of depth analysis, in which depth information about the real environment is extracted through mapping and tracking [ 66 ]. The Kinect system [ 67 ], developed by Microsoft, is one of the most widely used and well-developed approaches in augmented reality.

Rambach et al. [ 68 ] present the idea of augmented things: off-screen rendering of 3D objects, the realization of an application architecture, universal 3D object tracking based on high-quality scans of the objects, and a high degree of parallelization. Viyanon et al. [ 69 ] focus on the development of an AR app known as “AR Furniture” for providing customers the experience of visualizing designs and decorations. The customers fit the pieces of furniture in their rooms and were able to make decisions based on this experience. Turkan et al. [ 70 ] introduce new models for teaching structural analysis which have considerably improved the learning experience. The model integrates 3D visualization technology with mobile AR. Students can explore different loading conditions by switching loads, and feedback can be provided in real time by the AR interface.

3.1.4. Infrared Tracking

Tracking objects that emit or reflect light is one of the earliest vision-based techniques used in AR. The high brightness of such targets compared to their surrounding environment made this tracking very easy [ 71 , 72 ]. Self-light-emitting targets were also indifferent to drastic illumination effects, i.e., harsh shadows or poor ambient lighting. In addition, these targets could either be affixed to the object being tracked with the camera exterior to the object, known as “outside-looking-in” [ 73 ], or placed externally in the environment with the camera attached to the target, known as “inside-looking-out” [ 74 ]. The inside-looking-out configuration has greater resolution and higher accuracy of angular orientation than an outside-looking-in sensor. The inside-looking-out configuration is used in the development of several systems [ 20 , 75 , 76 , 77 ], typically with infrared LEDs mounted on the ceiling and a head-mounted display with a camera facing externally.

3.1.5. Model-Based Tracking

The three-dimensional tracking of real-world objects has long been a subject of research interest. It is not as popular as natural feature tracking or planar fiducials; however, a large amount of research has been done on it. In the past, the three-dimensional model of the tracked object was usually created by hand. In such systems, lines, cylinders, spheres, circles, and other primitives were combined to describe the structure of objects [ 78 ]. Wuest et al. [ 79 ] focus on the development of a scalable and performant pipeline for creating a tracking solution. The structural information of the scene was extracted using edge filters, and for the determination of the pose, edge information and the primitives were matched [ 80 ].

In addition, Gao et al. [ 81 ] explore a tracking method that identifies the different vertices of a convex polygon. This works well because most markers are square. The coordinates of four vertices are used to determine the transformation matrix of the camera. Experimental results suggested that the algorithm is robust enough to withstand fast motion and large ranges, making the tracking more accurate, stable, and real time.
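A minimal sketch of this four-vertex idea, assuming the marker corners and their image positions are already known, estimates the planar transformation (homography) with the standard direct linear transform; the function name and test points are illustrative, not taken from [ 81 ]:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (DLT).
    src, dst: sequences of four (x, y) corner correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the stacked constraint matrix.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

# Marker corners (unit square) seen shifted by (2, 3) in the image:
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(2, 3), (3, 3), (3, 4), (2, 4)]
H = homography_from_points(src, dst)
```

Given the camera intrinsics, such a homography can be further decomposed into the camera rotation and translation relative to the marker plane.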

The combination of edge-based tracking and natural feature tracking has the following advantages:

  • It provides additional robustness [ 82 ].
  • Enables spatial tracking and thereby is able to be operated in open environments [ 83 ].
  • For variable and complex environments, greater robustness was required; therefore, the concept of keyframes [ 84 ] was introduced in addition to the primitive model [ 85 ].

Figen et al. [ 86 ] present a series of studies conducted at the university level in which participants were asked to model the mass volumes of buildings. The first study required designers to work solo using two tools: the MTUIs of the AR apps and analog tools. The second study had the designers collaborate while using analog tools. The studies had two goals: observing the change in designer behavior while using AR apps, and examining the affordances of different interfaces.

Simultaneously developing and updating a map of the real environment has been a subject of interest in model-based tracking, with a number of developments. First, simultaneous localization and map building (SLAM) was primarily developed for robot navigation in unknown environments [ 87 ]. In augmented reality [ 88 , 89 ], this technique was used for tracking an unknown environment in a drift-free manner. Second, parallel tracking and mapping [ 88 ] was developed especially for AR technology. In this approach, the mapping of environmental components and the camera tracking were treated as separate functions, which improved tracking accuracy and overall performance. However, like SLAM, it did not have the capability to close large loops in constrained environments and areas ( Figure 6 ).

Figure 6. Hybrid tracking: inertial and SLAM combined, as used in the latest mobile-based AR tracking.

Oskiper et al. [ 90 ] propose a simultaneous localization and mapping (SLAM) framework for sensor fusion, indexing, and feature matching in AR apps, using a parallel mapping engine and an error-state extended Kalman filter (EKF) for these purposes. Zhang et al.’s [ 91 ] Jaguar is a mobile AR tracking application with low latency and flexible object tracking; the paper discusses Jaguar’s design, execution, and evaluation. Jaguar enables markerless tracking through its client built on top of ARCore from Google. ARCore also helps with context awareness while estimating the physical size of objects and recognizing their capabilities.
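The error-state EKF above is elaborate, but the predict/correct cycle it shares with simpler filters can be sketched in one dimension (all noise constants here are made-up illustrative values, not taken from [ 90 ]):

```python
def fuse_step(x, p, z, q=0.01, r=0.25):
    """One scalar Kalman predict/correct cycle: the inertial prediction
    inflates the uncertainty p by q; a visual fix z with noise r corrects it."""
    p = p + q                 # predict: drift grows the state uncertainty
    k = p / (p + r)           # Kalman gain: how much to trust the visual fix
    x = x + k * (z - x)       # correct the drifting estimate toward z
    p = (1 - k) * p           # shrink uncertainty after the correction
    return x, p

# A drifted estimate (x = 0) converges onto repeated visual fixes at 5.0:
x, p = 0.0, 1.0
for _ in range(50):
    x, p = fuse_step(x, p, z=5.0)
print(f"fused estimate: {x:.3f}")  # approaches 5.0
```

The same structure scales to the full 6DoF case, where the state holds pose, velocity, and sensor biases instead of a single scalar.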

3.1.6. Global Positioning System—GPS Tracking

This technology refers to outdoor positional tracking with reference to the Earth. The present accuracy of the GPS system is around 3 m; however, improvements are available with advancements in satellite technology and a few other developments. Real-time kinematic (RTK) positioning is one example. It works by using the carrier of a GPS signal, and its major benefit is the ability to improve the accuracy up to the centimeter level. Feiner’s touring machine [ 92 ] was the first AR system that utilized GPS in its tracking system, using an inclinometer/magnetometer and differential GPS positional tracking. The military, gaming [ 93 , 94 ], and the viewership of historical data [ 95 ] have applied GPS tracking for AR experiences. As GPS supports only positional tracking with low accuracy, it is mainly beneficial in hybrid tracking systems or in applications where precise pose registration is not important. The authors of [ 96 ] use a GPS-INS receiver to develop models of object motion with more precision. Ashutosh et al. [ 97 ] explore the hardware challenges of AR technology, including two main hardware components: battery performance and the global positioning system (GPS). Table 1 provides a succinct categorization of the prominent tracking technologies in augmented reality. Example studies are referred to while highlighting the advantages and challenges of each type of tracking technology. Moreover, possible areas of application are suggested.
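To register virtual content against a GPS anchor, an AR system typically converts latitude/longitude into local metric offsets. A minimal sketch (equirectangular approximation, adequate over the few hundred metres of a typical outdoor AR session; not taken from any cited system):

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius, metres

def gps_to_local_xy(lat, lon, lat0, lon0):
    """East/north offsets in metres of (lat, lon) from the anchor (lat0, lon0)."""
    east = math.radians(lon - lon0) * EARTH_RADIUS_M * math.cos(math.radians(lat0))
    north = math.radians(lat - lat0) * EARTH_RADIUS_M
    return east, north

# One degree of latitude is roughly 111 km:
east, north = gps_to_local_xy(1.0, 0.0, 0.0, 0.0)
print(f"east = {east:.1f} m, north = {north:.1f} m")
```

With plain GPS accurate to ~3 m, virtual content anchored this way visibly jitters; the centimeter-level accuracy of RTK is what makes tight outdoor registration feasible.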

3.1.7. Miscellaneous Tracking

Yang et al. [ 98 ] propose tracking and cover recognition methods to distinguish different forms of hatch covers having similar shapes. The experimental results suggest real-time performance and practicability, with tracking accuracy sufficient for an AR inspection environment. Kang et al. [ 99 ] propose a pupil tracker consisting of several features that make AR more robust: key-point alignment, eye-nose detection, and a near-infrared (NIR) LED that turns on and off based on the illumination level. The limitation of this detector is that it cannot be applied in low-light conditions.

Table 1. Summary of tracking techniques and their related attributes.

Moreover, Bach et al. [ 118 ] introduce an AR canvas for information visualization which is quite different from the traditional AR canvas. They identify the dimensions and essential aspects of developing visualization designs for the AR canvas while listing several limitations of the process. Zeng et al. [ 119 ] discuss the design and implementation of FunPianoAR for creating a better AR piano learning experience. However, a number of discrepancies occurred with this system, and initiating a hybrid system is a more viable option. Rewkowski et al. [ 120 ] introduce a prototype AR system to visualize laparoscopic training tasks. The system is capable of tracking small objects and supports surgical training using widely compatible and inexpensive borescopes.

3.1.8. Hybrid Tracking

Hybrid tracking systems were used to improve the following aspects of the tracking systems:

  • Improving the accuracy of the tracking system.
  • Coping with the weaknesses of the respective tracking methods.
  • Adding more degrees of freedom.

Gorovyi et al. [ 108 ] detail the basic principles that make up AR by proposing a hybrid visual tracking algorithm. Direct tracking techniques are incorporated with the optical flow technique to achieve precise and stable results. The results suggested that the two can be combined into a hybrid system, and confirmed its success on devices with limited hardware capabilities. Previously, magnetic tracking [ 109 ] or inertial trackers [ 110 ] were used in tracking applications alongside vision-based tracking. Isham et al. [ 111 ] use a game controller and hybrid tracking to identify and resolve the ultrasound image position in a 3D AR environment. This hybrid system was beneficial for the following reasons:

  • Low drift of vision-based tracking.
  • Low jitter of vision-based tracking.
  • They had a robust sensor with high update rates. These characteristics reduced invalid pose computations and ensured the responsiveness of the graphical updates [ 121 ].
  • They had more developed inertial and magnetic trackers, which were capable of extending the range of tracking and did not require a line of sight.

The above-mentioned benefits suggest that using a hybrid system is more beneficial than using inertial trackers alone.

In addition, Mao et al. [ 122 ] propose a new tracking system with a number of unique features. First, it accurately translates relative distance into absolute distance by locating reference points at new positions. Second, it uses a separate receiver and sender. Third, it resolves the discrepancy in sampling frequency between the sender and receiver. Finally, the frequency shift due to movement is carefully handled in this system. Moreover, the combination of an IMU sensor and Doppler shift with distributed frequency-modulated continuous waveform (FMCW) enables continuous tracking of a mobile device across multiple time intervals. The evaluation of the system suggested that it can be applied to existing hardware and is accurate to the millimeter level.

The GPS tracking system alone provides only positional information and has low accuracy, so GPS tracking systems are usually combined with vision-based tracking or inertial sensors; this combination yields a full 6DoF pose estimation [ 123 ]. Moreover, backup tracking systems have been developed as an alternative for when GPS fails [ 98 , 124 ]. Optical tracking systems [ 100 ] or ultrasonic rangefinders [ 101 ] can be coupled with inertial trackers to enhance efficiency; since the differential measurement approach causes drift, these hybrid systems help resolve it. Furthermore, the use of gravity as a reference makes inertial sensors static and bound; the introduction of a hybrid system lets them operate in a simulator, a vehicle, or any other moving platform [ 125 ]. The inclusion of accelerometers, cameras, gyroscopes [ 126 ], global positioning systems [ 127 ], and wireless networking [ 128 ] in mobile devices such as tablets and smartphones also creates opportunities for hybrid tracking. Furthermore, these devices are capable of determining accurate poses outdoors as well as indoors [ 129 ].

3.2. Marker-Based Tracking

Fiducial Tracking: Artificial landmarks added to the environment to aid tracking and registration are known as fiducials. The complexity of fiducial tracking varies significantly depending upon the technology and the application used. Pieces of paper or small colored LEDs, which could be detected using color matching, were typically used in early systems and added to the environment [ 130 ]. If the positions of the fiducials are well known and enough of them are detected in the scene, then the pose of the camera can be determined. Positioning one fiducial on the basis of a previously known position and introducing additional fiducials provides the benefit that workspaces can be dynamically extended [ 131 ]. A QR-code-based fiducial/marker has also been proposed by some researchers for marker-/tag-based tracking [ 115 ]. As work progressed on the concept and complexity of fiducials, additional features such as multi-rings were introduced for the detection of fiducials at much larger distances [ 116 ]. A minimum of four points of known position is needed for calculating the pose of the viewer [ 117 ]. To make sure that four points are visible, the use of these simpler fiducials demanded more care and effort in placing them in the environment. Examples of such fiducials are ARToolkit and its successors, whose registration techniques are mostly planar fiducials. In the upcoming section, AR display technologies are discussed to fulfill all the conditions of Azuma’s definition.
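For a planar fiducial, the four corner correspondences define a homography from which the camera pose can be recovered once the intrinsics are known. A hedged numpy sketch (assuming the marker lies in the z = 0 plane, and omitting the sign disambiguation and re-orthogonalization a production tracker such as ARToolkit would add):

```python
import numpy as np

def pose_from_marker_homography(H, K):
    """Recover rotation R and translation t of a planar marker from its
    homography H and the camera intrinsic matrix K (marker in z = 0 plane)."""
    M = np.linalg.inv(K) @ H
    s = 1.0 / np.linalg.norm(M[:, 0])     # scale fixed by |r1| = 1
    r1, r2, t = s * M[:, 0], s * M[:, 1], s * M[:, 2]
    r3 = np.cross(r1, r2)                 # complete the rotation basis
    return np.column_stack([r1, r2, r3]), t

# Synthetic check: a marker facing the camera head-on, 2 m away.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
H = K @ np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 2.0]])
R, t = pose_from_marker_homography(H, K)
```

This is the step that turns "four detected corners" into the viewer pose required for registration.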

3.3. Summary

This section provides comprehensive details on tracking technologies, which are broadly classified into markerless and marker-based approaches. Both types have many subtypes whose details, applications, pros, and cons are provided above. The different categories of tracking technologies are presented in Figure 4 , while the summary of tracking technologies is provided in Figure 7 . Among the different tracking technologies, hybrid tracking technologies are the most adaptive. This study combined SLAM and inertial tracking technologies as part of the framework presented in the paper.

Figure 7. Steps for combining real and virtual content.

4. Augmented Reality Display Technology

To combine the real and the virtual world in such a way that they look superimposed on each other, as in Azuma’s definition, some display technology is required.

4.1. Combination of Real and the Virtual Images

Methods or procedures required for the merging of the virtual content in the physical world include camera calibration, tracking, registration, and composition as depicted in Figure 7 .

4.2. Camera vs. Optical See Through Calibration

Calibration is a procedure or an optical model in which the eye-display geometry or parameters define the user’s view; in other words, it is a technique for matching the dimensions and parameters of the physical camera and the virtual camera.

In AR, calibration is used in two ways: camera calibration and optical calibration. The camera calibration technique is used in video see-through (VST) displays, whereas optical calibration is used in optical see-through (OST) displays. OST calibration can be further divided into three umbrellas of techniques. Initially, manual calibration techniques were used in OST; secondly, semi-automatic calibration techniques; and thirdly, we now have automatic calibration techniques. Manual calibration requires a human operator to perform the calibration tasks. Semi-automatic calibration techniques, such as simple SPAAM and display relative calibration (DRC), automatically collect some of the parameters that earlier had to be provided manually by the user. Automatic OST calibration was proposed by Itoh et al. in 2014 with the interaction-free display calibration technique (INDICA) [ 132 ]. In video see-through (VST), computer vision techniques using cameras are employed for the registration of real environments. However, in optical see-through (OST), VST calibration techniques cannot be used: the problem is more complex because the cameras are replaced by human eyes. Various calibration techniques were therefore developed for OST. The authors of [ 133 ] evaluate the registration accuracy of the automatic OST head-mounted display (HMD) calibration technique called recycled INDICA, presented by Itoh and Klinker. In addition, two more calibration techniques, the single-point active alignment method (SPAAM) and degraded SPAAM, were also evaluated. Multiple users were asked to perform two separate tasks so that the registration and calibration accuracy of all three techniques could be thoroughly studied. Results show that the registration method of the recycled INDICA technique is more accurate in the vertical direction and shows the distance of virtual objects accurately. However, in the horizontal direction, the distance of virtual objects seemed closer than intended [ 133 ]. Furthermore, the results show that recycled INDICA is more accurate than other common techniques, including SPAAM. Although different calibration techniques are used for OST and VST displays, as discussed in [ 133 ], they do not provide all the depth cues, which leads to interaction problems. Moreover, different HMDs have different tracking systems, and due to this they are all calibrated with an external independent measuring system. In this regard, Ballestin et al. propose a registration framework for developing AR environments in which all real objects, including users, and virtual objects are registered in a common frame. The authors also discuss the performance of both display types during interaction tasks. Different simple and complex tasks such as 3D blind reaching are performed using OST and VST HMDs to test their registration process and the interaction of users with both virtual and real environments, allowing the two technologies to be compared. The results show that these technologies have issues; however, they can be used to perform different tasks [ 134 ].
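SPAAM-style calibration reduces to estimating a 3x4 projection from the 2D-3D point pairs the user collects by aligning an on-screen crosshair with a tracked point. A hedged sketch of that linear (DLT) estimation step, with synthetic correspondences standing in for user alignments:

```python
import numpy as np

def estimate_projection(world_pts, screen_pts):
    """Estimate the 3x4 projection G with screen ~ G @ [X, Y, Z, 1]
    from >= 6 correspondences (direct linear transform)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, screen_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    G = vt[-1].reshape(3, 4)
    return G / G[2, 2]

# Synthetic ground truth: focal length 500, principal point (320, 240).
G_true = np.array([[500.0, 0, 320, 0], [0, 500.0, 240, 0], [0, 0, 1.0, 0]])
pts = [(0, 0, 2), (1, 0, 3), (0, 1, 4), (1, 1, 2.5),
       (-1, 1, 3.5), (1, -1, 4.5), (2, 1, 5), (-1, -1, 2.2)]
screen = [((500 * X + 320 * Z) / Z, (500 * Y + 240 * Z) / Z) for X, Y, Z in pts]
G = estimate_projection(pts, screen)
```

In a real OST calibration the "screen" points come from the user's alignments rather than a known projection, which is why noisy alignments degrade the recovered matrix.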

Non-Geometric Calibration Method

Furthermore, these geometric calibrations lead to perceptual errors while converting from 3D to 2D [ 135 ]. To counter this problem, parallax-free video see-through HMDs were proposed; however, they are very difficult to create. In this regard, Cattari et al. in 2019 propose a non-stereoscopic video see-through HMD for close-up views. It mitigates the perceptual errors associated with geometric calibration. Moreover, the authors also identify the problems of non-stereoscopic VST HMDs. The aim is to propose a system that provides a view consistent with the real world [ 136 , 137 ]. Moreover, State et al. [ 138 ] focus on a VST HMD system that generates zero eye-camera offset, while Bottechia et al. [ 139 ] present an orthoscopic monocular VST HMD prototype.

4.3. Tracking Technologies

Some technology is required to track the position and orientation of the object of interest, which could either be a physical object or one captured by a camera, with reference to the coordinate frame (3D or 2D) of the tracking system. Several technologies, ranging from computer vision techniques to 6DoF sensors, are used for tracking physical scenes.

4.4. Registration

Registration is defined as the process in which the coordinate frame used for manifesting the virtual content is aligned with the coordinate frame of the real-world scene. This helps in the accurate alignment of the virtual content and the physical scene.

4.5. Composition

The accuracy of two important steps, i.e., the correct calibration of the virtual camera and the correct registration of the virtual content relative to the physical world, ensures the right correspondence between the physical environment and the virtual scene generated on the basis of tracking updates. This process then leads to the composition of the virtual scene’s image, which can be done in two ways: optically (physically) or digitally. The choice of physical or digital composition depends upon the configuration and dimensions of the system used in the augmented reality system.
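Digital composition at its simplest is per-pixel alpha blending of the rendered virtual frame over the camera frame. A minimal single-pixel sketch (a real compositor does this for the whole frame, typically on the GPU):

```python
def composite_pixel(real, virtual, alpha):
    """Blend a virtual RGB pixel over a real camera pixel.
    alpha = 1.0 shows only virtual content, 0.0 only the camera image."""
    return tuple(round(alpha * v + (1 - alpha) * r)
                 for v, r in zip(virtual, real))

# A half-transparent virtual pixel over the camera background:
print(composite_pixel(real=(100, 100, 100), virtual=(200, 200, 200), alpha=0.5))
# prints (150, 150, 150)
```

Optical composition, by contrast, performs this blend physically in the combiner, which is why virtual content on optical see-through displays cannot fully occlude bright real backgrounds.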

4.6. Types of Augmented Reality Displays

The way virtual content is combined with the real environment divides AR displays into four major types, as depicted in Figure 8 . All have the same job: showing the merged image of real and virtual content to the user’s eye. The authors have categorized the latest optical display technologies following the advancements in holographic optical elements (HOEs). Other categories of AR display are also used, such as video-based, eye-multiplexed, and projection onto a physical surface.

Figure 8. Types of augmented reality display technologies.

4.7. Optical See-Through AR Display

These displays use an optical system to merge the real scene with virtual scene images. Examples are the head-up display (HUD) systems of advanced cars and airplane cockpits. These systems consist of beam splitters, which can take two forms: combiner prisms or half mirrors. Most beam splitters reflect the image from the video display; this reflected image is then integrated with the real-world view that can be seen through the splitter. With a half mirror as the beam splitter, the arrangement is somewhat different: the real-world view is reflected in the mirror rather than the image of the video display, while the video display can also be viewed through the mirror. The transparent projection system is a semi-transparent optical technology used in optical display systems. Its semi-transparent property allows the viewer to see the scene behind the screen, and it uses diffused light to show the displayed image. Examples of semi-transparent display systems are transparent projection film, transparent LCDs, etc. Optical combiners are used for combining virtual and real scene images. Optical see-through displays have two main sub-categories: free-space combiners and waveguide combiners [ 140 ]. Additionally, advances in technology have enabled technicians to make self-transparent displays, a feature that helps in the miniaturization and simplification of the size and structure of optical see-through displays.

4.7.1. Free-Space Combiners

Papers related to free-space combiners are discussed here. Pulli et al. [ 11 ] introduce a second-generation immersive optical see-through AR system known as Meta 2. It is based on an optical engine that uses a free-form visor to create a more immersive experience. Another traditional geometric design combines ultra-fast, high-resolution piezo linear actuators with an Alvarez lens to make a new varifocal optical see-through HMD; it uses a beamsplitter as an optical combiner to merge the light paths of the real and virtual worlds [ 12 ]. Another type of free-space combiner is the Maxwellian type [ 112 , 113 , 114 , 141 ]. In [ 142 ], the authors employ a random structure as a spatial light modulator to develop a light-field near-eye display based on random pinholes. The latest work in [ 143 , 144 ] introduces an InI-based light-field display using multi-focal micro-lenses to extend the depth of field. To enlarge the eyebox there is another technique called pupil duplication/steering [ 145 , 146 , 147 , 148 , 149 , 150 ]. In this regard, refs. [ 102 , 151 ] present an eyebox-expansion method for the holographic near-eye display and a pupil-shifting holographic optical element (PSHOE) for its implementation; the design architecture and the incorporation of the holographic optical element within the holographic display system are discussed. Another recent technique, similar to the Maxwellian view, is the pin-light system, which extends the Maxwellian view with a larger depth of field [ 103 , 104 ].

4.7.2. Wave-Guide Combiner

The waveguide combiner traps light by total internal reflection (TIR), as opposed to free space, which lets light propagate without restriction [ 104 , 105 , 106 ]. Waveguide combiners come in two types: diffractive waveguides and achromatic waveguides [ 107 , 152 , 153 , 154 , 155 ].
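As a numerical aside, the TIR condition that a waveguide combiner relies on follows directly from Snell's law. A minimal sketch, assuming a glass substrate with refractive index 1.5 surrounded by air (illustrative values, not tied to any particular device in the cited works):

```python
import math

def critical_angle_deg(n_waveguide, n_outside=1.0):
    """Smallest internal incidence angle (degrees) for total internal
    reflection at a waveguide/outside interface, from Snell's law:
    sin(theta_c) = n_outside / n_waveguide."""
    if n_outside >= n_waveguide:
        raise ValueError("TIR requires n_waveguide > n_outside")
    return math.degrees(math.asin(n_outside / n_waveguide))

# Typical glass (n ~ 1.5) against air: rays steeper than ~41.8 degrees
# from the surface normal stay trapped and propagate along the guide.
theta_c = critical_angle_deg(1.5)
print(f"{theta_c:.1f}")  # 41.8
```

Rays injected below this angle leak out of the substrate, which is why the in-coupling element of a diffractive waveguide must redirect display light to sufficiently steep angles.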

4.8. Video-Based AR Displays

These displays rely on digital processing as their working principle [ 156 ]. In other words, in video display systems the merging of the physical-world video and the virtual images is carried out by digital processing. Such systems depend on a video camera that captures the real world in digital form. The rationale is that the composition of the physical world's video with the virtual content can be performed digitally through digital image processing techniques [ 157 ]. Typically, the user looks toward the video display, with the camera attached to the back of that display so that it faces the physical-world scene. These are known as "video see-through displays" because in them the real world is reconstructed through digitization on the video display. Sometimes the camera is arranged so that it shows an upside-down image of an object, creates the illusion of a virtual mirror, or situates the image at a distant place.
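The digital compositing step described above reduces, per pixel, to standard alpha blending of the rendered virtual frame over the camera frame. A toy sketch with hypothetical pixel values (real systems do this on the GPU for whole frames):

```python
def composite(real_px, virtual_px, alpha):
    """Alpha-blend one virtual RGB pixel over one camera RGB pixel:
    out = alpha * virtual + (1 - alpha) * real, per channel."""
    return tuple(round(alpha * v + (1 - alpha) * r)
                 for r, v in zip(real_px, virtual_px))

# A fully opaque virtual pixel replaces the camera pixel...
print(composite((10, 20, 30), (200, 100, 0), 1.0))  # (200, 100, 0)
# ...while a half-transparent one blends the two views.
print(composite((10, 20, 30), (200, 100, 0), 0.5))  # (105, 60, 15)
```

This per-pixel control is exactly what optical see-through combiners lack: there, the real-world light path cannot be attenuated pixel by pixel.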

4.9. Projection-Based AR Display

Real models [ 158 ] and walls [ 159 ] are examples of projection-based AR displays. All the other kinds of displays use a display image plane to combine the real and virtual images; this display, however, directly overlays the virtual scene image onto the physical object. They work in the following manner:

  • First, they track the user’s viewpoint.
  • Secondly, they track the physical object.
  • Then, they impart the interactive augmentation [ 160 ].

Mostly, these displays have a projector attached to a wall or ceiling. This arrangement has an advantage as well as a disadvantage. The advantage is that it does not require the user to wear anything. The disadvantage is that it is static, restricting the display to a single projection location. To resolve this problem and make projectors mobile, small projectors have been built that can easily be carried from one place to another [ 161 ]. More recently, miniaturized projectors have been developed that can be held in the hand [ 162 ] or worn on the chest [ 163 ] or head [ 164 ].
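For a planar target surface, the track-then-augment loop above comes down to warping virtual image points through a projector-to-surface homography. A minimal sketch; the 3x3 matrix here is a hypothetical stand-in for one that would be estimated from tracked surface corners:

```python
def apply_homography(H, x, y):
    """Map an image point (x, y) through a 3x3 homography H
    (row-major nested lists) into projector coordinates."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (u / w, v / w)

# Hypothetical homography: scale-by-2 plus a (100, 50) pixel shift,
# standing in for one calibrated against the tracked physical object.
H = [[2, 0, 100],
     [0, 2, 50],
     [0, 0, 1]]
print(apply_homography(H, 10, 20))  # (120.0, 90.0)
```

In a real projection-based system this mapping is re-estimated whenever the user's viewpoint or the physical object moves, which is why steps one and two of the loop are tracking steps.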

4.10. Eye-Multiplexed Augmented Reality Display

In eye-multiplexed AR displays, users combine the views of the virtual and real scenes mentally [ 72 ]. In other words, these displays do not composite the images digitally, so they require less computational power [ 72 ]. The process is as follows. First, the virtual image is registered to the physical environment. Second, because of this registration, the user sees the rendered image aligned with the physical scene. Since the display does not composite the rendered and physical images, the user has to combine the virtual and real scene images mentally. For two reasons, the display should be kept near the viewer's eye: first, so the display can appear as an inset into the real world, and second, so the user has to put less effort into mentally compositing the image.

The division of the displays on the basis of the position of the display between the real and virtual scenes is referred to as the “eye to world spectrum”.

4.11. Head-Attached Display

Head-attached displays take the form of glasses, helmets, or goggles. They vary in size, but with the advancement of technology they are becoming lighter to wear. They work by displaying the virtual image right in front of the user's eye. As a result, no other physical object can come between the virtual scene and the viewer's eye, so a third physical object cannot occlude them. In this regard, Koulieris et al. [ 165 ] summarized the work on immersive near-eye tracking technologies and displays. The results point to several gaps in display technology: user and environmental tracking, and the vergence-accommodation conflict. They suggest that advances in optics and focus-adjustable lenses will drive future headset innovations and enable a much more comfortable HMD experience. In addition, Minoufekr et al. [ 166 ] illustrate and examine the verification of CNC machining using Microsoft HoloLens, and explore the performance of AR with machine simulation. Remote computers can easily pick up the machine models and load them onto the HoloLens as holograms. A simulation framework is employed that allows the machining process to be observed prior to the actual process. Further, Franz et al. [ 88 ] introduce two sharing techniques, over-the-shoulder AR and semantic linking, for investigating scenarios in which not every user wears an HWD. Semantic linking portrays the virtual content's contextual information on a large display. The results of the experiment suggest that semantic linking and over-the-shoulder AR improved communication between participants compared with the baseline condition. Condino et al. [ 167 ] explore two main aspects: first, complex craniotomies, to gauge the reliability of AR headsets [ 168 ]; second, the efficacy of a patient-specific template-based methodology for non-invasive, fast, and completely automatic planning-to-patient registration.

4.12. Head-Mounted Displays

The most commonly used displays in AR research are head-mounted displays (HMDs), also known as face-mounted displays or near-eye displays. The user puts them on, and the display is presented right in front of their eyes, most commonly in the form of goggles. HMDs typically use optical or video see-through configurations. Recently, head-mounted projectors have also been explored in an effort to make them small enough to wear. Smart glasses such as Recon Jet and Google Glass are still under investigation for use as head-mounted displays. Barz et al. [ 169 ] introduce a real-time AR system that augments information obtained from recently attended objects; it is implemented using a state-of-the-art head-mounted display, the Microsoft HoloLens [ 170 ]. This technology can be very helpful in the fields of education, medicine, and healthcare. Fedosov et al. [ 171 ] introduce a sharing system and conducted an outdoor field study with 12 snowboarders and skiers; the system offers a new technique to review and share personal content. Reuter et al. [ 172 ] introduce a coordination concept, RescueGlass, for German Red Cross rescue dog units. It consists of a corresponding smartphone app and a hands-free HMD (head-mounted display) [ 173 ] and is evaluated for the field of emergency response and management. An initial design for collaborative professional mobile tasks is presented using smart glasses; however, the evaluation revealed a number of technical limitations that could be addressed in future investigations. Tobias et al. [ 174 ] explore aspects such as ambiguity, depth cues, performed tasks, user interface, and perception for 2D and 3D visualization with the help of examples. 
They also categorize head-mounted displays, introduce new concepts for collaboration tasks, and explain concepts of big data visualization. The results of the study suggest that collaboration and workspace decisions could be improved with the introduction of an AR workspace prototype. In addition, these displays have lenses that sit between the virtual view and the user's eye, much like microscopes and telescopes, so experiments are underway to develop a more direct way of viewing images, such as the virtual retinal display developed in 1995 [ 175 ]. Andersson et al. [ 176 ] show that training, maintenance, process monitoring, and programming can be improved by integrating AR into human-robot interaction scenarios.

4.13. Body-Attached and Handheld Displays

Previously, experimentation with handheld display devices was done by tethering small LCDs to computers [ 177 , 178 ]. However, technological advances have improved handheld devices in many ways; most importantly, they have become powerful enough to run AR visuals. Many of them are now used as AR displays, such as personal digital assistants [ 179 ], cell phones [ 180 ], tablet computers [ 181 ], and ultra-mobile PCs [ 182 ].

4.13.1. Smartphones and Computer tablets

In today's world, computer tablets and smartphones are powerful enough to run AR applications thanks to their various sensors, cameras, and powerful graphics processors. For instance, Google's Project Tango and ARCore include depth imaging sensors to carry out AR experiences. Chan et al. [ 183 ] discuss the challenges faced while applying and investigating methodologies to enhance direct touch interaction on intangible displays. Jang et al. [ 184 ] explore e-leisure in light of the growing use of mobile AR in outdoor environments. The paper uses three methods, namely markerless, marker-based, and sensorless, to investigate the tracking of the human body; the results suggest that markerless tracking cannot support e-leisure on mobile AR. With advances in electronics, OLED panels and transparent LCDs have been developed, and it is expected that building handheld optical see-through devices will become feasible in the future. Moreover, Fang et al. [ 185 ] focus on two main aspects of mobile AR: first, combining an inertial sensor, sensor-fusion-based 6DoF motion tracking, and a monocular camera to realize mobile AR in real time; second, an adaptive filter design to balance latency and jitter. Furthermore, Irshad et al. [ 186 ] introduce an evaluation method to assess mobile AR apps. Additionally, Loizeau et al. [ 187 ] explore a way of implementing AR for maintenance workers in industrial settings.

4.13.2. Micro Projectors

Micro projectors are an example of a mobile AR display. Researchers are investigating devices that can be worn on the chest [ 188 ], shoulder [ 189 ], or wrist [ 190 ]. Mostly, however, they are handheld and look much like handheld flashlights [ 191 ].

4.13.3. Spatial Displays

Spatial displays are used to provide a larger display. Hence, they are used in locations where many users can benefit from them, i.e., as public displays. Moreover, these displays are static: they are fixed at certain positions and cannot be moved.

Common examples of spatial displays include those that create optical see-through displays through the use of optical beamers: half-mirror workbenches [ 192 , 193 , 194 , 195 ] and virtual showcases. Half mirrors are commonly used for merging haptic interfaces and enable closer virtual interaction. Virtual showcases can exhibit virtual images on solid or physical objects, as described in [ 196 , 197 , 198 , 199 , 200 ]. Moreover, these can be combined with other types of technology to enable further experiences. The use of volumetric 3D displays [ 201 ], autostereoscopic displays [ 202 ], and other three-dimensional displays could be researched to investigate further interesting findings.

4.13.4. Sensory Displays

In addition to visual displays, some displays work with other types of sensory information, such as haptic or audio. Audio augmentation is easier than video augmentation because real-world and virtual sounds mix naturally with each other. However, the most challenging part is making the user perceive the virtual sound as spatial. Multi-channel speaker systems and stereo headphones with head-related transfer functions (HRTFs) are being researched to cope with this challenge [ 203 ]. Digital sound projectors exploit the reverberation and interference of sound by using an array of speakers [ 204 ]. Mic-through and hear-through systems, developed by Lindeman et al. [ 205 , 206 ], work effectively and are analogous to video and optical see-through displays; the feasibility of this system was tested using a bone-conduction headset. Other sensory experiences are also being researched, for example, augmentation of the gustatory and olfactory senses: olfactory and visual augmentation of a cookie-eating scene was developed by Narumi [ 207 ]. Table 2 lists the primary types of augmented reality display technologies and discusses their advantages and disadvantages.
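One of the classical cues an HRTF encodes is the interaural time difference (ITD): the extra time sound takes to reach the far ear. A small sketch using the Woodworth spherical-head approximation, assuming an illustrative head radius of 8.75 cm (not a value taken from the cited works):

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Interaural time difference (seconds) for a far-field source at
    the given azimuth, via the Woodworth spherical-head model:
    ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# A source straight ahead produces no delay between the ears;
# one at 90 degrees yields roughly 0.66 ms, near the human maximum.
print(f"{itd_woodworth(0) * 1e6:.0f} us")   # 0 us
print(f"{itd_woodworth(90) * 1e6:.0f} us")  # 656 us
```

Delaying one headphone channel by this amount (together with a level difference) is the simplest way to make a virtual sound appear to come from a given direction, which is the effect the HRTF research above aims to reproduce faithfully.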

A Summary of Augmented Reality Display Technologies.

4.14. Summary

This section presented a comprehensive survey of AR display technologies. These displays focus not only on combining the virtual and real-world scenes of the visual experience; researchers are also examining ways of engaging other senses, such as smell and taste. Previously, head-mounted displays were the most common in practice; now handheld devices and tablet- or mobile-based experiences are widely used. This may change again depending on future research and lower costs. The role of display technologies was elaborated first; thereafter, the process of combining real and augmented content and visualizing it for users was elaborated. The section detailed where optical see-through and video-based see-through displays are used, along with details of the devices. Video see-through (VST) is used in head-mounted displays, with computer vision techniques and cameras used for registration of the real environment, while in optical see-through (OST) displays, VST calibration techniques cannot be used due to their complexity, and cameras are replaced by human eyes. Optical see-through is currently the more popular approach. Different calibration approaches are presented and analyzed; the results show that Recycled INDICA is more accurate than the other common techniques presented in the paper. This section also presented video-based AR displays. Figure 8 presents a classified representation of different display technologies pertaining to video-based, head-mounted, and sensory-based approaches. The functions and applications of the various display technologies are provided in Table 2. Each of the display technologies presented has its applicability in various realms, the details of which are summarized in the same Table 2.

5. Walking and Distance Estimation in AR

The effectiveness of AR technologies depends on users' perception of distance from both real and virtual objects [ 214 , 215 ]. Mikko et al. performed experiments to judge depth using stereoscopic depth perception [ 216 ]. The perception can change depending on whether the objects are on or off the ground. In this regard, Carlos et al. proposed a comparison between the perceived distance of objects on the ground and off the ground. In the experiment, participants judged the distance to cubes both on and off the ground. The results showed a difference between the two perceptions; however, this also depends on whether the vision is monocular or binocular [ 217 ]. Plenty of research has been done on outdoor and indoor navigation with AR [ 214 ]. In this regard, Umair et al. present an indoor navigation system in which Google Glass is used as a wearable head-mounted display. A pre-scanned 3D map is used to track an indoor environment. The navigation system was tested on both the HMD and handheld devices such as smartphones; the results show that the HMD was more accurate than the handheld devices, though the system still needs improvement [ 218 ].
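The binocular distance cue underlying these studies is disparity, and the standard rectified-stereo triangulation formula connects it to metric depth. A minimal sketch with hypothetical camera parameters (the focal length and baseline below are illustrative, not from the cited experiments):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth (meters) of a point seen by a rectified stereo pair:
    Z = f * B / d, where f is the focal length in pixels, B the
    baseline, and d the horizontal disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 6.3 cm interocular baseline.
# A 20 px disparity places the point about 2.2 m away; as disparity
# shrinks, the estimated distance (and its uncertainty) grows.
print(f"{depth_from_disparity(700, 0.063, 20):.3f}")  # 2.205
```

The inverse relationship also explains why depth judgments degrade with distance: a fixed one-pixel disparity error corresponds to an ever larger depth error as objects move away.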

6. AR Development Tool

In addition to tracking and display devices, software tools are required for creating an AR experience. This section explores those tools and software libraries, covering both commercially available tools and research-oriented ones. Different software applications require different AR development tools. A complete set of low-level software libraries, plug-ins, platforms, and standalone tools is presented in Figure 9 to summarize them for the reader.


Stack of development libraries, plug-ins, platforms, and standalone authoring tools for augmented reality development.

In some tools, computer vision-based tracking (see Section 3.1.2 ) is preferred for creating an indoor experience, while others utilize sensors for creating an outdoor experience. The use of each tool depends upon the type of platform (web or mobile) for which it is designed. In the following, the available AR tools are discussed, including both novel tools and those that are widely known. Broadly, the following tools will be discussed:

  • Low-level software development tools: require strong technological and programming skills.
  • Rapid prototyping tools: provide a quick experience.
  • Plug-ins that run on existing applications.
  • Standalone tools specifically designed for non-programmers.
  • The next generation of AR development tools.

6.1. Low-Level Software Libraries and Frameworks

Low-level software libraries and frameworks make the core tracking and display functions accessible for creating an AR experience. One of the most commonly used AR software libraries, as discussed in the previous section, is ARToolKit, developed by Billinghurst and Kato, which exists in two versions [ 219 ]. It works on the principle of a fiducial-marker-based registration system [ 220 ]. Certain tracking-related advances in ARToolKit are discussed in [ 213 , 221 , 222 , 223 , 224 ]. The first version is open source and provides marker-based tracking, while the second is commercial and provides natural-feature tracking. Written in C, it runs on Linux, Windows, and macOS desktops. It does not require complex graphics or built-in support to accomplish its major function of tracking, and it can operate simply using low-level OpenGL-based rendering. ARToolKit requires additional libraries, such as osgART and OpenSceneGraph, to provide a complete AR experience to AR applications. OpenSceneGraph is an open-source scene-graph library that uses OpenGL for graphic rendering. The osgART library links OpenSceneGraph and ARToolKit and offers advanced rendering techniques that help in developing interactive AR applications. osgART has a modular structure and can work with other tracking libraries, such as PTAM and BazAR, if ARToolKit is not appropriate. BazAR is a workable tracking and geometric calibration library, and PTAM is a SLAM-based tracking library with research and commercial licenses. All these libraries are available and usable for creating a working AR application.
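The per-frame pipeline that these tracking libraries implement (detect markers, estimate pose, place content) can be sketched schematically. The Python below is an illustrative stand-in, not the real ARToolKit C API: the detector is stubbed, and content is anchored by composing 4x4 transforms, which is the essence of marker-based registration:

```python
# Schematic of the per-frame marker-tracking loop implemented by
# libraries such as ARToolKit. All functions are illustrative stubs.

def detect_markers(frame):
    """Stub detector: return (marker_id, 4x4 marker-to-camera pose)
    pairs. A real library would analyze the camera frame here."""
    identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    return [(0, identity)]

def mat4_mul(a, b):
    """4x4 matrix product, row-major nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def frame_overlays(frame, content):
    """One tracking iteration: place each piece of virtual content
    relative to its detected marker.
    model-to-camera = marker-to-camera * model-to-marker."""
    overlays = []
    for marker_id, pose in detect_markers(frame):
        if marker_id in content:
            overlays.append(mat4_mul(pose, content[marker_id]))
    return overlays

# Content anchored 0.1 units above marker 0 (translation in column 3).
offset = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0.1], [0, 0, 0, 1]]
print(frame_overlays(None, {0: offset})[0][2][3])  # 0.1
```

The renderer (OpenGL in ARToolKit's case, or a scene graph via osgART) then draws each overlay with the resulting model-to-camera transform every frame.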
Goblin XNA [ 208 ] is another platform, with components for physics-based interaction, video capture, a head-mounted AR display on which output is shown, and a three-dimensional user interface. With Goblin XNA, existing XNA games can be easily modified [ 209 ]; it is available as a research and educational platform. Studierstube [ 210 ] is another AR system through which a complete AR application can be developed. It supports tracking hardware, input devices, different types of displays, AR HMDs, and desktops. Studierstube was specially developed to support collaborative applications [ 211 , 212 ]. It is a research-oriented library and is not available as commercial, easy-to-use software. Another commercially available SDK was the Metaio SDK [ 225 ], which offered a variety of AR tracking technologies, including image tracking, marker tracking, face tracking, external infrared tracking, and three-dimensional object tracking. However, in May 2015 Metaio was acquired by Apple, and its products and subscriptions are no longer available for purchase. Some of these libraries, such as Studierstube and ARToolKit, were not initially developed for PDAs but have been re-developed for them [ 226 ]. This added a few supporting libraries: OpenTracker, PocketKnife for hardware abstraction, Klimt for mobile rendering, plus communication (ACE) and scene-graph libraries. Together, these libraries enable a complete mobile-based collaborative AR experience [ 227 , 228 ]. Similarly, ARToolKit incorporated the OpenSceneGraph library to provide a mobile-based AR experience; it works on Android and iOS with a native development kit, including some Java wrapper classes. Qualcomm's low-level Vuforia library also provides an AR experience for mobile devices.
Both ARToolKit and Vuforia can be installed as plug-ins in Unity, which provides easy application development for various platforms. There are a number of sensor-based, low-level vision and location-based libraries, such as the Metaio SDK and Droid, which were developed for outdoor AR experiences. In addition to these low-level libraries, the HIT Lab NZ Outdoor AR library provides a high-level abstraction for outdoor AR experiences [ 229 ]. Furthermore, there is a well-known mobile location-based AR tool called Hoppala-Augmentation; the geotags created with this tool can be browsed by any AR browser, including Layar, Junaio, and Wikitude [ 230 ].

6.2. ARTag

ARTag is designed to resolve the limitations of ARToolKit. The system was developed to address a number of issues:

  • Resolving inaccurate pattern matching by preventing false positive matches.
  • Improving performance under uneven lighting conditions.
  • Improving invariance to occlusion.

However, ARTag is no longer under active development or supported by the NRC Lab, and a commercial license is not available.

6.3. Wikitude Studio

Wikitude Studio is a web-based authoring tool for creating mobile AR applications. It allows the use of computer vision-based technology for registration of the real world. Several types of media, such as animations and 3D models, can be used to create an AR scene. One important feature of Wikitude is that the developed mobile AR content can be uploaded not only to the Wikitude AR browser app but also to a custom mobile app [ 231 ]. Wikitude's commercial plug-in is also available in Unity to enhance the AR experience for developers.

6.4. Standalone AR Tools

Standalone AR tools are mainly designed to enable non-programmers to create an AR experience. A person with basic computer knowledge can build and use them, because most AR authoring tools are built on a graphical user interface. They are called standalone because they do not require any additional software to operate. The most common functions of standalone tools are animation, adding interactive behaviors, and scene construction. The earliest examples of standalone tools are AMIRE [ 232 ] and CATOMIR [ 233 ]; however, neither is supported or maintained by its development team any longer.

BuildAR is a standalone AR authoring tool with the advantage of quick AR experience development. It allows the user to add video, 3D models, sound, text, and images, and provides computer vision-based tracking for both arbitrary images and square markers. It uses a proprietary file format for saving user-created content, and the freely downloadable BuildAR viewer software can display these files. However, BuildAR no longer has support available, and the executable is not available on its website.

Limitation: it does not support adding new interactive features. However, Choi et al. [ 234 ] have provided a solution to this constraint by adding a desktop authoring tool that helps in adding new interactive experiences.

6.5. Rapid Prototyping/Development Tools

To cope with the limitations of low-level libraries, faster AR application development tools are required. The major idea behind rapid prototyping is to quickly show the user a prototype before undertaking the hard exercise of developing the full application. In the following paragraphs, a number of tools for rapid prototyping are explained. For the creation of multimedia content, Adobe Flash was one of the most famous tools, targeting desktop and web platforms; web, desktop, and mobile experiences could all be prototyped with it. Flash developers can use FLARManager, FLARToolKit, or other plug-ins for developing AR experiences; porting ARToolKit to Flash on the web creates the AR experience. The process is so fast that by writing just a few lines, the developer can:

  • Activate the camera.
  • View AR markers through the camera.
  • Load and overlay virtual content on the tracked image.

FLARToolKit is a convenient platform for AR prototyping because it is very easy to operate: anyone with a camera and a Flash-enabled web browser can develop an AR experience. Alternatives to Flash: according to Adobe's website, Flash Player is no longer supported after 31 December 2020, and Flash content has been blocked from running in Flash Player since 12 January 2021; Adobe strongly recommends that all users immediately uninstall Flash Player to help protect their systems. However, some AR plug-ins can be used as alternatives to Flash-based AR applications. For instance, Microsoft Silverlight has SLARToolKit. HTML5 has also recently been used by researchers for creating web-based AR experiences; its major benefit is that no third-party plug-in is required. For instance, AR natural-feature tracking has been implemented on WebGL, HTML5, and JavaScript by Oberhofer, viewable in mobile and desktop web browsers. Additionally, normal HTML with a few web component technologies has been used by Ahn [ 235 ] to develop a complete mobile AR framework.

6.6. Plug-ins to Existing Developer Tools

Creating AR experiences with software libraries requires substantial programming skill, so plug-ins can be used as an alternative. Plug-ins are software components that can be added to existing software packages, bringing AR functionality to existing two-dimensional or three-dimensional content authoring tools. If the user already knows how to use an authoring tool that supports plug-ins, then AR plug-ins for non-AR authoring tools are useful. These tools aim to:

  • Provide AR tracking and visualization functions for the existing authoring tools.
  • Rely on the content authoring functions supplied by the main authoring tool.

Certain tools are available as plug-ins and standalone packages through which AR applications can be built comparatively simply. These are commercial, and some of them are freely available. As discussed earlier, Vuforia can be installed as a plug-in in Unity [ 236 ] and also has a free version, though full tool support requires payment. Similarly, ARToolKit is available both standalone and as a Unity plug-in, free for various platforms such as Android, iOS, Linux, and Windows. Moreover, ARCore and ARKit are available for Android and iOS, respectively, and can work with the Unity and Unreal authoring tools as plug-ins; ARCore is free for developers. MAXST and Wikitude can also work in integration with Unity, though they have a licensing price for the commercial version of the software; MAXST has a free version as well. All these tools, the abovementioned libraries, and standalone tools are depicted in Figure 9 . A number of plug-ins were created for Cinema 4D, Maya, Trimble SketchUp, 3ds Max, and many other packages, acting as authoring tools for three-dimensional content. While 3D animation and modeling tools are not capable of providing interactive features, they are very productive in creating three-dimensional scenes. SketchUp can utilize an AR plug-in that lets content creators build a model, which is then viewable in an AR scene through a free AR media player. Interactive three-dimensional graphics authoring tools are also available for creating highly interactive AR experiences, for instance, Wizard [ 237 ], Quest3D [ 238 ], and Unity [ 236 ]. Each of these authoring tools has its own specific field of operation; however, Unity can be utilized to create a variety of experiences. The following examples justify the use of Unity over the other available solutions:

  • The AR plug-in of the Vuforia tracking library can be used with Unity 3D. This integration helps Vuforia create AR applications for the Android and iOS platforms.
  • Similarly, ARToolKit for Unity also provides marker-based experiences, with both image-based and marker-based AR visualization and tracking.

In such integrations, highly interactive experiences are created through the normal Unity3D scripting interface and visual programming. Limitations of AR plug-ins: the following limitations accrue with AR plug-ins:

  • Proprietary software may be required for the content produced by the authoring tool, and the design provided by the authoring tool may restrict the user's interactive and interface designs.
  • Moreover, authoring tools can also restrict the hardware or software configurations within certain limits.

Moreover, Nebeling et al. [ 239 ] reviewed the issues with AR/VR authoring tools. Their survey identified three key issues. To make up for these limitations, new tools have been introduced to support gesture-based interaction and rapid prototyping of AR/VR content, without requiring technical knowledge of programming, gesture recognition, or 3D modeling. Mladenov et al. [ 240 ] review existing SDKs and aim to find the most efficient SDK for AR applications used in industrial environments. The paper reveals that currently available SDKs are very helpful for users to create AR applications with the parameters of their choice in industrial settings.

6.7. Summary

This section presented a detailed survey of the software and tools required for creating an AR experience. It outlined the hardware devices used in AR technology and the various software packages for creating an AR experience, elaborated on the required software libraries, and covered both commercial and research tools. Table 3 provides a stack of software libraries, plug-ins, supported platforms, and standalone authoring tools, noting whether each tool is active or inactive. As an example, BazAR is used for tracking and geometric calibration; it is an open-source library for Linux or Windows, available under a research-based GPL, and can be used in research to detect an object via a camera, calibrate it, and initiate tracking to place a basic virtual image on it. However, this library is not active at present. Commercially used AR tools such as plug-ins have the limitation of only working efficiently in 2D GUIs and become problematic when used for 3D content. The advancement of technology may bring about a change in authoring tools, making them capable of operating on 3D content and of developing more active AR interfaces.

Table 3. A summary of development and authoring tools for augmented reality application development.

7. Collaborative Research on Augmented Reality

In general, collaboration in augmented reality is the interaction of multiple users with virtual objects in the real environment. This interaction is independent of the users’ location, i.e., they can participate remotely or share the same location. In this regard, there are two types of collaborative AR: co-located collaborative AR and remote collaborative AR, as shown in Figure 10 .

Figure 10. Collaborative augmented reality research domains.

7.1. Co-Located Collaborative AR

In this type of collaborative AR, the users interact with the virtual content rendered in the real environment while sharing the same place; the participants are not remote in this case. In this regard, Wells et al. [ 241 ] aim to determine the impact on co-located group activities of varying the complexity of AR models using mobile AR. The paper also discusses different styles of collaborative AR, such as:

  • Active Discussion: A face-to-face discussion including all participants.
  • Single Shared view: The participants focus on a single device.
  • Disjoint and Shared View: Two to three participants focus on a single device.
  • Disjoint and Distributed View: One to two people focus on their devices while the others are discussing.
  • Distributed View: Participants focus on their devices with no discussion.
  • Distributive View with Discussion: Participants focus on their devices while discussing in the group.

In this paper, the authors did not contribute to the technology of co-located collaborative AR, but rather analyzed the effectiveness of the different collaborative AR styles.

Grandi et al. [ 242 ] target the development of design approaches for synchronous collaboration to resolve complex manipulation tasks. For this purpose, fundamental concepts of interface design, human collaboration, and manipulation are discussed. This research follows the spiral model of research methodology, which involves development, planning, analysis, and evaluation. In addition, Dong et al. [ 243 ] introduce “ARVita”, a system where multiple users wearing head-mounted displays can interact with virtual simulations of engineering processes. This system uses a co-located AR technique where the users sit around a table.

7.1.1. Applications of Co-Located Collaborative AR

Kim et al. [ 244 ] propose a PDIE model to make a STEAM educational class while incorporating AR technology into the system. Furthermore, the “Aurasma” application is used to promote AR in education. In addition, Kanzanidis et al. [ 245 ] focus on teaching mobile programming using synchronous co-located collaborative AR mobile applications in which students are distributed in groups. The results showed that the students were satisfied with this learning methodology. Moreover, Chang et al. [ 246 ] explore the use of a mobile AR (MAR) application to teach interior design activities to students. The results identified that the students who were exposed to MAR showed more effective learning than those who were taught traditionally. Lastly, Sarkar et al. [ 247 ] discuss three aspects of synchronous co-located collaboration-based problem-solving: first, students’ perspectives on AR learning activities, either in dyads or individually; second, the approach adopted by students while problem-solving; and third, the students’ motivation for using ScholAR. Statistical results suggested that 90.4% of students preferred the collaborative AR experience, i.e., working in dyads, while motivation level and usability scores were higher for individual experiences. Grandi et al. [ 248 ] introduce a design for the collaborative manipulation of AR objects using mobile AR. This approach has two main features: it provides a shared medium for the collaboration and manipulation of 3D objects, and it provides precise control of DoF transformations. Moreover, strategies are presented to make this system more efficient for users working in pairs. Akçayır et al. [ 249 ] explore the impact of AR on the laboratory work of university students and their attitudes toward laboratories. This study used a quasi-experimental design with 76 participants (first-year students aged 18–20 years). Both qualitative and quantitative methods were used for the data analysis. A five-week implementation of the experiment showed that the use of AR in the laboratory significantly improved the laboratory skills of the students, although some teachers and students also discussed negative impacts of other aspects of AR. Rekimoto et al. [ 250 ] propose a collaborative AR system called TransVision. In this system, two or more users use a see-through display to look at the virtual objects rendered in a real environment using synchronous co-located collaborative AR. Oda et al. [ 251 ] propose a technique for avoiding interference in hand-held synchronous co-located collaborative AR. This study is based on first-person two-player shooting AR games. Benko et al. [ 87 ] present a collaborative augmented reality and mixed reality system called “VITA” (Visual Interaction Tool for Archaeology), an off-site visualization system that allows multiple users to interact with a virtual archaeological object. Franz et al. [ 88 ] present a collaborative AR system for museums in which multiple users can interact in a shared environment. Huynh et al. [ 252 ] introduce Art of Defense (AoD), a co-located augmented reality board game that combines handheld devices with physical game pieces to create a unique experience of a merged physical and virtual game. Nilsson et al. [ 253 ] focus on a multi-user collaborative AR application as a tool for supporting collaboration between different organizations such as rescue services, police, and military organizations in critical situations.

7.1.2. Asynchronous Co-Located Collaborative AR

Tseng et al. [ 254 ] present an asynchronous annotation system for collaborative augmented reality. This system can attach virtual annotations to the real world and offers a number of distinguishing capabilities, i.e., placing, organizing, and playing back annotations. Extra context information is preserved by recording the annotator’s perspective. Furthermore, Kasahara et al. [ 255 ] introduce “Second Surface”, an asynchronous co-located collaborative AR system. It allows users to render images, text, or drawings in a real environment. These objects are stored on a data server and can be accessed later.
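The asynchronous pattern described above (place annotations now, organize and play them back later, with the annotator’s perspective preserved) can be sketched as a minimal world-anchored annotation store. This is an illustrative simplification, not the actual architecture of the cited systems; all class and field names are hypothetical:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Annotation:
    author: str
    position: tuple      # world-anchored (x, y, z) coordinates
    content: str
    viewpoint: tuple     # recorded perspective of the annotator
    timestamp: float = field(default_factory=time.time)

class AnnotationStore:
    """Server-side store that later visitors can query and replay."""
    def __init__(self):
        self._items = []

    def place(self, ann: Annotation):
        self._items.append(ann)

    def playback(self):
        # Replay annotations in the order they were created.
        return sorted(self._items, key=lambda a: a.timestamp)

    def organize(self, author=None):
        # Filter annotations, e.g., by author.
        return [a for a in self._items if author is None or a.author == author]
```

Because the store decouples the time of placing from the time of viewing, two users never need to be present simultaneously, which is the defining property of asynchronous collaboration.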

7.2. Remote Collaborative AR

In this type of collaborative AR, the users are in different environments. They can interact with virtual objects remotely from any location. A number of studies have been carried out in this regard. Billinghurst et al. [ 256 ] introduce a wearable collaborative augmented reality system called “WearCom” to communicate with multiple remote people. Stafford et al. [ 257 ] present God-like interaction techniques for collaboration between outdoor AR and indoor tabletop users. This paper also describes a series of applications for collaboration. Gauglitz et al. [ 258 ] focus on a touchscreen interface for creating annotations in a collaborative AR environment. This interface is also capable of virtually navigating a scene reconstructed live in 3D. Boonbrahm et al. [ 259 ] aim to develop a design model for remote collaboration. The research introduces a multiple-marker technique to develop a very stable system that allows users from different locations to collaborate, which also improves accuracy. Li et al. [ 260 ] suggest that the viewing of a collaborative exhibit is considerably improved by introducing a distance-driven user interface (DUI). Poretski et al. [ 261 ] describe the behavioral challenges faced in interaction with virtual objects during remote collaborative AR; an experiment was performed to study users’ interaction with shared virtual objects in AR. Clergeaud et al. [ 262 ] tackle the limitations of collaboration in aerospace industrial design and propose prototype designs to address these limitations. Oda et al. [ 263 ] present the GARDEN (gesturing in an augmented reality depth-mapped environment) technique for 3D referencing in a collaborative augmented reality environment. The results show that this technique is more accurate than the alternatives compared against. Müller et al. [ 85 ] investigate the influence of shared virtual landmarks (SVLs) on communication behavior and user experience. The results show an enhancement in user experience when SVLs were provided. Mahmood et al. [ 264 ] present a remote collaborative system for co-presence and information sharing using mixed reality. The results show improvements in the users’ collaborative analysis experience.

7.2.1. Applications of Remote Collaborative AR

Munoz et al. [ 265 ] present a system called GLUEPS-AR to help teachers in learning situations by integrating AR and web technologies, i.e., Web 2.0 tools and virtual learning environments (VLEs) [ 266 ]. Bin et al. [ 267 ] propose a system to enhance students’ learning experience using a collaborative mobile augmented reality learning application (CoMARLA). The application was used to teach ICT to students, and the results showed improvement in the students’ learning when using CoMARLA. Dunleavy et al. [ 268 ] explore the benefits and drawbacks of collaborative augmented reality simulations in learning, and propose a collaborative AR system for computers independent of location, i.e., indoor or outdoor. Maimone et al. [ 269 ] introduce a telepresence system with real-time 3D capture for remote users to improve communication using depth cameras, and also discuss the limitations of previous telepresence systems. Gauglitz et al. [ 270 ] present an annotation-based remote collaborative AR system for mobiles. In this system, the remote user can explore the scene regardless of the local user’s camera position, and users can communicate through annotations visible on the screen. Guo et al. [ 271 ] introduce an app, known as Block, that enables users to collaborate irrespective of their geographic position, i.e., they can be either co-located or remote, and they can collaborate either asynchronously or synchronously. This app allows users to create structures that persist in the real environment. The results of the study suggested that people preferred synchronous and co-located collaboration, particularly collaboration not restricted by time and space. Zhang et al. [ 272 ] propose a collaborative augmented reality for socialization app (CARS). This app improves the user’s perception of the quality of the experience and benefits the user, application, and system on various levels: it reduces the use of computing resources, end-to-end latency, and networking load. The results of the experiment suggest that CARS acts more efficiently for users of cloud-based AR applications; moreover, on mobile phones, it reduces the latency level by up to 40%. Grandi et al. [ 242 ] propose an edge-assisted system, known as CollabAR, which combines collaborative image recognition and distortion tolerance. Collaborative image recognition enhances recognition accuracy by exploiting the “spatial-temporal” correlation among users’ views. The results of the experiment suggested that this system decreases the end-to-end system latency to as low as 17.8 ms on a smartphone, while recognition accuracy for images with strong distortions was found to be 96%.
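The benefit of collaborative image recognition can be caricatured as fusing the (possibly distortion-corrupted) labels reported by spatially and temporally correlated users. A simple majority vote is a deliberate stand-in for CollabAR’s actual fusion scheme; the function below is purely illustrative:

```python
from collections import Counter

def fuse_recognitions(observations):
    """Fuse per-user image-recognition labels for the same object.

    observations: list of (user_id, label) pairs from co-located users
    viewing the same scene. Majority voting suppresses individual
    distortion-induced misclassifications (toy model only)."""
    votes = Counter(label for _user, label in observations)
    return votes.most_common(1)[0][0]
```

For example, if two of three collaborators recognize “chair” and one, holding a motion-blurred frame, reports “table”, the fused result is still “chair”.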

7.2.2. Synchronous Remote Collaborative AR

Lien et al. [ 273 ] present a system called “Pixel-Point Volume Segmentation” in collaborative AR. This system is used for object referencing: one user can locate objects with the help of circles drawn on the screen by other users in a collaborative environment. Huang et al. [ 274 ] focus on sharing hand gestures and sketches between a local user and a remote user using collaborative AR; the system is named “HandsinTouch”. Ou et al. [ 275 ] present the DOVE (drawing over video environment) system, which integrates live video and gestures in collaborative AR. This system is designed for performing remote physical tasks in a collaborative environment. Datcu et al. [ 276 ] present the creation and evaluation of a handheld AR system, built particularly to investigate remote and co-located forensics and to support team situational awareness. Three experienced investigators evaluated this system in two steps: first with one remote and one local investigator, and then with one remote and two local investigators. The results of the study suggest that this technology resolves the limitations of HMDs. Tait et al. [ 277 ] propose AR-based remote collaboration that supports view independence. The main aim of the system is to enable a remote user to help a local user with object placement. The remote user uses a 3D reconstruction of the environment to view the local user’s scene independently, and can also place virtual cues in the scene visible to the local user. The major advantage of this system is that it allows the remote user an independent view of the shared task space. Fang et al. [ 278 ] focus on enhancing the 3D feel of immersive interaction by reducing communication barriers. WebRTC, a real-time video communication framework, is used to give the operator site a first-hand view of the remote user, and virtual canvas-based whiteboards, built on Node.js and WebSocket, are developed that are usable in different aspects of life. Mora et al. [ 279 ] explain the CroMAR system, which aims to help users manage crowds at a planned outdoor event. CroMAR allows users to share viewpoints via email, while geo-localized tags allow users to visualize the outdoor environment and rate these tags. Adcock et al. [ 280 ] present three remote spatial augmented reality systems, “Composite Wedge”, “Vector Box”, and “Eyelight”, for off-surface 3D viewpoint visualization. In this system, the physical world environment of a remote user can be seen by the local user. Lincoln et al. [ 281 ] focus on a system of robotic avatars of humans in a synchronous remote collaborative environment. It uses cameras and projectors to render a humanoid animatronic model which can be seen by multiple users; this system is called “Animatronic Shader Lamps Avatars”. Komiyama et al. [ 282 ] present a synchronous remote collaborative AR system that can transition between first-person and third-person views during collaboration; moreover, the local user can observe the environment of the remote user. Lehment et al. [ 283 ] present an automatically aligned videoconferencing AR system in which the remote user is rendered and aligned on the display of the local user. This alignment is done automatically with respect to the local user’s real environment, without modifying it. Oda et al. [ 284 ] present a remote collaborative system for guidance in a collaborative environment. In this system, a remote expert can guide a local user with the help of both AR and VR, and can create virtual replicas of real objects to guide the local user. Piumsomboon et al. [ 285 ] introduce an adaptive avatar system in mixed reality (MR) called “Mini Me” between a remote user using VR and a local user using AR technology. The results show that it improves the overall MR experience and social presence. Piumsomboon et al. [ 286 ] present “CoVAR”, a collaborative system combining AR and VR technologies. A local user can share their environment with a remote VR user, and the system supports gestures, head gaze, and eye gaze to improve the collaboration experience. Teo et al. [ 287 ] present a system that captures 360° panorama video of one user and shares it with another remote user in a mixed reality collaboration. In this system, the users communicate through hand gestures and visual annotations. Thanyadit et al. [ 288 ] introduce a system where an instructor can observe students in a virtual environment. The system is called “ObserVAR” and uses augmented reality to observe students’ gazes in a virtual environment. Results show that this system is more flexible in several scenarios. Sodhi et al. [ 289 ] present a synchronous remote collaborative system called “BeThere” to explore 3D gestures and spatial input. This system enables a remote user to perform virtual interactions in the local user’s real environment. Ong et al. [ 290 ] propose a collaborative system in which 3D objects can be seen by all users in a collaborative environment, and changes made to these objects are also observed by all users. Butz et al. [ 84 ] present EMMIE (environment management for multi-user information environments), a collaborative augmented reality environment in which virtual objects can be manipulated by the users, with each manipulation visible to every user of the system.
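Common to many of the synchronous systems above (EMMIE, Ong et al.’s shared 3D objects) is a relay that applies each participant’s manipulation to every user’s view of the shared scene. The sketch below models that idea in-process, as a stand-in for a real network relay such as a WebSocket server; all class and method names are hypothetical:

```python
class SharedObject:
    """A virtual object whose transform is kept in sync across users."""
    def __init__(self, name):
        self.name = name
        self.position = (0.0, 0.0, 0.0)

class SyncSession:
    """Relays each user's edits to every participant (in-process
    stand-in for a networked relay server)."""
    def __init__(self):
        self.replicas = []  # one replica dict per connected user

    def join(self):
        # Each participant gets their own replica of the shared scene.
        replica = {}
        self.replicas.append(replica)
        return replica

    def move(self, name, position):
        # A manipulation by any user is applied to all replicas,
        # so the change becomes visible to every participant.
        for replica in self.replicas:
            obj = replica.setdefault(name, SharedObject(name))
            obj.position = position
```

In a real system the `move` broadcast would travel over the network and contend with latency and conflicting edits; consistency and conflict resolution are exactly where the cited systems differ.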

7.2.3. Asynchronous Remote Collaborative AR

Irlitti et al. [ 291 ] explore the challenges faced during the use of asynchronous collaborative AR and further discuss how to enhance communication while using it. Quasi-systems do not fulfill Azuma’s [ 292 ] definition of AR technology; however, they are very good at executing certain aspects of AR in the way that full AR devices do. One instance is mixed-space collaborative work in a virtual theater [ 268 ]. This system showed that if two groups are to pay attention to each other, a common spatial frame of reference should be created to provide a better experience of social presence. In a spatially aware educational system, students used location-aware smartphones to solve riddles. This was very useful in the educational setting because it supported both engagement and social presence [ 245 , 265 , 269 ]. However, this system did not align the 3D virtual content with the real space; therefore, it was not a true AR system. In order to capture a remote 3D scene, Fuchs and Maimone [ 293 ] developed an algorithm along with a proof of concept for teleconferencing. RGB-D cameras were used for capturing images, and the remote scene was displayed on a 3D stereoscopic screen. These systems were not fully AR, but they still exhibited very good immersion. Akussah et al. [ 294 ] focus on developing a marker-based collaborative augmented reality app for learning mathematics. The system first focuses on the individual experience and later expands it to collaborative AR.

7.3. Summary

This section provides comprehensive details on collaborative augmented reality, which is broadly classified into co-located collaborative AR, where participants collaborate in the same geographical location, and remote collaborative AR. The applications of both approaches are presented as well. Co-located collaborative AR is mostly adopted in learning realms for sharing information, for example, in museums. In remote collaborative AR, on the other hand, the remote user can explore the scene regardless of the local user’s camera position. The applications of this technology are mostly found in education.

8. AR Interaction and Input Technologies

The interaction and input technologies are detailed in this section. There are a number of input methods utilized in AR technologies: first, multimodal and 3D interfaces such as speech, gestures, and handheld wands; second, traditional two-dimensional user interfaces (UI) such as the mouse and keyboard. The type of interaction task needed for the interface defines which input method is utilized in the application. A variety of interfaces have been developed: three-dimensional user interfaces, tangible user interfaces, multimodal interfaces, natural user interfaces, and information browsers.

8.1. AR Information Browsers

Wikitude and NaviCam are among the most popular examples of AR information browsers. The only problem with AR browsers is that they cannot provide direct interaction with the virtual objects.

8.2. Three-Dimensional User Interfaces

A three-dimensional user interface uses controllers to provide interaction with virtual content. Using traditional 3D user interface techniques, we can directly interact with three-dimensional objects in the virtual space. A number of 3D user interface interaction techniques exist. 3D motion tracking sensors are among the most commonly used devices for AR interaction: they track parts of the user’s body and allow pointing as well as manipulation of the virtual objects [ 295 ]. Haptic devices are also used for interacting with AR environments [ 296 , 297 , 298 ]. They are mainly used as 3D pointing devices, and in addition they provide tactile and force feedback, creating the illusion of a physical object existing in the real world and thereby complementing the virtual experience. They are used in training, entertainment, and design-related AR applications.

8.3. Tangible User Interface

The tangible user interface is one of the main concepts of human–computer interface technology research. Here, a physical object is used for interaction [ 299 ], bridging the gap between the physical and the virtual object [ 300 ]. Chessa et al. incorporated grasping behavior in a virtual reality system [ 301 ], while Han et al. presented and evaluated hand interaction techniques using tactile feedback (haptics) and physical grasping by mapping a real object to virtual objects [ 302 ].

8.4. Natural User Interfaces in AR

Recently, more accurate gesture- and motion-based interactions for AR and VR applications have become widely available due to technical advances and the commercialization of depth cameras such as the Microsoft Kinect. Bare-hand interaction with virtual objects was made possible by the introduction of the depth camera, which provides physical interaction by tracking dexterous hand motion. For instance, in Microsoft’s HoloDesk [ 299 ], the physical objects and the user’s hands are recognized using a Kinect camera. The virtual objects are shown on an optical see-through AR workbench, and users can interact with the virtual objects presented on it. User-defined gestures have been categorized into sets by Piumsomboon [ 300 ]; these sets can be utilized in AR applications for accomplishing different tasks. In addition, some mobile-based depth-sensing cameras are also under investigation, for instance, SoftKinetic and the Myo gesture armband controller. SoftKinetic aims to make hand gesture interaction on mobile phones and wearable devices more accurate, while the Myo gesture armband controller is a biometric sensor that provides interaction in wearable and mobile environments.

8.5. Multimodal Interaction in AR

In addition to speech and gesture recognition, other types of audio-based input are being investigated. For example, a whistle-recognition system was developed by Lindeman [ 303 ] for mobile AR games, in which the user had to whistle at the right length and pitch to intimidate the virtual creatures in the game.

Summary: The common input techniques and input methods have been examined in this section, ranging from simple information browsers to complex AR interfaces. The simple ones have very little support for interaction with virtual content, while the complex interfaces can recognize even speech and gesture inputs. A wide range of input methods is available for AR interfaces; however, they need to be designed carefully. The following section delineates the research into interface patterns, design, and guidelines for AR experiences.

9. Design Guidelines and Interface Pattern

The previous section detailed the wide range of AR input and interaction technologies; however, more rigorous research is required to design the AR experience. This section explores the interface patterns and design guidelines for developing an AR experience. The development of new interfaces goes through four main steps. First, a prototype is demonstrated. Second, interaction techniques are adopted from other interface metaphors. Third, new interface metaphors are developed that are appropriate to the medium. Finally, formal theoretical models are developed for modeling the interaction of users. In this regard, Wang et al. [ 304 ] employ user-centered AR instruction (UcAI) in procedural tasks. Thirty participants were selected for an experiment with both control and experimental groups. The results suggested that the introduction of UcAI increased the users’ spatial cognitive ability, particularly in high-precision operational tasks. This research has the potential to guide advanced AR instruction design for tasks of high cognitive complexity. As an example of the four steps, WIMP (windows, icons, menus, and pointers) is a very well-known desktop metaphor that has gone through all of these stages in its development. Methods have been developed to predict the time a user will take to select an icon of a given size with a mouse; these are known as formal theoretical models, and Fitts’ law [ 305 ] is among the models that help in determining pointing times in user interfaces. A number of virtual reality interfaces are at the third stage with reference to the techniques available; for example, manipulation and selection in immersive virtual worlds can be done using the go-go interaction method [ 306 ]. On the other hand, as evident in the previous section, AR interfaces have barely surpassed the first two stages. Similarly, a number of AR interaction methods and technologies are available; however, by and large, they are only extensions or versions of existing 3D and 2D techniques present in mobile, desktop, or AR interfaces. For instance, mobile phone experiences such as gesture applications and touch-screen input have been added to AR. Therefore, there is a dire need to develop AR-specific interaction techniques and interface metaphors [ 307 ]. A deeper analysis and study of AR interfaces will help in the development of appropriate interface metaphors. AR interfaces are unique in the sense that they need to develop a close interaction between the real and the virtual worlds. MacIntyre has argued that the definition and fusion of the virtual and real worlds are required for creating an AR design [ 308 ]. The primary goal of this is to map the physical objects and user input onto the computer-generated graphics, using a suitable interaction interface. As a result, an AR design should have three components:

  • The physical object.
  • The virtual image to be developed.
  • An interface to create an interaction between the physical world and the virtual objects.
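As a concrete instance of the fourth stage (formal theoretical models), Fitts’ law mentioned above predicts pointing time from target distance D and width W. A minimal sketch follows; the coefficients a and b are illustrative placeholders, since in practice they are fitted empirically per device and user:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted selection time in seconds using the Shannon
    formulation of Fitts' law: MT = a + b * log2(D / W + 1).

    a, b: device- and user-specific constants normally fitted from
    measured pointing data (the defaults here are illustrative)."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty
```

Farther or smaller targets yield a higher index of difficulty and thus a longer predicted selection time, which is why such models can rank candidate interface layouts before any user study.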

The use of design patterns could be an alternative technique for developing the AR interface design. Design patterns are most commonly used in the fields of computer science and interface design. Alexander has defined the use of design patterns in the following words: “Each pattern describes a problem that occurs over and over again in our environment, and then describes the core of the solution to that problem in such a way that you can use this solution a million times over, without ever doing it the same way twice” [ 309 , 310 ]. The pattern language approach could be used to enhance AR development, as suggested by Reicher [ 311 ]; this idea evolved from the earlier research work of MacWilliam [ 312 ]. The approach has two main characteristics: first, it is focused on the software engineering aspect; second, it suggests ways to develop complex AR systems by combining different modules of design patterns. Each pattern is thus described by a number of aspects, such as name, motivation, goal, description, consequences, known project usage, and general usability. One of the most notable examples is the DWARF framework [ 313 ], a component-based AR framework developed through the design pattern approach. In contrast to the pattern language approach, the user experience of design in AR handheld devices could be used for developing designs. This was described by Xu, whose main concern was pre-patterns, i.e., components that bridge the gap between game design and interaction design. For determining the method of using design patterns, seamful design could be used; this suggests that the designer should integrate the AR handheld game design and the technology in such a way that they blend into each other. Some users need more attention when designing effective AR experiences; therefore, designing for special needs is another intervention to resolve this discrepancy. For instance, as pointed out by Rand and MacIntyre [ 314 ], in designing an AR system for the 6–9 age group, the developmental stages of the children should be accounted for. The research also suggested that a powerful educational experience can be created through the use of AR, and that the developmental stages of the students should be considered [ 315 , 316 ]. However, there is no extensive research on the development of AR experiences for children [ 317 ]. Radu, in his paper, determined the key areas that should be considered while designing AR for children: attention, motor, spatial, logic, and memory abilities [ 318 ].

10. Security, Trust, and Collaborative AR

Security is very important in augmented reality, especially in collaborative augmented reality. While using collaborative AR applications, the data are exposed to external attacks, which increases concerns about security relating to AR technologies. Moreover, if the users who share the same virtual collaborative environments are unknown to each other, it also elevates these issues. In [ 319 ], the basic premise of the research is that the developed abstraction device not only improves the privacy but also the performance of the AR apps, which lays the groundwork for the development of future OS support for AR apps. The results suggested that the prototype enables secure offloading of heavyweight, incurs negligible overhead, and improves the overall performance of the app. In [ 320 ], the authors aim to resolve security and privacy challenges in multi-user AR applications. They have introduced an AR-sharing module along with systematized designs and representative case studies for functionality and security. This module is implemented as a prototype known as ArShare for the HoloLens. Finally, it also lays the foundation for the development of fully fledged and secure multi-user AR interaction. In [ 321 ], the authors used AR smart glasses to detail the “security and safety” aspect of AR applications as a case study. In the experiment, cloud-based architecture is linked to the oil extractor in combination with Vuzix Blade smart glasses. For security purposes, this app sends real-time signals if a dangerous situation arrives. In [ 322 ], deep learning is used to make the adaptive policies for generating the visual output in AR devices. Simulations are used that automatically detect the situation and generate policies and protect the system against disastrous malicious content. In [ 323 ], the authors discussed the case study of challenges faced by VR and AR in the field of security and privacy. 
The results showed that the attack reached the target of distance 1.5 m with 90 percent accuracy when using a four-digit password. In [ 324 ], the authors provide details and goals for developing security. They discuss the challenges faced in the development of edge computing architecture which also includes the discussion regarding reducing security risks. The main idea of the paper is to detail the design of security measures for both AR and non-AR devices. In [ 325 ], the authors presented that the handling of multi-user outputs and handling of data are demonstrated are the two main obstacles in achieving security and privacy of AR devices. It further provides new opportunities that can significantly improve the security and privacy realm of AR. In [ 326 ], the authors introduce the authentication tool for ensuring security and privacy in AR environments. For these purposes, the graphical user password is fused with the AR environments. A doodle password is created by the touch-gesture-recognition on a mobile phone, and then doodles are matched in real-time size. Additionally, doodles are matched with the AR environment. In [ 327 ], the authors discussed the immersive nature of augmented reality engenders significant threats in the realm of security and privacy. They further explore the aspects of securing buggy AR output. In [ 328 ], the authors employ the case study of an Android app, “Google Translator”, to detect and avoid variant privacy leaks. In addition, this research proposes the foundational framework to detect unnecessary privacy leaks. In [ 329 ], the authors discuss the AR security-related issues on the web. The security related vulnerabilities are identified and then engineering guidelines are proposed to make AR implementation secure. In [ 330 ], the past ten years of research work of the author, starting from 2011, in the field of augmented reality is presented. 
The main idea of that paper is to identify potential problems and to predict the direction of the field over the next ten years; it also systematizes future work and focuses on evaluating AR security research. In [ 331 ], the authors presented various AR-related security issues and identified managing virtual content in real space as a key challenge in making AR spaces secure for single and multiple users. The authors in [ 332 ] argue that there is a dire need to address cybersecurity risks in the AR world, and that introducing systematized, universal policy modules into the AR architecture is a viable solution for mitigating them. In [ 333 ], the authors discuss the challenge of enabling different AR apps to augment the user's world experience simultaneously, pointing out the conflicts that arise between AR applications.

11. Summary

In this paper, the authors have reviewed the literature extensively in terms of tracking and display technology, AR, and collaborative AR, as can be seen in Figure 10 . Collaborative AR has two further classifications, i.e., co-located AR and remote collaboration [ 334 ], and each of these in turn has two types, i.e., synchronous and asynchronous. In remote collaborative AR, there are a number of use cases in which trust management is a critical factor, because unknown parties participate in remote activities and interact with each other without knowing one another [ 21 , 335 , 336 , 337 , 338 ]. Such remote collaboration has suffered from a lack of trust and from security concerns, with greater chances of intrusion and of vulnerabilities being exploited [ 331 , 339 , 340 ]. One such collaboration comes from the tourism sector, which has boosted the economy, especially during the pandemic era when physical interactions were not allowed [ 341 ]. To address these concerns, this research felt the need to ensure the integrity of communication, and for this purpose it utilized state-of-the-art blockchain infrastructure for collaborative applications in AR. The paper has proposed a complete secure framework in which different applications working remotely can genuinely trust each other [ 17 , 342 , 343 ]. The participants within the collaborative AR subscribe to a trusted environment and then interact with each other securely, with their communication protected through state-of-the-art blockchain infrastructure [ 338 , 344 ]. A model of such an application is shown in Figure 11 .
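The trust guarantee that the framework draws from blockchain can be illustrated with a minimal hash-chained transaction log: each collaboration event is appended as a block that commits to the hash of its predecessor, so any later tampering is detectable. This is only a sketch of the underlying concept, not the proposed system's implementation; the class and field names are hypothetical:

```python
import hashlib
import json
import time

class Ledger:
    """Minimal append-only hash chain recording collaboration events."""

    def __init__(self):
        self.blocks = [self._make_block({"event": "genesis"}, "0" * 64)]

    def _make_block(self, payload, prev_hash):
        block = {"payload": payload, "prev": prev_hash, "ts": time.time()}
        # Hash a canonical serialization of the block contents.
        body = json.dumps({k: block[k] for k in ("payload", "prev", "ts")},
                          sort_keys=True).encode()
        block["hash"] = hashlib.sha256(body).hexdigest()
        return block

    def record(self, payload):
        """Append an event (sign-up, call record, payment, ...) to the chain."""
        self.blocks.append(self._make_block(payload, self.blocks[-1]["hash"]))

    def verify(self):
        """Recompute every hash; any tampered block breaks the chain."""
        for prev, cur in zip(self.blocks, self.blocks[1:]):
            if cur["prev"] != prev["hash"]:
                return False
            body = json.dumps({k: cur[k] for k in ("payload", "prev", "ts")},
                              sort_keys=True).encode()
            if hashlib.sha256(body).hexdigest() != cur["hash"]:
                return False
        return True
```

A production system would replace this single in-memory chain with a distributed ledger and consensus among the collaborating parties; the tamper-evidence property shown here is what makes unknown participants able to trust the shared record.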

Figure 11. A model of a blockchain-based trusted and secured collaborative AR system.

Figure 12 demonstrates the initiation of the AR app in step 1, while in step 2 the blockchain is initiated to record transactions related to sign-up, audio call records, payment/subscription, etc. In step 3, once the transaction is established, AR is initiated, enabling the visitor to receive guidance from the travel guide. The app creates a map of the real environment; this map, together with the camera vision, drives SLAM, which provides an overall view and details of the different objects in the real world. Inertial tracking handles movement and orientation within the augmented reality application. Virtual objects are then placed once vision and tracking are established. In a collaborative environment, the guides are provided with an annotation option so they can circle a particular object, mark different locations and landmarks, or point to different incidents [ 16 ].
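The final placement step above, pinning a guide's annotation at a world position and drawing it at the right pixel on the visitor's screen, amounts to projecting a tracked 3-D point through the camera model. The following is a minimal pinhole-projection sketch with an axis-aligned camera; the pose, focal length, and resolution values are illustrative only and not taken from the cited system:

```python
def project(point_world, cam_pos, focal, cx, cy):
    """Project a 3-D point (camera at cam_pos looking down +z, axis-aligned)
    to pixel coordinates (u, v) on the image plane."""
    x = point_world[0] - cam_pos[0]
    y = point_world[1] - cam_pos[1]
    z = point_world[2] - cam_pos[2]
    if z <= 0:
        return None  # behind the camera: do not draw the annotation
    u = cx + focal * x / z
    v = cy + focal * y / z
    return (u, v)

# A guide annotates a landmark 4 m in front of the visitor's camera.
anchor = (0.0, 1.0, 4.0)                               # world coords (metres)
screen = project(anchor, (0.0, 0.0, 0.0), focal=800, cx=640, cy=360)
```

In a real SLAM pipeline the camera pose is a full rotation-plus-translation estimated every frame, so the annotation stays glued to the landmark as the visitor moves.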

Figure 12. Sharing of the real-time environment of the CAR tourist app for multiple users [ 16 ].

12. Directions for Research

The commercialization efforts of companies have made AR a mainstream field. However, for the technology to reach its full potential, the range of research areas must be expanded. Azuma identified three major obstacles in the way of AR: interface limitations, technological limitations, and the issue of social acceptance. To help overcome these barriers, two major models have been applied: Rogers' innovation diffusion theory [ 345 ] and the technology acceptance model [ 346 ]. Rogers' framework highlights the following major restrictions on the adoption of this technology: the limited computational power of AR devices, social acceptance, the lack of AR standards, tracking inaccuracy, and information overload. Zhou identified the main research trends in display technology, user interfaces, and tracking by evaluating ten years of ISMAR papers. Research has been conducted across a wide range of areas, with the notable exception of social acceptance. This section aims at exploring future opportunities and ongoing research in the field of AR, particularly in four key areas: display, tracking, interaction, and social acceptance. A number of other topics also merit attention, including evaluation techniques, visualization methods, applications, authoring and content-creation tools, and rendering methods.

13. Conclusions

This document has detailed a number of research papers that address specific problems in AR: tracking techniques in Section 3 ; display technologies, such as VST and OST, and their related calibration techniques in Section 4 ; authoring tools in Section 6 ; collaborative AR in Section 7 ; AR interaction in Section 8 ; and design guidelines in Section 9 . Finally, promising security- and trust-related papers are discussed in the final section. For each aspect, we presented the problem statement and outlined a short solution. These aspects should be covered in future research, and the most pertinent among them are hybrid AR interfaces, social acceptance, etc. The pace of research is increasing significantly, and AR technology is going to dramatically impact our lives in the next 20 years.

Acknowledgments

Thanks to the Deanship of Research, Islamic University of Madinah. We would like to extend special thanks to our other team members (Anas and his development team at 360Folio, Ali Ullah and Sajjad Hussain Khan) who participated in the development, write-up, and collection of historical data. Ali Ullah has a great ability to understand difficult topics in AR, such as calibration and tracking.

Funding Statement

This project is funded by the Deputyship For Research and Innovation, Ministry of Education, Kingdom of Saudi Arabia, under project No (20/17), titled Digital Transformation of Madinah Landmarks using Augmented Reality.

Author Contributions

Conceptualization of the paper was done by T.A.S. Section organization was mostly handled by T.A.S. and S.J. The prototype implementation was done by the development team; however, administration and coordination were performed by A.A., A.N. (Abdullah Namoun) and A.B.A. Validation was done by A.A. and A.N. (Adnan Nadeem); Formal Analysis by T.A.S. and S.J.; Investigation, T.A.S.; Resources and Data Curation by A.N. (Adnan Nadeem); Writing—Original Draft Preparation by T.A.S. and S.J.; Writing—Review & Editing by H.B.A.; Visualization mostly by T.A.S. and M.S.S.; Supervision by T.A.S.; Project Administration by A.A.; Funding Acquisition, T.A.S. All authors have read and agreed to the published version of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
