One Reality: Augmenting How the Physical World is Experienced by combining Multiple Mixed Reality Modalities
https://doi.org/10.1145/3126594.3126638
Abstract
The presented system, exemplified here by an augmented volcano mock-up, allows one or more users to use and transition between multiple mixed reality modalities while interacting with augmented artifacts. Increasing instrumentation provides greater flexibility while keeping the interaction framed in the physical world.
Related papers
The augurscope: a mixed reality interface for outdoors
2002
The augurscope is a portable mixed reality interface for outdoors. A tripod-mounted display is wheeled to different locations and rotated and tilted to view a virtual environment that is aligned with the physical background. Video from an onboard camera is embedded into this virtual environment. Our design encompasses physical form, interaction and the combination of a GPS receiver, electronic compass, accelerometer and rotary encoder for tracking. An initial application involves the public exploring a medieval castle from the site of its modern replacement. Analysis of use reveals problems with lighting, movement and relating virtual and physical viewpoints, and shows how environmental factors and physical form affect interaction. We suggest that problems might be accommodated by carefully constructing virtual and physical content.
A Survey of Interaction in Mixed Reality Systems
2000
This paper surveys types of user interaction in Mixed Reality systems. It describes the basic concepts of such applications and classifies interfaces according to the type of augmentation they provide to users: interaction, action, or perception augmentation.
CHI, 2019
What is Mixed Reality (MR)? To revisit this question given the many recent developments, we conducted interviews with ten AR/VR experts from academia and industry, as well as a literature survey of 68 papers. We find that, while there are prominent examples, there is no universally agreed-on, one-size-fits-all definition of MR. Rather, we identified six partially competing notions from the literature and experts' responses. We then started to isolate the different aspects of reality relevant for MR experiences, going beyond the primarily visual notions and extending to audio, motion, haptics, taste, and smell. We distill our findings into a conceptual framework with seven dimensions to characterize MR applications in terms of the number of environments, number of users, level of immersion, level of virtuality, degree of interaction, input, and output. Our goal with this paper is to support classification and discussion of MR applications' design and to provide researchers with a better means to contextualize their work within the increasingly fragmented MR landscape.
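The seven dimensions above lend themselves to a simple data structure for classifying MR applications. A minimal Python sketch of such a classification record (the field names, value vocabularies, and example application are our own illustrative assumptions, not taken from the paper):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MRApplication:
    """Characterizes an MR application along the seven framework dimensions.

    The string vocabularies used here are illustrative placeholders.
    """
    name: str
    num_environments: int   # e.g. 1 shared space, 2 linked remote spaces
    num_users: int
    immersion: str          # e.g. "none", "partial", "full"
    virtuality: str         # position on the reality-virtuality continuum
    interaction: str        # e.g. "implicit", "explicit"
    inputs: List[str] = field(default_factory=list)   # e.g. ["gesture", "gaze"]
    outputs: List[str] = field(default_factory=list)  # e.g. ["visual", "haptic"]

# Example: classifying a projector-augmented sand table
sandbox = MRApplication(
    name="augmented sandbox",
    num_environments=1,
    num_users=4,
    immersion="partial",
    virtuality="augmented reality",
    interaction="explicit",
    inputs=["tangible", "touch"],
    outputs=["visual"],
)
```

A record like this makes comparisons across the fragmented MR landscape mechanical: two applications can be diffed dimension by dimension rather than argued about informally.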
2009
This chapter presents an overview of the Mixed Reality (MR) paradigm, which proposes to overlay our real-world environment with digital, computer-generated objects. It presents example applications and outlines limitations and solutions for their technical implementation. In MR systems, users perceive both the physical environment around them and digital elements presented through, for example, the use of semitransparent displays. By its very nature, MR is a highly interdisciplinary field engaging signal processing, computer vision, computer graphics, user interfaces, human factors, wearable computing, mobile computing, information visualization, and the design of displays and sensors. This chapter presents potential MR applications, technical challenges in realizing MR systems, as well as issues related to usability and collaboration in MR. It separately presents a section offering a selection of MR projects which have either been partly or fully undertaken at Swiss universities and rounds off with a section on current challenges and trends.
Mixed Reality: A Known Unknown
2020
Mixed reality (MR) is an area of computer research dealing with the combination of real-world and computer-generated data (virtual reality), where computer-generated graphical objects are visually mixed into the real environment and vice versa in real time. This chapter contains an introduction to this modern technology. Mixed reality combines real and virtual and is interactive, processed in real time, and registered in three dimensions. We can create mixed reality by using at least one of the following technologies: augmented reality and augmented virtuality. The mixed reality system can be considered the ultimate immersive system. MR systems are usually constructed as optical see-through systems (typically using transparent displays) or video see-through systems. MR systems are implemented either as marker systems (the real scene is tagged with special markers that are recognized at runtime and replaced with virtual objects) or as (semi-)markerless systems (processing and inserting...
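The marker-based approach described above boils down to a lookup from recognized marker IDs to registered virtual objects, rendered at each marker's pose every frame. A toy Python sketch of that loop, with detection stubbed out (the marker IDs, object names, and `detect_markers` stub are hypothetical; a real system would get detections from a computer-vision library):

```python
# Registry mapping marker IDs to the virtual objects that replace them.
# IDs and object names are hypothetical examples.
VIRTUAL_OBJECTS = {7: "volcano model", 12: "lava flow overlay"}

def detect_markers(frame):
    """Stub detector: a real one returns (marker_id, pose) pairs found
    in a camera frame; here the frame is a plain dict for illustration."""
    return frame.get("markers", [])

def augment(frame):
    """Replace each recognized marker with its registered virtual object."""
    scene = []
    for marker_id, pose in detect_markers(frame):
        obj = VIRTUAL_OBJECTS.get(marker_id)
        if obj is not None:
            scene.append((obj, pose))  # render obj at the marker's pose
    return scene
```

For example, a frame containing marker 7 yields the volcano model at that marker's pose, while unregistered markers are simply ignored — which is exactly the runtime replacement behavior the chapter describes.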
UX Challenges in Mixed Reality: Designing for Seamless Interaction
International Journal Research of Leading Publication , 2024
The development of mixed reality (MR) has provided new opportunities for fully immersive user experience, merging physical and virtual worlds in real time. Yet, making intuitive interfaces in MR environments is still a major problem, since developers are required to deal with varying interaction modalities, device constraints, and cognitive loads on people. This paper discusses the major challenges of creating seamless and user-focused MR interfaces with the focus on smooth switching between tangible and virtual environments. The challenges in this regard are gesture recognition, spatial awareness, multimodal feedback, latency, and ergonomics in MR devices such as HoloLens and smart glasses. The study points out case studies in cultural heritage, museums, and collaborative design settings by applying real-world implementations and user experiences to find crucial design traps. In addition, the paper covers hybrid interaction models and adaptive UI models to allow for more natural and fluid interactions. Evaluation techniques like heuristic walkthroughs, affective computing, and eye-tracking are also discussed to quantify user experience. Priority is given to obtaining a unified sense of presence and embodiment, which is critical for task performance and engagement. Finally, the research suggests best practices and design principles to overcome these hurdles, helping develop more intuitive MR systems. The research promotes interdisciplinary collaboration to close technical limitations and human factors, making MR technologies deliver on their full potential across different fields.
Pushing mixed reality boundaries
1999
We report on task 7b.1, the eRENA workshop on pushing mixed reality boundaries. We introduce the concept of a mixed reality boundary that distinguishes our approach to mixed reality from other approaches such as augmented reality and augmented virtuality. We then review the history of boundaries in theatre in order to raise new requirements for mixed reality boundaries.
Rapid Prototyping of Mixed Reality Applications That Entertain and Inform
Entertainment Computing, 2003
This paper describes a prototyping environment for rapid application development. We combine existing AR-technologies with a component-based 3D animation library and a scripting API. Through the development of an interface to a high-level 3D modelling system we are able to use this visual tool for modelling and basic animation features in MR design. This provides content experts with a powerful tool to quickly design and test mixed reality prototypes. We consider applications in the area of interactive mixed reality illustrations in the context of technical descriptions / user manuals and interactive exhibitions in museums.
Proceedings of the …, 2012
Collaborative technologies increasingly permeate our everyday lives. Mixed reality games use these technologies to entertain, motivate, educate, and inspire. We understand mixed reality games as goal-directed, structured play experiences that are not fully contained by virtual or physical worlds. They transform existing technologies, relationships, and places into a platform for gameplay. While the design of mixed reality games and interactive entertainments has received increasing attention across multiple disciplines, a focus on the collaborative potential of mixed reality formats, such as augmented and alternate reality games, has been lacking. We believe the CSCW community can play an essential and unique role in examining and designing the next generation of mixed reality games and technologies that support them. To this end, we seek to bring together researchers, designers, and players to advance an integrated mixed reality games research canon and outline key opportunities and challenges for future research and development.
Related papers
Mixed reality (MR) can be defined as a combination of real and virtual worlds. It can provide new visualizations and environments in the digital world that interact with each other: a true mixture of how virtual and physical worlds can coexist. It is a sophisticated approach that uses a specific model and architecture to support free 3-D display and augmented virtuality. The first-phase conceptual design benefits, like the first stage of the prototype, from additional shapes, patterns, and annotations. Both workspaces share a common interface and allow collaboration with different experts, who can configure the system for a specific task. A speedy design workflow and CAD data consistency are achieved naturally. There is no similar approach that integrates the creation and editing of 3D curves and surfaces in Virtual and Augmented Reality (VR/AR). Herein we see the major contributions of our new application.
Mixed reality - beyond conventions
Computers & Graphics, 2001
The rapid advances in computing and communications are dramatically changing all aspects of our lives. In particular, sophisticated 3D visualization, display, and interaction technologies are being used to complement our familiar physical world with computer-generated augmentations. These new interaction and display techniques are expected to make our work, learning, and leisure environments vastly more efficient and appealing.
The Evolution of a Framework for Mixed Reality Experiences
This chapter describes the evolution of a software system specifically designed to support the creation and delivery of Mixed Reality (MR) experiences. We first describe some of the attributes required of such a system. We then present a series of MR experiences that we have developed over the last four years, with companion sections on lessons learned and lessons applied. We conclude with several sample scripts that one might write to create experiences within the current version of this system. The authors' goals are to show the readers the unique challenges in developing an MR system for multimodal, multi-sensory experiences and to demonstrate how developing MR applications informs the evolution of such a framework.
Mixed Reality Interaction Techniques
ArXiv, 2021
This chapter gives an overview of interaction techniques for mixed reality including augmented and virtual reality (AR/VR). Various modalities for input and output are discussed. Specifically, techniques for tangible and surface-based interaction, gesture-based, pen-based, gaze-based, keyboard and mouse-based, as well as haptic interaction are discussed. Furthermore, the combination of multiple modalities in multisensory and multimodal interaction as well as interaction using multiple physical or virtual displays are presented. Finally, interaction with intelligent virtual agents is considered.
Cited by
An Aligned Rank Transform Procedure for Multifactor Contrast Tests
The 34th Annual ACM Symposium on User Interface Software and Technology, 2021
Data from multifactor HCI experiments often violates the assumptions of parametric tests (i.e., nonconforming data). The Aligned Rank Transform (ART) has become a popular nonparametric analysis in HCI that can find main and interaction effects in nonconforming data, but leads to incorrect results when used to conduct post hoc contrast tests. We created a new algorithm called ART-C for conducting contrast tests within the ART paradigm and validated it on 72,000 synthetic data sets. Our results indicate that ART-C does not inflate Type I error rates, unlike contrasts based on ART, and that ART-C has more statistical power than a t-test, Mann-Whitney U test, Wilcoxon signed-rank test, and ART. We also extended an open-source tool called ARTool with our ART-C algorithm for both Windows and R. Our validation had some limitations (e.g., only six distribution types, no mixed factorial designs, no random slopes), and data drawn from Cauchy distributions should not be analyzed with ART-C.
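The alignment step at the heart of ART can be illustrated for a two-factor design: responses are stripped of all modeled effects, the single effect of interest is added back, and the result is midranked before a conventional ANOVA. A rough Python sketch of that alignment step only (this is not the ART-C contrast procedure itself, and no substitute for the validated ARTool implementation):

```python
import numpy as np
from scipy.stats import rankdata

def aligned_ranks(y, a, b, effect="A"):
    """ART alignment + ranking for one effect of a balanced 2-factor design.

    y: responses; a, b: factor-level labels per observation.
    Returns midranks of the aligned responses, ready for ANOVA.
    """
    y = np.asarray(y, dtype=float)
    a = np.asarray(a)
    b = np.asarray(b)
    grand = y.mean()
    mu_a = {i: y[a == i].mean() for i in np.unique(a)}  # marginal means of A
    mu_b = {j: y[b == j].mean() for j in np.unique(b)}  # marginal means of B
    cell = np.array([y[(a == ai) & (b == bi)].mean()    # cell mean per obs
                     for ai, bi in zip(a, b)])
    resid = y - cell                                    # strip all effects
    if effect == "A":
        est = np.array([mu_a[ai] for ai in a]) - grand
    elif effect == "B":
        est = np.array([mu_b[bi] for bi in b]) - grand
    else:  # interaction A x B
        est = (cell - np.array([mu_a[ai] for ai in a])
                    - np.array([mu_b[bi] for bi in b]) + grand)
    return rankdata(resid + est)  # "average" method gives midranks for ties
```

On a balanced design, aligning for A removes the B and interaction effects, so the subsequent ANOVA on the ranks tests only A — which is why, as the paper shows, running post hoc contrasts directly on ranks aligned for a different effect gives incorrect results.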
Additive Manufacturing in Bespoke Interactive Devices—A Thematic Analysis
Applied Sciences
Additive Manufacturing (AM) facilitates product development due to the various native advantages of AM when compared to traditional manufacturing processes. Efficiency, customisation, innovation, and ease of product modifications are a few advantages of AM. This manufacturing process can therefore be applied to fabricate customisable devices, such as bespoke interactive devices for rehabilitation purposes. In this context, a two-day workshop titled Design for Additive Manufacturing: Future Interactive Devices (DEFINED) was held to discuss the design for AM issues encountered in the development of an innovative bespoke controller and supporting platform, in a Virtual Reality (VR)-based environment, intended for people with limited dexterity in their hands. The workshop sessions were transcribed, and a thematic analysis was carried out to identify the main topics discussed. The themes were Additive Manufacturing, Generative Design Algorithms, User-Centred Design, Measurement Devices f...
Proceedings of the Virtual Reality International Conference - Laval Virtual, 2018
Enhancing Military Training Using Extended Reality: A Study of Military Tactics Comprehension
Frontiers in virtual reality, 2022
This study identifies that increasing the fidelity of terrain representation does not necessarily increase overall understanding of the terrain in a simulated mission planning environment using the Battlefield Visualization and Interaction software (BVI; formerly known as ARES (M. W. Boyce et al., International Conference on Augmented Cognition, 2017, 411-422)). Prior research by M. Boyce et al. (Military Psychology, 2019, 31(1), 45-59) compared human performance on a flat surface (tablet) versus a topographically-shaped surface (BVI on a sand table integrated with top-down projection). Their results demonstrated that the topographically-shaped surface increased the perceived usability of the interface and reduced cognitive load relative to the flat interface, but did not affect overall task performance (i.e., accuracy and response time). The present study extends this work by adding BVI on a Microsoft HoloLens™. A sample of 72 United States Military Academy cadets used BVI on three different technologies in a within-subjects design: a tablet, a sand table (a projection-based display onto a military sand table), and the HoloLens™. Participants answered questions regarding military tactics in the context of conducting an attack in complex terrain. While prior research (Dixon et al., Display Technologies and Applications for Defense, Security, and Avionics III, 2009, 7327) suggested that the full 3D visualization offered by the HoloLens™ should improve performance relative to the sand table and tablet, our results demonstrated that the HoloLens™ performed worse than the other modalities in accuracy, response time, cognitive load, and usability. Implications and limitations of this work will be discussed.