Stefanie Zollmann | University of Otago
Papers by Stefanie Zollmann
PLOS ONE, 2016
Social psychology is fundamentally the study of individuals in groups, yet there remain basic unanswered questions about group formation, structure, and change. We argue that the problem is methodological. Until recently, there was no way to track who was interacting with whom with anything approximating valid resolution and scale. In the current study we describe a new method that applies recent advances in image-based tracking to study incipient group formation and evolution with experimental precision and control. In this method, which we term "in vivo behavioral tracking," we track individuals' movements with a high-definition video camera mounted atop a large field laboratory. We report results of an initial study that quantifies the composition, structure, and size of the incipient groups. We also apply in vivo spatial tracking to study participants' tendency to cooperate as a function of their embeddedness in those crowds. We find that participants form groups of seven on average, are more likely to approach others of similar attractiveness and (to a lesser extent) gender, and that participants' gender and attractiveness are both associated with their proximity to the spatial center of groups (such that women and attractive individuals are more likely than men and unattractive individuals to end up in the center of their groups). Furthermore, participants' proximity to others early in the study predicted the effort they exerted in a subsequent cooperative task, suggesting that submergence in a crowd may predict social loafing. We conclude that in vivo behavioral tracking is a uniquely powerful new tool for answering longstanding, fundamental questions about group dynamics.
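The centrality finding above (proximity to the spatial center of a group) suggests a simple measure computable from tracked coordinates. The function below is an illustrative reconstruction, not the authors' published analysis; the array layout and the plain Euclidean centroid are assumptions.

```python
import numpy as np

def centrality(positions):
    """Distance of each tracked individual to the group's spatial centroid.

    positions: (n, 2) array-like of (x, y) coordinates for one group,
    e.g. extracted from overhead video tracking. Smaller values mean the
    individual is closer to the group's spatial center.
    """
    positions = np.asarray(positions, dtype=float)
    centroid = positions.mean(axis=0)  # spatial center of the group
    return np.linalg.norm(positions - centroid, axis=1)

# Example: four individuals at the corners of a square are all
# equidistant from the centroid.
dists = centrality([[0, 0], [2, 0], [0, 2], [2, 2]])
```

Per-frame centrality scores like these could then be correlated with later task effort, as in the social-loafing analysis the abstract describes.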
IEEE Transactions on Visualization and Computer Graphics, 2016
Augmented Reality is a technique that enables users to interact with their physical environment through the overlay of digital information. While it has been researched for decades, Augmented Reality has recently moved out of the research labs and into the field. While most applications are used sporadically and for one particular task only, current and future scenarios will provide a continuous and multi-purpose user experience. Therefore, in this paper, we present the concept of Pervasive Augmented Reality, aiming to provide such an experience by sensing the user's current context and adapting the AR system based on the changing requirements and constraints. We present a taxonomy for Pervasive Augmented Reality and context-aware Augmented Reality, which classifies context sources and context targets relevant for implementing such a context-aware, continuous Augmented Reality experience. We further summarize existing approaches that contribute towards Pervasive Augmented Reality. Based on our taxonomy and survey, we identify challenges for future research directions in Pervasive Augmented Reality.
Proceedings of the 25th Australian Computer-Human Interaction Conference: Augmentation, Application, Innovation, Collaboration, Nov 25, 2013
This paper describes spatially aligned user-generated audio annotations and their integration with visual augmentations into a single mobile AR system. Details of our prototype system are presented, along with an explorative usability study and a technical evaluation of the design. Mobile Augmented Reality applications allow for visual augmentations as well as tagging and annotation of the surrounding environment. Text and graphics are currently the media of choice for these applications, with GPS coordinates used to determine spatial location. Our research demonstrates that the use of visually guided audio annotations that are positioned and orientated in augmented outdoor space successfully provides an additional, novel, and enhanced mobile user experience.
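To ground the idea of audio annotations "positioned and orientated in augmented outdoor space", here is a minimal sketch of the underlying geometry: turning a user pose and an annotation position into a relative bearing and a stereo pan. This is an illustrative sketch, not the paper's implementation; the local metric frame, the compass-style heading convention, and the constant-power pan law are all assumptions.

```python
import math

def relative_azimuth(user_pos, user_heading_deg, anno_pos):
    """Bearing of an annotation relative to the user's view direction.

    user_pos, anno_pos: (x, y) in a local metric frame (e.g. projected GPS).
    user_heading_deg: compass-style heading of the user's view, in degrees.
    Returns degrees in (-180, 180]; negative = annotation is to the left.
    """
    dx = anno_pos[0] - user_pos[0]
    dy = anno_pos[1] - user_pos[1]
    bearing = math.degrees(math.atan2(dx, dy))  # 0 deg = +y axis ("north")
    rel = (bearing - user_heading_deg + 180.0) % 360.0 - 180.0
    return rel if rel != -180.0 else 180.0

def stereo_gains(rel_az_deg):
    """Constant-power pan: map a relative azimuth to (left, right) gains."""
    az = max(-90.0, min(90.0, rel_az_deg))  # clamp to the frontal half-plane
    theta = math.radians((az + 90.0) / 2.0)  # 0..pi/2 across left..right
    return math.cos(theta), math.sin(theta)
```

A playback loop would recompute the azimuth as the user turns, so the annotation stays anchored to its outdoor location rather than to the listener's head.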
Proceedings of the 26th Australian Computer Human Interaction Conference, Dec 2, 2014
This paper evaluates different state-of-the-art approaches for implementing an X-ray view in Augmented Reality (AR). Our focus is on approaches supporting better scene understanding, and in particular a better sense of depth order between physical objects and digital objects. One of the main goals of this work is to provide effective X-ray visualization techniques that work in unprepared outdoor environments. In order to achieve this goal, we focus on methods that automatically extract depth cues from video images. The extracted depth cues are combined in ghosting maps that are used to assign each video image pixel a transparency value to control the overlay in the AR view. Within our study, we analyze three different types of ghosting maps: 1) alpha-blending, which uses a uniform alpha value within the ghosting map; 2) edge-based ghosting, which is based on edge extraction; and 3) image-based ghosting, which incorporates perceptual grouping, saliency information, edges, and texture details. Our study results demonstrate that the latter technique helps the user understand the subsurface location of virtual objects better than alpha-blending or edge-based ghosting.
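As a concrete illustration of a ghosting map that assigns each video pixel a transparency value, the sketch below implements the simplest edge-based variant, using gradient magnitude as the edge cue. It is a minimal reconstruction for illustration, not the paper's implementation; the base alpha, the edge gain, and the use of np.gradient in place of a dedicated edge detector are assumptions.

```python
import numpy as np

def edge_ghosting_map(gray, base_alpha=0.3, edge_gain=0.7):
    """Edge-based ghosting map: per-pixel alpha for an AR X-ray overlay.

    gray: 2-D float array in [0, 1] (the video frame as grayscale).
    Pixels on strong edges stay more opaque, preserving the occluder's
    structure; flat regions become transparent so the hidden virtual
    object shows through.
    """
    gy, gx = np.gradient(gray)          # per-axis intensity derivatives
    mag = np.hypot(gx, gy)              # gradient magnitude as edge cue
    if mag.max() > 0:
        mag = mag / mag.max()           # normalize to [0, 1]
    return np.clip(base_alpha + edge_gain * mag, 0.0, 1.0)

def xray_composite(frame, virtual, alpha):
    """Blend: alpha keeps the video pixel, (1 - alpha) shows the virtual layer."""
    return alpha * frame + (1.0 - alpha) * virtual
```

The image-based variant the study favours would replace the single gradient cue with a combination of saliency, perceptual grouping, and texture terms before the same per-pixel blend.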
The integration of the auditory modality in virtual reality environments is known to promote the sensations of immersion and presence. However, it is also known from psychophysics studies that auditory-visual interactions obey complex rules and that multisensory conflicts may disrupt the participant's engagement with the presented virtual scene. It is thus important to measure the accuracy of the auditory spatial cues reproduced by the auditory display and their consistency with the spatial visual cues. This study evaluates auditory localization performance under various unimodal and auditory-visual bimodal conditions in a virtual reality (VR) setup using a stereoscopic display and binaural reproduction over headphones in static conditions. The auditory localization performance observed in the present study is in line with that reported in real conditions, suggesting that VR gives rise to consistent auditory and visual spatial cues. These results validate the use of VR for fut...
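Localization accuracy of the kind measured here is typically summarized as the angle between the true and the reported source direction. A minimal sketch follows; the azimuth/elevation convention is an assumption, not necessarily the one used in the study.

```python
import math

def from_azimuth_elevation(az_deg, el_deg):
    """Unit direction vector for a source at the given azimuth/elevation.

    Convention (assumed): azimuth 0 = straight ahead (+y), positive to the
    right; elevation 0 = horizontal plane, positive upward.
    """
    az, el = math.radians(az_deg), math.radians(el_deg)
    return (math.cos(el) * math.sin(az),
            math.cos(el) * math.cos(az),
            math.sin(el))

def angular_error_deg(true_dir, reported_dir):
    """Great-circle angle between two unit direction vectors, in degrees."""
    dot = sum(t * r for t, r in zip(true_dir, reported_dir))
    dot = max(-1.0, min(1.0, dot))  # guard against rounding outside [-1, 1]
    return math.degrees(math.acos(dot))
```

Averaging this error over trials, per condition, gives the unimodal-versus-bimodal comparison the abstract describes.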
2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2013
Figure 1: Common Ghosted View versus Adaptive Ghosted View. (a) The clarity of an X-ray visualization may suffer from an unfortunately low contrast between occluder and occludee in areas that are important for scene comprehension. (b) Blue color coding of the important areas lacking contrast. (c) Close-up on areas that are lacking contrast. (d) Our adaptive X-ray visualization technique ensures sufficient contrast while leaving the original color design untouched.
Proceedings of the IEEE, 2000
Augmented reality (AR) allows for an on-site presentation of information that is registered to the physical environment. Applications from civil engineering, which require users to process complex information, are among those that can benefit particularly from such a presentation. In this paper, we describe how to use AR to support monitoring and documentation of construction site progress. For these tasks, the responsible staff usually requires fast and comprehensible access to progress information to enable comparison to the as-built status as well as to as-planned data. Instead of tediously searching and mapping related information to the actual construction site environment, our AR system allows for the access of information right where it is needed. This is achieved by superimposing progress as well as as-planned information onto the user's view of the physical environment. For this purpose, we present an approach that uses aerial 3-D reconstruction to automatically capture progress information and a mobile AR client for on-site visualization. Within this paper, we describe in greater detail how to capture 3-D progress information, how to register the AR system within the physical outdoor environment, how to visualize progress information in a comprehensible way in an AR overlay, and how to interact with this kind of information. By implementing such an AR system, we are able to provide an overview of the possibilities and future applications of AR in the construction industry.
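The comparison of as-built status against as-planned data described above reduces, at its core, to checking which scheduled elements are present in the captured reconstruction. The hypothetical sketch below illustrates that check; the element-id and schedule representations are assumptions, not the paper's data model.

```python
from datetime import date

def progress_report(as_planned, as_built, today):
    """Flag elements that are behind schedule.

    as_planned: dict mapping an element id to its planned completion date.
    as_built: set of element ids detected in the aerial 3-D reconstruction.
    Returns the ids that should be finished by `today` but were not observed.
    """
    return sorted(e for e, due in as_planned.items()
                  if due <= today and e not in as_built)
```

In an AR overlay, the returned ids would be the elements to highlight on site, directly where the discrepancy occurs.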
IEEE Transactions on Visualization and Computer Graphics, 2000
Fig. 1. Augmented Reality-supported flight management for aerial reconstruction. (Left) Aerial reconstruction of a building. (Middle) Depth estimation for a hovering MAV in the distance is complicated by missing depth cues. (Right) Augmented Reality provides additional graphical cues for understanding the position of the vehicle.
Abstract. Micro Aerial Vehicles (MAVs) equipped with high-resolution cameras enable cost-efficient and autonomous image acquisition from unconventional viewpoints. To fully exploit the limited flight time of current MAVs, view planning is essential for complete and precise 3D scene sampling. We propose a novel camera network design algorithm for MAV-based close-range photogrammetry that exploits prior knowledge of the surroundings.
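View planning under a limited flight time is naturally cast as a coverage problem. The sketch below shows a generic greedy selection over candidate views; it is not the paper's algorithm, and the set-based visibility representation is an assumption.

```python
def greedy_view_plan(candidate_views, budget):
    """Greedy camera-network design: pick views maximizing new coverage.

    candidate_views: dict mapping a view id to the set of surface-sample
    ids that view observes (these sets would come from prior scene
    knowledge). budget: number of views the MAV's flight time allows.
    Returns (ordered list of chosen view ids, set of covered samples).
    """
    covered, plan = set(), []
    views = dict(candidate_views)
    for _ in range(min(budget, len(views))):
        # Pick the view that adds the most not-yet-covered samples.
        vid, seen = max(views.items(), key=lambda kv: len(kv[1] - covered))
        if not seen - covered:
            break  # no remaining view adds anything new
        plan.append(vid)
        covered |= seen
        del views[vid]
    return plan, covered
```

Greedy selection is a standard baseline for such set-cover-style problems; a purpose-built planner would additionally account for viewing angles, overlap requirements, and flight-path cost.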
Abstract. The segmentation of images as input for image analysis is used in various applications. The resulting segments are often called superpixels and can be used in further analysis to compute certain information about the objects in the picture. Unfortunately, the majority of superpixel algorithms are computationally expensive. Especially for real-time video analysis, it is hard to find a proper algorithm that computes superpixel representations without decreasing the quality of the results.
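To make the superpixel idea concrete, here is a one-pass, SLIC-style sketch: grid-placed seeds and a joint color+space distance. This is an illustrative toy, not any algorithm the paper discusses; real SLIC iterates the assignment and updates the seed positions.

```python
import numpy as np

def grid_superpixels(image, n_side=4, compactness=10.0):
    """One-pass SLIC-style assignment on a grayscale image.

    image: (H, W) float array. Returns an (H, W) integer label map with
    up to n_side * n_side superpixels. One assignment pass is enough to
    show why grid-seeded methods are cheap but only approximate.
    """
    h, w = image.shape
    ys = np.linspace(h / (2 * n_side), h - h / (2 * n_side), n_side)
    xs = np.linspace(w / (2 * n_side), w - w / (2 * n_side), n_side)
    step = max(h, w) / n_side
    yy, xx = np.mgrid[0:h, 0:w]
    best = np.full((h, w), np.inf)
    labels = np.zeros((h, w), dtype=int)
    for k, (cy, cx) in enumerate((y, x) for y in ys for x in xs):
        seed_val = image[int(cy), int(cx)]
        d_color = (image - seed_val) ** 2
        d_space = ((yy - cy) ** 2 + (xx - cx) ** 2) / (step ** 2)
        d = d_color + compactness * d_space  # joint color+space distance
        upd = d < best
        best[upd] = d[upd]
        labels[upd] = k
    return labels
```

On a uniform image the color term vanishes and the labels reduce to a spatial Voronoi partition of the seed grid, which makes the compactness trade-off easy to see.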
Advances in Visual …, 2012
Most civil engineering tasks require accessing, surveying, and modifying geospatial data in the field and referencing this virtual, geospatial information to the real-world situation. Augmented Reality (AR) can be a useful tool to create, edit, and update geospatial data representing real-world artifacts by interacting with the 3D graphical representation of the geospatial data augmented in the user's view. One of the main challenges of interactive AR visualizations of data from professional geographic information systems (GIS) is the ...
icg.tugraz.at
In this paper we present an approach for visualizing time-oriented data of dynamic scenes in an on-site AR view. Visualizations of time-oriented data pose special challenges compared to the visualization of arbitrary virtual objects. Usually, the 4D data occludes a large part of the real scene. Additionally, the data sets from different points in time may occlude each other. Thus, it is important to design adequate visualization techniques that provide a comprehensible visualization. In this paper we introduce a visualization concept that uses overview and detail techniques to present 4D data at different levels of detail. These levels provide, first, an overview of the 4D scene; second, information about the 4D change of a single object; and third, detailed information about object appearance and geometry for specific points in time. Combining the three levels of detail with interactive transitions such as magic lenses or distorted viewing techniques enables the user to understand the relationship between them. Finally, we show how to apply this concept to construction site documentation and monitoring.