Sonification of Large Datasets in a 3D Immersive Environment: A Neuroscience Case Study

INTERACTIVE VISUALIZATION OF COMPLEX NETWORK DYNAMICS: A NEUROSCIENCE TEST CASE. Layout of a Simulated Environment Based on a Connectome Dataset

2014

Simulated interactive layouts can play a key role in revealing hidden patterns in large datasets, especially information related to connection weights. In this study, we investigated the role of physical simulations applied to an immersive 3D visualization of a complex network. As a test case, we used a 3D interactive visualization of the dataset of human brain connections known as the connectome, in the immersive space "eXperience Induction Machine" (XIM). We conducted an empirical validation in which 10 subjects viewed different simulation principles, comparing the information conveyed by the anatomical structure against the structure formed after 10 seconds of simulation; we subsequently tested their understanding of the dataset. As a comparative validation of the new version, we systematically measured participants' understanding and visual memory of the connectomic structure, pointing to the areas of higher connectivity as well as to the clusters as a consequ...
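
The abstract describes settling a network layout with a physics simulation. Below is a minimal sketch of that idea, assuming a simple spring-embedder (force-directed) model with inverse-square repulsion and weight-scaled attraction; all function and parameter names are illustrative, not taken from the paper.

```python
# Illustrative sketch (not the authors' code): a spring-embedder layout step
# for a weighted graph, the kind of physics simulation the abstract describes.
import numpy as np

def force_directed_step(pos, weights, dt=0.02, k_rep=1.0, k_spring=0.5):
    """Advance node positions by one simulation step.

    pos     : (n, 3) array of node coordinates
    weights : (n, n) symmetric connection-weight matrix (0 = no edge)
    """
    disp = np.zeros_like(pos)
    for i in range(pos.shape[0]):
        delta = pos[i] - pos                          # vectors toward node i
        dist = np.linalg.norm(delta, axis=1)[:, None] + 1e-9
        # Inverse-square repulsion between all pairs spreads the layout out.
        disp[i] += (k_rep * delta / dist**3).sum(axis=0)
        # Weight-scaled spring attraction pulls strongly connected regions
        # together, so clusters become visible after the layout settles.
        disp[i] -= (k_spring * weights[i][:, None] * delta / dist).sum(axis=0)
    return pos + dt * disp

# Example: 4 nodes, one strongly weighted edge between nodes 0 and 1.
rng = np.random.default_rng(0)
pos = rng.normal(size=(4, 3))
w = np.zeros((4, 4)); w[0, 1] = w[1, 0] = 5.0
for _ in range(500):                                  # let the layout settle
    pos = force_directed_step(pos, w)
```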

AIive: Interactive Visualization and Sonification of Neural Network in Virtual Reality

2021

Artificial Intelligence (AI), especially the Neural Network (NN), has become increasingly popular. However, people usually treat AI as a tool, focusing on improving outcomes, accuracy, and performance while paying less attention to the representation of AI itself. We present AIive, an interactive visualization of AI in Virtual Reality (VR) that brings AI "alive". AIive enables users to manipulate the parameters of a NN with virtual hands and provides auditory feedback for the real-time values of loss, accuracy, and hyperparameters. Thus, AIive contributes an artistic and intuitive way to represent AI by integrating visualization, sonification, and direct manipulation in VR, potentially targeting a wide range of audiences.

Keywords: artificial intelligence, virtual reality, human-computer interaction
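
As a sketch of the kind of parameter-to-sound mapping the abstract describes, the following maps a falling training loss to a rising pitch and writes the result to a WAV file. The ranges, mapping, and names are assumptions for illustration; this is not the AIive implementation.

```python
# Hypothetical mapping, not the AIive API: lower loss -> higher pitch.
import numpy as np
import wave

def loss_to_tone(loss, loss_max=2.0, f_lo=220.0, f_hi=880.0,
                 dur=0.2, sr=44100):
    """Render one short sine tone whose pitch reflects the current loss."""
    frac = 1.0 - min(loss / loss_max, 1.0)            # 0 (bad) .. 1 (good)
    freq = f_lo + frac * (f_hi - f_lo)
    t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
    return np.sin(2 * np.pi * freq * t)

# Sonify a mock training run with exponentially decaying loss.
tones = [loss_to_tone(2.0 * np.exp(-0.3 * step)) for step in range(20)]
signal = np.concatenate(tones)
with wave.open("loss_sonification.wav", "wb") as f:
    f.setnchannels(1); f.setsampwidth(2); f.setframerate(44100)
    f.writeframes((signal * 32767).astype(np.int16).tobytes())
```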

Understanding large network datasets through embodied interaction in virtual reality

Proceedings of the 2014 Virtual Reality International Conference, 2014

The quantity of information we are producing is soaring: this generates large amounts of data that are frequently left unexplored. Novel tools are needed to stem this "data deluge". We developed a system that enhances the understanding of large datasets through embodied navigation and natural gestures using the immersive mixed reality space called "eXperience Induction Machine" (XIM). One of the applications of our system is the exploration of the human brain connectome: the network of nodes and connections that defines the information flow in the brain. We exposed participants to a connectome dataset using either our system or state-of-the-art software for the visualization and analysis of connectomic data. We measured their understanding and visual memory of the connectome structure. Our results showed that participants retained more information about the structure of the network when using our system. Overall, our system constitutes a novel approach to the exploration and understanding of large network datasets.

The effect of spatialization in a data sonification exploration task

Proceedings of the …, 2008

This study presents an exploration task using interactive sonification to compare different sonification mapping concepts. Based on the real application of protein-protein docking within the CoRSAIRe project («Combinaisons de Rendus Sensori-moteurs pour l'Analyse Immersive de Résultats», or Combination of Sensori-motor Rendering for the Immersive Analysis of Results), an abstraction of the task was developed which simulates the basic concepts involved. Two conditions were evaluated: the inclusion or absence of spatialized ...
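
To make the contrast between conditions concrete, here is a minimal sketch of spatialized rendering reduced to constant-power stereo panning by azimuth; a study like this one would likely use full 3D (e.g., HRTF-based) rendering, so this mapping is an illustrative simplification.

```python
# Illustrative simplification: the "spatialized" condition as stereo panning.
import numpy as np

def pan_stereo(mono, azimuth):
    """Constant-power pan; azimuth in [-1 (hard left), +1 (hard right)]."""
    theta = (azimuth + 1.0) * np.pi / 4.0             # 0 .. pi/2
    return np.stack([mono * np.cos(theta),            # left channel
                     mono * np.sin(theta)], axis=1)   # right channel

sr = 44100
t = np.linspace(0.0, 0.5, sr // 2, endpoint=False)
beep = np.sin(2 * np.pi * 440.0 * t)
left_cue = pan_stereo(beep, -0.8)    # target located to the listener's left
```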

BrainX³: embodied exploration of neural data

We present BrainX³ as a novel immersive and interactive technology for the exploration of large biological data, customized in this paper towards brain networks. Unlike traditional machine-inference systems, BrainX³ posits a two-way coupling of human intuition to powerful machine computation to tackle the big-data challenge. Furthermore, through unobtrusive wearable sensors, BrainX³ can infer a user's state in terms of arousal and cognitive workload, adapting the visualization and sonification parameters to boost the exploration process.
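
A minimal sketch of the closed loop the abstract outlines, assuming inferred arousal and workload arrive as normalized values in [0, 1]; the mapping and names are hypothetical, not BrainX³'s actual implementation.

```python
# Hypothetical mapping, not BrainX³'s implementation: user state in [0, 1]
# drives visualization density and sonification tempo.
def adapt_display(arousal, workload,
                  detail_range=(50, 500), tempo_range=(0.5, 2.0)):
    """High workload -> fewer visible nodes; high arousal -> slower sound."""
    d_lo, d_hi = detail_range
    n_visible_nodes = int(d_hi - workload * (d_hi - d_lo))
    t_lo, t_hi = tempo_range
    sonification_rate = t_hi - arousal * (t_hi - t_lo)   # events per second
    return n_visible_nodes, sonification_rate

print(adapt_display(arousal=0.7, workload=0.9))   # -> calmer, sparser display
```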

Immersive Sonification for Displaying Brain Scan Data

Proceedings of the International Conference on Health Informatics, 2013

Scans of brains result in data that can be challenging to display due to their complexity, multi-dimensionality, and range. Visual representations of such data are limited by the nature of the display, the number of dimensions that can be represented visually, and the capacity of our visual system to perceive and interpret visual data. This paper describes the use of sonification to interpret brain scans and to use sound as a complementary tool to view, analyze, and diagnose. The sonification tool SoniScan is described and evaluated as a method to augment visual brain data display.
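
As an illustration of the general approach rather than the SoniScan tool itself, the sketch below pages through the slices of a scan volume and maps each slice's mean voxel intensity to pitch, so a bright region is heard as a rising tone.

```python
# Illustrative approach, not the SoniScan tool: one tone per slice,
# pitch driven by mean voxel intensity.
import numpy as np

def sonify_volume(volume, f_lo=200.0, f_hi=2000.0, slice_dur=0.1, sr=44100):
    """volume: 3D array of voxel intensities; returns a mono signal."""
    v = volume.astype(float)
    v = (v - v.min()) / (np.ptp(v) + 1e-9)            # normalize to 0 .. 1
    t = np.linspace(0.0, slice_dur, int(sr * slice_dur), endpoint=False)
    tones = [np.sin(2 * np.pi * (f_lo + sl.mean() * (f_hi - f_lo)) * t)
             for sl in v]                             # iterate over slices
    return np.concatenate(tones)

# Synthetic "scan": a bright blob in the middle slices of a noise volume,
# audible as a pitch rise while paging through that region.
vol = np.random.rand(32, 64, 64)
vol[12:20, 24:40, 24:40] += 2.0
signal = sonify_volume(vol)
```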

Sound and meaning in auditory data display

2004

Auditory data display is an interdisciplinary field linking auditory perception research, sound engineering, data mining, and human-computer interaction in order to make the semantic contents of data perceptually accessible in the form of (non-verbal) audible sound. For this goal it is important to understand the different ways in which sound can encode meaning. We discuss this issue from the perspectives of language, music, functionality, listening modes, and physics, and point out some limitations of current techniques for auditory data display, in particular when targeting high-dimensional data sets. As a promising, potentially very widely applicable approach we discuss the method of model-based sonification (MBS) introduced recently by the authors, and point out how its natural semantic grounding in the physics of a sound-generation process supports the design of sonifications that are accessible even to untrained, everyday listening. We then proceed to show that MBS also facilitates the design of an intuitive, active navigation through "acoustic aspects", somewhat analogous to the use of successive 2D views in 3D visualization. Finally, we illustrate the concept with a first prototype of a "tangible" sonification interface which allows the user to "perceptually map" sonification responses into active exploratory hand motions, and give an outlook on some planned extensions.
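
The core MBS idea can be sketched in a few lines: treat each data point as a damped oscillator, excite the whole model at once, and listen to the superposed decay. The oscillator model and parameters below are illustrative assumptions, not the authors' published sound model.

```python
# Illustrative MBS sketch: each data value tunes one damped oscillator;
# "striking" the model plays their superposed decay. Parameters are assumed.
import numpy as np

def mbs_response(data, dur=1.5, sr=44100, f_lo=150.0, f_hi=1500.0):
    """Excite the data-driven oscillator bank and return its mono response."""
    x = np.asarray(data, dtype=float)
    x = (x - x.min()) / (np.ptp(x) + 1e-9)            # normalize to 0 .. 1
    freqs = f_lo + x * (f_hi - f_lo)
    t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
    envelope = np.exp(-3.0 * t)                       # shared damping
    tone = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    return tone * envelope / len(freqs)

# "Strike" a small dataset: the tight cluster sounds as a near-unison chord,
# while the outlier is heard as a clearly separate partial.
signal = mbs_response([0.48, 0.50, 0.52, 0.51, 3.0])
```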

Towards an auditory representation of complexity

Proc. of ICAD, 2005

In applications of sonification, the information implied by the sonification strategy applied often exceeds the amount of information that can be retrieved by the ear about the object of sonification. This paper focuses on the representation of complex geometric formations through sound, drawing on the development of an interactive installation sonifying escape-time fractals as an example. The terms "auditory emergence and formation" are introduced, and an attempt is made to interpret them for music composition, data display, and information theory. The example application, "Audio Fraktal", is a public installation in the permanent exhibition of the Museum for Media Art at ZKM, Karlsruhe. The design of the audiovisual display system, which allows the shared experience of interactive spatial auditory formation, is described.
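
As a sketch of what sonifying an escape-time fractal can mean in practice, the following traces a path through the complex plane and maps each point's Mandelbrot escape time to pitch; all mappings are assumptions for demonstration, not the installation's actual design.

```python
# Illustrative mapping, not the "Audio Fraktal" design: escape time -> pitch.
import numpy as np

def escape_time(c, max_iter=64):
    """Iterations until z -> z**2 + c escapes |z| > 2 (Mandelbrot set)."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

def sonify_path(points, max_iter=64, f_lo=110.0, f_hi=1760.0,
                dur=0.08, sr=44100):
    """Play one short tone per point along a path in the complex plane."""
    t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
    tones = []
    for c in points:
        frac = escape_time(c, max_iter) / max_iter   # 0 = fast escape
        tones.append(np.sin(2 * np.pi * (f_lo + frac * (f_hi - f_lo)) * t))
    return np.concatenate(tones)

# Sweep across the boundary region near the set's "seahorse valley",
# where escape times (and thus pitch) fluctuate rapidly.
path = [complex(-0.75, y) for y in np.linspace(0.0, 0.25, 40)]
signal = sonify_path(path)
```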