Editorial: Sonification, aesthetic representation of physical quantities

Real-Time Sonification of Physiological Data in an Artistic Performance Context

This paper presents an approach to real-time sonification of physiological measurements and its extension to artistic creation. Three sensors were used to measure heart pulse, breathing, and thoracic volume expansion. A different sound process, based on sound synthesis and digital audio effects, was used for each sensor. We designed the system to produce three clearly separable sound streams and to allow listeners to perceive the physiological phenomena as clearly as possible. The data were measured in the context of an artistic performance. Because the primary purpose of this sonification is to contribute to an artistic project, we tried to produce a result that is interesting from an aesthetic point of view, while at the same time keeping the auditory display highly correlated with the data streams.
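
The "one separable stream per sensor" idea can be illustrated with a minimal parameter-mapping sketch. This is not the authors' implementation; the function names, mapping ranges, and choice of sound parameters (clicks for pulse, noise amplitude for breathing, drone pitch for thoracic expansion) are illustrative assumptions:

```python
def map_pulse(beat_times, click_freq_hz=2000.0):
    """Each detected heartbeat becomes a short click event (hypothetical mapping)."""
    return [{"onset_s": t, "freq_hz": click_freq_hz, "dur_s": 0.02} for t in beat_times]

def map_breath(flow, noise_floor=0.05):
    """Breathing flow (normalised -1..1) drives the amplitude of a filtered-noise stream."""
    return [noise_floor + (1.0 - noise_floor) * abs(x) for x in flow]

def map_thorax(volume, f_lo=110.0, f_hi=440.0):
    """Thoracic expansion (normalised 0..1) drives the pitch of a sustained drone,
    mapped exponentially so equal expansion steps give equal pitch intervals."""
    return [f_lo * (f_hi / f_lo) ** v for v in volume]
```

Because each sensor drives a timbrally distinct process (transient clicks, broadband noise, a pitched drone), the three streams remain perceptually segregated, which is the design goal the abstract describes.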

Data Analysis through Auditory Display: Applications in Heart Rate Variability

This thesis draws from music technology to create novel sonifications of heart rate information that may be of clinical utility to physicians. Current visually based methods of analysis involve filtering the data, so that by definition some aspects are illuminated at the expense of others, which are attenuated or discarded. However, earlier research has demonstrated the suitability of the auditory system for following multiple streams of information. With this in mind, sonification may offer a means to display a potentially unlimited number of signal processing operations simultaneously, allowing correlations among various analytical techniques to be observed. This study proposes a flexible listening environment in which a cardiologist or researcher may adjust the rate of playback and the relative levels of several parallel sonifications that represent different processing operations. Each sonification “track” is meant to remain perceptually segregated so that the listener may create an optimal audio mix. A distinction is made between parameters that are suited for illustrating information and parameters that carry less perceptual weight, which are employed as stream separators. The proposed sonification model is assessed with a perception test in which participants are asked to identify four different cardiological conditions from auditory and visual displays. The results show a higher degree of accuracy in the identification of obstructive sleep apnea with the auditory displays than with the visual displays. The sonification model is then fine-tuned to reflect unambiguously the oscillatory characteristics of sleep apnea that may not be evident from a visual representation. Since the identification of sleep apnea through the heart rate is a current priority in cardiology, it is thus feasible that sonification could become a valuable component in apnea diagnosis. See also: http://www.music.psu.edu/Faculty%20Pages/Ballora/sonification/sonex.html
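
The "parallel tracks with listener-adjustable levels" concept can be sketched as a simple mixer. This is a hypothetical illustration, not the thesis software; the class name and sample representation are assumptions:

```python
class SonificationMixer:
    """Sketch of the parallel-tracks idea: each signal processing operation
    yields one audio track; the listener adjusts per-track gain and playback rate."""

    def __init__(self, tracks):
        self.tracks = dict(tracks)              # track name -> list of samples
        self.gains = {name: 1.0 for name in self.tracks}
        self.rate = 1.0                         # playback-speed multiplier

    def set_gain(self, name, gain):
        """Raise or lower one track in the mix (never below silence)."""
        self.gains[name] = max(0.0, gain)

    def mix(self):
        """Sum the gain-weighted tracks sample by sample (equal lengths assumed)."""
        n = min(len(t) for t in self.tracks.values())
        return [sum(self.gains[k] * self.tracks[k][i] for k in self.tracks)
                for i in range(n)]
```

Muting all but one track lets the listener audition a single processing operation; restoring the gains recreates the full mix, which mirrors the flexible listening environment the abstract proposes.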

A model-based sonification system for directional movement behavior

Interactive Sonification Workshop …, 2010

Computational algorithms are presented that create a virtual model of a person’s kinesphere (i.e. a concept of Laban denoting the space immediately surrounding a person’s body and reachable by the upper limbs). This model is approached as a virtual sound object/instrument (VSO) that can be “played” by moving the upper limbs in particular directions. As such, it provides an alternative to visual qualitative movement analysis tools, such as bar plots. This model-based sonification system emphasizes the role of interaction in sonification. Moreover, this study claims that the integration of intentionality and expressivity in auditory biofeedback interaction systems is necessary in order to make the sonification process more precise and transparent. A method is proposed – based on the embodied music cognition theory – that is able to do this without abandoning the scientific, systematic principles underlying the process of sonification.

Sonic Information Design: Proceedings of the 22nd Annual International Conference on Auditory Display

2016

This paper presents a brief description of surface electromyography (sEMG), what it can be used for, and some of the problems associated with visual displays of sEMG data. Sonifications of sEMG data have shown potential for certain applications in data monitoring and movement training; however, there are still challenges related to the design of these sonifications that need to be addressed. Our previous research has shown that different sonification designs resulted in better listener performance for different sEMG evaluation tasks (e.g. identifying muscle activation time vs. muscle exertion level). Based on this finding, we speculated that sonifications may benefit from being designed to be task-specific, and that integrating a task analysis into the sonification design process may help sonification designers identify intuitive and meaningful sonification designs. This paper presents a brief introduction to what a task analysis is, provides an example of how a task analysis ...

Personify: a Toolkit for Perceptually Meaningful Sonification

ACMA'95, 1995

People naturally use their hearing to obtain information to support their everyday activities. Sonification is the application of hearing to support computer-based information processing tasks, where numerical or other data replaces the environment as a source of sounds. Turning numbers into sounds is easy with current music technology, but creating intuitive and informative sonification mappings is not. The auditory display of scientific data requires consideration of the task at hand, an understanding of data characteristics, and an expert knowledge of psychoacoustics. The display designer must depend on experience, a few heuristic guidelines, patience, and luck; this is why successful examples are rare. Personify is a suite of software tools that enables a non-expert to quickly and easily craft meaningful and effective sonifications. Interaction with these tools is focused on what a person hears, rather than on how a device makes sounds. The toolkit provides a systematic framework which consists of a number of data-sensitive mapping techniques based on a perceptually linearised sound space. The user is able to select sequences in the sound space using geometric paths such as lines, spirals and planes. Constrained guidance embodies the expert knowledge in the system. This paper will describe the Personify toolkit, its purpose, theoretical foundation, the component tools, and the multimedia user interface. Keywords: Sonification, auditory display, perceptual psychology, visualisation, human-computer interface
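
The "geometric paths through a perceptually linearised sound space" idea can be sketched as follows. This is an illustrative reconstruction, not Personify's code; the coordinate conventions (normalised perceptual axes such as pitch and brightness in [0, 1]) and parameter names are assumptions:

```python
import math

def line_path(u, start, end):
    """Linear path: interpolate each perceptual coordinate between two anchor
    points as a normalised data value u moves from 0 to 1."""
    return tuple(a + u * (b - a) for a, b in zip(start, end))

def spiral_path(u, centre=(0.5, 0.5), radius=0.4, turns=3.0):
    """Spiral path in a 2-D slice of the sound space (e.g. pitch vs. brightness):
    u in [0, 1] winds outward from the centre, covering the given number of turns."""
    angle = 2.0 * math.pi * turns * u
    r = radius * u
    return (centre[0] + r * math.cos(angle), centre[1] + r * math.sin(angle))
```

Because the space is perceptually linearised, equal steps along such a path are intended to sound like equal perceptual changes, which is what makes the mapping informative rather than merely audible.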

MotionLab Sonify: A Framework for the Sonification of Human Motion Data

Ninth International Conference on Information Visualisation (IV'05), 2005

Here a flexible framework for the sonification of human movement data is presented, capable of processing standard kinematic motion capture data as well as derived quantities such as force data. Force data are computed by inverse dynamics algorithms and can be used as input parameters for real time sonification. Simultaneous visualization is provided using OpenGL.
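
A drastically simplified sketch of the "derived force data as sonification input" pipeline is shown below. Real inverse dynamics operates on linked body segments; here a single F = m·a step stands in for it, and the pitch range and clipping threshold are illustrative assumptions, not values from the paper:

```python
def joint_force(mass_kg, accel):
    """Stand-in for inverse dynamics: net force on a segment from its mass and
    measured acceleration (F = m * a), computed per axis."""
    return tuple(mass_kg * a for a in accel)

def force_to_pitch(force, f_base=220.0, f_span=660.0, f_max=500.0):
    """Map the force magnitude (clipped at f_max newtons) linearly onto a pitch
    range, so stronger forces sound higher in real time."""
    mag = min(sum(f * f for f in force) ** 0.5, f_max)
    return f_base + f_span * (mag / f_max)
```

In the framework described, such derived quantities are streamed alongside the raw kinematic channels, so either can drive the real-time sound synthesis while the OpenGL view shows the motion.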

The Sonification of EMG Data

Proceedings of the International Conference on …, 2006

This paper describes the sonification of electromyographic (EMG) data and an experiment that was conducted to verify its efficacy as an auditory display of the data. A real-time auditory display for EMG has two main advantages over a graphical representation: it frees the eyes of the analyst or physiotherapist, and it can also be heard by the patient, who can then try to match the target sound of a healthy person with his or her own movement. The sonification was found to be effective in displaying known characteristics of the data. The 'roughness' of the sound was found to be related to the age of the patients. The sound produced by the sonification was also judged to be appropriate as an audio metaphor of the data it displays, a factor that contributes to its potential to become a useful feedback tool for patients.

Data sonification and sound visualization

Computing in Science & Engineering, 1999

This article describes a collaborative project between researchers in the Mathematics and Computer Science Division at Argonne National Laboratory and the Computer Music Project of the University of Illinois at Urbana-Champaign. The project focuses on the use of sound for the exploration and analysis of complex data sets in scientific computing. The article addresses digital sound synthesis in the context of DIASS (Digital Instrument for Additive Sound Synthesis) and sound visualization in a virtual-reality environment by means of M4CAVE. It describes the procedures and preliminary results of some experiments in scientific sonification and sound visualization.