From Signal to Substance and Back: Insights from Environmental Sound Research to Auditory Display Design

2009

A persistent concern in the field of auditory display design has been how to effectively use environmental sounds, which are familiar, naturally occurring non-speech, non-musical sounds. Environmental sounds represent physical events in the everyday world, and thus they have a semantic content that enables learning and recognition. However, unless used appropriately, they may cause problems in auditory displays rather than serve their intended functions. One of the main considerations in using environmental sounds as auditory icons is how to ensure the identifiability of the sound sources. The identifiability of an auditory icon depends both on the intrinsic acoustic properties of the sound it represents and on the semantic fit of the sound to its context, i.e., whether the context is one in which the sound naturally occurs or would be unlikely to occur. Relatively recent research has yielded some insights into both of these factors. A second major consideration is how to use the source properties to represent events in the auditory display. This entails parameterizing the environmental sounds so that the acoustics both relate to source properties familiar to the user and convey meaningful new information to the user. Finally, particular considerations come into play when designing auditory displays for special populations, such as hearing-impaired listeners, who may not have access to all the acoustic information available to a normal-hearing listener, or elderly and other individuals whose cognitive resources may be diminished. Some guidelines for designing displays for these populations will be outlined.
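The parameterization idea lends itself to a small illustration. The sketch below is not from the paper: the function name impact_icon, the inharmonic partial ratios, and the size and hardness mappings are all assumptions. It maps an object's size to the pitch of a synthetic impact and a notional material hardness to how long the impact rings, so the acoustics relate to familiar source properties while carrying new information.

import numpy as np

SAMPLE_RATE = 44100  # samples per second

def impact_icon(size_bytes, hardness, duration=0.4, rate=SAMPLE_RATE):
    """Synthesise a toy 'impact' auditory icon as a mono float array.

    size_bytes -> pitch (bigger objects sound lower); hardness in [0, 1]
    -> decay (harder materials ring longer). Both mappings are
    illustrative assumptions, not taken from the paper.
    """
    t = np.arange(int(duration * rate)) / rate
    # Map size logarithmically onto a pitch range of roughly 100-1000 Hz.
    f0 = float(np.interp(np.log10(max(size_bytes, 1)), [0, 9], [1000.0, 100.0]))
    decay = 20.0 - 18.0 * hardness          # softer material -> faster decay
    partials = [1.0, 2.76, 5.40]            # inharmonic, bar-like partial ratios
    tone = sum(np.sin(2 * np.pi * f0 * k * t) / (i + 1)
               for i, k in enumerate(partials))
    return tone * np.exp(-decay * t)

# A small "hard" item and a large "soft" one sound recognisably different.
small_hard = impact_icon(size_bytes=2_000, hardness=0.9)
large_soft = impact_icon(size_bytes=500_000_000, hardness=0.2)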

Auditory displays and auditory user interfaces: art, design, science, and research

Journal on Multimodal User Interfaces

For almost three decades, research on auditory displays and sonification has advanced steadily. The auditory display community has now arrived at the stage of sonic information design, which calls for more systematic and refined approaches that go beyond arbitrary mappings between referents and sounds. Because of the innately transdisciplinary nature of auditory display, it is difficult to unify the methods used to study it. This special issue therefore covers a diverse collection of approaches to auditory displays, spanning art, design, science, and research. Accordingly, the works in this special issue include new theories, frameworks, methods, and applications for auditory displays and auditory user interfaces. We hope that this special issue conveys the state of the art in auditory display research and auditory user interface design, offering fresh inspiration and motivation to researchers and designers for their future work.

Some golden rules for designing auditory displays

The Csound Book, 2000

Information sizzles and rustles, it can whooosh or splash, or rumble. You can hear information in the natural environment, but can you hear information in digital data? The soundtracks of computer games demonstrate that synthetic sounds can provide information that may be difficult to see, and that sounds may enhance enjoyment and performance in an activity. However game design is not driven by issues of faithful representation that are primary for the designer of an auditory display of data structure. The design of sounds to support information processing activities is a new challenge because of the types of information involved, and the need for a consistent understanding by a cross-section of people. This chapter introduces some principles for designing auditory information that address these challenges. These principles are demonstrated by individual examples, and then applied as a whole to the design of an auditory display that can allow a listener to quickly answer the question “is there gold in this pile of dirt?”
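As a rough illustration of the kind of display the chapter builds towards, the sketch below maps a series of assay values to a stream of short tones. The function name assay_to_tones, the octave-per-ppm pitch rule, and the 1 ppm threshold are assumptions for illustration, not the chapter's actual design.

import numpy as np

SAMPLE_RATE = 22050  # samples per second

def assay_to_tones(gold_ppm, threshold=1.0, rate=SAMPLE_RATE):
    """Turn a sequence of gold assay values (ppm) into a stream of short tones.

    Each sample becomes a 150 ms beep: concentration controls pitch, and
    values at or above `threshold` get a louder tone so they stand out of
    the stream. The mapping and the threshold are illustrative assumptions,
    not the design described in the chapter.
    """
    t = np.arange(int(0.15 * rate)) / rate
    beeps = []
    for ppm in gold_ppm:
        freq = 220.0 * 2.0 ** min(ppm, 5.0)      # one octave per ppm, capped
        gain = 0.8 if ppm >= threshold else 0.2   # salient vs. background
        env = np.exp(-20.0 * t)                   # short percussive envelope
        beeps.append(gain * env * np.sin(2 * np.pi * freq * t))
    return np.concatenate(beeps)

stream = assay_to_tones([0.1, 0.2, 0.1, 2.5, 0.3])  # the 2.5 ppm sample pops out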

The SonicFinder: An interface that uses auditory icons

Human-Computer Interaction, 1989

The appropriate use of non-speech sounds has the potential to add a great deal to the functionality of computer interfaces. Sound is a largely unexploited medium of output, even though it plays an integral role in our everyday encounters with the world, one that is complementary to vision. I argue that sound should be used in computers as it is in the world, where it conveys information about the nature of sound-producing events. Such a strategy leads to auditory icons, which are everyday sounds meant to convey information about computer events by analogy with everyday events. Auditory icons are an intuitively accessible way to use sound to provide multidimensional, organized information to users.
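To make the analogy concrete, here is a minimal sketch of how an interface event might be mapped to an everyday sound whose playback rate reflects an object property. The event names, sound file names, and the size-to-rate rule are hypothetical, in the spirit of the SonicFinder rather than its actual mappings.

def auditory_icon(event, item):
    """Pick an everyday sound for an interface event and parameterise it.

    The event-to-sound table and the size-to-playback-rate rule below are
    illustrative assumptions, not the original system's mappings.
    """
    source = {                      # everyday sound per interface event
        "select": "tap.wav",
        "drag": "scrape.wav",
        "copy": "pouring.wav",
        "delete": "crash.wav",
    }[event]
    # Larger objects sound "bigger": a lower playback rate lowers the pitch.
    playback_rate = max(0.5, min(2.0, 1_000_000 / max(item["size_bytes"], 1)))
    return {"sound": source, "rate": playback_rate}

# Selecting a 250 KB file taps at a higher pitch than selecting a 50 MB file.
print(auditory_icon("select", {"size_bytes": 250_000}))
print(auditory_icon("select", {"size_bytes": 50_000_000}))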

Sound and meaning in auditory data display

2004

Auditory data display is an interdisciplinary field linking auditory perception research, sound engineering, data mining and human-computer interaction in order to make the semantic content of data perceptually accessible in the form of (non-verbal) audible sound. For this goal it is important to understand the different ways in which sound can encode meaning. We discuss this issue from the perspectives of language, music, functionality, listening modes and physics, and point out some limitations of current techniques for auditory data display, in particular when targeting high-dimensional data sets. As a promising, potentially very widely applicable approach, we discuss the method of model-based sonification (MBS), recently introduced by the authors, and point out how its natural semantic grounding in the physics of a sound generation process supports the design of sonifications that are accessible even to untrained, everyday listening. We then proceed to show that MBS also facilitates the design of an intuitive, active navigation through "acoustic aspects", somewhat analogous to the use of successive 2D views in 3D visualization. Finally, we illustrate the concept with a first prototype of a "tangible" sonification interface which allows users to "perceptually map" sonification responses onto their active exploratory hand motions, and give an outlook on some planned extensions.
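A minimal sketch of the model-based idea, assuming a toy model in which every data point contributes one damped oscillator. The feature-to-frequency and feature-to-damping mappings are illustrative assumptions and are not the authors' models.

import numpy as np

SAMPLE_RATE = 22050  # samples per second

def excite_model(data, duration=1.0, rate=SAMPLE_RATE):
    """Toy model-based sonification: each data point is a damped oscillator.

    One feature sets an oscillator's frequency and another its damping, so
    'striking' the whole data set produces a ring whose character reflects
    the data's structure. A simplified illustration of the MBS idea, not the
    authors' actual models; feature values are assumed to lie in [0, 1].
    """
    t = np.arange(int(duration * rate)) / rate
    out = np.zeros_like(t)
    for x, y in data:                    # (feature_1, feature_2) pairs
        freq = 200.0 + 600.0 * x         # feature 1 -> oscillator pitch (Hz)
        damping = 2.0 + 10.0 * y         # feature 2 -> how quickly it dies away
        out += np.exp(-damping * t) * np.sin(2 * np.pi * freq * t)
    return out / max(len(data), 1)

# 'Striking' a two-cluster data set: each cluster rings at its own pitch.
cluster_a = [(0.10 + 0.02 * i, 0.2) for i in range(5)]
cluster_b = [(0.80 + 0.02 * i, 0.6) for i in range(5)]
sound = excite_model(cluster_a + cluster_b)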

Auditory Information Design

Auditory Information Design, 1998

The prospect of computer applications making “noises” is disconcerting to some. Yet the soundscape of the real world does not usually bother us. Perhaps we only notice a nuisance? This thesis is an approach for designing sounds that are useful information rather than distracting “noise”. The approach is called TaDa because the sounds are designed to be useful in a Task and true to the Data.

Previous researchers in auditory display have identified issues that need to be addressed for the field to progress. The TaDa approach is an integrated approach that addresses an array of these issues through a multifaceted system of methods drawn from HCI, visualisation, graphic design and sound design. A task-analysis addresses the issue of usefulness. A data characterisation addresses perceptual faithfulness. A case-based method provides semantic linkage to the application domain. A rule-based method addresses psychoacoustic control. A perceptually linearised sound space allows transportable auditory specifications. Most of these methods have not been used to design auditory displays before, and each has been specially adapted for this design domain.

The TaDa methods have been built into computer-aided design tools that can assist the design of a more effective display, and may allow less than experienced designers to make effective use of sounds. The case-based method is supported by a database of examples that can be searched by an information analysis of the design scenario. The rule-based method is supported by a direct manipulation interface which shows the available sound gamut of an audio device as a 3D coloured object that can be sliced and picked with the mouse. These computer-aided tools are the first of their kind to be developed in auditory display.

The approach, methods and tools are demonstrated in scenarios from the domains of mining exploration, resource monitoring and climatology. These practical applications show that sounds can be useful in a wide variety of information processing activities which have not been explored before. The sounds provide information that is difficult to obtain visually, and improve the directness of interactions by providing additional affordances.
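As a small illustration of what a perceptually linearised mapping buys, the sketch below spaces pitches evenly in semitones rather than in hertz. The function name and the MIDI note range are assumptions, and the thesis's actual sound space is far richer, covering timbre and brightness as well as pitch.

import numpy as np

def perceptually_spaced_pitches(values, low_midi=48, high_midi=84):
    """Map data values onto pitches in equal *perceptual* steps.

    Equal steps in the data become equal steps in MIDI note number (i.e.
    semitones), which listeners hear as evenly spaced, rather than equal
    steps in raw Hz, which bunch together at the low end. A toy
    illustration of perceptual linearisation only.
    """
    values = np.asarray(values, dtype=float)
    span = np.ptp(values) or 1.0                    # avoid dividing by zero
    norm = (values - values.min()) / span
    midi = low_midi + norm * (high_midi - low_midi)
    return 440.0 * 2.0 ** ((midi - 69) / 12.0)      # MIDI note -> Hz

print(perceptually_spaced_pitches([0.0, 0.25, 0.5, 0.75, 1.0]))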