Auditory Presentation of Graphical User Interfaces

Mapping GUIs to auditory interfaces

Proceedings of the 5th annual ACM symposium on User interface software and technology - UIST '92, 1992

little trouble accessing standard ASCII terminals. The line-oriented textual output displayed on the screen was stored in the computer's framebuffer. An access program could simply copy the contents of the framebuffer to a speech synthesizer, a Braille terminal or a Braille printer. Conversely, the contents of the framebuffer for a graphical interface are simple pixel values. To provide access to GUIs, it is necessary to intercept application output before it reaches the screen. This intercepted application output becomes the basis for an off-screen model of the application interface. The information in the off-screen model is then used to create alternative, accessible interfaces.
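The contrast the abstract draws — copying a text framebuffer directly to a speech device versus maintaining a structured off-screen model — can be illustrated with a minimal sketch. All names here are hypothetical; real off-screen models hook platform drawing calls rather than exposing an explicit API like this.

```python
# Hypothetical sketch of an off-screen model: intercepted application
# output populates a tree of interface objects, which a screen reader
# can then traverse to drive a speech synthesizer or Braille display.
from dataclasses import dataclass, field
from typing import List

@dataclass
class UIObject:
    role: str                  # e.g. "window", "button", "text"
    label: str
    children: List["UIObject"] = field(default_factory=list)

def intercept_draw(model: UIObject, role: str, label: str) -> UIObject:
    """Stand-in for hooking application output before it reaches the screen."""
    obj = UIObject(role, label)
    model.children.append(obj)
    return obj

def speak(model: UIObject, depth: int = 0) -> List[str]:
    """Depth-first traversal producing lines of text for speech output."""
    lines = [f"{'  ' * depth}{model.role}: {model.label}"]
    for child in model.children:
        lines.extend(speak(child, depth + 1))
    return lines

root = UIObject("window", "Editor")
intercept_draw(root, "button", "Save")
intercept_draw(root, "text", "Hello, world")
print("\n".join(speak(root)))
```

Unlike a raw pixel framebuffer, this tree preserves roles and labels, which is what makes an alternative auditory rendering possible at all.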

Representing Graphical User Interfaces with Sound: A Review of Approaches

Journal of Visual Impairment and Blindness, 2005

The inability of computer users who are visually impaired to access graphical user interfaces (GUIs) has led researchers to propose approaches for adapting GUIs to auditory interfaces, with the goal of providing access for visually impaired people. This article outlines the issues involved in nonvisual access to graphical user interfaces, reviews current research in this field, classifies methods and approaches, and discusses the extent to which researchers have resolved these issues.

Sonic Grid: an auditory interface for the visually impaired to navigate GUI-based environments

2008

This paper explores the prototype design of an auditory interface enhancement called the Sonic Grid that helps visually impaired users navigate GUI-based environments. The Sonic Grid provides an auditory representation of GUI elements embedded in a two-dimensional interface, giving a 'global' spatial context for the use of auditory icons, earcons and speech feedback. This paper introduces the Sonic Grid, discusses insights gained through participatory design with members of the visually impaired community, and suggests various applications of the technique, including easing the learning curve that visually impaired users face when first using computers.
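As a rough sketch of how such a grid could be sonified (the parameter choices below are assumptions for illustration, not taken from the paper): a widget's column can drive stereo pan and its row can drive pitch, so every element gets a sound conveying its 'global' position in the 2D layout.

```python
# Illustrative mapping from a grid cell to audio parameters:
# column -> stereo pan, row -> pitch (top rows sound higher).
def grid_to_audio(col, row, n_cols, n_rows, base_hz=220.0, octaves=2.0):
    # pan in [-1, +1]: -1 = hard left, +1 = hard right
    pan = -1.0 + 2.0 * col / (n_cols - 1)
    # pitch spans `octaves` octaves above base_hz, highest at row 0
    pitch = base_hz * 2 ** (octaves * (1 - row / (n_rows - 1)))
    return pan, pitch
```

For a 5x5 grid, the top-left cell maps to a hard-left, high-pitched tone and the bottom-right cell to a hard-right tone at the base frequency, so relative position is audible without any speech.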

Platform-independent 3D Sound Iconic Interface to Facilitate Access of Visually Impaired Users to Computers

While the introduction of Graphic User Interfaces (GUIs) facilitated the interaction of normally sighted users with computers, these benefits are not equally extended to partially sighted individuals who suffer from limited visual acuity. The Lighthouse Inc. has indicated that approximately 8.7 million Americans who are 45 or older report a severe vision impairment that cannot be corrected by wearing eyeglasses or contact lenses. When a computer user has uncorrected visual impairments, the identification and selection of icons in a GUI may become significantly more difficult than for a normally sighted user. This paper describes the development of a platform-independent implementation of "3D Sound Icons". In addition to its graphical representation, each icon in this interface has a characteristic spatial sound (3D-sound), which is perceived by the user according to the spatial relationship between the screen cursor (listener) and the graphical icon (sound source), in the plane of the interface screen. In this way, the user can supplement the visual information with spatial auditory information to identify the target icon and navigate towards it. The platform-independent implementation uses Digital Signal Processing functions that are capable of transforming an audio signal lacking spatial characteristics into audio signals that provide the illusion of a point sound source located at a specific spatial location with respect to the listener. Platform independence is critical in this application, because it will extend the benefits of the enhanced interface to users of a variety of operating systems (Windows,
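A hedged, planar approximation of the cursor-as-listener idea (the function and constants are illustrative, not the paper's DSP implementation): gain falls off with cursor-to-icon distance, and pan follows the horizontal offset between them.

```python
import math

def icon_sound_params(cursor, icon, max_dist=500.0):
    """Amplitude from cursor-icon distance, pan from horizontal offset.

    A two-dimensional stand-in for spatialized icon sounds: the cursor
    acts as the listener, the icon as the sound source. `max_dist` (in
    pixels) is an assumed audibility radius, not a value from the paper.
    """
    dx, dy = icon[0] - cursor[0], icon[1] - cursor[1]
    dist = math.hypot(dx, dy)
    gain = max(0.0, 1.0 - dist / max_dist)     # silent beyond max_dist
    pan = max(-1.0, min(1.0, dx / max_dist))   # negative = icon to the left
    return gain, pan
```

Recomputing these parameters on every cursor-move event makes nearby icons grow louder as the user homes in, which is the navigation cue the abstract describes.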

Audio User Interface for Visually Impaired Computer Users: in a Two Dimensional Audio Environment

World Academy of Science, Engineering and Technology, International Journal of Computer, Electrical, Automation, Control and Information Engineering, 2012

In this paper we discuss a set of guidelines that could be adopted when designing an audio user interface for the visually impaired. The guidelines are based on an audio environment focused on audio positioning. Unlike current applications, which only interpret the Graphical User Interface (GUI) for the visually impaired, this audio environment bypasses the GUI to provide direct auditory output. It offers two-dimensional (2D) navigation on audio interfaces. This paper highlights the significance of a 2D audio environment with spatial information in the context of the visually impaired. A thorough usability study was conducted to demonstrate the applicability of the proposed design guidelines for these auditory interfaces. In validating the guidelines, the study also revealed previously unexplored design aspects.

Keywords—Human Computer Interaction, Audio User Interfaces, 2D Audio Environment, Visually Impaired Users

Visually Supported Design of Auditory User Interfaces

2003

This paper describes the functionality, the underlying models and an evaluation of the Visual Auditory Interface Design (Visual-AID) prototyping tool, which supports the usability engineering process for Auditory User Interfaces (AUIs). The process generalizes experience gained in the INVITE-ZIB-Project, which created an auditory Web browser for blind users (Donker et al. 2002). The complete user-centered design process defines three main components: guidelines and rules, auditory interaction objects (audIO), and a tool to generate evaluation mock-ups.

Use of 3-D Sound Auditory Icons in the Enhancement of Human-Computer Interfaces for Partially Sighted Users

Graphical user interfaces (GUIs) have facilitated the interaction between most humans and computers. But they are not equally accessible to users with diminished visual capabilities. Our research proposes the addition of spatial auditory feedback to assist these users in the search for icons in GUIs. In the enhanced GUI, a unique 3-D sound is associated with each icon. As the cursor traverses the screen, the user hears the sounds of nearby icons, spatially, according to the relative position of each icon with respect to the cursor, through a dynamic implementation of Head-Related Transfer Functions that tracks cursor movements. A software prototype of the concept described was developed to evaluate the performance of users, under artificially imposed visual impairments, in the search for icons within the proposed interface.
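Full HRTF filtering is beyond a short sketch, but the two dominant cues an HRTF encodes — interaural time and level differences — can be approximated in closed form. The Woodworth ITD formula below is a standard spherical-head approximation; the level-difference term is a crude assumption added here for illustration only.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, air at room temperature
HEAD_RADIUS = 0.0875     # m, commonly used average head radius

def interaural_cues(azimuth_deg):
    """Approximate interaural cues for a source at the given azimuth.

    ITD uses the Woodworth spherical-head model, (r/c) * (theta + sin theta),
    valid for azimuths in [-90, 90] degrees. The level difference (ILD) is a
    deliberately crude 10 * sin(theta) dB placeholder, not a measured HRTF.
    """
    az = math.radians(azimuth_deg)
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (az + math.sin(az))  # seconds
    ild_db = 10.0 * math.sin(az)                                # decibels
    return itd, ild_db
```

Recomputing the azimuth from the cursor-to-icon vector on each cursor move, then applying the resulting delay and gain per ear, approximates the dynamic cursor-tracking spatialization the abstract describes.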

The use of non-speech sounds in non-visual interfaces to the MS-Windows GUI for blind computer users

… Conference on Auditory Display, Glasgow, UK, 1998

Two studies investigated the use of non-speech sounds (auditory icons and earcons) in non-visual interfaces to MS-Windows for blind computer users. The first study presented sounds in isolation and blind and sighted participants rated them for their recognisability, and appropriateness of the mapping between the sound and the interface object/event. As a result, the sounds were revised and incorporated into the interfaces. The second study investigated the effects of the sounds on user performance and perceptions. Ten blind participants evaluated the interfaces, and task completion time was significantly shorter with the inclusion of sounds, although interesting effects on user perceptions were found.

A Generic, Semantically-Based Design Approach for Spatial Auditory Computer Displays

2004

This paper describes a design approach for creating generic computer user interfaces with spatial auditory displays. It proposes a structured depiction process, from formulating mode-independent descriptions of user interfaces (UIs) to audio rendering methods for virtual environments. As the key step in the process, a semantic taxonomy of user interface content is proposed. Finding semantic classifications of UI entities corresponding to properties of auditory objects is the ultimate goal. We believe that this abstract approach detaches the process from visual paradigms and will reveal valuable insights into the representation of user interfaces in the auditory domain.
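To make the idea of a semantic taxonomy concrete, here is a hypothetical sketch — the class names and auditory properties are invented for illustration, not taken from the paper: mode-independent UI classes are mapped to properties of auditory objects that a spatial renderer could consume.

```python
# Invented example taxonomy: semantic UI classes -> auditory object
# properties. A real taxonomy would be derived from UI content analysis.
TAXONOMY = {
    "container":  {"rendering": "ambient loop", "spatial_extent": "wide"},
    "actionable": {"rendering": "short earcon", "spatial_extent": "point"},
    "status":     {"rendering": "speech",       "spatial_extent": "fixed"},
}

def render_hint(ui_class):
    """Look up how a UI entity of the given semantic class should sound,
    falling back to spoken output for unclassified entities."""
    return TAXONOMY.get(ui_class, {"rendering": "speech",
                                   "spatial_extent": "point"})
```

The point of such a table is exactly the detachment the authors argue for: the renderer consults semantic classes, never visual attributes.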