Dominic Massaro | University of California, Santa Cruz

Dominic Massaro

Address: Santa Cruz, California, United States



Papers by Dominic Massaro

Discriminating Visible Speech Tokens Using Multimodality

We present a multimodal interactive data exploration tool that facilitates discrimination between visible speech tokens. The multimodal tool uses visualization and sonification (non-speech sound) of data. Visible speech tokens are a class of multidimensional data that have been used extensively in designing a talking head, which has been used in training deaf individuals through watching speech (1). Visible ...

What's the bottom line?

Delicate Balance: Technics, Culture and Consequences, 1990

Models of Integration Given Multiple Sources of Information

Psychological Review, 1990

Perceiving asynchronous bimodal speech in consonant-vowel and vowel syllables

Speech Communication, 1993

Modeling Coarticulation in Synthetic Visual Speech

UNIVERSAL SPEECH TOOLS: THE CSLU TOOLKIT

A set of freely available, universal speech tools is needed to accelerate progress in speech technology. The CSLU Toolkit represents an effort to make the core technology and fundamental infrastructure accessible, affordable, and easy to use. The CSLU Toolkit has been under ...

A Comparison of Learning Models

Journal of Mathematical Psychology, 1995

Picture My Voice: Audio to Visual Speech Synthesis using Artificial Neural Networks

Visual information and redundancy in reading

Journal of Experimental Psychology, 1973

Discusses previous studies demonstrating that a letter is better identified when embedded in a valid spelling pattern than when presented alone. An experiment with 9 undergraduates replicated earlier findings in a paradigm that controlled for redundancy by ...

New tools for interactive speech and language training: Using animated conversational agents in the classrooms of profoundly deaf children

Perception of asynchronous and conflicting visual and auditory speech

Journal of the Acoustical Society of America, 1996

Developing and Evaluating Conversational Agents

Intelligent animated agents for interactive language training

ACM SIGCAPH Computers and the Physically Handicapped, 1998

Integration of featural information in speech perception

Psychological Review, 1978

Evaluation and integration of visual and auditory information in speech perception

Journal of Experimental Psychology: Human Perception and Performance, 1983

Three experiments were carried out to investigate the evaluation and integration of visual and auditory information in speech perception. In the first two experiments, subjects identified /ba/ or /da/ speech events consisting of high-quality synthetic syllables ranging from /ba/ to /da/ combined with a videotaped /ba/ or /da/ or neutral articulation. Although subjects were specifically instructed to report what they heard, visual articulation made a large contribution to identification. The tests of quantitative models provide evidence for the integration of continuous and independent, as opposed to discrete or nonindependent, sources of information. The reaction times for identification were primarily correlated with the perceived ambiguity of the speech event. In a third experiment, the speech events were identified with an unconstrained set of response alternatives. In addition to /ba/ and /da/ responses, the /bda/ and /tha/ responses were well described by a combination of continuous and independent features. This body of results provides strong evidence for a fuzzy logical model of perceptual recognition.
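For readers unfamiliar with the model the abstract names, the fuzzy logical model of perception (FLMP) combines the independent auditory and visual supports multiplicatively and then normalizes over the response alternatives. A minimal sketch of the two-alternative /ba/-/da/ case, writing a_i and v_j for the degree to which the auditory level i and visual level j support /da/:

P(\text{/da/} \mid A_i, V_j) = \frac{a_i \, v_j}{a_i \, v_j + (1 - a_i)(1 - v_j)}

The multiplicative combination is what makes the two sources independent yet continuous: an ambiguous source (support near 0.5) leaves the decision to the other modality, while two weakly conflicting sources trade off smoothly rather than one vetoing the other.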

RECENT DEVELOPMENTS IN FACIAL ANIMATION: AN INSIDE VIEW

2012-07Massaro Final2
