Rashmi Prasad - Academia.edu

Address: Bangalore, India



Papers by Rashmi Prasad

Research paper thumbnail of What's the trouble: automatically identifying problematic dialogues in DARPA communicator dialogue systems

Spoken dialogue systems promise efficient and natural access to information services from any phone. Recently, spoken dialogue systems for widely used applications such as email, travel information, and customer care have moved from research labs into commercial use. These applications can receive millions of calls a month. This huge amount of spoken dialogue data has led to a need for fully automatic methods for selecting a subset of caller dialogues that are most likely to be useful for further system improvement, to be stored, transcribed and further analyzed. This paper reports results on automatically training a Problematic Dialogue Identifier to classify problematic human-computer dialogues using a corpus of 1242 DARPA Communicator dialogues in the travel planning domain. We show that using fully automatic features we can identify classes of problematic dialogues with accuracies from 67% to 89%.
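
The abstract does not spell out the learner or the feature set; as a rough illustration of the setup it describes, the sketch below trains a binary classifier over fully automatic, per-dialogue features. The feature names, the synthetic data, and the choice of logistic regression are assumptions for illustration, not the paper's actual method.

```python
# Illustrative sketch only: a binary "problematic dialogue" classifier trained
# on fully automatic features. Feature names and the learner are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-dialogue features: [num_turns, mean_asr_confidence,
# num_reprompts, num_help_requests], one row per dialogue.
X = rng.random((200, 4))
# Hypothetical labels: 1 = problematic dialogue, 0 = successful dialogue.
y = (X[:, 2] + (1.0 - X[:, 1]) > 1.0).astype(int)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f}")
```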

Research paper thumbnail of Trainable Sentence Planning for Complex Information Presentations in Spoken Dialog Systems

A challenging problem for spoken dialog systems is the design of utterance generation modules that are fast, flexible and general, yet produce high quality output in particular domains. A promising approach is trainable generation, which uses general-purpose linguistic knowledge automatically adapted to the application domain. This paper presents a trainable sentence planner for the MATCH dialog system. We show that trainable sentence planning can produce output comparable to that of MATCH's template-based generator even for quite complex information presentations.
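
One way to picture the trainable-generation idea: produce several candidate realizations for a single information presentation, then let a learned scoring function pick one. The candidates and the toy scorer below are stand-ins for illustration, not MATCH's actual sentence planner.

```python
# Illustrative sketch of the "generate candidates, then rank" idea behind
# trainable sentence planning. The candidates and the scorer are stand-ins.
from typing import Callable, List

def rank_sentence_plans(candidates: List[str],
                        score: Callable[[str], float]) -> str:
    """Return the candidate realization the trained scorer likes best."""
    return max(candidates, key=score)

# Hypothetical candidate realizations for one restaurant recommendation.
candidates = [
    "Babbo has excellent food and decor, but it is expensive.",
    "Babbo is expensive. The food is excellent. The decor is excellent.",
    "Although Babbo is expensive, its food and decor are excellent.",
]

# Stand-in scorer; in a trainable planner this would be a model fit to
# human ratings of candidate sentence plans.
def toy_score(sentence: str) -> float:
    return -sentence.count(".")  # prefer fewer, more aggregated sentences

print(rank_sentence_plans(candidates, toy_score))
```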

Research paper thumbnail of DARPA communicator evaluation: progress from 2000 to 2001

This paper describes the evaluation methodology and results of the DARPA Communicator spoken dial... more This paper describes the evaluation methodology and results of the DARPA Communicator spoken dialog system evaluation experiments in 2000 and 2001. Nine spoken dialog systems in the travel planning domain participated in the experiments resulting in a total corpus of 1904 dialogs. We describe and compare the experimental design of the 2000 and 2001 DARPA evaluations. We describe how we established a performance baseline in 2001 for complex tasks. We present our overall approach to data collection, the metrics collected, and the application of PARADISE to these data sets. We compare the results we achieved in 2000 for a number of core metrics with those for 2001. These results demonstrate large performance improvements from 2000 to 2001 and show that the Communicator program goal of conversational interaction for complex tasks has been achieved.
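
PARADISE models user satisfaction as a function of task success and dialogue cost measures, typically via multiple linear regression. The sketch below shows that style of fit on made-up data; the specific metrics, numbers, and coefficients are illustrative, not the Communicator results.

```python
# Illustrative PARADISE-style fit: predict user satisfaction from task success
# and dialogue cost metrics with multiple linear regression. Data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 300

task_success = rng.random(n)            # per-dialogue task completion (hypothetical)
elapsed_time = rng.normal(300, 60, n)   # seconds on task, a cost metric (hypothetical)
system_turns = rng.normal(25, 5, n)     # number of system turns, a cost metric (hypothetical)

# Synthetic "user satisfaction" scores for the sake of the example.
user_sat = 3.0 + 2.0 * task_success - 0.002 * elapsed_time + rng.normal(0, 0.2, n)

X = np.column_stack([task_success, elapsed_time, system_turns])
model = LinearRegression().fit(X, user_sat)
print("weights (task success, elapsed time, system turns):", model.coef_)
print("R^2:", model.score(X, user_sat))
```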

Research paper thumbnail of DARPA communicator: cross-system results for the 2001 evaluation

This paper describes the evaluation methodology and results of the 2001 DARPA Communicator evaluation. The experiment spanned 6 months of 2001 and involved eight DARPA Communicator systems in the travel planning domain. It resulted in a corpus of 1242 dialogs, which includes many more dialogues for complex tasks than the 2000 evaluation. We describe the experimental design, the approach to data collection, and the results. We compare the results by the type of travel plan and by system. The results demonstrate some large differences across sites and show that the complex trips are clearly more difficult.

Research paper thumbnail of Learning to Generate Naturalistic Utterances Using Reviews in Spoken Dialogue Systems

Spoken language generation for dialogue systems requires a dictionary of mappings between semantic representations of concepts the system wants to express and realizations of those concepts. Dictionary creation is a costly process; it is currently done by hand for each dialogue domain. We propose a novel unsupervised method for learning such mappings from user reviews in the target domain, and test it on restaurant reviews. We test the hypothesis that user reviews that provide individual ratings for distinguished attributes of the domain entity make it possible to map review sentences to their semantic representation with high precision. Experimental analyses show that the mappings learned cover most of the domain ontology, and provide good linguistic variation. A subjective user evaluation shows that the consistency between the semantic representations and the learned realizations is high and that the naturalness of the realizations is higher than a hand-crafted baseline.
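
One way to picture the review-mining idea: when a review carries per-attribute ratings, sentences that mention an attribute can be paired with a simple semantic frame built from those ratings. The attribute lexicon and frame format below are assumptions for illustration, not the paper's actual procedure.

```python
# Illustrative sketch: pair review sentences with simple semantic frames using
# the review's per-attribute ratings. Lexicon and frame format are assumptions.
from typing import Dict, List, Tuple

# Hypothetical cue words for distinguished domain attributes.
ATTRIBUTE_CUES = {
    "food":    ["food", "dish", "meal", "menu"],
    "service": ["service", "waiter", "staff"],
    "decor":   ["decor", "atmosphere", "ambience"],
}

def map_review(sentences: List[str],
               ratings: Dict[str, int]) -> List[Tuple[str, Dict[str, int]]]:
    """Attach a {attribute: rating} frame to each sentence mentioning an attribute."""
    pairs = []
    for sent in sentences:
        lowered = sent.lower()
        frame = {attr: ratings[attr]
                 for attr, cues in ATTRIBUTE_CUES.items()
                 if attr in ratings and any(cue in lowered for cue in cues)}
        if frame:
            pairs.append((sent, frame))
    return pairs

review = ["The food was superb.", "Service was painfully slow.", "We will be back."]
ratings = {"food": 5, "service": 2}
for sentence, frame in map_review(review, ratings):
    print(frame, "->", sentence)
```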

Research paper thumbnail of A trainable generator for recommendations in multimodal dialog

As the complexity of spoken dialogue systems has increased, there has been increasing interest in spoken language generation (SLG). SLG promises portability across application domains and dialogue situations through the development of application-independent linguistic modules. However, in practice, rule-based SLGs often have to be tuned to the application. Recently, a number of research groups have been developing hybrid methods for spoken language generation, combining general linguistic modules with methods for training parameters for particular applications. This paper describes the use of boosting to train a sentence planner to generate recommendations for restaurants in MATCH, a multimodal dialogue system providing entertainment information for New York.
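
As a hedged sketch of the boosting idea: fit a boosted model to features of candidate realizations labeled with (here, synthetic) human ratings, then use it to rank new candidates at generation time. The features, data, and choice of AdaBoost are illustrative assumptions, not MATCH's actual training setup.

```python
# Illustrative sketch: a boosted model scores candidate realizations so the
# sentence planner can rank them. Features, labels, and learner are stand-ins.
import numpy as np
from sklearn.ensemble import AdaBoostRegressor

rng = np.random.default_rng(2)

# Hypothetical features of candidate realizations: [num_sentences,
# num_discourse_cues, length_in_words], with synthetic human ratings.
X_train = rng.random((500, 3))
ratings = 5.0 - 2.0 * X_train[:, 0] + X_train[:, 1] + rng.normal(0, 0.1, 500)

ranker = AdaBoostRegressor(n_estimators=50, random_state=0).fit(X_train, ratings)

# Rank three new candidate realizations by predicted rating.
candidates = rng.random((3, 3))
best = int(np.argmax(ranker.predict(candidates)))
print("pick candidate:", best)
```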
