Strategies and mechanisms to enable dialogue agents to respond appropriately to indirect speech acts
Related papers
A Hybrid Architectural Approach to Understanding and Appropriately Generating Indirect Speech Acts
Proceedings of the AAAI Conference on Artificial Intelligence
Current approaches to handling indirect speech acts (ISAs) do not account for their sociolinguistic underpinnings (i.e., politeness strategies). Deeper understanding and appropriate generation of indirect acts will require mechanisms that integrate natural language (NL) understanding and generation with social information about agent roles and obligations, which we introduce in this paper. Additionally, we tackle the problem of understanding and handling indirect answers that take the form of either speech acts or physical actions, which requires an inferential, plan-reasoning approach. In order to enable artificial agents to handle an even wider variety of ISAs, we present a hybrid approach, utilizing both the idiomatic and inferential strategies. We then demonstrate our system successfully generating indirect requests and handling indirect answers, and discuss avenues of future research.
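The hybrid strategy described here lends itself to a two-stage interpreter. Below is a minimal sketch in Python, assuming invented pattern tables and goal lists (none of this is the authors' implementation): an idiomatic lookup handles conventionalized forms, and a plan-style inference step catches the rest.

```python
# Minimal sketch of the hybrid idiomatic/inferential strategy (all names
# are illustrative, not the paper's implementation).

import re

# Idiomatic stage: conventionalized surface forms mapped to speech acts.
IDIOM_PATTERNS = [
    (re.compile(r"^(could|can|would) you (?P<act>.+?)\??$", re.I), "request"),
    (re.compile(r"^i need (?P<act>.+)$", re.I), "request"),
]

def idiomatic_interpretation(utterance):
    for pattern, act_type in IDIOM_PATTERNS:
        match = pattern.match(utterance.strip())
        if match:
            return (act_type, match.group("act"))
    return None

def inferential_interpretation(utterance, goals):
    # Inferential stage (placeholder): relate the literal content to known
    # goals/obligations, e.g. "It is cold in here" plus the goal of guest
    # comfort yields request(close_window).
    for trigger, response_action in goals.items():
        if trigger in utterance.lower():
            return ("request", response_action)
    return ("assertion", utterance)

def interpret(utterance, goals):
    return idiomatic_interpretation(utterance) or \
           inferential_interpretation(utterance, goals)

if __name__ == "__main__":
    goals = {"cold": "close_window"}
    print(interpret("Could you open the door?", goals))  # idiomatic path
    print(interpret("It is cold in here.", goals))       # inferential path
```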
Enabling Robots to Understand Indirect Speech Acts in Task-Based Interactions
Journal of Human-Robot Interaction, 2017
An important open problem for enabling truly taskable robots is the lack of task-general natural language mechanisms within cognitive robot architectures that enable robots to understand typical forms of human directives and generate appropriate responses. In this paper, we first provide experimental evidence that humans tend to phrase their directives to robots indirectly, especially in socially conventionalized contexts. We then introduce pragmatic and dialogue-based mechanisms to infer intended meanings from such indirect speech acts and demonstrate that these mechanisms can handle all indirect speech acts found in our experiment as well as other common forms of requests.
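One way to picture the pragmatic mechanism this paper motivates is a context-sensitivity rule: in a socially conventionalized task setting, a literal ability question whose answer is already common ground is reinterpreted as a directive. The sketch below is our own illustration with invented names, not the paper's architecture.

```python
# Hedged sketch of one pragmatic rule: in a conventionalized task context,
# an ability question whose literal answer is already common ground is
# taken as a directive rather than a genuine question.

def interpret_directive(utterance_form, content, context):
    """utterance_form: 'ability_question' | 'statement' | 'imperative'."""
    if utterance_form == "imperative":
        return ("directive", content)
    if (utterance_form == "ability_question"
            and context["conventionalized"]
            and content in context["known_capabilities"]):
        # The literal reading is uninformative, so infer the intended request.
        return ("directive", content)
    return ("literal", content)

context = {"conventionalized": True,
           "known_capabilities": {"deliver_package", "open_door"}}

# "Can you deliver this package?" in a task setting -> directive
print(interpret_directive("ability_question", "deliver_package", context))
```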
Proceedings of the AAAI Conference on Artificial Intelligence
The ultimate goal of human natural language interaction is to communicate intentions. However, these intentions are often not directly derivable from the semantics of an utterance (e.g., when linguistic modulations are employed to convey politeness, respect, and social standing). Robotic architectures with simple command-based natural language capabilities are thus not equipped to handle more liberal, yet natural uses of linguistic communicative exchanges. In this paper, we propose novel mechanisms for inferring intentions from utterances and generating clarification requests that will allow robots to cope with a much wider range of task-based natural language interactions. We demonstrate the potential of these inference algorithms for natural human-robot interactions by running them as part of an integrated cognitive robotic architecture on a mobile robot in a dialogue-based instruction task.
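The clarification-request mechanism can be illustrated with a small decision rule: act when intention inference yields a single confident candidate, ask when it yields several. This is a hedged sketch with an assumed candidate format, not the paper's algorithm.

```python
# Illustrative-only sketch: when intention inference returns several
# candidate goals above threshold, ask a clarification question
# instead of acting.

def respond(candidates, threshold=0.4):
    """candidates: list of (goal, probability) from intention inference."""
    plausible = [(g, p) for g, p in candidates if p >= threshold]
    if len(plausible) == 1:
        return f"OK, I will {plausible[0][0]}."
    if plausible:
        options = " or ".join(g for g, _ in plausible)
        return f"Did you want me to {options}?"
    return "Sorry, I did not understand. Could you rephrase?"

print(respond([("fetch the red block", 0.5), ("fetch the red ball", 0.45)]))
```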
Service Robots Dealing with Indirect Speech Acts
2006
Successful interaction between a service robot and its human users depends on the robot's ability to understand not only direct commands, but also more indirect ways for a human to express what she would like the robot to do. Such indirect ways are pervasive in human-human interaction; enabling the robot to understand them can make human-robot interaction more human-friendly. This paper presents a model for a robot that pursues its serving duties by trying to interpret indirect ways of expressing requests to execute certain actions. In case of uncertainty about the proper interpretation, the robot can ask for clarification and adapt its interpretation for future interactions.
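The clarify-then-adapt loop can be sketched as a score table over candidate readings that is updated whenever the user resolves an ambiguity. The class below is our own toy construction, not the paper's model.

```python
# Minimal sketch of adapting interpretations from clarification: keep
# per-phrase scores for candidate readings and boost whichever reading
# the user confirms, so the same ambiguity is resolved silently next time.

from collections import defaultdict

class AdaptiveInterpreter:
    def __init__(self):
        # phrase -> {interpretation: score}
        self.scores = defaultdict(lambda: defaultdict(float))

    def interpret(self, phrase, candidates, margin=1.0):
        ranked = sorted(candidates,
                        key=lambda c: self.scores[phrase][c], reverse=True)
        best = ranked[0]
        runner_up = ranked[1] if len(ranked) > 1 else None
        gap = (self.scores[phrase][best]
               - (self.scores[phrase][runner_up] if runner_up else 0.0))
        if runner_up and gap < margin:
            return ("clarify", ranked[:2])   # uncertain: ask the user
        return ("execute", best)

    def confirm(self, phrase, interpretation):
        self.scores[phrase][interpretation] += 1.0  # adapt for next time

robot = AdaptiveInterpreter()
print(robot.interpret("I could use a coffee", ["bring_coffee", "state_fact"]))
robot.confirm("I could use a coffee", "bring_coffee")
print(robot.interpret("I could use a coffee", ["bring_coffee", "state_fact"]))
```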
Indirect Speech Acts and Collaborativeness in Human-Machine Dialogue Systems
2005
In human-machine dialogue systems, indirect and composite speech acts have to be treated in a proper way: first, because they appear frequently in human-human dialogue and thus constitute an important aspect of spontaneous communication; second, because they are linked to collaborative aspects. We describe some complex speech act phenomena and some methods for a system to treat them, with the...
Toward a Cognitive Approach to Human-Robot Dialogue
A theory of language sufficient for building conversationally adequate human-robot dialogue systems must account for the communicative act as a whole, from the inferential mechanism of intersubjective joint attention-sharing up through the conceptualization processes that respond to those inferences. However, practitioners of AI have in the past tended to adopt linguistic theories that either emphasize or tacitly assume the modularity of linguistic mental processes, that is, their isolation from the pressures and influences of other cognitive processes. These assumptions have precluded satisfactory modeling of human language use. An adequate theory of language will account naturally and holistically (without ad hoc computational machinery) for discourse structure, referential flexibility, lexical non-compositionality, deixis, pragmatic effects, gesture, and intonation. This paper makes the argument that certain theories in the field of cognitive linguistics already exhibit these desiderata, and briefly describes work to implement one.
MAi : An Interface for Declarative Specification of Goal-Directed Dialogue Agents
2019
The state of the art of dialogue agents requires a lengthy design process spanning months, with experts in the loop who specify complex conversation patterns manually. Our work proposes a paradigm shift in bot design by adopting a declarative approach which composes the full dialog tree automatically. This allows the designer to construct complex dialogue agents from scratch and interact with them in a matter of hours. The demonstration will allow the audience to interact with this new design paradigm and construct their own bots on the spot. Current enterprise-level goal-directed dialogue agents require significant expertise, time, and effort to build (Sreedhar 2018). The process usually requires domain experts to sit down with engineers to construct complex interaction patterns in the form of explicit dialogue trees. This process quickly becomes intractable. Existing end-to-end solutions to chatbots, on the other hand, require little expertise to build (but a lot of data) and offer...
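The declarative idea can be pictured as a slot specification from which the dialog flow is composed automatically; the spec format below is invented for illustration and is not MAi's actual input language.

```python
# Toy sketch of declarative bot design: the designer lists slots, and the
# dialogue flow is composed automatically rather than hand-drawn as a tree.

SPEC = {
    "goal": "book_flight",
    "slots": [
        {"name": "destination", "prompt": "Where would you like to fly?"},
        {"name": "date", "prompt": "What date will you travel?"},
        {"name": "seat", "prompt": "Window or aisle seat?"},
    ],
}

def run_agent(spec, answers):
    """Compose the dialogue by iterating over unfilled slots."""
    filled = {}
    for slot in spec["slots"]:
        print("BOT:", slot["prompt"])
        filled[slot["name"]] = answers[slot["name"]]  # stands in for user input
        print("USER:", filled[slot["name"]])
    print(f"BOT: Booking {spec['goal']} with {filled}.")

run_agent(SPEC, {"destination": "Oslo", "date": "2024-06-01", "seat": "aisle"})
```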
An Analysis of Conditional Responses in Dialogue
We analyze naturally occurring collaborative responses of the form "Not (if) C" / "Yes, if C", where C is a condition. We distinguish two cases: when C is established in the context, the conditional response indicates a possible need to revise C, and thus opens negotiation; otherwise, the conditional response raises the question whether C holds. We discuss the contexts where such responses are used and the dialogue acts they realize. We propose a uniform approach to their generation and interpretation.
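A toy version of the generation side: if a requested action has exactly one unsatisfied precondition C, answer "Yes, if C" instead of a flat refusal. The formulation below is ours, under assumed precondition bookkeeping.

```python
# Compact sketch of generating a conditional response from an unsatisfied
# precondition, opening negotiation rather than giving a bare "No".

def conditional_response(action, preconditions, state):
    unmet = [c for c in preconditions if not state.get(c, False)]
    if not unmet:
        return f"Yes, I can {action}."
    if len(unmet) == 1:
        return f"Yes, if {unmet[0]}."   # opens negotiation about C
    return f"Not unless {' and '.join(unmet)}."

state = {"truck_available": True, "road_open": False}
print(conditional_response("deliver the goods",
                           ["truck_available", "road_open"], state))
# -> "Yes, if road_open."
```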
Natural Language Dialogue Architectures for Tactical Questioning Characters
In this paper we contrast three architectures for natural language questioning characters, comparing the relative costs and benefits of each approach in building characters for tactical questioning. The first architecture works purely at the textual level, using cross-language information retrieval techniques to learn the best output for any input from a training set of linked questions and answers. The second architecture adds a global emotional model and computes a compliance model, which can result in different outputs for different compliance levels, given the same inputs. The third architecture works at a semantic level and allows authoring of different response policies for different kinds of information. We describe these architectures and their strengths and weaknesses with respect to expressive capacity, performance, and authoring demands.
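The first, text-level architecture can be approximated by nearest-neighbor retrieval over linked question-answer pairs. The toy below uses bag-of-words cosine similarity in place of the paper's cross-language IR machinery; data and names are invented.

```python
# Stripped-down sketch of the text-level architecture: return the answer
# linked to the most similar training question.

import math
from collections import Counter

TRAINING = [
    ("where is the weapons cache", "I don't know anything about weapons."),
    ("who is your leader", "The elder speaks for the village."),
]

def cosine(a, b):
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

def answer(question):
    best = max(TRAINING, key=lambda qa: cosine(question.lower(), qa[0]))
    return best[1]

print(answer("where is the cache of weapons"))
```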
Predictable and Adaptive Goal-oriented Dialog Policy Generation
2021 IEEE 15th International Conference on Semantic Computing (ICSC)
Most existing commercial goal-oriented chatbots are diagram-based; i.e., they follow a rigid dialog flow to fill the slot values needed to achieve a user's goal. Diagram-based chatbots are predictable, hence their adoption in commercial settings; however, their lack of flexibility may cause many users to leave the conversation before achieving their goal. On the other hand, state-of-the-art research chatbots use Reinforcement Learning (RL) to generate flexible dialog policies. However, such chatbots can be unpredictable, may violate the intended business constraints, and require large training datasets to produce a mature policy. We propose a framework that achieves a middle ground between diagram-based and RL-based chatbots: we constrain the space of possible chatbot responses using a novel structure, the chatbot dependency graph, and use RL to dynamically select the best valid responses. Dependency graphs are directed graphs that conveniently express a chatbot's logic by defining the dependencies among slots: all valid dialog flows are encapsulated in one dependency graph. Our experiments in several domains show that our framework quickly adapts to user characteristics and achieves up to a 23.77% improvement in success rate compared to a state-of-the-art RL model.
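The dependency-graph constraint can be read as action masking for the RL policy: only slots whose dependencies are already filled are valid actions, and the learned policy chooses among those. A minimal sketch, with an invented graph and a bare Q-table standing in for the trained model:

```python
# Hedged sketch of the core idea (structure and names are ours): the
# dependency graph masks invalid slot-requesting actions, and an RL policy
# selects only among the valid ones, staying within business constraints.

import random

# Slot -> slots it depends on; e.g. ask for "payment" only after "item".
DEPENDENCIES = {"item": [], "address": [], "payment": ["item"],
                "confirm": ["item", "address", "payment"]}

def valid_actions(filled):
    return [s for s, deps in DEPENDENCIES.items()
            if s not in filled and all(d in filled for d in deps)]

def choose(q_table, filled, epsilon=0.1):
    actions = valid_actions(filled)          # graph masks invalid actions
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table.get((frozenset(filled), a), 0.0))

filled, q_table = set(), {}
while valid_actions(filled):
    action = choose(q_table, filled)
    print("BOT asks for:", action)
    filled.add(action)                       # assume the user complies
```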