Model Acquisition Task for Dialogue Planning
Related papers
D3WA+ - A Case Study of XAIP in a Model Acquisition Task for Dialogue Planning
2020
Recently, the D3WA system was proposed as a paradigm shift in how complex goal-oriented dialogue agents can be specified by taking a declarative view of design. However, it turns out that actual users of the system have a hard time evolving their mental model and grasping the imperative consequences of declarative design. In this paper, we adopt ideas from existing works in the field of Explainable AI Planning (XAIP) to provide guidance to the dialogue designer during the model acquisition process. We will highlight in the course of this discussion how the setting presents unique challenges to the XAIP setting, including having to deal with the user persona of a domain modeler rather than the end-user of the system, and consequently having to deal with the unsolvability of models in addition to explaining generated plans. Quickview: http://ibm.biz/d3wa-xaip
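As a rough illustration of the kind of guidance discussed here (this is not D3WA+'s actual implementation), the sketch below encodes a toy declarative dialogue model as actions with preconditions and effects, and uses a relaxed-reachability check to tell the designer which goal facts make the model unsolvable and which preconditions block them. All action and fact names are invented.

```python
# Illustrative sketch only (not the actual D3WA+ implementation): a tiny
# declarative dialogue model plus a relaxed-reachability check that tells
# the designer *why* the model is unsolvable, in the spirit of XAIP
# guidance during model acquisition.

ACTIONS = {  # hypothetical model: name -> (preconditions, effects)
    "greet":       (set(),                       {"greeted"}),
    "ask_name":    ({"greeted"},                 {"have_name"}),
    "book_flight": ({"have_name", "have_date"},  {"booked"}),
}
INIT, GOAL = set(), {"booked"}

def relaxed_reachable(init):
    """Facts reachable if effects are never deleted (a standard relaxation)."""
    facts, changed = set(init), True
    while changed:
        changed = False
        for pre, eff in ACTIONS.values():
            if pre <= facts and not eff <= facts:
                facts |= eff
                changed = True
    return facts

def explain_unsolvable(init, goal):
    """For each unreachable goal fact, name the producing actions and the
    preconditions that block them -- the kind of hint a designer needs."""
    reach = relaxed_reachable(init)
    hints = []
    for fact in goal - reach:
        for name, (pre, eff) in ACTIONS.items():
            if fact in eff:
                hints.append(f"'{fact}' needs '{name}', blocked on {pre - reach}")
    return hints or ["goal already reachable"]

print(explain_unsolvable(INIT, GOAL))
# -> ["'booked' needs 'book_flight', blocked on {'have_date'}"]
```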
Information-seeking dialogue for explainable artificial intelligence: Modelling and analytics
Argument & Computation
Explainable artificial intelligence has become a vitally important research field aiming, among other tasks, to justify predictions made by intelligent classifiers automatically learned from data. Importantly, the efficiency of automated explanations may be undermined if the end user does not have sufficient domain knowledge or lacks information about the data used for training. To address the issue of effective explanation communication, we propose a novel information-seeking explanatory dialogue game following the most recent requirements for automatically generated explanations. Further, we generalise our dialogue model in the form of an explanatory dialogue grammar, which makes it applicable to interpretable rule-based classifiers that are enhanced with the capability to provide textual explanations. Finally, we carry out an exploratory user study to validate the corresponding dialogue protocol and analyse the experimental results using insights from process mining and argument analytics....
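One way to picture such an explanatory dialogue protocol is as a table of legal reply moves against which a dialogue trace can be validated. The move names and rules below are illustrative assumptions, not the grammar defined in the paper.

```python
# Hedged sketch: one possible encoding of an information-seeking explanatory
# dialogue protocol as allowed move transitions. Move names are illustrative.

PROTOCOL = {  # a move -> the moves the other party may answer with
    "request_explanation": {"explain", "decline"},
    "explain":             {"ask_clarification", "challenge", "accept"},
    "ask_clarification":   {"explain"},
    "challenge":           {"explain", "concede"},
    "accept":              set(),
    "concede":             set(),
    "decline":             set(),
}

def is_legal(dialogue):
    """Check that every move is a legal reply to the previous one."""
    return all(b in PROTOCOL[a] for a, b in zip(dialogue, dialogue[1:]))

trace = ["request_explanation", "explain", "ask_clarification", "explain", "accept"]
print(is_legal(trace))                               # True
print(is_legal(["request_explanation", "accept"]))   # False: 'accept' cannot follow directly
```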
MAi: An Interface for Declarative Specification of Goal-Directed Dialogue Agents
2019
The state of the art of dialogue agents requires a lengthy design process spanning months with experts in the loop who specify complex conversation patterns manually. Our work proposes a paradigm shift in bot design by adopting a declarative approach which composes the full dialog tree automatically. This allows the designer to construct complex dialogue agents from scratch and interact with them in a matter of hours. The demonstration will allow the audience to interact with this new design paradigm and construct their own bots on the spot.

Current enterprise-level goal-directed dialogue agents require significant expertise, time and effort to build (Sreedhar 2018). The process usually requires domain experts to sit down with engineers to construct complex interaction patterns in the form of explicit dialogue trees. This process quickly becomes intractable. Existing end-to-end solutions to chat-bots, on the other hand, require little expertise to build (but a lot of data) and offer...
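A minimal sketch of the declarative idea, under assumed action and variable names (this is not MAi's actual compiler): the designer only states what each bot action needs and may obtain, and every conversation branch is enumerated automatically instead of being hand-authored.

```python
# Hedged illustration of declarative dialog-tree composition. The spec
# format, actions, and facts are invented for this example.

SPEC = {  # hypothetical spec: action -> (facts it needs, possible user outcomes)
    "ask_origin": (set(),      [{"origin"}]),
    "ask_date":   ({"origin"}, [{"date"}, set()]),  # the user may not know a date
    "offer_deal": ({"origin"}, [{"done"}]),
}
GOAL = {"done"}

def expand(known, asked=frozenset(), depth=0):
    """Print every dialogue branch implied by the declarative spec."""
    if GOAL <= known:
        print("  " * depth + "<goal reached>")
        return
    for action, (needs, outcomes) in SPEC.items():
        if action in asked or not needs <= known:
            continue
        for outcome in outcomes:
            print("  " * depth + f"{action} -> learns {sorted(outcome) or 'nothing'}")
            expand(known | outcome, asked | {action}, depth + 1)

expand(set())  # prints the full tree of conversation paths implied by SPEC
```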
User Modeling and User-Adapted Interaction, 1991
This article investigates the implications of active user model acquisition upon plan recognition, domain planning, and dialog planning in dialog architectures. A dialog system performs active user model acquisition by querying the user during the course of the dialog. Existing systems employ passive strategies that rely on inferences drawn from passive observation of the dialog. Though passive acquisition generally reduces unnecessary dialog, in some cases the system can effectively shorten the overall dialog length by selectively initiating subdialogs for acquiring information about the user. We propose a theory identifying conditions under which the dialog system should adopt active acquisition goals. Active acquisition imposes a set of rationality requirements not met by current dialog architectures. To ensure rational dialog decisions, we propose significant extensions to plan recognition, domain planning, and dialog planning models, incorporating decision-theoretic heuristics for expected utility. The most appropriate framework for active acquisition is a multi-attribute utility model wherein plans are compared along multiple dimensions of utility. We suggest a general architectural scheme, and present an example from a preliminary implementation.
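The decision-theoretic criterion can be made concrete with a small worked example: the system adopts an active acquisition goal only when the expected dialogue cost after asking is lower than the expected cost of committing to a plan under its current beliefs. The weights, probabilities, and costs below are hypothetical, not taken from the paper.

```python
# Hedged numerical sketch of the decision rule the paper motivates: initiate
# an acquisition subdialog only when the expected gain from knowing the user
# attribute outweighs the cost of asking. All numbers are made up.

P_EXPERT = 0.4                   # prior belief that the user is an expert
COST_ASK = 1.0                   # dialogue cost of the acquisition question

# Expected remaining dialogue cost of each candidate plan, per user type.
PLAN_COST = {                    # plan -> (cost if expert, cost if novice)
    "terse_instructions":    (2.0, 9.0),
    "detailed_walkthrough":  (6.0, 5.0),
}

def best_cost(p_expert):
    """Cost of the best plan chosen *without* asking, under current beliefs."""
    return min(p_expert * e + (1 - p_expert) * n for e, n in PLAN_COST.values())

# If we ask, we learn the user type and can pick the best plan for each case.
cost_if_ask = (COST_ASK
               + P_EXPERT * min(e for e, _ in PLAN_COST.values())
               + (1 - P_EXPERT) * min(n for _, n in PLAN_COST.values()))
cost_if_not = best_cost(P_EXPERT)

print(f"ask: {cost_if_ask:.2f}  don't ask: {cost_if_not:.2f}")   # ask: 4.80  don't ask: 5.40
print("adopt active acquisition goal" if cost_if_ask < cost_if_not
      else "rely on passive acquisition")
```

With these illustrative numbers, asking costs one extra turn but lowers the expected total dialogue cost from 5.4 to 4.8, so the active acquisition subdialog is worth initiating.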
Levels of explainable artificial intelligence for human-aligned conversational explanations
Artificial Intelligence, 2021
Over the last few years there has been rapid research growth into eXplainable Artificial Intelligence (XAI) and the closely aligned Interpretable Machine Learning (IML). Drivers for this growth include recent legislative changes and increased investments by industry and governments, along with increased concern from the general public. People are affected by autonomous decisions every day and the public need to understand the decision-making process to accept the outcomes. However, the vast majority of the applications of XAI/IML are focused on providing low-level 'narrow' explanations of how an individual decision was reached based on a particular datum. While important, these explanations rarely provide insights into an agent's: beliefs and motivations; hypotheses of other (human, animal or AI) agents' intentions; interpretation of external cultural expectations; or, processes used to generate its own explanation. Yet all of these factors, we propose, are essential to providing the explanatory depth that people require to accept and trust the AI's decision-making. This paper aims to define levels of explanation and describe how they can be integrated to create a human-aligned conversational explanation system. In so doing, this paper will survey current approaches and discuss the integration of different technologies to achieve these levels with Broad eXplainable Artificial Intelligence (Broad-XAI), and thereby move towards high-level 'strong' explanations.

Recently there has been a growth in interest in eXplainable Artificial Intelligence (XAI) and Interpretable Machine Learning (IML) [12, 13]. One major driver for current research has been the XAI project launched by the Defense Advanced Research Projects Agency (DARPA), with twelve research programs receiving USD 75 million in funding [14] to create a suite of explainable machine learning techniques [15]. The DARPA project is wide-ranging, with aims to develop both models and interfaces for explainability. The need for XAI has been further driven by governments beginning to legislate requirements for autonomous systems to provide explanations of their decisions. For instance, the European Union's new General Data Protection Regulation [16] requires autonomous systems to be able to provide explanations of any decisions that are based on an individual's data. As autonomous systems increase in their level of societal integration these legislative requirements for explanation are likely to increase. There has also been significant interest in XAI from futurists and innovation-based companies such as AGI Innovations [17] and bons.ai [18]. Finally, a number of conferences and workshops have been established looking at the issue from different perspectives [12].
Design Decision Framework for AI Explanations
2021
Explanations can help users of Artificial Intelligence (AI) systems gain a better understanding of the reasoning behind the model's decision, facilitate their trust in AI, and assist them in making informed decisions. Due to its numerous benefits in improving how users interact and collaborate with AI, this has spurred the AI/ML community towards developing understandable or interpretable models to a larger degree, while design researchers continue to study ways to present explanations of these models' decisions in a coherent form. However, there is still a lack of intentional design effort from the HCI community around these explanation system designs. In this paper, we contribute a framework to support the design and validation of explainable AI systems; one that requires carefully thinking through design decisions at several important decision points. This framework captures key aspects of explanations ranging from target users, to the data, to the AI models...
Can we do better explanations? A proposal of user-centered explainable AI
2019
Artificial Intelligence systems are spreading to multiple applications and are used by an increasingly diverse audience. With this change in usage scenarios, AI users will increasingly require explanations. The first part of this paper reviews the state of the art of Explainable AI and highlights how current research is not paying enough attention to whom the explanations are targeted. The second part of the paper proposes a new explainability pipeline in which users are classified into three main groups (developers or AI researchers, domain experts, and lay users). Inspired by the cooperative principles of conversation, it discusses how creating different explanations for each of the targeted groups can overcome some of the difficulties related to creating good explanations and evaluating them.
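A minimal sketch of such an audience-dependent pipeline, with the three user groups taken from the abstract and the explanation content invented purely to illustrate the routing:

```python
# Sketch of routing one model decision to audience-specific explanations.
# The decision dict and explanation wording are hypothetical.

def explain(decision, audience):
    if audience == "developer":
        return (f"{decision['label']} (p={decision['score']:.2f}); "
                f"top feature attributions: {decision['attributions']}")
    if audience == "domain_expert":
        return (f"Classified as {decision['label']} mainly because "
                f"{decision['attributions'][0][0]} is elevated.")
    if audience == "lay_user":
        return (f"The result is '{decision['label']}'. "
                f"The biggest factor was {decision['attributions'][0][0]}.")
    raise ValueError(f"unknown audience: {audience}")

decision = {"label": "high risk", "score": 0.87,
            "attributions": [("blood pressure", 0.41), ("age", 0.22)]}
for audience in ("developer", "domain_expert", "lay_user"):
    print(audience, "->", explain(decision, audience))
```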
Designing Model-Based Intelligent Dialogue Systems
Information Modeling in the New Millennium, 2001
Intelligent Systems are served by Intelligent User Interfaces aimed at improving the efficiency, effectiveness and adaptation of the interaction between the user and the computer by representing, understanding and implementing models. The Intelligent User Interface Model (IUIM) helps to design and develop Intelligent Systems by considering both their architecture and their behavior. It places the Interaction and Dialogue between User and System at the heart of an Intelligent Interactive System. The IUIM is composed of an architectural model, which defines the components of the model, and a conceptual model, which relates to its contents and behavior. The conceptual model defines three elements: an Adaptive User Model (including components for building and updating the user model), a Task Model (including general and domain-specific knowledge) and an Adaptive Discourse Model (to be assisted by an intelligent help and a learning module). We show the implementation of the model by describing an application named Stigma - A STereotypical Intelligent General Matching Agent for Improving Search Results on the Internet. Finally, we compare the new model with others, stating the differences and the advantages of the proposed model.
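The three conceptual components can be pictured as cooperating modules. The sketch below is a structural illustration only; the method names and behaviour are assumptions made for this example, not the paper's specification.

```python
# Structural sketch of the IUIM's three conceptual components wired into one
# interactive loop. Interfaces and logic are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class AdaptiveUserModel:
    traits: dict = field(default_factory=dict)
    def update(self, observation: str) -> None:        # build/refine the user model
        self.traits["verbose"] = len(observation.split()) > 8

@dataclass
class TaskModel:
    domain: str = "web search"
    def next_step(self, query: str) -> str:             # general/domain task knowledge
        return f"search '{query}' in {self.domain}"

@dataclass
class AdaptiveDiscourseModel:
    def respond(self, step: str, user: AdaptiveUserModel) -> str:
        detail = "with a detailed walkthrough" if user.traits.get("verbose") else "briefly"
        return f"I will {step} and report back {detail}."

user, task, discourse = AdaptiveUserModel(), TaskModel(), AdaptiveDiscourseModel()
utterance = "please find me recent papers on explainable AI planning for dialogue"
user.update(utterance)
print(discourse.respond(task.next_step("explainable AI planning"), user))
```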
Considerations on explainable AI and users’ mental models
2019
As the aim of explaining is understanding, XAI is successful when the user has a good understanding of the AI system. This paper shows, using theories from the social sciences and HCI, that appropriately capturing and accounting for the user’s mental model while explaining is key to successful XAI.
EUCA: the End-User-Centered Explainable AI Framework
arXiv (Cornell University), 2021
The ability to explain decisions to end-users is a necessity to deploy AI as critical decision support. Yet making AI explainable to non-technical end-users is a relatively ignored and challenging problem. To bridge the gap, we first identify twelve end-user-friendly explanatory forms that do not require technical knowledge to comprehend, including feature-, example-, and rule-based explanations. We then instantiate the explanatory forms as prototyping cards in four AI-assisted critical decision-making tasks, and conduct a user study to co-design low-fidelity prototypes with 32 layperson participants. The results confirm the relevance of using explanatory forms as building blocks of explanations, and identify their properties: pros, cons, applicable explanation goals, and design implications. The explanatory forms, their properties, and prototyping supports (including a suggested prototyping process, design templates and exemplars, and associated algorithms to actualize explanatory forms) constitute the End-User-Centered explainable AI framework EUCA, which is available at http://weinajin.github.io/end-user-xai. It serves as a practical prototyping toolkit for HCI/AI practitioners and researchers to understand user requirements and build end-user-centered explainable AI. CCS Concepts: • Computing methodologies → Artificial intelligence; • Human-centered computing → User studies.
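A small sketch of the "explanatory forms as building blocks" idea, using the three form families named in the abstract; the mapping from explanation goals to forms is an illustrative assumption, not part of EUCA.

```python
# Sketch, not the EUCA toolkit itself: explanatory forms as composable
# building blocks, selected by the user's explanation goal. The goal->form
# mapping and the decision dict are hypothetical.

FORMS = {
    "feature": lambda d: f"Key features: {d['features']}",
    "example": lambda d: f"Similar past case: {d['nearest_example']}",
    "rule":    lambda d: f"Applied rule: {d['rule']}",
}
GOAL_TO_FORMS = {                     # hypothetical mapping, not from the paper
    "verify the decision": ["feature", "rule"],
    "learn from the AI":   ["rule", "example"],
    "appeal the decision": ["example", "feature"],
}

def compose_explanation(goal, decision):
    """Assemble an explanation from the forms suited to the stated goal."""
    return [FORMS[form](decision) for form in GOAL_TO_FORMS[goal]]

decision = {"features": ["tumor size", "margin"],
            "nearest_example": "patient #1042",
            "rule": "IF size > 2cm AND irregular margin THEN high risk"}
print("\n".join(compose_explanation("verify the decision", decision)))
```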