Application-Independent and Integration-Friendly Natural Language Understanding

2nd Global Conference on Artificial Intelligence

Exploiting deep semantics and compositionality of natural language for Human-Robot-Interaction

2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016

We develop a natural language interface for human-robot interaction that implements reasoning about deep semantics in natural language. To realize the required deep analysis, we employ methods from cognitive linguistics, namely the modular and compositional framework of Embodied Construction Grammar (ECG) [16]. Using ECG, robots are able to solve fine-grained reference resolution problems and other issues related to deep semantics and compositionality of natural language. This also includes verbal interaction with humans to clarify commands and queries that are too ambiguous to be executed safely. We implement our NLU framework as a ROS package and present proof-of-concept scenarios with different robots, as well as a survey of the state of the art.

Processing Natural Language About Ongoing Actions

Cornell University - arXiv, 2016

Actions may not proceed as planned; they may be interrupted, resumed or overridden. This is a challenge to handle in a natural language understanding system. We describe extensions to an existing implementation for the control of autonomous systems by natural language, to enable such systems to handle incoming language requests regarding actions. Language Communication with Autonomous Systems (LCAS) has been extended with support for X-nets, parameterized executable schemas representing actions. X-nets enable the system to control actions at a desired level of granularity, while providing a mechanism for language requests to be processed asynchronously. Standard semantics supported include requests to stop, continue, or override the existing action. The specific domain demonstrated is the control of motion of a simulated robot, but the approach is general, and could be applied to other domains.
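The stop/continue/override semantics described above lend themselves to an asynchronous control loop. Below is a minimal sketch of that idea using a generic asyncio-based controller; it is not the paper's X-net or LCAS implementation, and all class and method names are invented for illustration.

```python
# Hypothetical sketch (not the paper's X-net implementation): a long-running
# "move" action that can be paused, resumed, or overridden by language
# requests arriving asynchronously, in the spirit of stop/continue/override.
import asyncio

class ActionController:
    def __init__(self):
        self._paused = asyncio.Event()
        self._paused.set()            # not paused initially
        self._overridden = False

    async def run(self, goal, steps=5):
        """Execute an action step by step, yielding control between steps."""
        for step in range(steps):
            await self._paused.wait()     # block here while paused
            if self._overridden:
                print(f"action toward {goal} overridden, stopping")
                return
            print(f"step {step + 1}/{steps} toward {goal}")
            await asyncio.sleep(0.1)      # stand-in for real motion control
        print(f"reached {goal}")

    def handle_request(self, request):
        """Map an incoming language request to a control operation."""
        if request == "stop":
            self._paused.clear()
        elif request == "continue":
            self._paused.set()
        elif request == "override":
            self._overridden = True
            self._paused.set()            # let run() observe the override

async def main():
    ctrl = ActionController()
    task = asyncio.create_task(ctrl.run("the kitchen"))
    await asyncio.sleep(0.25)
    ctrl.handle_request("stop")           # "stop" arrives mid-action
    await asyncio.sleep(0.2)
    ctrl.handle_request("continue")       # later, "continue" resumes it
    await task

asyncio.run(main())
```

The point illustrated is only the granularity issue the abstract raises: the action yields control between steps, so language requests can be serviced while the action is in flight.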

Towards the rapid development of a natural language understanding module

When developing a conversational agent, there is often an urgent need to have a prototype available in order to test the application with real users. A Wizard-of-Oz setup is a possibility, but sometimes the agent should simply be deployed in the environment where it will be used. There, the agent should capture as many interactions as possible and understand how people react to failure. In this paper, we focus on the rapid development of a natural language understanding module by non-experts. Our approach follows the learning paradigm and treats natural language understanding as a classification problem. We test our module with a conversational agent that answers questions in the art domain. Moreover, we show how our approach can be used by a natural language interface to a cinema database.
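The "understanding as classification" view in this abstract can be illustrated with a few lines of off-the-shelf machine learning. The sketch below assumes scikit-learn and a tiny hand-labelled set of utterance/intent pairs; the examples, labels, and pipeline choices are invented for illustration and are not the authors' implementation.

```python
# Minimal sketch of NLU-as-classification: map user utterances to intent
# labels with an off-the-shelf classifier. The training examples and labels
# below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_utterances = [
    "who painted this picture",
    "when was this work created",
    "what movies are playing tonight",
    "book two tickets for the eight o'clock show",
]
intent_labels = [
    "art_author", "art_date", "cinema_listings", "cinema_booking",
]

# Bag-of-words features plus a linear classifier: simple enough for
# non-experts to retrain whenever new labelled interactions are collected.
nlu = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
nlu.fit(training_utterances, intent_labels)

# Predicts one of the intent labels for a new, unseen question.
print(nlu.predict(["who is the author of this painting"]))
```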

From Text to Understanding: A Deep Dive into Natural Language Processing and Their AI Applications

Artificial Intelligence, 2023

This paper delves into the historical trajectory of natural language processing (NLP) in artificial intelligence (AI), tracing its origins from early concepts to its contemporary applications. We explore the significant milestones that have propelled AI from theoretical frameworks to practical implementations, focusing on breakthroughs in machine learning, neural networks, and NLP. Additionally, this paper examines the features of human-machine interaction. We then provide a forward-looking analysis of the potential power of AI and the challenges it poses. This report sheds light on the evolutionary journey of AI, specifically its progression from text understanding to conversational virtual worlds, and its transformative potential in the years to come.

Natural Language Understanding and Communication for Multi-Agent Systems

2015

Natural Language Understanding (NLU) studies machine language comprehension and action without human intervention. We describe an implemented system that supports deep semantic NLU for controlling systems with multiple simulated robot agents. The system supports bidirectional communication for both human-agent and agent-agent interaction. This interaction is achieved with the use of N-tuples, a novel form of Agent Communication Language using shared protocols with content expressing actions or intentions. The system's portability and flexibility are facilitated by its division into unchanging "core" and "application-specific" components.
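As a rough illustration of protocol-tagged messages whose content expresses an action or intention, one might picture a structure like the one below; the actual N-tuple format is specific to the cited system, and every field name here is a guess made for illustration.

```python
# Illustrative only: the shape below is a guess at the general idea of
# protocol-tagged agent messages, not the cited system's actual N-tuple schema.
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class NTuple:
    sender: str                      # agent or human originating the message
    receiver: str                    # addressed agent
    protocol: str                    # shared protocol both parties understand
    performative: str                # e.g. "request", "inform", "confirm"
    content: Dict[str, Any] = field(default_factory=dict)  # action or intention

# A human asks agent "robot1" to move to the kitchen.
msg = NTuple(sender="human", receiver="robot1", protocol="navigation",
             performative="request",
             content={"action": "move_to", "target": "kitchen"})
print(msg)
```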

Generating Grammars for Natural Language Understanding from Knowledge about Actions and Objects

International Conference on Robotics and Biomimetics (ROBIO), 2015

Many applications in the fields of Service Robotics and Industrial Human-Robot Collaboration require interaction with a human in a potentially unstructured environment. In many cases, a natural language interface can be helpful, but it requires powerful means of knowledge representation and processing, e.g., using ontologies and reasoning. In this paper we present a framework for the automatic generation of natural language grammars from ontological descriptions of robot tasks and interaction objects, and their use in a natural language interface. Robots can use it locally or even share this interface component through the RoboEarth framework in order to benefit from features such as referent grounding, ambiguity resolution, task identification, and task assignment.
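A toy version of grammar generation from knowledge about actions and objects might look like the sketch below; the ontology fragment and the BNF-like output are invented for illustration and do not reflect the paper's actual ontologies or grammar formalism.

```python
# Hedged sketch: derive a toy command grammar from task/object descriptions.
# The ontology fragment and the BNF-like output format are invented here.
ontology = {
    "actions": {"grasp": ["object"], "move_to": ["location"]},
    "objects": ["cup", "bottle"],
    "locations": ["table", "shelf"],
}

def generate_grammar(onto):
    rules = []
    rules.append("<object> ::= " + " | ".join(onto["objects"]))
    rules.append("<location> ::= " + " | ".join(onto["locations"]))
    commands = []
    for action, args in onto["actions"].items():
        phrase = action.replace("_", " ")
        commands.append(phrase + " the " + " ".join(f"<{a}>" for a in args))
    rules.append("<command> ::= " + " | ".join(commands))
    return "\n".join(rules)

print(generate_grammar(ontology))
# <object> ::= cup | bottle
# <location> ::= table | shelf
# <command> ::= grasp the <object> | move to the <location>
```

The appeal suggested by the abstract is that when the task or object knowledge changes, the recognizable command set can be regenerated rather than hand-edited.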

Effective and Robust Natural Language Understanding for Human-Robot Interaction

Robots are slowly becoming part of everyday life, as they are being marketed for commercial applications (e.g., telepresence, cleaning, or entertainment). Thus, the ability to interact with non-expert users is becoming a key requirement. Even if user utterances can be efficiently recognized and transcribed by Automatic Speech Recognition systems, several issues arise in translating them into suitable robotic actions. In this paper, we discuss two existing Natural Language Understanding workflows for Human-Robot Interaction. First, we discuss a grammar-based approach, which recognizes a restricted set of commands. Then, a data-driven approach, based on a free-form speech recognizer and a statistical semantic parser, is discussed. The main advantages of both approaches are discussed, also from an engineering perspective, i.e., considering the effort of realizing HRI systems, as well as their reusability and robustness. An empirical evaluation of the proposed approaches is carried out on several datasets, in order to understand performance and identify possible improvements towards the design of NLP components in HRI.

Natural Language Understanding: From Laboratory Predictions to Real Interactions

Lecture Notes in Computer Science, 2012

In this paper we target Natural Language Understanding in the context of Conversational Agents that answer questions about their topics of expertise and keep question/answer pairs in their knowledge base; this limits the understanding problem to the task of finding the stored question that will trigger the most appropriate answer to a given (new) question. We implement such an agent and test different state-of-the-art techniques, covering several paradigms and moving from lab experiments to tests with real users. First, we test the implemented techniques on a corpus built by the agent's developers, corresponding to the expected questions; then we test the same techniques on a corpus representing interactions between the agent and real users. Interestingly, results show that the best "lab" techniques are not necessarily the best for real scenarios, even if only in-domain questions are considered.
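The retrieval formulation described above, matching a new question against stored question/answer pairs, can be sketched with simple lexical similarity; the knowledge-base entries below are invented, and the TF-IDF/cosine setup is just one plausible baseline, not the set of techniques evaluated in the paper.

```python
# Minimal sketch: answer a new question by finding the most similar stored
# question in a knowledge base of question/answer pairs (pairs are invented).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_base = [
    ("Who painted the Mona Lisa?", "Leonardo da Vinci painted it."),
    ("When was the museum founded?", "The museum opened in 1793."),
]
stored_questions = [q for q, _ in knowledge_base]

vectorizer = TfidfVectorizer()
question_matrix = vectorizer.fit_transform(stored_questions)

def answer(new_question):
    query_vec = vectorizer.transform([new_question])
    scores = cosine_similarity(query_vec, question_matrix)[0]
    best = scores.argmax()            # index of the closest stored question
    return knowledge_base[best][1]

print(answer("Who is the painter of the Mona Lisa?"))
# -> Leonardo da Vinci painted it.
```

A baseline like this also makes the paper's finding easy to picture: a matcher tuned on developer-written questions can degrade once real users phrase the same requests differently.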

Natural Language Understanding Systems Within the A.I. Paradigm: A Survey and Some Comparisons

1974

The paper surveys the major projects on the understanding of natural language that fall within what may now be called the artificial intelligence paradigm of natural language systems. Some space is devoted to arguing that the paradigm is now a reality and different in significant respects from the generative paradigm of present-day linguistics. The comparisons between systems center on questions about the level, centrality, and "phenomenological plausibility" of the knowledge and inferences that must be available to a system that is to understand everyday language.