The Language Grounding Problem and its Relation to the Internal Structure of Cognitive Agents

Meaning in Artificial Agents: The Symbol Grounding Problem Revisited

The Chinese room argument has been a persistent headache in the search for Artificial Intelligence. Since it first appeared in the literature, researchers have offered various interpretations of the problems posed by this thought experiment. Throughout this time, some researchers in the Artificial Intelligence community have seen Symbol Grounding, as proposed by Harnad, as a solution to the Chinese Room Argument. The main thesis of this paper is that, although related, these two issues present different problems within the framework presented by Harnad himself. The work presented here attempts to shed some light on the relationship between Searle’s notion of intentionality and Harnad’s Symbol Grounding Problem.

Symbol Grounding Problem and Causal Theories of Reference

The Symbol Grounding Problem (SGP), which remains difficult for AI and the philosophy of information, was recently scrutinized by M. Taddeo and L. Floridi (2005, 2007). However, their own solution to the SGP, underwritten by Action-based Semantics, although different from other solutions, does not seem satisfactory. Moreover, it does not satisfy the authors’ own principle, which they dub the ‘Zero Semantic Commitment Condition’. In this paper, Taddeo and Floridi’s solution is criticized, in particular for the excessively liberal relationship between symbols and the internal states of agents, which is conceived in terms of levels of abstraction. The notion of action also seems seriously defective in their theory. Because the symbols cannot misrepresent, the grounded symbols remain useless to the cognitive system itself, and it is unclear why they should be grounded in the first place, as the role of grounded symbols is not specified by the proposed solution. At the same time, theirs is probably one of the best-developed attempts to solve the SGP, and it shows that naturalized semantics can benefit from taking artificial intelligence seriously.

Not cheating on the Turing Test: towards grounded language learning in Artificial Intelligence

Master's Thesis, 2020

In this thesis, I carry out a novel and interdisciplinary analysis of various complex factors involved in human natural-language acquisition, use and comprehension, aimed at uncovering some of the basic requirements for developing artificially intelligent (AI) agents with similar capacities. Inspired by a recent publication wherein I explored the complexities and challenges involved in enabling AI systems to deal with the grammatical (i.e. syntactic and morphological) irregularities and ambiguities inherent in natural language (Alberts, 2019), I turn my focus here towards appropriately inferring the content of symbols themselves—as ‘grounded’ in real-world percepts, actions, and situations. I first introduce the key theoretical problems I aim to address in theories of mind and language. For background, I discuss the co-development of AI and the contested strands of computational theories of mind in cognitive science, and the grounding problem (or ‘internalist trap’) faced by them. I then describe the approach I take to address the grounding problem in the rest of the thesis. This proceeds in chapter I. To unpack and address the issue, I offer a critical analysis of the relevant theoretical literature in philosophy of mind, psychology, cognitive science and (cognitive) linguistics in chapter II. I first evaluate the major philosophical/psychological debates regarding the nature of concepts; theories regarding how concepts are acquired, used, and represented in the mind; and, on that basis, offer my own account of conceptual structure, grounded in current (cognitively plausible) connectionist theories of thought. To further explicate how such concepts are acquired and communicated, I evaluate the relevant embodied (e.g. cognitive, perceptive, sensorimotor, affective, etc.) factors involved in grounded human (social) cognition, drawing from current scientific research in the areas of 4E Cognition and social cognition. 
On that basis, I turn my focus specifically towards grounded theories of language, drawing from the cognitive linguistics programme that aims to develop a naturalised, cognitively plausible understanding of human concept/language acquisition and use. I conclude the chapter with a summary wherein I integrate my findings from these various disciplines, presenting a general theoretical basis upon which to evaluate more practical considerations for its implementation in AI—the topic of the following chapter. In chapter III, I offer an overview of the different major approaches (and their integrations) in the area of Natural Language Understanding in AI, evaluating their respective strengths and shortcomings in terms of specific models. I then offer a critical summary wherein I contrast and contextualise the different approaches in terms of the more fundamental theoretical convictions they seem to reflect. On that basis, in the final chapter, I re-evaluate the aforementioned grounding problem and the different ways in which it has been interpreted in different (theoretical and practical) disciplines, distinguishing between a stronger and a weaker reading. I then present arguments for why implementing the stronger version in AI seems, both practically and theoretically, problematic. Instead, drawing from the theoretical insights I gathered, I consider some of the key requirements for ‘grounding’ (in the weaker sense) as much of natural language use as possible in robotic AI agents, including implementational constraints that might need to be put in place to achieve this. Finally, I evaluate some of the key challenges that may be involved, if indeed the aim were to meet all the requirements specified.

Open-ended Grounded Semantics

2010

Artificial agents trying to achieve communicative goals in situated interactions in the real-world need powerful computational systems for conceptualizing their environment. In order to provide embodied artificial systems with rich semantics reminiscent of human language complexity, agents need ways of both conceptualizing complex compositional semantic structure and actively reconstructing semantic structure, due to uncertainty and ambiguity in transmission. Furthermore, the systems must be open-ended and adaptive and allow agents to adjust their semantic inventories in order to reach their goals. This paper presents recent progress in modeling open-ended, grounded semantics through a unified software system that addresses these problems.
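The adaptive, open-ended dynamic described in this abstract can be illustrated with a deliberately minimal sketch. This is my own toy illustration in the spirit of a Steels-style naming game, not the paper's actual software system; the class names and data are invented for the example. Agents repeatedly play pairwise interactions, and each agent adjusts its word inventory when communication fails, so a shared vocabulary emerges from local adaptation alone.

```python
import random

class Agent:
    """A minimal agent with an adaptive word inventory (object -> word)."""
    def __init__(self):
        self.lexicon = {}

    def name_for(self, obj):
        # Invent a new word on demand if the object is still unnamed.
        if obj not in self.lexicon:
            self.lexicon[obj] = "w%d" % random.randrange(10**6)
        return self.lexicon[obj]

def play_round(speaker, hearer, objects):
    """One communicative interaction about a shared topic.

    On failure the hearer adapts its inventory, which is the minimal
    form of the open-ended adjustment the abstract describes."""
    topic = random.choice(objects)
    word = speaker.name_for(topic)
    success = hearer.lexicon.get(topic) == word
    if not success:
        hearer.lexicon[topic] = word
    return success

if __name__ == "__main__":
    random.seed(0)
    agents = [Agent() for _ in range(5)]
    objects = ["red-ball", "blue-box", "green-cone"]
    for _ in range(500):
        speaker, hearer = random.sample(agents, 2)
        play_round(speaker, hearer, objects)
    # Check whether the population now shares one word per object.
    shared = all(len({a.lexicon.get(o) for a in agents}) == 1
                 for o in objects)
    print("converged:", shared)
```

The real system in the paper conceptualizes compositional semantic structure and handles perceptual grounding, whereas this sketch reduces "semantics" to a flat object-word mapping; it only shows the adaptation loop.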

Computational Mechanisms for the Grounding of Concepts, Indexicals, and Names

lagrammar.net, 2019

This paper investigates the theoretical consequences which follow from grounding the Content kinds concept, indexical, and name as computational mechanisms of an artificial agent. The mechanism for the recognition and realization (action) of concepts is pattern matching between types and raw data. The mechanism for the interpretation of indexicals is pointing at values of the agent's on-board orientation system (STAR). The reference mechanism of names relies on markers inserted in an act of baptism into the cognitive representation of referents. The empirical result is three universals: (1) figurative use is restricted to the matching mechanism of concepts, but uses the Semantic kinds referent, property, and relation; (2) reference is restricted to nouns, but utilizes the mechanisms of matching, pointing, and baptism. As computational mechanisms, (3) concepts use pattern matching as a direct interaction with cognition-external raw data, while indexicals and names use it indirectly.
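The three mechanisms this abstract distinguishes are concrete enough to sketch in code. The following is my own toy illustration, not the paper's implementation: the feature dictionaries, the STAR field names, and the name "Fred" are all invented for the example, and the STAR gloss (Space, Time, Agent, Recipient) is an assumption about the framework.

```python
# Concept: a type is a set of required features; recognition is pattern
# matching between the type and raw feature data.
def matches(concept_type, raw_data):
    return all(raw_data.get(k) == v for k, v in concept_type.items())

SQUARE = {"edges": 4, "equal_sides": True}

# Indexical: interpretation points at a value of the agent's on-board
# orientation parameters (STAR, glossed here as Space, Time, Agent,
# Recipient -- an assumption for this sketch).
star = {"S": "kitchen", "T": "2019-05-01T10:00",
        "A": "agent-1", "R": "agent-2"}

def interpret_indexical(word):
    return {"here": star["S"], "now": star["T"],
            "I": star["A"], "you": star["R"]}[word]

# Name: an act of baptism inserts a marker into the cognitive
# representation of a referent; later uses of the name retrieve
# whatever representation carries that marker.
referents = [{"id": 1, "edges": 4, "equal_sides": True}]

def baptize(referent, name):
    referent["name_marker"] = name

def lookup(name):
    return next(r for r in referents if r.get("name_marker") == name)

baptize(referents[0], "Fred")
```

For instance, `matches(SQUARE, referents[0])` recognizes the referent as a square via direct matching against its raw features, `interpret_indexical("here")` points at the agent's current location, and `lookup("Fred")` retrieves the baptized referent by its marker alone.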

Toward a Cognitive Semantics

Journal of Pragmatics, 2006

Leonard Talmy is a leading light of cognitive linguistics, known especially for his work in cognitive semantics, an approach to linguistics that aims to describe the linguistic representation of conceptual structure. The two-volume set ''Toward a Cognitive Semantics'' is a collection of 16 of Talmy's papers spanning roughly 30 years of his thinking and writing. The papers have been updated, expanded, revised, and arranged by concept into chapters. This review of the volumes is tailored to a non-specialist linguist or cognitive scientist interested in a general orientation to the contents and presentation. In the introduction common to the two books, Talmy situates cognitive linguistics within the discipline of linguistics and identifies his primary methodology as introspection. The ''overlapping systems model'' of cognitive organization is outlined, in which cognitive systems, such as language, vision, kinesthetics, and reasoning can and do interact. Talmy proposes ''the general finding that each system has certain structural properties that are uniquely its own, certain structural properties that it shares with only one or a few other systems, and certain structural properties that it shares with most or all the other systems. These last properties would constitute the most fundamental properties of conceptual structuring in human cognition.'' The reader is guided to specific chapters in which the linguistic system is compared to other cognitive systems of visual perception, kinesthetic perception, attention, understanding/reasoning, pattern integration, cognitive culture, and affect. Each volume of the set is about 500 pages long, with eight chapters organized into three or four major sections. The first volume, ''Concept structuring systems'' expounds Talmy's vision of the fundamental systems of conceptual structuring in language. 
Part 1 presents a theoretical orientation, Part 2 addresses configurational structure, Part 3 discusses the distribution of attention, and Part 4 describes force dynamics. The second volume, ''Typology and process in concept structuring,'' turns from conceptual systems themselves to the processes that structure concepts and the typologies that emerge from these. Part 1 looks at the processes on a long-term scale, longer than an individual's lifetime, that deal with the representation of event structure. Part 2 considers the short-term scale of cognitive processing with a look at online processing, and Part 3 addresses medium-term processes in the acquisition of culture and the processing of narrative. In volume 1, Chapter 1, ''The relation of grammar to cognition,'' is a greatly revised and expanded version of a 1988 paper, itself an expansion of papers from 1977 and 1978. This paper details the ''semantics of grammar'' in language, toward the larger goal of determining the character of conceptual structure in general. Talmy proposes that the fundamental design feature

Petra Hendriks (2010). Empirical evidence for embodied semantics. In: Maria Aloni, Harald Bastiaanse, Tikitu de Jager & Katrin Schulz (Eds), Logic, Language and Meaning. Lecture Notes in Artificial Intelligence (LNAI) 6042, Springer, Heidelberg, pp. 1-10.

This paper addresses the question of whether, and under which conditions, hearers take into account the perspective of the speaker, and vice versa. Empirical evidence from computational modeling, psycholinguistic experimentation and corpus research suggests that a distinction should be made between speaker meanings and hearer meanings. Literal sentence meanings result from the hearer's failure to calculate the speaker meaning in situations where the hearer's selected meaning and the speaker meaning differ. Similarly, non-recoverable forms result from the speaker's failure to calculate the hearer meaning in situations where the speaker's intended meaning and the hearer meaning differ.