Meaning in Artificial Agents: The Symbol Grounding Problem Revisited
Related papers
Proceedings of the AISB/IACAP World Congress 2012. Symposium: Natural Computing/Unconventional Computing and its Philosophical Significance, 2012
The Turing Test (TT), the Chinese Room Argument (CRA), and the Symbol Grounding Problem (SGP) all bear on the question “can machines think?”. We propose to look at that question through the capability of Artificial Agents (AAs) to generate meaningful information as humans do. We present the TT, the CRA and the SGP as being about the generation of human-like meanings, and we analyse the possibility for AAs to generate such meanings. For this we use the existing Meaning Generator System (MGS), in which a system submitted to a constraint generates a meaning in order to satisfy that constraint. This system approach allows comparing meaning generation in animals, humans and AAs. The comparison shows that in order to design AAs capable of generating human-like meanings, we need to be able to transfer human constraints to AAs. That requirement raises concerns coming from the unknown natures of life and human consciousness, which are at the root of human constraints. The corresponding implications for the TT, the CRA and the SGP are highlighted. The use of the MGS shows that designing AAs capable of thinking and feeling like humans requires an understanding of the natures of life and the human mind that we do not have today. Following an evolutionary approach, we propose as a first entry point an investigation into extending life to AAs in order to design AAs carrying a “stay alive” constraint. Ethical concerns are raised from the relations between human constraints and human values. Continuations are proposed.
The Turing Test (TT), the Chinese Room Argument (CRA), and the Symbol Grounding Problem (SGP) are about the question "can machines think?" We propose to look at these approaches to Artificial Intelligence (AI) by showing that they all address the possibility for Artificial Agents (AAs) to generate meaningful information (meanings) as we humans do. The initial question about thinking machines is then reformulated into "can AAs generate meanings like humans do?" We correspondingly present the TT, the CRA and the SGP as being about generation of human-like meanings. We model and address such possibility by using the Meaning Generator System (MGS) where a system submitted to an internal constraint generates a meaning in order to satisfy the constraint. The system approach of the MGS allows comparing meaning generations in animals, humans and AAs. The comparison shows that in order to have AAs capable of generating human-like meanings, we need the AAs to carry human constraints. And transferring human constraints to AAs raises concerns coming from the unknown natures of life and human mind which are at the root of human constraints. Implications for the TT, the CRA and the SGP are highlighted. It is shown that designing AAs capable of thinking like humans needs an understanding about the natures of life and human mind that we do not have today. Following an evolutionary approach, we propose as a first entry point an investigation about the possibility for extending a "stay alive" constraint into AAs. Ethical concerns are raised from the relations between human constraints and human values. Continuations are proposed. (This paper is an extended version of the proceedings of an AISB/IACAP 2012 presentation (http://www.mrtc.mdh.se/\~gdc/work/AISB-IACAP-2012/NaturalComputingProceedings-2012-06-22.pdf).
Intentionality and Background: Searle and Dreyfus against Classical AI Theory
Filosofia Unisinos
According to the theory of artificial intelligence (AI), the human mind is a formal system made of symbols that operate following a set of instructions, which allow the manipulation of symbols according to their physical form. Against the conception that mental states and processes can be defined only from a syntactic perspective, John Searle used the Chinese Room thought experiment, by which he intended to demonstrate that the human mind is more than a formal structure, having a semantic content as well. The semantic content of the human mind is given by intentionality, a feature that belongs exclusively to biological organisms. Starting from here, Searle shows that the logical structure of intentionality and the conditions for the functioning of intentional states cannot be explained by the computational approach to the human mind. Another perspective that invalidated the AI theory belongs to Hubert Dreyfus, who considers that an adequate understanding of the human mind needs to start from the understanding of the phenomenological structures by means of which we relate to the world. Therefore, cognition and intentionality are explained from the perspective of an embodied being that, due to its bodily skills, is ontologically and dynamically coupled to the world. In this case, it is not the biological dimension of the human body that matters but the phenomenological one, which does not treat intentionality as knowing-that, whose role is to grasp the world's objective features, but as a way of constituting the world of the subject according to his concerns and interests.
Symbol Grounding Problem and Causal Theories of Reference
The Symbol Grounding Problem (SGP), which remains difficult for AI and the philosophy of information, was recently scrutinized by M. Taddeo and L. Floridi (2005, 2007). However, their own solution to the SGP, underwritten by Action-based Semantics, although different from other solutions, does not seem to be satisfactory. Moreover, it does not satisfy the authors’ own principle, which they dub the ‘Zero Semantic Commitment Condition’. In this paper, Taddeo and Floridi’s solution is criticized in particular because of the excessively liberal relationship between symbols and internal states of agents, which is conceived in terms of levels of abstraction. The notion of action also seems seriously defective in their theory. Because the grounded symbols lack the possibility of misrepresenting, they remain useless for the cognitive system itself, and it is unclear why they should be grounded in the first place, as the role of grounded symbols is not specified by the proposed solution. At the same time, it is probably one of the best developed attempts to solve the SGP, and it shows that naturalized semantics can benefit from taking artificial intelligence seriously.
Symbol Grounding - the Emperor's New Theory of Meaning
1993
What is the relationship between cognitive theories of symbol grounding and philosophical theories of meaning? In this paper we argue that, although often considered to be fundamentally distinct, the two are actually very similar. Both set out to explain how non-referring atomic tokens or states of a system can acquire status as semantic primitives within that system. In view of this close relationship, we consider what attempts to solve these problems can gain from each other.
Computation and Intentionality: A Recipe for Epistemic Impasse
Minds and Machines, 2005
Searle’s celebrated Chinese Room thought experiment was devised as an attempted refutation of the view that appropriately programmed digital computers literally are the possessors of genuine mental states. A standard reply to Searle, known as the “robot reply” (which, I argue, reflects the dominant approach to the problem of content in contemporary philosophy of mind), consists of the claim that the problem he raises can be solved by supplementing the computational device with some “appropriate” environmental hookups. I argue not only that Searle himself casts doubt on the adequacy of this idea by applying to it a slightly revised version of his original argument, but that the weakness of this encoding-based approach to the problem of intentionality can also be exposed from a somewhat different angle. Capitalizing on the work of several authors and, in particular, on that of psychologist Mark Bickhard, I argue that the existence of a symbol-world correspondence is not a property that the cognitive system itself can appreciate, from its own perspective, by interacting with the symbol, and therefore not a property that can constitute intrinsic content. The foundational crisis to which Searle alluded is, I conclude, very much alive.
Philosophical Review, 2005
The anthology begins with a lengthy introduction by Preston which discusses the key intellectual developments leading up to John Searle's formulation of his famous Chinese Room Argument (CRA). It then situates the contributions of the volume's other contributors relative to the multi-faceted discussion that ensued. Excellent though it is, this essay would have benefited from a slightly more detailed discussion of Schank and Abelson's (1977) SAM model of story understanding, for this is precisely the kind of program that Searle imagines himself running from inside the Chinese Room. Another minor drawback is Preston's attempt to motivate the book's extended treatment of the CRA by claiming that its target, Strong AI, is a core commitment for a great many cognitive scientists. This is not even close to being the case (Waskan 2003, 648), but the CRA is still important enough to warrant the book's extended treatment of it. After all, particular theories of mind limit the space of possible analyses of the CRA, and so the plausibility of such theories can be, and often is, indirectly assessed in terms of the plausibility of those possible analyses. It also bears noting that even if Searle's argument fails to generalize to the extent that he claims, he at least provides some compelling reasons for thinking that the SAM model does not, by itself, understand language in anything like the way that we do. If he is right about this, then the following questions cry out for answers: Why does SAM not understand? Could we augment it in some way so that it would understand? If not, why not, and what alternative mechanisms could possibly fit the bill? These, of course, are the precise sorts of issues that have been discussed directly in the wake of the CRA, and they are the sorts of issues that should interest anyone who is searching for a mechanistic explanation of human mental states.
From the Chinese room argument to the Church-Turing thesis
2018
Searle’s Chinese Room thought experiment incorporates a number of assumptions about the role and nature of programs within the computational theory of mind. Two assumptions are analysed in this paper. One is concerned with how interactive we should expect programs to be for a complex cognitive system to be interpreted as having understanding about its environment and its own inner processes. The second is about how self-reflective programs might analyse their own processes. In particular, how self-reflection, and a high level of interactivity with the environment and other intelligent agents in the environment, may give rise to understanding in artificial cognitive systems. A further contribution that this paper makes is to demonstrate that the Church-Turing Thesis does not apply to interactive systems, and to self-reflective systems that incorporate interactivity. This is an important finding because it means that claims about interactive and self-reflective systems need to be cons...