Computational Mechanisms for the Grounding of Concepts, Indexicals, and Names
Related papers
The Language Grounding Problem and its Relation to the Internal Structure of Cognitive Agents
J. Univers. Comput. Sci., 2005
An original approach to modelling the internal structure of artificial cognitive agents and the phenomenon of language grounding is presented. The accepted model of the internal cognitive space reflects basic structural properties of human cognition and assumes a partition of cognitive phenomena into conscious and 'non-conscious'. Language is treated as a set of semiotic symbols and is used in semantic communication. Semiotic symbols are related to the internal content of the empirical knowledge bases in which they are grounded. This relation is given by so-called epistemic satisfaction relations, which define the situations in which semiotic symbols are adequate (grounded) representations of embodied experience. The importance of non-conscious embodied knowledge in language grounding and production is acknowledged. An example of applying the proposed approach to the analysis of grounding requirements is given for the case of logic equivalences extended with modal operators of p...
In this work, the problems of knowledge acquisition and information processing are explored in relation to the definitions of concepts and conceptual processing, and their implications for artificial agents. The discussion focuses on views of cognition as a dynamic property in which the world is actively represented in grounded mental states that have meaning only in the context of action. Reasoning is understood as an emergent property arising from action-environment couplings achieved through experience, and concepts as situated and dynamic phenomena enabling behaviours. Re-framing the characteristics of concepts is considered crucial to overcoming settled beliefs and reinterpreting new understandings in artificial systems. The first part presents a review of concepts from the cognitive sciences. Support is found for views on grounded and embodied cognition, describing concepts as dynamic, flexible, context-dependent, and distributedly coded. This is argued to contrast with many technical implementations that treat concepts as categories, and to explain limitations in grounding amodal symbols or in unifying learning, perception, and reasoning. The characteristics of concepts are linked to methods of active inference, self-organization, and deep learning, both to address the challenges posed and to reinterpret emerging techniques. In the second part, an architecture based on deep generative models is presented to illustrate the arguments elaborated. It is evaluated in a navigation task, showing that sufficient representations of situated behaviours are created with no semantics imposed on the data. Moreover, adequate behaviours are achieved through a dynamic integration of perception and action in a single representational domain and process.
Roles and Anchors of Semantic Situations
Although neither theoretical nor computational linguists have provided sufficiently careful insight into the problem of semantic roles, some progress has recently been achieved in robotics (the study of simulated human interaction) and, above all, in multi-agent systems. Taking advantage of this motivation and applying it to the study of languages, I distinguish between various abstract ontological levels. Instead of using concepts such as agentive, objective, experiencer, etc. on the highest (generic) ontological level, I postulate generalised agents which are defined by the following ontological features, among others: (1) features of control (autonomy): goal and feedback; (2) features of emotion (character): desire and intention; (3) epistemic features (reason): belief and cognition; (4) communication features (language faculty): verbal and visual. In accordance with these ontological concepts, natural and artificial entities are obviously suited to fulfil the semantic roles of agents and figures, respectively, in the widest sense of these terms. I further propose to distinguish between three classes of generic ontological roles, namely ACTIVE, MEDIAN and PASSIVE. Here are examples of generic roles: (1) active roles (Initiator, Causer, Enabler, Benefactor, Executor, Stimulant, Source, Instigator, etc.), (2) passive roles (Terminator, Affect, Enabled, Beneficient, Executed, Experiencer, Goal, etc.) and (3) median roles (Mediator, Instrument, Benefit, Motor, Means, etc.). Figures can play quasi-active (Q-active) roles.
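As an illustration only, the three classes of generic ontological roles listed above can be sketched as a small lookup structure. The role names are taken from the text; the data structure and function are hypothetical, not part of the proposal:

```python
from enum import Enum

class RoleClass(Enum):
    ACTIVE = "active"
    MEDIAN = "median"
    PASSIVE = "passive"

# Generic roles grouped by class, as enumerated in the text above.
GENERIC_ROLES = {
    RoleClass.ACTIVE: ["Initiator", "Causer", "Enabler", "Benefactor",
                       "Executor", "Stimulant", "Source", "Instigator"],
    RoleClass.PASSIVE: ["Terminator", "Affect", "Enabled", "Beneficient",
                        "Executed", "Experiencer", "Goal"],
    RoleClass.MEDIAN: ["Mediator", "Instrument", "Benefit", "Motor", "Means"],
}

def role_class(role: str) -> RoleClass:
    """Look up the generic ontological class of a named role."""
    for cls, roles in GENERIC_ROLES.items():
        if role in roles:
            return cls
    raise KeyError(role)
```

For instance, `role_class("Instrument")` yields the MEDIAN class, while `role_class("Initiator")` yields ACTIVE.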
Some Remarks Concerning the Reference of Mental and Language Representations
Studia Philosophiae Christianae, 2020
This paper attempts to answer the question of what exactly is represented by our thoughts or language expressions. The article first presents the main philosophical problems regarding the nature of the subject of reference of such representations as names or descriptions. Does a name refer directly to the real object, or rather to the content of a thought? What about cases in which a name cannot be referred to any real object? What is the relation between the intentional subject connected with every name (or description) and the external object to which only some names can be referred, and which one is prior to the constitution of the representation? The solution proposed in this paper is to understand the subject of mental or language representations as a complex structure of a relational nature. This structure is constituted by cognition and ties internal elements of a given representation, such as the content, with the elements which...
INFORMATION SYSTEM APPROACH TO SEMANTICS
ABSTRACT The approach proposed here presents language as an information system with two connected levels of representation: one concerning basic semantic primitives and operators, and one concerning the grammatical level of the natural language. We assume that there is a common general underlying semantic scheme for all languages and that any grammatical rule can be represented as consisting of semantic primitives / internal representations which are mind-operable. The Language Faculty is a highly non-redundant system. What is the extent of the inter-set mapping that the system permits? Can, for example, statives be characterized by the aspectual (change-of-state) property of dynamic domains? The claim advanced in this paper is that this kind of attribution constitutes a violation of the rule that preserves domain-specific properties. A change-of-state characteristic is reserved for the semantic interpretation of the dynamic functions of verbs. In the first part of this paper, it is shown that in Russian, Instrumental case marking on the secondary predicate implies a choice-of-state operation. It is explained why a change-of-state cannot be attributed to the stative domains of adjectives and nouns, and the conditions are established for the interpretation of a choice-of-state function. It is supposed that the semantic representation of events must be treated separately from the analysis of objects' states and characteristics. The proposed approach to Language as an Information System is developed in the second part of the paper. Secondary predication in Russian is modeled by means of a database connected to the semantic level (SDB), and the links between basic semantic units and grammar are analyzed. The results obtained from the SDB reports confirm that the semantic representation of events must be treated separately from the analysis of objects' states and characteristics. They also show that Instrumental case marking implies a choice-of-state mind operator independently of a function on events. KEYWORDS Language Information System, Semantic Level
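The core idea of expressing a grammatical rule as a semantic-level operator can be sketched in miniature. This is an illustrative toy only, not the paper's SDB: the table names, keys, and function below are assumptions, though the mapping itself (Instrumental case on a secondary predicate implying a choice-of-state operator, and change-of-state being reserved for dynamic domains) follows the text:

```python
# Hypothetical sketch of a grammar-to-semantics link in the spirit of the
# "semantic level database" (SDB). All names here are illustrative.

# Domain-specific properties: dynamic domains (verbs) admit change-of-state;
# stative domains (adjectives, nouns) admit choice-of-state instead.
DOMAIN_OPERATORS = {
    "dynamic": {"change-of-state"},
    "stative": {"choice-of-state"},
}

# A grammatical rule expressed as a semantic operator: Instrumental case
# marking on a secondary predicate implies a choice-of-state operation.
GRAMMAR_TO_SEMANTICS = {
    ("secondary_predicate", "instrumental"): "choice-of-state",
}

def interpret(construction: str, case_marking: str, domain: str) -> str:
    """Map a grammatical configuration to its semantic operator, rejecting
    attributions that violate domain-specific properties."""
    op = GRAMMAR_TO_SEMANTICS[(construction, case_marking)]
    if op not in DOMAIN_OPERATORS[domain]:
        raise ValueError(f"{op} violates {domain}-domain properties")
    return op
```

Under these assumptions, interpreting an Instrumental-marked secondary predicate over a stative domain yields the choice-of-state operator, while attributing it to a dynamic domain is rejected.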
Meaning in Artificial Agents: The Symbol Grounding Problem Revisited
The Chinese Room argument has presented a persistent headache in the search for Artificial Intelligence. Since it first appeared in the literature, various interpretations have been offered in an attempt to understand the problems posed by this thought experiment. Throughout this time, some researchers in the Artificial Intelligence community have seen Symbol Grounding, as proposed by Harnad, as a solution to the Chinese Room argument. The main thesis of this paper is that, although related, these two issues present different problems within the framework presented by Harnad himself. The work presented here attempts to shed some light on the relationship between Searle's notion of intentionality and Harnad's Symbol Grounding Problem.
COGNITIVE SYSTEMS AND SEMANTIC KNOWLEDGE
ABSTRACT A model of the cognitive process of natural language processing has been developed using the formalism of generalised nets. According to this stage-simulating model, the treatment of information inevitably includes phases which require joint operations in two knowledge spaces: the syntax of the language and its semantics. In order to examine and formalize the relations between the syntactic and semantic levels, the model was presented as an information system conceived on the basis of human cognitive resources, semantic primitives, and semantic operators combined with syntactic rules and data. This approach is applied to modelling a specific grammatical rule: secondary (depictive) predication in Russian. Grammatical rules of the language space are expressed as operators in the semantic space. The results of applying the information-system approach to the analysis of language phenomena appear to be consistent with the stages of treatment modelled with the generalised net. The result of this formalization supports the idea that a cognitive system responsible for language utilizes basic semantic primitives and operations. The analysis and the tracking of examples suggest that the mechanisms of NLP are strongly assisted by a top-down information flow based on semantic knowledge. The central claim advanced in this article is that languages are in fact interactions of fundamental syntactic and semantic features, which are basically the same in each and every human language. KEYWORDS Natural Language Processing, Cognitive model, Language Information System, Parallel Treatment
Reference with the sign kind symbol (Hausser, 2017) is modeled in Database Semantics (DBS) as an agent-internal pattern matching between the language and the context level, based on the type-token relation. For example, the concept type dog at the language level matches a concept token dog at the context level. But what about a nonliteral (figurative) use, such as referring to a dog with the concept animal or to an orange crate with the concept table? It is proposed to relate the literal referent and the nonliteral concept by means of an inference which applies before pattern matching in the speak mode and after pattern matching in the hear mode, thus maintaining a standard type-token pattern matching. The paper proceeds systematically from nonliteral uses of nouns to those of verbs and adjectives.
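The proposal above can be sketched roughly: relate a nonliteral concept to its literal referent by a separate inference step, so that the pattern matching itself remains a standard type-token match. This is an illustrative toy, not Hausser's DBS implementation; the inference table, context list, and function name are assumptions, though the dog/animal and orange crate/table pairs come from the paper's examples:

```python
# Toy sketch of DBS-style reference: type-token matching between the
# language level and the agent-internal context level, with a nonliteral
# inference applied in a separate step so that matching stays type-token.

# Nonliteral concept -> literal referent concept (examples from the text).
NONLITERAL_INFERENCE = {
    "animal": "dog",          # referring to a dog with the concept animal
    "table": "orange crate",  # referring to an orange crate with table
}

CONTEXT_TOKENS = ["dog", "orange crate"]  # agent-internal context level

def refer(concept_type: str) -> str:
    """Resolve a language-level concept type to a context-level token.
    The nonliteral inference runs as its own step, so the match itself
    is an ordinary type-token relation."""
    literal = NONLITERAL_INFERENCE.get(concept_type, concept_type)
    if literal in CONTEXT_TOKENS:
        return literal
    raise LookupError(f"no referent for {concept_type!r}")
```

Here `refer("animal")` resolves to the context token `dog`, while a literal use such as `refer("dog")` matches directly without inference.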