Toward a Very Idea of Representation

Representational Systems

Minds and Machines, 2010

The concept of representation has been a key element in the scientific study of mental processes ever since such studies commenced. However, usage of the term has been all too liberal: if one adheres to common use, it remains unclear whether there are any physical systems that cannot be construed in terms of representation. The problem is considered afresh, taking as the starting point the notion of activity spaces: spaces of spatiotemporal events produced by dynamical systems. It is argued that representation can be analyzed in terms of the geometrical and topological properties of such spaces. Several attributes and processes associated with conceptual domains, such as logical structure, generalization and learning, are considered and given analogues in structural facets of activity spaces, as are misrepresentation and states of arousal. Based on this analysis, representational systems are defined, as is a key concept associated with such systems, the notion of representational capacity. According to the proposed theory, rather than being an all-or-none phenomenon, representation is in fact a matter of degree; that is, it can be associated with measurable quantities, as befits a putative naturalistic construct.

Symbolic, Conceptual and Subconceptual Representations

Human and Machine Perception, 1997

Cognitive science aims at understanding how information is represented and processed in different kinds of agents, biological as well as artificial. The research has two overarching goals. One is explanatory: by studying the cognitive activities of humans and other animals, one formulates theories of different kinds of cognition. The theories are tested either by experiments or by computer simulations. The other goal is constructive: by building artifacts like chess-playing programs, robots, animats, etc., one attempts to construct systems that can solve various cognitive tasks. For both goals, a key problem is how the information used by the cognitive system is to be modelled in an appropriate way.

From meaningful information to representations, enaction and cognition

E-CAP08, 2008

The notions of information, representation and enaction entertain historical and complex relations with cognition. Historical, because representational structures belong to the central hypothesis of cognitive science; complex, because cognitive science applies the notion of representation to animals, humans and robots, and also because the enactive approach tends to disregard GOFAI-type representations. Within this wide horizon of relations, we propose a systemic approach that could bring up a common denominator for information and representations in the build-up of cognition, while keeping a link with the enactive approach. Our purpose is to show that systems submitted to constraints can generate meaningful information to maintain their natures, and consequently build up meaningful representations that have some compatibility with the enactive approach. Such a systemic approach to the notion of meaningful information could then provide a link between enaction and meaningful representations. The first part of the presentation recalls that cognition does not exist per se, but is related to the system that builds it. We look at cognition as constituted by dynamic meaningful representations built up by systems that have constraints to satisfy in their environments. Cognition is considered here at the level of the system that builds it and uses it in order to maintain its nature in its environment. Such a systemic approach fits with evolution. Organisms build representations to cope with survival constraints (frogs build representations of moving black dots in order to satisfy food constraints). Humans build representations and cognition to satisfy constraints that are conscious and unconscious. Artificial systems can use representations and cognition to run activities related to constraints implemented by the designers or coming from the environment (a goal to reach being considered as a constraint to satisfy).
In the second part of the presentation we define what meaningful information and a representation are for a system submitted to a constraint in its environment, and we link these notions to the enactive approach. We define meaningful information (a meaning) as information generated by a system submitted to a constraint when it receives external information that has a connection with the constraint. The meaning is precisely that connection. The meaning belongs to the interactions that link the system to its environment. The function of the meaning is to participate in the determination of an action that will be implemented in order to satisfy the constraint (Menant, 2003). The satisfaction of the constraint goes with maintaining the nature of the system in its environment. A Meaning Generator System (MGS) is defined correspondingly. It is a building block for higher-level systems. We present some characteristics of the MGS (groundings of a meaning, domain of efficiency and transfer of meanings, networking of meanings, evolutionary usage). The MGS approach is close to a simplified version of the Peircean triadic theory of signs (Menant, 2003, 2005). We define the representation of an item for a system as the dynamic set of meaningful information corresponding to the item for the system in its environment (an elementary representation being made of a single item of meaningful information). These representations link the system to its environment by their meaningful components, which are related to the nature of the system. These representations are different from the GOFAI ones. The possibility of linking these notions of meaning and representation with the enactive approach comes from the structure of the MGS: the need for an action is the cause of the meaning generation by and for the system. The action on the environment serves the system in maintaining its nature (its identity).
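The MGS loop just described (constrained system receives incident information, generates a meaning as the connection to the constraint, and the meaning determines an action) can be illustrated with a minimal toy sketch. This is only an informal reading of the definition above, not part of Menant's formulation; the class and attribute names, and the lookup-table rule for what "connects" to the constraint, are illustrative assumptions.

```python
# Toy sketch of a Meaning Generator System (MGS): a system submitted to a
# constraint receives incident information; if that information has a
# connection with the constraint, a meaning is generated, and the meaning
# participates in determining an action that satisfies the constraint.
# All names and the table-lookup rule are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Meaning:
    information: str   # the received external information
    constraint: str    # the constraint it connects to
    action: str        # the action the meaning calls for

class MeaningGeneratorSystem:
    def __init__(self, constraint: str, triggers: dict):
        # triggers maps incident information to the action that helps
        # satisfy the constraint (the "connection" of the definition).
        self.constraint = constraint
        self.triggers = triggers

    def receive(self, information: str):
        """Generate a meaning only if the information connects to the constraint."""
        if information in self.triggers:
            return Meaning(information, self.constraint, self.triggers[information])
        return None  # no connection to the constraint: no meaning is generated

# The frog example from the text: the food constraint makes moving black
# dots meaningful, while an unrelated stimulus generates no meaning.
frog = MeaningGeneratorSystem("satisfy food constraint",
                              {"moving black dot": "snap tongue"})

meaning = frog.receive("moving black dot")
print(meaning.action)              # snap tongue
print(frog.receive("still leaf"))  # None
```

The point the sketch makes concrete is that the meaning is relational: it exists only by and for the constrained system, as the connection between incident information and the constraint, rather than as a free-standing symbol in the GOFAI sense.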
The MGS links together the generation of meaningful representations, the nature of the system, and the interactions with the environment. This can be considered as close to enacting a world by meaning generation (Di Paolo et al., 2007), and to the enactive concept of sense-making (De Jaegher and Di Paolo, 2007). We propose that basing the definition of a representation on the notion of meaningful information generated by a system submitted to a constraint can open a way to making the notion of representation compatible with the enactive approach. In the third part of the presentation, we consider some cases of meaningful information and representations for organisms and for robots. Regarding organisms, the MGS can be used in an evolutionary context by looking at the evolution of the systems and of the constraints. The purpose is to model the generation of meanings and of representations in order to provide a tool usable at different levels of evolution, as evolution has a place in cognitive science (Proust, 2007). Constraints for basic life are survival constraints (individual and species). Group-life constraints are also to be considered. Reaching the level of humans in evolution brings in new constraints that cannot be clearly identified, as they have to take into account human consciousness, which is today a mystery (the "hard problem"). From an evolutionary standpoint, human constraints come in addition to the ones existing for non-human organisms. We can make some hypotheses on the nature of human constraints (Maslow-pyramid-based constraints, anxiety limitation…). For robots, the MGS is initially based on the design of the robot. The meaning generated within a robot is initially derived from the constraints implemented by the designer and from the environment. But some non-calculable or unpredictable evolutions of the robot can introduce meanings that appear proper to the robot. This last point can be linked to the notion of autonomy in robots.
In such examples, the dynamic management of meanings through the MGSs in their environments keeps the link with the enactive approach. We finish the presentation by summarising the points addressed and by proposing several continuations.

* De Jaegher, H. and Di Paolo, E. 2007. "Participatory Sense-Making: An Enactive Approach to Social Cognition". To appear in Phenomenology and the Cognitive Sciences, 2007. http://www.informatics.sussex.ac.uk/users/ezequiel/DeJaegher&DiPaolo2007.pdf
* Di Paolo, E., Rohde, M. and De Jaegher, H. 2007. "Horizons for the Enactive Mind: Values, Social Interaction, and Play". To appear in Enaction: Towards a New Paradigm for Cognitive Science, J. Stewart, O. Gapenne, and E. A. Di Paolo (Eds), Cambridge, MA: MIT Press, forthcoming. http://www.informatics.sussex.ac.uk/users/ezequiel/DiPaoloetal_csrp587.pdf
* Menant, C. 2003. "Information and Meaning". Entropy 2003, 5, 193-204. http://www.mdpi.org/entropy/papers/e5020193.pdf
* Menant, C. 2005. "Information and Meaning in Life, Humans and Robots". FIS 2005, Paris. http://www.mdpi.org/fis2005/F.45.paper.pdf
* Proust, J. 2007. "Why evolution has to matter to cognitive psychology and to philosophy of mind". Biological Theory, 2007, 2. http://jeannicod.ccsd.cnrs.fr/docs/00/13/93/30/PDF/Biological_Theory.Proust.pdf

Representation and mental representation

Philosophical Explorations, 2018

This paper engages critically with anti-representationalist arguments pressed by prominent enactivists and their allies. The arguments in question are meant to show that the “as-such” and “job-description” problems constitute insurmountable challenges to causal-informational theories of mental content. In response to these challenges, a positive account of what makes a physical or computational structure a mental representation is proposed; the positive account is inspired partly by Dretske’s views about content and partly by the role of mental representations in contemporary cognitive scientific modeling.

Mental Representation - a Functionalist View

Pacific Philosophical Quarterly, 1982

Minds represent the world. To have a mind is to possess a capacity for representing states of affairs which actually obtain and others which might be brought about. Minds are internally active, and much of their activity consists in processes or operations on internal representations. Thus any adequate theory of mind must provide an account of what the mind can and does represent and of the representations and processes which underlie its representational powers. Since the seventeenth century there has been little good reason to deny any of these claims, which lie at the core of the representational theory of mind. With the decline of behaviorism, philosophers of mind and cognitive psychologists alike have returned their attention to the representational nature of mind. Despite this emerging consensus, the central notion of mental representation remains far from clear, and a lively interdisciplinary discussion has arisen concerning basic conceptual issues. The present essay is offered as a philosophical contribution to that debate. In keeping with the generally functionalist view of mind which has become the new orthodoxy among analytic philosophers, I will sketch how one plausible version of functionalism might be used to untangle a few puzzles about mental representation. I will approach the issues from an admittedly philosophical perspective, but if I can establish some philosophical results, I hope that they may also be of interest to those working in other disciplines concerned with the nature of mind. The term "mental representation" is ambiguous in a theoretically significant way. On one hand, it can be used to refer to the general capacity of minds for representing actual and potential states of affairs. When one has a belief or acquires information through a sensory organ, one represents the world as being in some particular way.
In this general sense any organism with a capacity to adapt its behavior to the conditions of its environment would be engaged in mental representation. In other contexts mental representations are clearly to be understood as formal or syntactic structures which function as internal symbols (as "formulae in the language of thought"). The distinction can be sharpened by noting that there are two quite distinct classes of psychological items which can be described as having content: con

A New Theory of the Representational Base of Consciousness

Proceedings of the Annual Meeting of the Cognitive Science Society, 2004

Though we take mainly a philosophical approach, we hope that the results of our work will be useful to researchers on consciousness who take other approaches. Everyone agrees, no matter what their point of view on consciousness, that consciousness has a representational base. However, there have been relatively few well-worked-out attempts to say what this base might be like. The two best developed are perhaps the higher-order thought (HOT) and the transparency approaches. Both are lacking. Starting from the notion of a self-presenting representation, we develop an alternative view. On our view, a completely normal representation is the representational base not just for consciousness of its object (if it has one), but also for consciousness of itself and of oneself as its subject. The unified picture of consciousness that results should assist research on consciousness.