Distributed Representation (connectionism) Research Papers

The purpose of this research is to identify differences in how the literature describes connectionism as a model for Second Language Acquisition. Connectionism is a psychological, cognitive, and computational theory that explains, by means of computational simulation, how second language learning is processed in the brain. The immediate outcome of understanding connectionism is to acknowledge its powerful implications for both teachers and learners of foreign languages. This paper illustrates some of those implications in order to draw deeper links between theoretical brain research and its immediate applications in real contexts of foreign language acquisition.

During the last decades, many cognitive architectures (CAs) have been realized adopting different assumptions about the organization and representation of their knowledge level. Some of them (e.g. SOAR [35]) adopt a classical symbolic approach, some (e.g. LEABRA [48]) are based on a purely connectionist model, while others (e.g. CLARION [59]) adopt a hybrid approach combining connectionist and symbolic representational levels. Additionally, some attempts (e.g. biSOAR) to extend the representational capacities of CAs by integrating diagrammatic representations and reasoning are also available [34]. In this paper we propose a reflection on the role that Conceptual Spaces, a framework developed by Peter Gärdenfors [24] more than fifteen years ago, can play in the current development of the Knowledge Level in Cognitive Systems and Architectures. In particular, we claim that Conceptual Spaces offer a lingua franca that allows us to unify and generalize many aspects of the symbolic, sub-symbolic and diagrammatic approaches (by overcoming some of their typical problems) and to integrate them on a common ground. In doing so we extend and detail some of the arguments explored by Gärdenfors [23] in defending the need for a conceptual, intermediate representation level between the symbolic and the sub-symbolic ones. In particular, we focus on the advantages offered by Conceptual Spaces (with respect to symbolic and sub-symbolic approaches) in dealing with the problem of compositionality of representations based on typicality traits. Additionally, we argue that Conceptual Spaces could offer a unifying framework for interpreting many kinds of diagrammatic and analogical representations. As a consequence, their adoption could also favor the integration of diagrammatic representation and reasoning in CAs.
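Since the argument turns on how conceptual spaces represent concepts geometrically, a minimal sketch may help readers unfamiliar with Gärdenfors' framework. The quality dimensions and prototype coordinates below are invented purely for illustration; only the nearest-prototype (Voronoi) categorization rule comes from the framework itself.

```python
import numpy as np

# Toy conceptual space for color-like concepts: each quality dimension
# (hue, saturation, brightness) spans [0, 1]. Prototype values are
# illustrative assumptions, not data from the paper.
prototypes = {
    "red":    np.array([0.00, 0.9, 0.6]),
    "yellow": np.array([0.17, 0.9, 0.8]),
    "blue":   np.array([0.60, 0.8, 0.5]),
}

def categorize(point):
    """Assign a point to the concept with the nearest prototype.
    Nearest-prototype assignment induces a Voronoi tessellation of the
    space, so every concept region is convex -- Gardenfors' criterion
    for a 'natural' concept."""
    dists = {name: np.linalg.norm(point - p) for name, p in prototypes.items()}
    return min(dists, key=dists.get)

print(categorize(np.array([0.05, 0.85, 0.55])))  # near the 'red' prototype
```

Typicality falls out of the same geometry: distance to the prototype can be read as graded membership, which is what the compositionality argument above builds on.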

In neuroscience research, a firing pattern expression, called a firing snapshot, defined by the firing counts of multiple neurons within discrete time bins, is frequently used. However, the firing pattern expression over multiple neurons requires synchronization, commonly known as a "central clock" in computation circuits, to externally control or observe multi-neuron firing. Since it is not appropriate to hypothesize a central clock in the brain, we have to reconsider the synchronization assumption. We found the historical origins of central clock synchronization in the works of McCulloch, Pitts and von Neumann, and propose an alternative perspective to study neural processing without the central clock. In the "First draft of a report on the EDVAC", known as the first proposal of a stored-program computer, von Neumann incorporated McCulloch and Pitts' arbitrary assumption of constant synaptic delay into the logical circuit modeling. He introduced a constant pulse delay, stabilized by a central clock, to synchronize multiple pulse lines. This constant delay carried over to automaton theory by von Neumann and then to theoretical neuroscience as an integral part of distributed representation and firing pattern analysis. Instead, we propose a rethinking of neuronal processing by focusing on a variable synaptic delay. We review neuron-type-dependent differences and point out that the maximum inter-stimulus interval which can affect the last stimulus response is longer than the minimum inter-spike interval, indicating the possibility of asynchronous input summation without a firing-rate-level clock. This fundamental baseline contributes to the investigation of time-consuming processes: e.g., the compression of appositional synaptic inputs with duration into a single firing.
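For readers unfamiliar with the representation under discussion, the firing-snapshot matrix can be sketched as follows. The spike times and bin width are invented for illustration; note that the shared `bin_width` applied to every neuron is precisely the central-clock assumption the paper questions.

```python
import numpy as np

def firing_snapshots(spike_times, t_end, bin_width):
    """Build the conventional 'firing snapshot' representation:
    one row per time bin, one column per neuron, entries are spike
    counts in that bin. Binning all neurons against one shared grid
    presupposes a common clock."""
    n_bins = int(np.ceil(t_end / bin_width))
    counts = np.zeros((n_bins, len(spike_times)), dtype=int)
    for neuron, times in enumerate(spike_times):
        for t in times:
            counts[int(t // bin_width), neuron] += 1
    return counts

# Two illustrative neurons, spike times in ms, 5 ms bins over 10 ms
snap = firing_snapshots([[1.0, 3.5, 9.9], [4.2]], t_end=10.0, bin_width=5.0)
print(snap)  # [[2 1]
             #  [1 0]]
```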

The authors, on the basis of brief arguments, have dismissed tensor networks as a viable response to Jackendoff's challenges. However, there are reasons to believe that connectionist approaches descended from tensor networks are actually very well suited to answering Jackendoff's challenges. I rebut their arguments for dismissing tensor networks and briefly compare the approaches.

Jackendoff (2002) posed four challenges that linguistic combinatoriality and rules of language present to theories of brain function. The essence of these problems is the question of how to neurally instantiate the rapid construction and transformation of the compositional structures that are typically taken to be the domain of symbolic processing. He contended that typical connectionist approaches fail to meet these challenges and that the dialogue between linguistic theory and cognitive neuroscience will be relatively unproductive until the importance of these problems is widely recognised and the challenges answered by some technical innovation in connectionist modelling. This paper claims that a little-known family of connectionist models (Vector Symbolic Architectures) is able to meet Jackendoff's challenges.
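The abstract does not spell out how a Vector Symbolic Architecture builds compositional structure, so here is a minimal sketch of one well-known member of the family, Holographic Reduced Representations, where circular convolution binds roles to fillers inside a single fixed-width vector. The dimensionality, role names and vocabulary are toy choices, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024  # high dimensionality keeps random vectors nearly orthogonal

def rand_vec():
    # Entries scaled so each vector has roughly unit norm
    return rng.normal(0.0, 1.0 / np.sqrt(D), D)

def bind(a, b):
    # Circular convolution: the role-filler binding operator of
    # Holographic Reduced Representations (Plate's HRR)
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):
    # Convolve with the approximate inverse (involution) of a
    a_inv = np.roll(a[::-1], 1)
    return bind(c, a_inv)

# Encode the role-filler bindings of "dog chases cat" in ONE vector
agent, patient, dog, cat = rand_vec(), rand_vec(), rand_vec(), rand_vec()
sentence = bind(agent, dog) + bind(patient, cat)

# Query: who is the agent? Unbinding returns a noisy vector that is
# cleaned up by comparison against the known vocabulary.
noisy = unbind(sentence, agent)
vocab = {"dog": dog, "cat": cat}
best = max(vocab, key=lambda w: np.dot(noisy, vocab[w]))
print(best)  # recovers "dog"
```

The point relevant to Jackendoff is that construction and transformation are single algebraic operations on fixed-width vectors, so structure can be built rapidly without allocating new units.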

This research presents data showing how effectively inter-sentential contexts determine the (lexical) meanings of lexical items, based on experiments with the Japanese verb "osou" (meaning 'attack' or 'hit'). In retrospect, this work can be viewed as a precursor of the idea of skip-grams.
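To make the connection to skip-grams concrete, here is a minimal sketch of skip-gram pair extraction: each word is paired with context words up to a window away, skipping over intervening material. The tokens, including the romanized "osou", are illustrative only.

```python
def skipgram_pairs(tokens, window=2):
    """Yield (target, context) pairs: each word predicts neighbours
    within `window` positions on either side -- the sense in which
    surrounding context words can cue the meaning of an ambiguous
    verb like 'osou'."""
    pairs = []
    for i, target in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((target, tokens[j]))
    return pairs

# Illustrative (romanized) context for "osou"
print(skipgram_pairs(["the", "typhoon", "osou", "village"], window=1))
```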

If robots are to cooperate with humans in an increasingly human-like manner, then significant progress must be made in their abilities to observe and learn to perform novel goal-directed actions in a flexible and adaptive manner. The current research addresses this challenge. In CHRIS.I [1], we developed a platform-independent perceptual system that learns from observation to recognize human actions in a way that abstracts from the specifics of the robotic platform, learning actions including "put X on Y" and "take X". In the current research, we extend this system from action perception to execution, consistent with current developmental research in human understanding of goal-directed action and teleological reasoning. We demonstrate the platform independence with experiments on three different robots. In Experiments 1 and 2 we complete our previous study of perception of the actions "put" and "take", demonstrating how the system learns to execute these same actions, along with new related actions "cover" and "uncover" based on the composition of the action primitives "grasp X" and "release X at Y". Significantly, these compositional action execution specifications learned on one iCub robot are then executed on another, based on the abstraction layer of motor primitives. Experiment 3 further validates the platform independence of the system, as a new action that is learned on the iCub in Lyon is then executed on the Jido robot in Toulouse. In Experiment 4 we extend the definition of action perception to include the notion of agency, again inspired by developmental studies of agency attribution, exploiting the Kinect motion capture system for tracking human motion. Finally, in Experiment 5 we demonstrate how the combined representation of action in terms of perception and execution provides the basis for imitation. This provides the basis for an open-ended cooperation capability in which new actions can be learned and integrated into shared plans for cooperation. Part of the novelty of this research is the robots' use of spoken language understanding and visual perception to generate action representations in a platform-independent manner based on physical state changes. This provides a flexible capability for goal-directed action imitation.

The ability to perceive similarities and group entities into meaningful hierarchies is central to the processes of learning and generalisation. In artificial intelligence and data mining, the similarity of symbolic data has been estimated by techniques ranging from feature matching and correlation analysis to Latent Semantic Analysis (LSA). One set of techniques that has received very little attention is that based upon cognitive models of similarity and concept formation. In this paper, we propose an extension to a neural-network-based approach called Forming Global Representations with Extended backPropagation (FGREP), and show that it can be used to form meaningful conceptual clusters from information about an entity's perceivable attributes or its usage and interactions. By examining these clusters, and their classification errors, we also show that the groupings identified by FGREP are more intuitive, and generalise better, than those formed using LSA.
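The clustering step described above can be illustrated as follows. The vectors are invented stand-ins for the distributed representations FGREP would learn through extended backpropagation; they are not the paper's data, and the nearest-neighbour rule is a deliberately simple proxy for the hierarchical grouping.

```python
import numpy as np

# Hypothetical learned representations: nearby vectors play similar
# roles in usage, which is what FGREP's training tends to produce.
reps = {
    "hammer": np.array([0.9, 0.1, 0.0]),
    "chisel": np.array([0.8, 0.2, 0.1]),
    "dog":    np.array([0.1, 0.9, 0.3]),
    "cat":    np.array([0.0, 0.8, 0.4]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest(word):
    """Conceptual grouping by representational similarity: the
    nearest neighbour of a word in the learned vector space."""
    others = {w: v for w, v in reps.items() if w != word}
    return max(others, key=lambda w: cosine(reps[word], others[w]))

print(nearest("hammer"))  # -> chisel (tools group together)
print(nearest("cat"))     # -> dog    (animals group together)
```

Repeatedly merging the most similar pair in this way yields exactly the kind of hierarchy the paper evaluates against LSA-derived groupings.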

Robots should be capable of interacting in a cooperative and adaptive manner with their human counterparts in open-ended tasks that can change in real-time. An important aspect of the robot behavior will be the ability to acquire new knowledge of the cooperative tasks by observing and interacting with humans. The current research addresses this challenge. We present results from a cooperative human–robot interaction system that has been specifically developed for portability between different humanoid platforms, by abstraction layers at the perceptual and motor interfaces. In the perceptual domain, the resulting system is demonstrated to learn to recognize objects and to recognize actions as sequences of perceptual primitives, and to transfer this learning, and recognition, between different robotic platforms. For execution, composite actions and plans are shown to be learnt on one robot and executed successfully on a different one. Most importantly, the system provides the ability to link actions into shared plans that form the basis of human–robot cooperation, applying principles from human cognitive development to the domain of robot cognitive systems.

One of the long-term objectives of robotics and artificial cognitive systems is that robots will increasingly be capable of interacting in a cooperative and adaptive manner with their human counterparts in open-ended tasks that can change in real-time. In such situations, an important aspect of the robot behavior will be the ability to acquire new knowledge of the cooperative tasks by observing humans. At least two significant challenges can be identified in this context. The first challenge concerns the development of methods to allow the characterization of human actions such that robotic systems can observe and learn new actions, and more complex behaviors made up of those actions. The second challenge is associated with the immense heterogeneity and diversity of robots and their perceptual and motor systems. The associated question is whether the identified methods for action perception can be generalized across the different perceptual systems inherent to distinct robot platforms. The current research addresses these two challenges. We present results from a cooperative human–robot interaction system that has been specifically developed for portability between different humanoid platforms. Within this architecture, the physical details of the perceptual system (e.g. video camera vs. IR video with reflective markers) are encapsulated at the lowest level. Actions are then automatically characterized in terms of perceptual primitives related to motion, contact and visibility.
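The encapsulation idea above can be sketched in a few lines: whatever the sensor, the platform-specific layer emits a sequence of perceptual primitives, and action recognition matches only on that sequence. The primitive names and the exact-match rule below are hypothetical simplifications for illustration, not the project's actual encoding.

```python
# Each known action is characterized as a platform-independent
# sequence of perceptual primitives (motion, contact, visibility).
# These particular sequences are illustrative assumptions.
ACTIONS = {
    "put X on Y": ["motion(X)", "contact(X, Y)", "motionless(X)"],
    "take X":     ["contact(hand, X)", "motion(X)", "motionless(X)"],
    "cover X":    ["motion(cover)", "contact(cover, X)", "invisible(X)"],
}

def recognize(observed):
    """Return the known action whose primitive sequence matches the
    observed one. The robot-specific vision system only has to emit
    primitives, which is what makes recognition portable across
    perceptual systems."""
    for name, sequence in ACTIONS.items():
        if sequence == observed:
            return name
    return None

print(recognize(["motion(cover)", "contact(cover, X)", "invisible(X)"]))
```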
