Eric Postma | Tilburg University
Papers by Eric Postma
The full version of this paper appeared in: Proceedings of the CogSci 2004 Conference. A new recognition memory model, the natural input memory (NIM) model, is proposed which differs from existing models of human memory in that it operates on natural input. A biologically-informed pre-processing method, which is commonly used in artificial intelligence (8), takes local samples from a natural image and translates these into a feature-vector representation. Existing memory models (e.g., the REM model, (9); the model of differentiation, (5)) lack such a pre-processing method and often make simplifying assumptions about item representations. These models represent an item by a vector of abstract features. The feature values are usually drawn from a particular mathematical distribution, which describes the distributional statistics of real-world perceptual features. Since these models artificially generate representations, they do not address the informational contribution of the ...
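The pre-processing stage described in this abstract (taking local samples from a natural image and translating them into feature vectors) can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation; the patch size, the number of samples, and the use of PCA for the feature mapping are assumptions made here for illustration.

```python
import numpy as np

def sample_patches(image, n_patches=200, patch=8, rng=None):
    """Take local square samples from a (grayscale) natural image."""
    rng = rng or np.random.default_rng(0)
    h, w = image.shape
    ys = rng.integers(0, h - patch, n_patches)
    xs = rng.integers(0, w - patch, n_patches)
    return np.stack([image[y:y + patch, x:x + patch].ravel()
                     for y, x in zip(ys, xs)])

def to_feature_vectors(patches, n_components=10):
    """Translate raw patches into low-dimensional feature vectors via PCA."""
    centred = patches - patches.mean(axis=0)
    # principal axes from the SVD of the centred patch matrix
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:n_components].T

rng = np.random.default_rng(42)
image = rng.random((64, 64))          # stand-in for a natural image
patches = sample_patches(image, rng=rng)
features = to_feature_vectors(patches)
print(patches.shape, features.shape)  # (200, 64) (200, 10)
```

Each image thus becomes a set of low-dimensional feature vectors rather than a single abstract feature vector, which is the representational difference the abstract draws with models such as REM.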
Future Directions for Intelligent Systems and Information Sciences, 2000
... of the partitioning of the painting into nine regions for (from left to right) landscape, square, and ... 144 7 Conclusions and Outlook Our knowledge-guided quest for good features yielded feature-space representations permitting ... Journal of Experimental Psychology, 103, 597-600. ...
International Journal of Neural Systems, 1996
The rapidity of time-constrained visual identification suggests a feedforward process in which neural activity is propagated through a number of cortical stages. The process is modeled by using a synfire chain, leading to a neural-network model which involves propagating activation waves through a sequence of layers. Theory and analysis of the model's behavior, especially in the presence of noise, predict enhancement of wave propagation for a range of noise intensities. Simulation studies confirm this prediction. The results are discussed in terms of (spatio-temporal) stochastic resonance. It is concluded that feedforward processes such as time-constrained visual identification may benefit from moderate levels of noise.
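The noise-enhancement effect this abstract refers to can be illustrated with a classic single-threshold stochastic-resonance toy model. This is a drastic simplification of the synfire-chain model, and the signal amplitude, threshold, and noise levels below are arbitrary choices for illustration: a subthreshold signal is transmitted best at an intermediate noise intensity.

```python
import numpy as np

def signal_output_correlation(sigma, n=5000, amp=0.8, theta=1.0, seed=0):
    """Correlation between a subthreshold input signal and the spike
    output of a simple threshold unit, at noise level sigma."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    s = amp * np.sin(2 * np.pi * t / 100)   # subthreshold signal (amp < theta)
    spikes = (s + rng.normal(0, sigma, n) > theta).astype(float)
    if spikes.std() == 0:                    # no spikes at all: no transmission
        return 0.0
    return float(np.corrcoef(s, spikes)[0, 1])

# Too little noise: the signal never crosses threshold. Too much noise:
# spikes are nearly random. A moderate level transmits the signal best.
for sigma in (0.05, 0.4, 3.0):
    print(sigma, round(signal_output_correlation(sigma), 3))
```

The intermediate noise level yields the highest signal-output correlation, which is the signature of stochastic resonance.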
Lecture Notes in Computer Science, 2003
The efficiency of alpha-beta search algorithms heavily depends on the order in which the moves are examined. This paper investigates a new move-ordering heuristic in chess, namely the Neural MoveMap (NMM) heuristic. The heuristic uses a neural network to estimate the likelihood of a move being the best in a certain position. The moves considered more likely to be the best are examined first. We develop an enhanced approach to apply the NMM heuristic during the search, by using a weighted combination of the neural-network scores and the history-heuristic scores. Moreover, we analyse the influence of existing game databases and opening theory on the design of the training patterns. The NMM heuristic is tested for middle-game chess positions by the program CRAFTY. The experimental results indicate that the NMM heuristic outperforms the existing move ordering, especially when a weighted-combination approach is chosen.
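The weighted-combination idea (ordering moves by a mix of neural-network scores and history-heuristic scores) can be sketched as follows; the move names, the scoring values, and the weight are invented for illustration and do not come from the paper.

```python
from dataclasses import dataclass

@dataclass
class Move:
    name: str
    nn_score: float       # network's estimate that this move is best (0..1)
    history_score: float  # normalised history-heuristic score (0..1)

def order_moves(moves, weight=0.7):
    """Examine first the moves with the highest weighted combination of
    neural-network and history-heuristic scores."""
    key = lambda m: weight * m.nn_score + (1 - weight) * m.history_score
    return sorted(moves, key=key, reverse=True)

moves = [
    Move("Nf3", nn_score=0.2, history_score=0.9),
    Move("e4",  nn_score=0.8, history_score=0.3),
    Move("a3",  nn_score=0.1, history_score=0.1),
]
print([m.name for m in order_moves(moves)])  # best-first examination order
```

A better examination order lets alpha-beta prune more of the game tree, which is why move ordering has such a large effect on search efficiency.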
Lecture Notes in Computer Science, 2003
The paper presents a system that learns to predict local strong expert moves in the game of Go at a level comparable to that of strong human kyu players. This performance is achieved by four techniques. First, our training algorithm is based on a relative-target approach that avoids needless weight adaptations characteristic of most neural-network classifiers. Second, we reduce dimensionality
Studies in Computational Intelligence, 2010
Relational reinforcement learning is a promising new direction within reinforcement learning research. It upgrades reinforcement learning techniques by using relational representations for states, actions and learned value functions or policies to allow more natural representations and abstractions of complex tasks. Multi-agent systems present a good example of such a complex task and are often characterized by their relational structure. In this paper, we show how relational reinforcement learning could be a useful tool for learning in multi-agent systems and study this approach in more detail on one aspect of multi-agent systems, i.e., on learning a communication policy for cooperative systems (e.g. resource distribution). We perform a number of exploratory experiments that highlight the conditions in which relational representations are beneficial.
2005 IEEE International Conference on Multimedia and Expo
Pattern Recognition Letters, 2007
Traditionally, the analysis of visual arts is performed by human art experts only. The availability of advanced artificial intelligence techniques makes it possible to support art experts in their judgement of visual art. In this paper, image-analysis techniques are applied to measure the complementary colours in the oeuvre of Vincent van Gogh. It is commonly acknowledged that, especially in his French period, Van Gogh started employing complementary colours to emphasize contours of objects or parts of scenes. We propose a method to measure complementary-colour usage in a painting by combining an opponent-colour space representation with Gabor filtering. Using this method, the analysis of a dataset of 617 digitised oil-on-canvas paintings confirms art experts' knowledge about the global pattern of complementary-colour usage in Van Gogh's paintings. In addition, it provides an objective and quantifiable way to support the analysis of colours in individual paintings. Our results show that art experts can be supported by artificial-intelligence techniques.
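The combination of an opponent-colour representation with Gabor filtering can be sketched as below. The opponent transform and the Gabor kernel follow standard textbook definitions; the paper's actual parameters (filter scale, orientations, and how responses are combined into a complementary-colour measure) are not reproduced here.

```python
import numpy as np

def rgb_to_opponent(rgb):
    """Map an RGB image (h, w, 3) onto the standard opponent channels:
    red-green, yellow-blue, and luminance."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.stack([(r - g) / np.sqrt(2),
                     (r + g - 2 * b) / np.sqrt(6),
                     (r + g + b) / np.sqrt(3)], axis=-1)

def gabor_kernel(size=15, sigma=3.0, wavelength=6.0, theta=0.0):
    """Odd-phase (sine) Gabor kernel: an edge detector formed by a
    sinusoid under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.sin(2 * np.pi * xr / wavelength)

def filter_at(channel, kernel, row, col):
    """Filter response of the kernel centred at one pixel."""
    half = kernel.shape[0] // 2
    patch = channel[row - half:row + half + 1, col - half:col + half + 1]
    return float((patch * kernel).sum())

# A red/green boundary yields a strong response in the red-green channel.
img = np.zeros((32, 32, 3))
img[:, :16, 0] = 1.0                  # left half pure red
img[:, 16:, 1] = 1.0                  # right half pure green
rg = rgb_to_opponent(img)[..., 0]     # red-green opponent channel
k = gabor_kernel()                    # oriented for vertical edges
print(abs(filter_at(rg, k, 16, 16)), abs(filter_at(rg, k, 16, 8)))
```

The response is large where complementary colours (red next to green) meet and near zero in uniform regions, which is the kind of signal a complementary-colour measure can aggregate over a painting.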
Pattern Recognition Letters, 2007
Art experts have great difficulties in validating the work of painters and writers of their own time. For one reason or another, they usually underestimate their contemporary subjects of assessment. If the experts were clairvoyant, Rembrandt and Van Gogh would have been millionaires. However, only some two, three or four centuries later did proper recognition become their due. They were unanimously crowned as the best painters; they were recalled, remembered, and honoured in retrospect, and their birthdays were celebrated. Why did this recognition come so late? Was it due to the personal-judgement approach of the art experts? It is hard to tell and even harder to analyse. From the world of auditing we know that personal judgements heavily depend on experience and expertise. In the world of art experts, experience and expertise are a precondition for operating adequately in a world full of tangible and almost untouchable assets. However, the approach taken by the experts (and thus the precondition) may be ineffective and may lead to different opinions, to personal bias, and even to misleading statements. Today we have computers that may help us to disentangle the intricacies created by the artist and that may unveil the secrets of the greatest artists of the world. The prevailing question now is: do we wish to know why an art creation is attractive, irresistible or even almost perfect? For scientists, the answer is a definitive 'yes'. So, pattern recognition techniques have been developed and applied to a wide variety of domains. However, so far their applications to cultural heritage have been scarce. The main barrier was that art-historical and archaeological domains made only modest use of computers. Over the last decade, however, computers have rapidly become one of the standard tools for cultural-heritage researchers.
Historians employ text-retrieval and text-mining tools, art experts examine and analyse paintings using dedicated image-processing software, and archaeologists are supported by pattern recognition software to create and refine their taxonomies of pottery and other findings. Therefore, this special issue of Pattern Recognition Letters is dedicated to Pattern Recognition in Cultural Heritage. Originally we had in mind to combine this topic with pattern recognition in Medical Applications. The success of the Cultural Heritage contributions led us to the decision to put the emphasis on Cultural Heritage, and to complete the issue by closing with a contribution which highlights pattern discovery in bioinformatics. It shows a way towards an integral approach of many pattern recognition techniques in different areas of research. So, the special issue contains eight papers on pattern recognition in cultural heritage and one in medical applications. Three papers focus on pattern recognition for archaeology. Bannai, Fisher, and Agathos investigate how virtual models of historical buildings can be created realistically. Kampel and Sablatnig describe their development of a rule-based system for the classification of ceramics. The contribution by Van Tonder shows how the original visual structure of ancient Japanese gardens can be reinstated using historical illustrations. Five contributions to this special issue address the analysis of the "handwriting" of authors. The handwriting may be taken literally or metaphorically. In the latter case, "handwriting" refers to the personal style of an author as reflected in a painting or other work of art. Analysing actual handwriting, Schomaker, Franke, and Bulacu employ codebooks of handwriting fragments to identify the author of historical documents. Kammerer, Lettner, Zolda, and Sablatnig employ pattern classification to identify the drawing tools used for medieval panel paintings.
Their findings lead to the identification of the personal style of the painter. Another type of handwriting is reflected in the colour usage in paintings. Berezhnoy, Postma, and Van den Herik analyse Van Gogh's usage of complementary colours by applying special pattern recognition techniques to his digitised oeuvre. Authentication of paintings requires the proper identification of the idiosyncratic features of a painter's handwriting. Taylor et al. have identified the fractal dimension as one of these features for the well-known drip painter Jackson Pollock. Finally, Bergboer, Postma, and Van den Herik concentrate on high-level features (i.e., objects) to support art experts in their authentication of paintings. Using visual context, they show how efficient and reliable object detection in paintings may become feasible.
Neural Networks, 1997
This paper describes the SCAN (Signal Channeling Attentional Network) model, a scalable neural network model for attentional scanning. The building block of SCAN is a gating lattice, a sparsely-connected neural network defined as a special case of the Ising lattice from statistical mechanics. The process of spatial selection through covert attention is interpreted as a biological solution to the problem of translation-invariant pattern processing. In SCAN, a sequence of pattern translations combines active selection with translation-invariant processing. Selected patterns are channeled through a gating network formed by a hierarchical fractal structure of gating lattices, and mapped onto an output window. We show how the incorporation of an expectation-generating classifier network (e.g. Carpenter and Grossberg's ART network) into SCAN allows attentional selection to be driven by expectation. Simulation studies show the SCAN model to be capable of attending to and identifying object patterns that are part of a realistically sized natural image.
Machine Vision and Applications, 2008
Connection Science, 2004
Microscopic analysis is a standard approach in the study of robot behaviour. Typically, the approach comprises the analysis of a single (or sometimes a few) robot-environment system(s) to reveal specific properties of robot behaviour. In contrast to microscopic analysis, macroscopic analysis focuses on averaged properties of systems. The advantage is that such a property is easier to generalise, so that it can be established to what extent the property is universal. This paper investigates whether a macroscopic analysis can reveal a universal property of adaptive behaviour in a robot model of foraging behaviour. Our analysis reveals that the step lengths of the most successful robots are distributed according to a Lévy-flight distribution. From studies on a variety of natural species, it is known that such a distribution constitutes a universal property of foraging behaviour. Thereafter we discuss an example of how macroscopic analysis can be applied to existing research in evolutionary robotics, and relate the macroscopic and microscopic analyses of foraging behaviour to the framework of scientific research described by Cohen (1995). We conclude that macroscopic analysis may predict universal properties of adaptive behaviour and that it may complement microscopic analysis in the study of adaptive behaviour.
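The Lévy-flight property mentioned above (step lengths with a power-law distribution) can be illustrated with a short sketch; the exponent value and the maximum-likelihood estimator used below are illustrative choices, not the paper's analysis pipeline.

```python
import numpy as np

def levy_steps(n, mu=2.0, x_min=1.0, rng=None):
    """Draw step lengths with a power-law (Pareto) tail P(x) ~ x^(-mu)."""
    rng = rng or np.random.default_rng(0)
    u = rng.random(n)
    return x_min * (1 - u) ** (-1 / (mu - 1))   # inverse-CDF sampling

def tail_exponent(steps, x_min=1.0):
    """Maximum-likelihood (Hill) estimate of the power-law exponent mu."""
    return 1 + len(steps) / np.log(steps / x_min).sum()

steps = levy_steps(100_000)
print(round(tail_exponent(steps), 2))   # close to the true exponent mu = 2.0
```

Fitting this estimator to observed step lengths, and checking that the estimated exponent lies in the Lévy range (roughly 1 < mu <= 3), is one common way to test for Lévy-flight foraging.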
Behavioral and Brain Sciences, 1998
This commentary discusses three main requirements for models of vision, namely, translation and scale invariance, scalability, and hierarchy. Edelman's Chorus model falls short of fulfilling these requirements because it ignores the highly dynamic nature of vision. Incorporating an attentional mechanism and assuming geon-like prototype representations may enhance Chorus's plausibility as a model of human object recognition.
Artificial Life, 2001
The evolution of visual systems is constrained by a trade-off between spatial and temporal resolution. In this article we aim at identifying the causes of the trade-off at the retinal level in both artificial and natural visual systems. We start by selecting two factors that limit the values of spatial and temporal resolution. Then we show in two experiments on the evolution of an artificial system that the two factors induce trade-off curves connecting the evolved values of spatial and temporal resolution. A comparison of the experimental results with the resolution evolved in natural visual systems leads us to the conclusion that in natural systems the same factors are responsible for the observed trade-off.
The 'simulation hypothesis' is an intriguing explanation for cognition, and holds that 'thinking consists of simulated interaction with the environment' ([4], p.242). However, the neuroscientific proof for a simulation mechanism in the brain is indirect. In this paper we present a minimal-model approach to investigate the 'simulation hypothesis'. Our minimal model is called ACP and is an extension of the Active Categorical Perception model (ACP) presented in [8]. In ACP, robots have a neurocontroller with an output-input feedback mechanism that allows them to simulate perception and behaviour internally. Our experiments focus on the performance of robots with three different types of neurocontroller (two feedforward and one recurrent type of neurocontroller). Their performance is compared over three experimental conditions in which the output-input feedback mechanism is functional for variable durations. The results show that feedforward-neurocontrolled robots benefit from output-input feedback, while recurrent-neurocontrolled robots do not. Based on these results, two closely related conclusions are drawn: (1) the 'simulation hypothesis' may be too specific, and (2) predicting future perception may depend on neural recurrency (i.e., internal feedback) in general, rather than on the ability to simulate perception by feeding back actions.
In our research we evolve robot controllers for executing elementary behaviours. This paper focuses on the behaviour of pushing a box between two walls. Successful execution of this behaviour is critically dependent on the robot's ability to distinguish between the walls and the box. Using evolutionary algorithms, we optimised the box-pushing performances of two types of networks: feedforward networks and
Simple behaviours form the underpinnings of complex high-level behaviours. Traditional AI research focuses on high-level behaviours and ignores the underpinnings that are considered to be irrelevant for the problem at hand. As a first step towards a hybrid approach to complex behaviour, this paper studies the simple behaviour of box-pushing that underlies complex planning in, for instance, the game of Sokoban. We investigate neural-network controllers for box-pushing behaviour in a simulated and a real robot. Using an evolutionary algorithm, we assess the fitness of individuals by means of the four combinations of global vs. local measure and internal vs. external measure. We compared the efficiency of the four fitness functions for evolving box-pushing behaviour. The results show the global external measure to outperform the other three measures when evolving controllers from scratch in simulation. In addition, the local fitness measures are argued to be appropriate for fine-tuning the box-pushing behaviour.
The full version of this paper appeared in: Proceedings of the CogSci 2004 Conference. A new reco... more The full version of this paper appeared in: Proceedings of the CogSci 2004 Conference. A new recognition memory model, the natural input memory (NIM) model, is proposed which differs from existing models of human memory in that it operates on natural input. A biologically-informed pre-processing method, which is commonly used in artificial in- telligence (8), takes local samples from a natural image and translates these into a feature vector representation. Existing memory models (e.g., the R EM model, (9); the model of differentiation, (5)) lack such a pre-processing method and often make simplifying as- sumptions about item representations. These models represent an item by a vector of abstract features. The feature values are usually drawn from a particular mathematical distribution, which describes the distributional statistics of real-world perceptual features. Since these models artificially generate representations, they do not address the informa- tional contribution of the ...
Future Directions for Intelligent Systems and Information Sciences, 2000
... of the partitioning of the painting into nine regions for (from left to right) landscape, squ... more ... of the partitioning of the painting into nine regions for (from left to right) landscape, square, and ... 144 7 Conclusions and Outlook Our knowledge-guided quest for good features yielded feature-space representations permitting ... Journal of Experimental Psychology, 103, 597-600. ...
International Journal of Neural Systems, 1996
The rapidity of time-constrained visual identification suggests a feedforward process in which ne... more The rapidity of time-constrained visual identification suggests a feedforward process in which neural activity is propagated through a number of cortical stages. The process is modeled by using a synfire chain, leading to a neural-network model which involves propagating activation waves through a sequence of layers. Theory and analysis of the model’s behavior, especially in the presence of noise, predict enhancement of wave propagation for a range of noise intensities. Simulation studies confirm this prediction. The results are discussed in terms of (spatio-temporal) stochastic resonance. It is concluded that feedforward processes such as time-constrained visual identification may benefit from moderate levels of noise.
Lecture Notes in Computer Science, 2003
The efficiency of alpha-beta search algorithms heavily depends on the order in which the moves ar... more The efficiency of alpha-beta search algorithms heavily depends on the order in which the moves are examined. This paper investigates a new move-ordering heuristic in chess, namely the Neural MoveMap (NMM) heuristic. The heuristic uses a neural network to estimate the likelihood of a move being the best in a certain position. The moves considered more likely to be the best are examined first. We develop an enhanced approach to apply the NMM heuristic during the search, by using a weighted combination of the neural-network scores and the history-heuristic scores. Moreover, we analyse the influence of existing game databases and opening theory on the design of the training patterns. The NMM heuristic is tested for middle-game chess positions by the program CRAFTY. The experimental results indicate that the NMM heuristic outperforms the existing move ordering, especially when a weighted-combination approach is chosen.
Lecture Notes in Computer Science, 2003
The paper presents a system that learns to predict local strong expert moves in the game of Go at... more The paper presents a system that learns to predict local strong expert moves in the game of Go at a level comparable to that of strong human kyu players. This performance is achieved by four techniques. First, our training algorithm is based on a relativetarget approach that avoids needless weight adaptations characteristic of most neural-network classifiers. Second, we reduce dimensionality
Studies in Computational Intelligence, 2010
Relational reinforcement learning is a promising new direction within reinforcement learning rese... more Relational reinforcement learning is a promising new direction within reinforcement learning research. It upgrades reinforcement learning techniques by using relational representations for states, actions and learned value-functions or policies to allow more natural representations and abstractions of complex tasks. Multi-agent systems present a good example of such a complex task and are often characterized by their relational structure. In this paper, we show how relational reinforcement learning could be a useful tool for learning in multi agent systems and study this approach in more detail on one aspect of multi-agent systems, i.e., on learning a communication policy for cooperative systems (e.g. resource distribution). We perform a number of exploratory experiments that highlight the conditions in which relational representations are beneficial.
2005 IEEE International Conference on Multimedia and Expo
Pattern Recognition Letters, 2007
Traditionally, the analysis of visual arts is performed by human art experts only. The availabili... more Traditionally, the analysis of visual arts is performed by human art experts only. The availability of advanced artificial intelligence techniques makes it possible to support art experts in their judgement of visual art. In this paper image-analysis techniques are applied to measure the complementary colours in the oeuvre of Vincent van Gogh. It is commonly acknowledged that, especially in his French period, Van Gogh started employed complementary colours to emphasize contours of objects or parts of scenes. We propose a method to measure complementary-colour usage in a painting by combing an opponent-colour space representation with Gabor filtering. Using this method, the analysis of a dataset of 617 digitised oil-on-canvas paintings confirms artexpert's knowledge about the global pattern of complementary-colour usage in Van Gogh's paintings. In addition, it provides an objective and quantifiable way to support the analysis of colours in individual paintings. Our results show that art experts can be supported by artificial-intelligence techniques.
Pattern Recognition Letters, 2007
Art experts have great difficulties in validating the work of painters and writers of their own t... more Art experts have great difficulties in validating the work of painters and writers of their own time. For one or another reason they usually underestimate their contemporary subjects of assessment. If the experts were clairvoyant, Rembrandt and Van Gogh would have been millionaires. However, only some two, three or four centuries later the proper recognition became their due. They were unanimously crowned as the best painters; they were recalled, remembered, and honoured in retrospect and their birthdays were celebrated. Why did this recognition come so late? Was it by the personal judgement approach of the art experts? It is hard to tell and even harder to analyse. From the world of auditing we know that personal judgements heavily depend on experience and expertise. In the world of art experts experience and expertise is a presumption for adequate operating in a world full of tangible and almost untouchable assets. However, the approach by the experts (and thus the presumption) may be ineffective and may lead to different opinions, to personal bias, and even to misleading statements. In this time we have computers that may help us to disentangle the intricacies created by the artist and that may unveil the secrets of the greatest artists of the world. The prevailing question now is: do we wish to know why an art creation is attractive, irresistible or even almost perfect. For scientists, the answer is a definitive 'yes'. So, pattern recognition techniques have been developed and applied to a wide variety of domains. However, so far their applications to the cultural heritage have been scarce. The main barrier was that art-historical or archaeological domains had only a modest usage of computers. Since the last decade, computers are rapidly becoming one of the standard tools for cultural-heritage researchers. 
Historians employ text retrieval and text mining tools, art experts examine and analyse paintings using dedicated image processing software, and archaeologists are supported by pattern recognition software to create and refine their taxonomies of pottery and other findings. Therefore this special issue of Pattern Recognition Letters is dedicated to Pattern Recognition in Cultural Heritage. Originally we had in mind to combine this topic with pattern recognition in Medical Applications. The success rate of the Cultural Heritage contributions led us to the decision to put emphasis on the Cultural Heritage, and to complete the issue by closing with a contribution which highlights pattern discovery in bioinformatics. It shows a way to an integral approach of many pattern recognition techniques in different areas of research. So, the special issue contains eight papers on pattern recognition in cultural heritage and one in medical applications. Three papers focus on pattern recognition for archaeology. Bannai, Fisher, and Agathos investigate how virtual models of historical buildings can be created realistically. Kampel and Sablatnig describe their development of a rule-based system for the classification of ceramics. The contribution by Van Tonder shows how the original visual structure of ancient Japanese gardens can be reinstated using historical illustrations. Five contributions to this special issue address the analysis of the ''handwriting'' of authors. The handwriting may be taken literally or metaphorically. In the latter case, ''handwriting'' refers to the personal style of an author as reflected in a painting or other work of art. Analysing the actual handwriting, Schomaker, Franke, and Bulacu employ codebooks of handwriting fragments to identify the author of historical documents. Kammerer, Lettner, Zolda, and Sablatnig employ pattern classification to identify the drawing tools used for medieval panel paintings. 
Their findings lead to the identification of the personal style of the painter. Another type of handwriting is reflected in the colour usage in paintings. Berezhnoy, Postma, and Van den Herik analyse the Van Gogh's usage of complementary colours by applying special pattern recognition techniques to his digitized oeuvre. Authentication of paintings requires the proper identification of the idiosyncratic features of a painters' handwriting. Taylor et al. have identified the fractal dimension as one of these features for the well-known dripping painter Jackson Pollock. Finally, Bergboer, Postma, and Van den Herik concentrate on the high-level features (i.e., objects) to support art experts in their authentication of paintings. Using the visual context, they show how efficient and reliable object detection from paintings may become feasible.
Neural Networks, 1997
Thispaper describes the SCAN (Signal Channeling Attentional Network) model, a scalable neural net... more Thispaper describes the SCAN (Signal Channeling Attentional Network) model, a scalable neural network model for attentional scanning. The building block of SCAN is a gating lattice, a sparsely-connected neural network de~ned as a special case of the Ising latticefiom statistical mechanics. The process of spatial selection through covert attention is inteqoreted as a biological solution to the problem of translation-invariant pattern processing. In SCAN, a sequence ofpattem translations combines active selection with translation-invariant processing. Selected patterns are channeled through a gating network formed by a hierarchical jiactal structure of gating lattices, and mapped onto an output window. We show how the incorporation of an expectation-generating classtjier network (e.g. Caqoenter and Grossberg's ART network) into SCAN allows attentional selection to be driven by expectation. Simulation studies show the SCAN model to be capable of attending and identifying object patterns that are part of a realistically sized natural image. 01997 Elsevier Science Ltd.
Machine Vision and Applications, 2008
Connection Science, 2004
Microscopic analysis is a standard approach in the study of robot behaviour. Typically, the appro... more Microscopic analysis is a standard approach in the study of robot behaviour. Typically, the approach comprises the analysis of a single (or sometimes a few) robotenvironment system(s) to reveal specific properties of robot behaviour. In contrast to microscopic analysis, macroscopic analysis focuses on averaged properties of systems. The advantage is that such a property is easier to generalise so that it can be established to what extent the property is universal. This paper investigates whether a macroscopic analysis can reveal a universal property of adaptive behaviour in a robot model of foraging behaviour. Our analysis reveals that the step lengths of the most successful robots are distributed according to a Lévy-flight distribution. From studies on a variety of natural species, it is known that such a distribution constitutes a universal property of foraging behaviour. Thereafter we discuss an example of how macroscopic analysis can be applied to existing research in evolutionary robotics, and relate the macroscopic and microscopic analyses of foraging behaviour to the framework of scientific research described by Cohen (1995). We conclude that macroscopic analysis may predict universal properties of adaptive behaviour and that it may complement microscopic analysis in the study of adaptive behaviour.
Behavioral and Brain Sciences, 1998
This commentary discusses three main requirements for models of vision, namely, translation and scale invariance, scalability, and hierarchy. Edelman's Chorus model falls short of fulfilling these requirements because it ignores the highly dynamic nature of vision. Incorporating an attentional mechanism and assuming geon-like prototype representations may enhance Chorus's plausibility as a model of human object recognition.
Artificial Life, 2001
The evolution of visual systems is constrained by a trade-off between spatial and temporal resolution. In this article we aim at identifying the causes of the trade-off at the retinal level in both artificial and natural visual systems. We start by selecting two factors that limit the values of spatial and temporal resolution. Then we show in two experiments on the evolution of an artificial system that the two factors induce trade-off curves connecting the evolved values of spatial and temporal resolution. A comparison of the experimental results with the resolution evolved in natural visual systems leads us to the conclusion that in natural systems the same factors are responsible for the observed trade-off.
The 'simulation hypothesis' is an intriguing explanation for cognition, and holds that 'thinking consists of simulated interaction with the environment' ([4], p.242). However, the neuroscientific proof for a simulation mechanism in the brain is indirect. In this paper we present a minimal-model approach to investigate the 'simulation hypothesis'. Our minimal model is an extension of the Active Categorical Perception model (ACP) presented in [8]. In the extended model, robots have a neurocontroller with an output-input feedback mechanism that allows them to simulate perception and behaviour internally. Our experiments focus on the performance of robots with three different types of neurocontroller (two feedforward and one recurrent type of neurocontroller). Their performance is compared over three experimental conditions in which the output-input feedback mechanism is functional for variable durations. The results show that feedforward-neurocontrolled robots benefit from output-input feedback, while recurrent-neurocontrolled robots do not. Based on these results, two closely related conclusions are drawn: (1) the 'simulation hypothesis' may be too specific, and (2) predicting future perception may depend on neural recurrency (i.e., internal feedback) in general, rather than on the ability to simulate perception by feeding back actions.
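The output-input feedback mechanism can be sketched in a few lines: the controller's previous motor output is concatenated to its next sensory input, so the network can in principle run on internally generated input. This is an illustrative toy, not the ACP model itself; the random weights, network sizes, and constant sensor values are assumptions made purely for demonstration.

```python
import math
import random

random.seed(1)
N_SENS, N_MOT = 3, 2
# Random weights stand in for an evolved controller (illustrative only).
W = [[random.uniform(-1, 1) for _ in range(N_SENS + N_MOT)]
     for _ in range(N_MOT)]

def act(sensors, prev_motor):
    """One feedforward step with output-input feedback: the previous
    motor output is fed back as part of the current input vector."""
    x = sensors + prev_motor
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W]

motor = [0.0] * N_MOT
for t in range(5):
    sensors = [0.5, -0.2, 0.1]   # external input; zeroing it would leave
    motor = act(sensors, motor)  # the loop running on feedback alone
print(motor)
```

Disabling the feedback (passing `[0.0] * N_MOT` instead of `motor`) corresponds to the experimental conditions in which the mechanism is switched off.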
In our research we evolve robot controllers for executing elementary behaviours. This paper focuses on the behaviour of pushing a box between two walls. Successful execution of this behaviour is critically dependent on the robot's ability to distinguish between the walls and the box. Using evolutionary algorithms, we optimised the box-pushing performances of two types of networks: feedforward networks and
Simple behaviours form the underpinnings of complex high-level behaviours. Traditional AI research focuses on high-level behaviours and ignores the underpinnings that are considered to be irrelevant for the problem at hand. As a first step towards a hybrid approach to complex behaviour, this paper studies the simple behaviour of box-pushing that underlies complex planning in, for instance, the game of Sokoban. We investigate neural-network controllers for box-pushing behaviour in a simulated and a real robot. Using an evolutionary algorithm, we assess the fitness of individuals by means of the four combinations of global vs. local measure and internal vs. external measure. We compared the efficiency of the four fitness functions for evolving box-pushing behaviour. The results show the global external measure to outperform the other three measures when evolving controllers from scratch in simulation. In addition, the local fitness measures are argued to be appropriate for fine-tuning the box-pushing behaviour.
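The evolutionary loop behind such experiments can be sketched as follows. This is a generic (mu+lambda)-style sketch under stated assumptions, not the paper's actual setup: the toy quadratic landscape stands in for the robot simulator and its global external fitness measure, and the population size, mutation strength, and genome length are illustrative.

```python
import random

random.seed(42)

def simulate(genome):
    """Stand-in for the robot simulation: in the paper, fitness would be
    a measure such as the distance the box was pushed (global, external).
    Here a toy quadratic landscape with optimum at 0.7 replaces it."""
    return -sum((g - 0.7) ** 2 for g in genome)

def evolve(pop_size=20, genome_len=8, generations=50, mut_sigma=0.1):
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=simulate, reverse=True)
        parents = scored[:pop_size // 2]       # truncation selection
        # Keep parents (elitism) and fill up with Gaussian-mutated offspring.
        pop = parents + [
            [g + random.gauss(0, mut_sigma) for g in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return max(pop, key=simulate)

best = evolve()
print(simulate(best))
```

Swapping `simulate` for a local or internal measure would change only the fitness function, which is exactly the comparison the paper makes across its four measures.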