Guillermo Puebla | University of Edinburgh
Papers by Guillermo Puebla
2023 Conference on Cognitive Computational Neuroscience
Researchers studying the correspondences between Deep Neural Networks (DNNs) and humans often give little consideration to severe testing when drawing conclusions from empirical findings, and this is impeding progress in building better models of minds. We first detail what we mean by severe testing and highlight how this is especially important when working with opaque models with many free parameters that may solve a given task in multiple different ways. Second, we provide multiple examples of researchers making strong claims regarding DNN-human similarities without engaging in severe testing of their hypotheses. Third, we consider why severe testing is undervalued. We provide evidence that part of the fault lies with the review process. There is now a widespread appreciation in many areas of science that a bias for publishing positive results (among other practices) is leading to a credibility crisis, but there seems to be less awareness of the problem here.
On several key issues we agree with the commentators. Perhaps most importantly, everyone seems to agree that psychology has an important role to play in building better models of human vision, and (most) everyone agrees (including us) that DNNs will play an important role in modelling human vision going forward. But there are also disagreements about what models are for, how DNN-human correspondences should be evaluated, the value of alternative modelling approaches, and the impact of marketing hype in the literature. In our view, these latter issues are contributing to many unjustified claims regarding DNN-human correspondences in vision and other domains of cognition. We explore all these issues in this response.
We describe the MindSet benchmark, designed to facilitate the testing of DNNs against controlled experiments reported in psychology. MindSet will focus on a range of low-, middle-, and high-level visual findings that provide important constraints for theory; it provides the materials for testing DNNs and an example of how to assess a DNN on each experiment using a ResNet152 pretrained on ImageNet. The goal is not to evaluate how well ResNet152 accounts for human vision, but rather to encourage researchers to assess how well various DNNs account for a range of key human visual phenomena.
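For readers who want a concrete starting point, a minimal sketch of this kind of evaluation (not the MindSet code itself) might load an ImageNet-pretrained ResNet152 via torchvision and collect its responses to a folder of controlled stimuli. The stimulus directory and the cross-condition comparison are assumptions for illustration.

```python
# Illustrative sketch only: load an ImageNet-pretrained ResNet152 and
# collect its responses to a folder of controlled psychophysics-style
# stimuli. The stimulus paths and downstream analysis are assumptions,
# not the actual MindSet pipeline.
import torch
from torchvision import models
from PIL import Image
from pathlib import Path

weights = models.ResNet152_Weights.IMAGENET1K_V1
model = models.resnet152(weights=weights).eval()
preprocess = weights.transforms()  # resize, crop, normalize as at training time

@torch.no_grad()
def responses(stimulus_dir: str) -> dict:
    """Return the softmax output of the network for each stimulus image."""
    out = {}
    for path in sorted(Path(stimulus_dir).glob("*.png")):
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        out[path.name] = model(img).softmax(dim=1).squeeze(0)
    return out

# e.g., compare responses across experimental conditions:
# base = responses("stimuli/upright"); probe = responses("stimuli/inverted")
```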
ArXiv, 2022
Humans perceive the world in terms of objects and relations between them. In fact, for any given pair of objects, there is a myriad of relations that apply to them. How does the cognitive system learn which relations are useful to characterize the task at hand? And how can it use these representations to build a relational policy to interact effectively with the environment? In this paper we propose that this problem can be understood through the lens of a sub-field of symbolic machine learning called relational reinforcement learning (RRL). To demonstrate the potential of our approach, we built a simple model of relational policy learning based on a function approximator developed in RRL. We trained and tested our model on three Atari games that require considering an increasing number of potential relations: Breakout, Pong and Demon Attack. In each game, our model was able to select adequate relational representations and build a relational policy incrementally. We discuss the r...
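The abstract leaves the function approximator unspecified; as a hedged sketch of the general RRL idea, a relational policy learner can be caricatured as epsilon-greedy Q-learning that is linear in binary relational features computed over object pairs. The predicates, state format, and hyperparameters below are invented for illustration and are not the paper's model.

```python
# Minimal sketch in the spirit of relational reinforcement learning:
# Q-values are linear in binary relational features over object pairs.
# Predicates and the state representation are invented for illustration.
import random
from collections import defaultdict

PREDICATES = {
    "left_of":   lambda a, b: a["x"] < b["x"],
    "above":     lambda a, b: a["y"] < b["y"],
    "aligned_x": lambda a, b: abs(a["x"] - b["x"]) < 5,
}

def relational_features(objects, action):
    """Active (predicate, pair, action) features for a list of objects."""
    feats = []
    for i, a in enumerate(objects):
        for j, b in enumerate(objects):
            if i != j:
                for name, pred in PREDICATES.items():
                    if pred(a, b):
                        feats.append((name, i, j, action))
    return feats

class RelationalQ:
    def __init__(self, actions, alpha=0.1, gamma=0.99, epsilon=0.1):
        self.w = defaultdict(float)  # one weight per relational feature
        self.actions, self.alpha = actions, alpha
        self.gamma, self.epsilon = gamma, epsilon

    def q(self, objects, action):
        return sum(self.w[f] for f in relational_features(objects, action))

    def act(self, objects):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q(objects, a))

    def update(self, objects, action, reward, next_objects):
        # Standard Q-learning target with linear function approximation.
        target = reward + self.gamma * max(self.q(next_objects, a) for a in self.actions)
        delta = target - self.q(objects, action)
        for f in relational_features(objects, action):
            self.w[f] += self.alpha * delta
```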
Humans readily generalize, applying prior knowledge to novel situations and stimuli. Advances in machine learning and artificial intelligence have begun to approximate and even surpass human performance, but machine systems reliably struggle to generalize information to untrained situations. We describe a neural network model that is trained to play one video game (Breakout) and demonstrates one-shot generalization to a new game (Pong). The model generalizes by learning representations that are functionally and formally symbolic from training data, without feedback, and without requiring that structured representations be specified a priori. The model uses unsupervised comparison to discover which characteristics of the input are invariant, and to learn relational predicates; it then applies these predicates to arguments in a symbolic fashion, using oscillatory regularities in network firing to dynamically bind predicates to arguments. We argue that models of human cognition must ac...
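The binding-by-synchrony idea mentioned here can be caricatured in a few lines: a role and its filler count as bound when their units are active in the same time slot, and distinct role-filler pairs occupy different slots. The toy below (all names hypothetical) illustrates only the scheme, not the network in the paper.

```python
# Toy illustration of binding by temporal synchrony: role-filler pairs
# take turns firing, so co-activation in time (not a static pointer)
# carries the binding. This caricatures the general scheme only.
def fire_proposition(bindings, n_cycles=2):
    """bindings: list of (role, filler) pairs for one proposition."""
    trace = []
    for _ in range(n_cycles):
        for slot, (role, filler) in enumerate(bindings):
            # The role and filler units are active in the same slot -> bound.
            trace.append({"slot": slot, "active": {role, filler}})
    return trace

# chase(dog, cat): the chaser binding fires out of phase with the chased one.
for step in fire_proposition([("chaser", "dog"), ("chased", "cat")], n_cycles=1):
    print(step)
```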
People readily generalise prior knowledge to novel situations and stimuli. Advances in machine learning and artificial intelligence have begun to approximate and even surpass human performance in specific domains, but machine learning systems struggle to generalise information to untrained situations. We present a model that demonstrates human-like extrapolatory generalisation by learning and explicitly representing an open-ended set of relations characterising regularities within the domains it is exposed to. First, when trained to play one video game (e.g., Breakout), the model generalises to a new game (e.g., Pong) with different rules, dimensions, and characteristics in a single shot. Second, the model can learn representations from a different domain (e.g., 3D shape images) that support learning a video game and generalising to a new game in one shot. By exploiting well-established principles from cognitive psychology and neuroscience, the model learns structured representati...
Same-different visual reasoning is a basic skill central to abstract combinatorial thought. This fact has led neural network researchers to test same-different classification on deep convolutional neural networks (DCNNs), which has resulted in a controversy regarding whether this skill is within the capacity of these models. However, most tests of same-different classification rely on testing with images that come from the same pixel-level distribution as the training images, rendering the results inconclusive. In this study we tested relational same-different reasoning in DCNNs. In a series of simulations we show that models based on the ResNet-50 architecture are capable of visual same-different classification, but only when the test images are similar to the training images at the pixel level. In contrast, even when there are only subtle differences between the testing and training images, the performance of DCNNs drops substantially. This is true even when DCNNs' training regime ...
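The logic of the test can be sketched as follows: attach a binary same/different head to an ImageNet-pretrained ResNet-50, train it on one stimulus distribution, and then measure accuracy on stimuli from a visually different distribution. The data loaders and folder names below are assumptions; this is not the study's actual pipeline.

```python
# Sketch of the out-of-distribution test logic: train a binary
# same/different head on ResNet-50 with one stimulus set, then evaluate
# on stimuli from a different pixel-level distribution. Loader details
# are assumptions for illustration.
import torch
from torch import nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # outputs: same vs. different

@torch.no_grad()
def accuracy(model, loader, device="cpu"):
    model.eval().to(device)
    correct = total = 0
    for images, labels in loader:
        preds = model(images.to(device)).argmax(dim=1)
        correct += (preds == labels.to(device)).sum().item()
        total += labels.numel()
    return correct / total

# After fine-tuning on the training distribution (loaders are hypothetical):
# in_dist_acc  = accuracy(model, squares_like_training_loader)
# out_dist_acc = accuracy(model, novel_lines_or_arcs_loader)
# The reported pattern: in-distribution accuracy can be high while
# out-of-distribution accuracy drops substantially.
```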
Developmental Review, 2016
People tend to think that the function intended by an artifact's designer is its real or proper function. Relatedly, people tend to classify artifacts according to their designer's intended function (DIF), as opposed to an alternative opportunistic function. This centrality of DIF has been shown in children from 6 years of age to adults, and it is not restricted to Western societies. We review four different explanations for the centrality of DIF, integrating developmental and adult data. Two of these explanations are essentialist accounts (causal and intentional essentialism). Two of them are normative accounts (conventional function and idea ownership). Though essentialist accounts have been very influential, we review evidence that shows their limitations. Normative accounts have been less predominant. We review evidence to support them, and discuss how they account for the data. In particular, we review evidence suggesting that the centrality of DIF can be explained as a case of idea ownership. This theory makes sense of a great deal of the existing data on the subject, reconciles contradictory results, links this line of work to other literatures, and offers an account of the observed developmental trend.
Same-different visual reasoning is a basic skill central to abstract combinatorial thought. This fact has led neural network researchers to test same-different classification on deep convolutional neural networks (DCNNs), which has resulted in a controversy regarding whether this skill is within the capacity of these models. However, most tests of same-different classification rely on testing with images that come from the same pixel-level distribution as the training images, rendering the results inconclusive. In this study we tested relational same-different reasoning in DCNNs. In a series of simulations we show that DCNNs are capable of visual same-different classification, but only when the test images are similar to the training images at the pixel level. In contrast, even when there are only subtle differences between the testing and training images, the performance of DCNNs could drop to chance levels. This is true even when DCNNs' training regime included a wide distribution of i...
Psychological Review, 2022
People readily generalize knowledge to novel domains and stimuli. We present a theory, instantiated in a computational model, based on the idea that cross-domain generalization in humans is a case of analogical inference over structured (i.e., symbolic) relational representations. The model is an extension of the Learning and Inference with Schemas and Analogy (LISA; Hummel & Holyoak, 1997, 2003) and Discovery of Relations by Analogy (DORA; Doumas et al., 2008) models of relational inference and learning. The resulting model learns both the content and format (i.e., structure) of relational representations from nonrelational inputs without supervision; when augmented with the capacity for reinforcement learning, it leverages these representations to learn about individual domains, and then generalizes to new domains on the first exposure (i.e., zero-shot learning) via analogical inference. We demonstrate the capacity of the model to learn structured relational representations from a ...
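As a toy caricature of the zero-shot claim (not the LISA/DORA implementation), a policy stated purely over relations transfers to a new domain once an analogical mapping ties each abstract relation to its realization there. All predicates, domains, and actions below are invented for illustration.

```python
# Toy caricature of cross-domain transfer via analogical mapping: the
# policy mentions only abstract relations; a per-domain mapping supplies
# their concrete realizations, so the policy applies to a new domain on
# first exposure. Names are illustrative, not the paper's model.
def left_of(a, b): return a["x"] < b["x"]
def above(a, b):   return a["y"] < b["y"]

# In a Breakout-like game the paddle tracks the ball horizontally; in a
# Pong-like variant it tracks vertically. Analogy maps "behind" onto the
# appropriate concrete relation in each domain.
MAPPING = {
    "breakout": {"behind": left_of, "toward": "move_left", "away": "move_right"},
    "pong":     {"behind": above,   "toward": "move_up",   "away": "move_down"},
}

def policy(domain, paddle, ball):
    behind = MAPPING[domain]["behind"]
    if behind(ball, paddle):
        return MAPPING[domain]["toward"]
    if behind(paddle, ball):
        return MAPPING[domain]["away"]
    return "stay"

print(policy("breakout", {"x": 40, "y": 90}, {"x": 10, "y": 80}))  # move_left
print(policy("pong",     {"x": 5,  "y": 50}, {"x": 5,  "y": 20}))  # move_up
```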
Review of Philosophy and Psychology, 2013
Designers’ intentions are important for determining an artifact’s proper function (i.e., its perceived real function). However, there are disagreements regarding why. In one view, people reason causally about artifacts’ functional outcomes, and designers’ intended functions become important to the extent that they allow inferring outcomes. In another view, people use knowledge of designers’ intentions to determine proper functions, but this is unrelated to causal reasoning, having perhaps to do with intentional or social forms of reasoning (e.g., authority). Regarding these latter social factors, researchers have proposed that designers’ intentions operate through a mechanism akin to social conventions, and that therefore both are determinants of proper function. In the current work, participants learned about an object’s creation, about social conventions for its use, and about a specific episode where the artifact was used. The function implemented by the user could be aligned with the designer’s intended function, the social convention, both, or neither (i.e., an opportunistic use). Importantly, the use episode always resulted in an accident. Data show that the accident negatively affected proper function judgments and perceived efficiency for conventional and opportunistic functions, but not for designers’ intended functions. This is inconsistent with the view that designers’ intentions are conceptualized as causes of functional outcomes, and with the idea that designers’ intentions and social conventions operate through a common mechanism.
Cognition, 2014
In four experiments, we tested conditions under which artifact concepts support inference and coherence in causal categorization. In all four experiments, participants categorized scenarios in which we systematically varied information about artifacts' associated design history, physical structure, user intention, user action and functional outcome, and where each property could be specified as intact, compromised or not observed. Consistently across experiments, when participants received complete information (i.e., when all properties were observed), they categorized based on individual properties and did not show evidence of using coherence to categorize. In contrast, when the state of some property was not observed, participants gave evidence of using available information to infer the state of the unobserved property, which increased the value of the available information for categorization. Our data offer answers to longstanding questions regarding artifact categorization, such as whether there are underlying causal models for artifacts, which properties are part of them, whether design history is an artifact's causal essence, and whether physical appearance or functional outcome is the most central artifact property.
csjarchive.cogsci.rpi.edu
Design history function (i.e., what an artifact was made for) is a central aspect of artifact conceptualization. A generally accepted explanation is that design history is central because it is the root cause of many other artifact properties. In Exp. 1, an inference task allowed us to probe participants' causal models, which we then used to make predictions for Exp. 2. Design history was, in fact, part of what participants viewed as conceptually relevant. Predictions for Exp. 2 were derived using the currently most comprehensive theory of how causal knowledge affects categorization. Our results show that though participants used design history, functional outcome and physical structure to conceptualize artifacts, the effect of design history was independent of knowledge of physical structure and functional outcome. This result is inconsistent with a causal knowledge explanation of design history's conceptual centrality.