Guillermo Del Pinal | University of Illinois at Urbana-Champaign

Papers by Guillermo Del Pinal

Probabilistic semantics for epistemic modals

Linguistics and Philosophy, 2021

The epistemic modal auxiliaries must and might are vehicles for expressing the force with which a proposition follows from some body of evidence or information. Standard approaches model these operators using quantificational modal logic, but probabilistic approaches are becoming increasingly influential. According to a traditional view, `must' is a maximally strong epistemic operator and `might' is a bare possibility one. A competing account---popular amongst proponents of a probabilistic turn---says that, given a body of evidence, `must φ' entails that Pr(φ) is high but non-maximal and `might φ' that Pr(φ) is significantly greater than 0. Drawing on several observations concerning the behavior of `must', `might' and similar epistemic operators in evidential contexts, deductive inferences, downplaying and retraction scenarios, and expressions of epistemic tension, I argue that those two influential accounts have systematic descriptive shortcomings. To better make sense of their complex behavior, I propose instead a broadly Kratzerian account according to which `must φ' entails that Pr(φ) = 1 and `might φ' that Pr(φ) > 0, given a body of evidence and a set of normality assumptions about the world. From this perspective, `must' and `might' are vehicles for expressing a common mode of reasoning whereby we draw inferences from specific bits of evidence against a rich set of background assumptions---some of which we represent as defeasible---which capture our general expectations about the world. I show that the predictions of this Kratzerian account can be substantially refined once it is combined with a specific yet independently motivated 'grammatical' approach to the computation of scalar implicatures.
Finally, I discuss some implications of these results for more general discussions concerning the empirical and theoretical motivation to adopt a probabilistic semantic framework.
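The three competing clauses the abstract describes can be read as threshold conditions on Pr(φ). The following is an illustrative sketch only, not the paper's formal system: the function names and the numeric thresholds for "high" and "significantly greater than 0" are assumptions introduced here for concreteness, and on the Kratzerian view the probability space is understood as already conditioned on defeasible normality assumptions.

```python
# Sketch of the three accounts of `must'/`might' as conditions on Pr(phi),
# given a body of evidence. Thresholds are illustrative assumptions.

def traditional(pr):
    # `must' as maximal epistemic strength; `might' as bare possibility.
    return {"must": pr == 1.0, "might": pr > 0.0}

def probabilistic(pr, high=0.9, low=0.1):
    # `must phi': Pr(phi) high but non-maximal;
    # `might phi': Pr(phi) significantly greater than 0.
    return {"must": high <= pr < 1.0, "might": pr > low}

def kratzerian(pr):
    # Del Pinal's clause: Pr(phi) = 1 / Pr(phi) > 0, where Pr is relative
    # to the evidence plus defeasible normality assumptions about the world.
    return {"must": pr == 1.0, "might": pr > 0.0}

# The accounts come apart at, e.g., Pr(phi) = 0.95:
print(traditional(0.95))    # {'must': False, 'might': True}
print(probabilistic(0.95))  # {'must': True, 'might': True}
```

The traditional and Kratzerian clauses coincide formally here; on the paper's account the difference lies in what the probability measure is conditioned on, which is why the Kratzerian view can assign Pr(φ) = 1 without treating the evidence as conclusive in the traditional sense.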

Oddness, modularity, and exhaustification

Natural Language Semantics, 2021

According to the 'grammatical account', scalar implicatures are triggered by a covert exhaustification operator present in logical form. This account covers considerable empirical ground, but there is a peculiar pattern that resists treatment given its usual implementation. The pattern centers on odd assertions like #"Most lions are mammals" and #"Some Italians come from a beautiful country", which seem to trigger implicatures in contexts where the enriched readings conflict with information in the common ground. Magri (2009, 2011) argues that, to account for these cases, the basic grammatical approach has to be supplemented with the stipulations that exhaustification is obligatory and is based on formal computations which are blind to information in the common ground. In this paper, I argue that accounts of oddness should allow for the possibility of felicitous assertions that call for revision of the common ground, including explicit assertions of unusual beliefs such as "Most but not all lions are mammals" and "Some but not all Italians come from Italy". To adequately cover these and similar cases, I propose that Magri's version of the grammatical account should be refined with the novel hypothesis that exhaustification triggers a bifurcation between presupposed (the negated relevant alternatives) and at-issue (the prejacent) content. The explanation of the full oddness pattern, including cases of felicitous proposals to revise the common ground, follows from the interaction of presupposed and at-issue content with an independently motivated constraint on accommodation. Finally, I argue that treating the exhaustification operator as a presupposition trigger helps solve various independent puzzles faced by extant grammatical accounts, and motivates a substantial revision of standard accounts of the overt exhaustifier "only".

The Logicality of Language: Contextualism vs. Semantic Minimalism

Mind, 2021

The Logicality of Language is the hypothesis that the language system has access to a `natural' logic that can identify and filter out as unacceptable expressions that have trivial meanings---i.e., that are true/false in all possible worlds or situations in which they are defined. This hypothesis helps explain otherwise puzzling patterns concerning the distribution of various functional terms and phrases. Despite its promise, Logicality vastly over-generates unacceptability assignments. Most solutions to this problem rest on specific stipulations about the properties of logical form---roughly, the level of linguistic representation which feeds into the interpretation procedures---and have substantial implications for traditional philosophical disputes about the nature of language. Specifically, Contextualism and Semantic Minimalism, construed as competing hypotheses about the nature and degree of context-sensitivity at the level of logical form, suggest different approaches to the over-generation problem. In this paper, I explore the implications of pairing Logicality with various forms of Contextualism and Semantic Minimalism. I argue that, to adequately solve the over-generation problem, Logicality should be implemented in a constrained Contextualist framework.

Epistemic modals under epistemic tension

Natural Language Semantics, 2019

According to Kratzer's influential account (1981, 1991, 2012), epistemic "must" and "might" involve quantification over domains of possibilities determined by a modal base and an ordering source. Recently, this account has been challenged by invoking contexts of `epistemic tension': i.e., cases in which an assertion that "must p" is conjoined with the possibility that "not p", and cases in which speakers try to downplay a previous assertion that "must p", after finding out that "not p". Epistemic tensions have been invoked from two directions. Von Fintel and Gillies (2010) propose a return to a simpler modal logic-inspired account: "must" and "might" still involve universal and existential quantification, but the domains of possibilities are determined solely by realistic modal bases. In contrast, Lassiter (2016), following Swanson (2006, 2011), proposes a more revisionary account which treats "must" and "might" as probabilistic operators. In this paper, we present a series of experiments to obtain reliable data on the degree of acceptability of various contexts of epistemic tension. Our experiments include novel variations that, we argue, are required to make progress in this debate. We show that restricted quantificational accounts fit the overall pattern of results better than either of their recent competitors. In addition, our results help us identify the key components of restricted quantificational accounts, and on that basis propose some refinements and general constraints that should be satisfied by any account of the modal auxiliaries.

The Logicality of Language: A new take on Triviality, "Ungrammaticality", and Logical Form

Noûs, 2019

Recent work in formal semantics suggests that the language system includes not only a structure building device, as standardly assumed, but also a natural deductive system which can determine when expressions have trivial truth-conditions (e.g., are logically true/false) and mark them as unacceptable. This hypothesis, called the 'logicality of language', accounts for many acceptability patterns, including systematic restrictions on the distribution of quantifiers. To deal with apparent counterexamples consisting of acceptable tautologies and contradictions, the logicality of language is often paired with an additional assumption according to which logical forms are radically underspecified: i.e., the language system can see functional terms but is 'blind' to open class terms to the extent that different tokens of the same term are treated as if independent. This conception of logical form has profound implications: it suggests an extreme version of the modularity of language, and can only be paired with non-classical---indeed quite exotic---kinds of deductive systems. The aim of this paper is to show that we can pair the logicality of language with a different and ultimately more traditional account of logical form. This framework accounts for the basic acceptability patterns which motivated the logicality of language, can explain why some tautologies and contradictions are acceptable, and makes better predictions in key cases. As a result, we can pursue versions of the logicality of language in frameworks compatible with the view that the language system is not radically modular vis-à-vis its open class terms and employs a deductive system that is basically classical.

Meaning, Modulation, and Context: A Multidimensional Semantics for Truth-conditional Pragmatics

Linguistics and Philosophy, 2018

The meaning that expressions take on particular occasions often depends on the context in ways which seem to transcend its direct effect on context-sensitive parameters. 'Truth-conditional pragmatics' is the project of trying to model such semantic flexibility within a compositional truth-conditional framework. Most proposals proceed by radically 'freeing up' the compositional operations of language. I argue, however, that the resulting theories are too unconstrained, and predict flexibility in cases where it is not observed. These accounts fall into this position because they rarely, if ever, take advantage of the rich information made available by lexical items. I hold, instead, that lexical items encode both extension and non-extension determining information. Under certain conditions, the non-extension determining information of an expression e can enter into the compositional processes that determine the meaning of more complex expressions which contain e. This paper presents and motivates a set of type-driven compositional operations that can access non-extension determining information and introduce bits of it into the meaning of complex expressions. The resulting multidimensional semantics has the tools to deal with key cases of semantic flexibility in appropriately constrained ways, making it a promising framework to pursue the project of truth-conditional pragmatics.

The Future of Cognitive Neuroscience? Reverse Inference in Focus

This article presents and discusses one of the most prominent inferential strategies currently employed in cognitive neuropsychology, namely, reverse inference. Simply put, this is the practice of inferring, in the context of experimental tasks, the engagement of cognitive processes from locations or patterns of neural activation. This technique is notoriously controversial because, critics argue, it presupposes the problematic assumption that neural areas are functionally selective. We proceed as follows. We begin by introducing the basic structure of traditional "location-based" reverse inference (§1) and discuss the influential lack of selectivity objection (§2). Next, we rehearse various ways of responding to this challenge and provide some reasons for cautious optimism (§3). The second part of the essay presents a more recent development: "pattern-decoding reverse inference" (§4). This inferential strategy, we maintain, provides an even more convincing response to the lack of selectivity charge. Due to this and other methodological advantages, it is now a prominent component in the toolbox of cognitive neuropsychology (§5). Finally, we conclude by drawing some implications for philosophy of science and philosophy of mind (§6).

Conceptual centrality and implicit bias

Mind & Language, 2018

How are biases encoded in our representations of social categories? Philosophical and empirical discussions of implicit bias overwhelmingly focus on 'salient' or 'statistical associations' between target features and representations of social categories. These are the sorts of associations probed by the Implicit Association Test and various priming tasks. In this paper, we argue that these discussions systematically overlook an alternative way in which biases are encoded, i.e., in the 'dependency networks' that are part of our representations of social categories. Dependency networks encode information about how features in a conceptual representation depend on each other. This information determines the degree of centrality of a feature for a conceptual representation. Importantly, centrally encoded biases systematically dissociate from those encoded in salient-statistical associations. Furthermore, the degree of centrality of a feature determines its cross-contextual stability: in general, the more central a feature is for a concept, the more likely it is to survive into a wide array of cognitive tasks involving that concept. Accordingly, implicit biases that are encoded in the central features of concepts are predicted to be more resilient across different tasks and contexts. As a result, the distinction between centrally encoded and salient-statistical biases has important theoretical and practical implications.

Stereotypes, Conceptual Centrality and Gender Bias: An Empirical Investigation

Discussions in social psychology overlook an important way in which biases can be encoded in conceptual representations. Most accounts of implicit bias focus on 'mere associations' between features and representations of social groups. While some have argued that some implicit biases must have a richer conceptual structure, they have said little about what this richer structure might be. To address this lacuna, we build on research in philosophy and cognitive science demonstrating that concepts represent dependency relations between features. These relations, in turn, determine the centrality of a feature f for a concept C: roughly, the more features of C depend on f, the more central f is for C. In this paper, we argue that the dependency networks that link features can encode significant biases. To support this claim, we present a series of studies that show how a particular brilliance-gender bias is encoded in the dependency networks which are part of the concepts of female and male academics. We also argue that biases which are encoded in dependency networks have unique implications for social cognition.

Dual character concepts in social cognition: Commitments and the normative dimension of conceptual representation

Cognitive Science, 2017

The concepts expressed by social role terms such as artist and scientist are unique in that they seem to allow two independent criteria for categorization, one of which is inherently normative (Knobe et al., 2013). This paper presents and tests an account of the content and structure of the normative dimension of these 'dual character concepts'. Experiment 1 suggests that the normative dimension of a social role concept represents the commitment to fulfill the idealised basic function associated with the role. Background information can affect which basic function is associated with each social role. However, Experiment 2 indicates that the normative dimension always represents the relevant commitment as an end in itself. We argue that social role concepts represent the commitments to basic functions because that information is crucial to predict the future social roles and role-dependent behavior of others.

Dual Content Semantics, Privative Adjectives, and Dynamic Compositionality

Semantics and Pragmatics, 2015

This paper defends the view that common nouns have a dual semantic structure that includes extension and non-extension determining components. I argue that the non-extension determining components are part of linguistic meaning because they play a key compositional role in certain constructions, esp., in privative noun phrases such as fake gun and counterfeit document. Furthermore, I show that if we modify the compositional interpretation rules in certain simple ways, this dual content account of noun phrase modification can be implemented in a type-driven formal semantic framework. In addition, I argue against traditional accounts of privative noun phrases which can be paired with the assumption that nouns do not have a dual semantic structure. At the most general level, this paper presents a proposal for how we can begin to integrate a psychologically realistic account of lexical semantics with a linguistically plausible compositional semantic framework.

Two Kinds of Reverse Inference in Cognitive Neuroscience

This essay examines the prospects and limits of 'reverse inferring' cognitive processes from neural data, a technique commonly used in cognitive neuroscience for discriminating between competing psychological hypotheses. Specifically, we distinguish between two main types of reverse inference. The first kind of inference moves from the locations of neural activation to the underlying cognitive processes. We illustrate this strategy by presenting a well-known example involving mirror neurons and theories of low-level mind-reading, and discuss some general methodological problems. Next we present the second type of reverse inference by discussing an example from recognition memory research. These inferences, based on pattern-decoding techniques, do not presuppose strong assumptions about the functions of particular neural locations. Consequently, while they have been largely ignored in methodological critiques, they overcome important objections plaguing traditional methods.

Prototypes as Compositional Components of Concepts

Synthese, 2016

The aim of this paper is to reconcile two claims that have long been thought to be incompatible: (a) that we compositionally determine the meaning of complex expressions from the meaning of their parts, and (b) that prototypes are components of the meaning of lexical terms such as "fish", "red", and "gun". Hypotheses (a) and (b) are independently plausible, but most researchers think that reconciling them is a difficult, if not hopeless task. In particular, most linguists and philosophers agree that (a) is not negotiable; so they tend to reject (b). Recently, there have been some attempts to reconcile these claims, but they all adopt an implausibly weak notion of compositionality. Furthermore, parties to this debate tend to fall into a problematic way of individuating prototypes that is too externalistic. In contrast, I propose that we can reconcile (a) and (b) if we adopt, instead, an internalist and pluralist conception of prototypes and a context-sensitive but strong notion of compositionality. I argue that each of these proposals is independently plausible and that, taken together, they provide the basis for a satisfactory account of prototype compositionality.

The Structure of Semantic Competence: Compositionality as an Innate Constraint on the Faculty of Language

Mind and Language, 2015

This paper defends the view that the Faculty of Language is compositional, namely, that it computes the meaning of complex expressions from the meanings of their immediate constituents. I first argue that compositionality and other competing, non-compositional constraints on the ways in which we compute the meanings of complex expressions should be understood as hypotheses about the innate constraints on the semantic operations of the Faculty of Language. I then argue that, unlike compositionality, most of the currently available non-compositional constraints predict incorrect patterns of early linguistic development. This supports the view that the Faculty of Language is compositional. More generally, this paper proposes a way of reframing the compositionality debate which, by focusing on its implications for language acquisition, opens what has so far been a mainly theoretical debate to a more straightforward empirical resolution.

Mapping the Mind: Bridge Laws and the Psycho-Neural Interface

Recent advancements in the brain sciences have enabled researchers to determine, with increasing accuracy, patterns and locations of neural activation associated with various psychological functions. These techniques have revived a longstanding debate regarding the relation between the mind and the brain: while many authors claim that neuroscientific data can be employed to advance theories of higher cognition, others defend the so-called `autonomy' of psychology. Settling this significant issue requires understanding the nature of the bridge laws used at the psycho-neural interface. While these laws have been the topic of extensive discussion, such debates have mostly focused on a particular type of link: reductive laws. Reductive laws are problematic: they face notorious philosophical objections and they are too scarce to substantiate current research at the intersection of psychology and neuroscience. The aim of this article is to provide a systematic analysis of a different kind of bridge laws---associative laws---which play a central, albeit overlooked, role in scientific practice.

There and Up Again: On the Uses and Misuses of Neuroimaging in Psychology

The aim of this article is to discuss the conditions under which functional neuroimaging can contribute to the study of higher cognition. We begin by presenting two case studies---on moral and economic decision-making---which will help us identify and examine one of the main ways in which neuroimaging can help advance the study of higher cognition. We agree with critics that fMRI studies seldom "refine" or "confirm" particular psychological hypotheses, or even provide details of the neural implementation of cognitive functions. However, we suggest that neuroimaging can support psychology in a different way, namely, by selecting among competing hypotheses of the cognitive mechanisms underlying some mental function. One of the main ways in which neuroimaging can be used for hypothesis selection is via reverse inferences, which we here examine in detail. Despite frequent claims to the contrary, we argue that successful reverse inferences do not assume any strong or objectionable form of reductionism or functional locationism. Moreover, our discussion illustrates that reverse inferences can be successful at early stages of psychological theorizing, when models of the cognitive mechanisms are only partially developed.

Research paper thumbnail of Probabilistic semantics for epistemic modals

Linguistics and Philosophy, 2021

The epistemic modal auxiliaries must and might are vehicles for expressing the force with which a... more The epistemic modal auxiliaries must and might are vehicles for expressing the force with which a proposition follows from some body of evidence or information. Standard approaches model these operators using quantificational modal logic, but probabilistic approaches are becoming increasingly influential. According to a traditional view, `must' is a maximally strong epistemic operator and `might' is a bare possibility one. A competing account---popular amongst proponents of a probabilisitic turn---says that, given a body of evidence, `must φ' entails that Pr(φ) is high but non-maximal and `might φ' that Pr(φ) is significantly greater than 0. Drawing on several observations concerning the behavior of `must', `might' and similar epistemic operators in evidential contexts, deductive inferences, downplaying and retractions scenarios, and expressions of epistemic tension, I argue that those two influential accounts have systematic descriptive shortcomings. To better make sense of their complex behavior, I propose instead a broadly Kratzerian account according to which `must φ' entails that Pr(φ) = 1 and `might φ' that Pr(φ) > 0, given a body of evidence and a set of normality assumptions about the world. From this perspective, `must' and `might' are vehicles for expressing a common mode of reasoning whereby we draw inferences from specific bits of evidence against a rich set of background assumptions---some of which we represent as defeasible---which capture our general expectations about the world. I will show that the predictions of this Kratzerian account can be substantially refined once it is combined with a specific yet independently motivated 'grammatical' approach to the computation of scalar implicatures. 
Finally, I discuss some implications of these results for more general discussions concerning the empirical and theoretical motivation to adopt a probabilisitic semantic framework.

Research paper thumbnail of Oddness, modularity, and exhaustification

Natural Language Semantics, 2021

According to the 'grammatical account', scalar implicatures are triggered by a covert exhaustific... more According to the 'grammatical account', scalar implicatures are triggered by a covert exhaustification operator present in logical form. This account covers considerable empirical ground, but there is a peculiar pattern that resists treatment given its usual implementation. The pattern centers on odd assertions like #"Most lions are mammals" and #"Some Italians come from a beautiful country", which seem to trigger implicatures in contexts where the enriched readings conflict with information in the common ground. Magri (2009, 2011) argues that, to account for these cases, the basic grammatical approach has to be supplemented with the stipulations that exhaustification is obligatory and is based on formal computations which are blind to information in the common ground. In this paper , I argue that accounts of oddness should allow for the possibility of felicitous assertions that call for revision of the common ground, including explicit assertions of unusual beliefs such as Most but not all lions are mammals and Some but not all Italians come from Italy. To adequately cover these and similar cases, I propose that Magri's version of the Grammatical account should be refined with the novel hypothesis that exhaustification triggers a bifurcation between presupposed (the negated relevant alternatives) and at-issue (the prejacent) content. The explanation of the full oddness pattern, including cases of felicitous proposals to revise the common ground, follows from the interaction between presupposed and at-issue content with an independently motivated constraint on accommodation. Finally, I argue that treating the exhaustification operator as a presupposition trigger helps solve various independent puzzles faced by extant grammatical accounts, and motivates a substantial revision of standard accounts of the overt exhaustifier "only".

Research paper thumbnail of The Logicality of Language: Contextualism vs. Semantic Minimalism

Mind , 2021

The Logicality of Language is the hypothesis that the language system has access to a `natural' l... more The Logicality of Language is the hypothesis that the language system has access to a `natural' logic that can identify and filter out as unacceptable expressions that have trivial meanings---i.e., that are true/false in all possible worlds or situations in which they are defined. This hypothesis helps explain otherwise puzzling patterns concerning the distribution of various functional terms and phrases. Despite its promise, Logicality vastly over-generates unacceptability assignments. Most solutions to this problem rest on specific stipulations about the properties of logical form---roughly, the level of linguistic representation which feeds into the interpretation procedures---and have substantial implications for traditional philosophical disputes about the nature of language. Specifically, Contextualism and Semantic Minimalism, construed as competing hypothesis about the nature and degree of context-sensitivity at the level of logical form, suggest different approaches to the over-generation problem. In this paper, I explore the implications of pairing Logicality with various forms of Contextualism and Semantic Minimalism. I argue that, to adequately solve the over-generation problem, Logicality should be implemented in a constrained Contextualist framework.

Research paper thumbnail of Epistemic modals under epistemic tension

Natural Language Semantics, 2019

According to Kratzer's influential account (1981, 1991, 2012), epistemic "must" and "might" invol... more According to Kratzer's influential account (1981, 1991, 2012), epistemic "must" and "might" involve quantification over domains of possibilities determined by a modal base and an ordering source. Recently, this account has been challenged by invoking contexts of `epistemic tension': i.e., cases in which an assertion that "must p" is conjoined with the possibility that "not p", and cases in which speakers try to downplay a previous assertion that "must p", after finding out that "not p". Epistemic tensions have been invoked from two directions. Von Fintel and Gillies (2010) propose a return to a simpler modal logic-inspired account: "must" and "might" still involve universal and existential quantification, but the domains of possibilities are determined solely by realistic modal bases. In contrast, Lassiter (2016), following Swanson (2006, 2011), proposes a more revisionary account which treats "must" and "might" as probabilistic operators. In this paper, we present a series of experiments to obtain reliable data on the degree of acceptability of various contexts of epistemic tension. Our experiments include novel variations that, we argue, are required to make progress in this debate. We show that restricted quantificational accounts fit the overall pattern of results better than either of their recent competitors. In addition, our results help us identify the key components of restricted quantificational accounts, and on that basis propose some refinements and general constraints that should be satisfied by any account of the modal auxiliaries.

Research paper thumbnail of The Logicality of Language: A new take on Triviality, "Ungrammaticality", and Logical Form

Noûs, 2019

Recent work in formal semantics suggests that the language system includes not only a structure building device, as standardly assumed, but also a natural deductive system which can determine when expressions have trivial truth-conditions (e.g., are logically true/false) and mark them as unacceptable. This hypothesis, called the 'logicality of language', accounts for many acceptability patterns, including systematic restrictions on the distribution of quantifiers. To deal with apparent counterexamples consisting of acceptable tautologies and contradictions, the logicality of language is often paired with an additional assumption according to which logical forms are radically underspecified: i.e., the language system can see functional terms but is 'blind' to open class terms to the extent that different tokens of the same term are treated as if independent. This conception of logical form has profound implications: it suggests an extreme version of the modularity of language, and can only be paired with non-classical---indeed quite exotic---kinds of deductive systems. The aim of this paper is to show that we can pair the logicality of language with a different and ultimately more traditional account of logical form. This framework accounts for the basic acceptability patterns which motivated the logicality of language, can explain why some tautologies and contradictions are acceptable, and makes better predictions in key cases. As a result, we can pursue versions of the logicality of language in frameworks compatible with the view that the language system is not radically modular vis-à-vis its open class terms and employs a deductive system that is basically classical.

Research paper thumbnail of Meaning, Modulation, and Context: A Multidimensional Semantics for Truth-conditional Pragmatics

Linguistics and Philosophy, 2018

The meaning that expressions take on particular occasions often depends on the context in ways which seem to transcend its direct effect on context-sensitive parameters. 'Truth-conditional pragmatics' is the project of trying to model such semantic flexibility within a compositional truth-conditional framework. Most proposals proceed by radically 'freeing up' the compositional operations of language. I argue, however, that the resulting theories are too unconstrained, and predict flexibility in cases where it is not observed. These accounts fall into this position because they rarely, if ever, take advantage of the rich information made available by lexical items. I hold, instead, that lexical items encode both extension and non-extension determining information. Under certain conditions, the non-extension determining information of an expression e can enter into the compositional processes that determine the meaning of more complex expressions which contain e. This paper presents and motivates a set of type-driven compositional operations that can access non-extension determining information and introduce bits of it into the meaning of complex expressions. The resulting multidimensional semantics has the tools to deal with key cases of semantic flexibility in appropriately constrained ways, making it a promising framework to pursue the project of truth-conditional pragmatics.

Research paper thumbnail of The Future of Cognitive Neuroscience? Reverse Inference in Focus

This article presents and discusses one of the most prominent inferential strategies currently employed in cognitive neuropsychology, namely, reverse inference. Simply put, this is the practice of inferring, in the context of experimental tasks, the engagement of cognitive processes from locations or patterns of neural activation. This technique is notoriously controversial because, critics argue, it presupposes the problematic assumption that neural areas are functionally selective. We proceed as follows. We begin by introducing the basic structure of traditional "location-based" reverse inference (§1) and discuss the influential lack of selectivity objection (§2). Next, we rehearse various ways of responding to this challenge and provide some reasons for cautious optimism (§3). The second part of the essay presents a more recent development: "pattern-decoding reverse inference" (§4). This inferential strategy, we maintain, provides an even more convincing response to the lack of selectivity charge. Due to this and other methodological advantages, it is now a prominent component in the toolbox of cognitive neuropsychology (§5). Finally, we conclude by drawing some implications for philosophy of science and philosophy of mind (§6).

Research paper thumbnail of Conceptual centrality and implicit bias

Mind & Language, 2018

How are biases encoded in our representations of social categories? Philosophical and empirical discussions of implicit bias overwhelmingly focus on 'salient' or 'statistical associations' between target features and representations of social categories. These are the sorts of associations probed by the Implicit Association Test and various priming tasks. In this paper, we argue that these discussions systematically overlook an alternative way in which biases are encoded, i.e., in the 'dependency networks' that are part of our representations of social categories. Dependency networks encode information about how features in a conceptual representation depend on each other. This information determines the degree of centrality of a feature for a conceptual representation. Importantly, centrally encoded biases systematically dissociate from those encoded in salient-statistical associations. Furthermore, the degree of centrality of a feature determines its cross-contextual stability: in general, the more central a feature is for a concept, the more likely it is to survive into a wide array of cognitive tasks involving that concept. Accordingly, implicit biases that are encoded in the central features of concepts are predicted to be more resilient across different tasks and contexts. As a result, the distinction between centrally encoded and salient-statistical biases has important theoretical and practical implications.

Research paper thumbnail of Stereotypes, Conceptual Centrality and Gender Bias: An Empirical Investigation

Discussions in social psychology overlook an important way in which biases can be encoded in conceptual representations. Most accounts of implicit bias focus on 'mere associations' between features and representations of social groups. While some have argued that some implicit biases must have a richer conceptual structure, they have said little about what this richer structure might be. To address this lacuna, we build on research in philosophy and cognitive science demonstrating that concepts represent dependency relations between features. These relations, in turn, determine the centrality of a feature f for a concept C: roughly, the more features of C depend on f, the more central f is for C. In this paper, we argue that the dependency networks that link features can encode significant biases. To support this claim, we present a series of studies that show how a particular brilliance-gender bias is encoded in the dependency networks which are part of the concepts of female and male academics. We also argue that biases which are encoded in dependency networks have unique implications for social cognition.

Research paper thumbnail of Dual character concepts in social cognition: Commitments and the normative dimension of conceptual representation

Cognitive Science, 2017

The concepts expressed by social role terms such as artist and scientist are unique in that they seem to allow two independent criteria for categorization, one of which is inherently normative (Knobe et al., 2013). This paper presents and tests an account of the content and structure of the normative dimension of these 'dual character concepts'. Experiment 1 suggests that the normative dimension of a social role concept represents the commitment to fulfill the idealised basic function associated with the role. Background information can affect which basic function is associated with each social role. However, Experiment 2 indicates that the normative dimension always represents the relevant commitment as an end in itself. We argue that social role concepts represent the commitments to basic functions because that information is crucial to predict the future social roles and role-dependent behavior of others.

Research paper thumbnail of Dual Content Semantics, Privative Adjectives, and Dynamic Compositionality

Semantics and Pragmatics, 2015

This paper defends the view that common nouns have a dual semantic structure that includes extension and non-extension determining components. I argue that the non-extension determining components are part of linguistic meaning because they play a key compositional role in certain constructions, esp., in privative noun phrases such as fake gun and counterfeit document. Furthermore, I show that if we modify the compositional interpretation rules in certain simple ways, this dual content account of noun phrase modification can be implemented in a type-driven formal semantic framework. In addition, I also argue against traditional accounts of privative noun phrases which can be paired with the assumption that nouns do not have a dual semantic structure. At the most general level, this paper presents a proposal for how we can begin to integrate a psychologically realistic account of lexical semantics with a linguistically plausible compositional semantic framework.

Research paper thumbnail of Two Kinds of Reverse Inference in Cognitive Neuroscience

This essay examines the prospects and limits of 'reverse inferring' cognitive processes from neural data, a technique commonly used in cognitive neuroscience for discriminating between competing psychological hypotheses. Specifically, we distinguish between two main types of reverse inference. The first kind of inference moves from the locations of neural activation to the underlying cognitive processes. We illustrate this strategy by presenting a well-known example involving mirror neurons and theories of low-level mind-reading, and discuss some general methodological problems. Next we present the second type of reverse inference by discussing an example from recognition memory research. These inferences, based on pattern-decoding techniques, do not presuppose strong assumptions about the functions of particular neural locations. Consequently, while they have been largely ignored in methodological critiques, they overcome important objections plaguing traditional methods.

Research paper thumbnail of Prototypes as Compositional Components of Concepts

Synthese , 2016

The aim of this paper is to reconcile two claims that have long been thought to be incompatible: (a) that we compositionally determine the meaning of complex expressions from the meaning of their parts, and (b) that prototypes are components of the meaning of lexical terms such as "fish", "red", and "gun". Hypotheses (a) and (b) are independently plausible, but most researchers think that reconciling them is a difficult, if not hopeless task. In particular, most linguists and philosophers agree that (a) is not negotiable; so they tend to reject (b). Recently, there have been some attempts to reconcile these claims, but they all adopt an implausibly weak notion of compositionality. Furthermore, parties to this debate tend to fall into a problematic way of individuating prototypes that is too externalistic. In contrast, I propose that we can reconcile (a) and (b) if we adopt, instead, an internalist and pluralist conception of prototypes and a context-sensitive but strong notion of compositionality. I argue that each of these proposals is independently plausible and that, taken together, they provide the basis for a satisfactory account of prototype compositionality.

Research paper thumbnail of The Structure of Semantic Competence: Compositionality as an Innate Constraint on the Faculty of Language

Mind and Language, 2015

This paper defends the view that the Faculty of Language is compositional, namely, that it computes the meaning of complex expressions from the meanings of their immediate constituents. I first argue that compositionality and other competing, non-compositional constraints on the ways in which we compute the meanings of complex expressions should be understood as hypotheses about the innate constraints on the semantic operations of the Faculty of Language. I then argue that, unlike compositionality, most of the currently available non-compositional constraints predict incorrect patterns of early linguistic development. This supports the view that the Faculty of Language is compositional. More generally, this paper proposes a way of reframing the compositionality debate which, by focusing on its implications for language acquisition, opens what has so far been a mainly theoretical debate to a more straightforward empirical resolution.

Research paper thumbnail of Mapping the Mind: Bridge Laws and the Psycho-Neural Interface

Recent advancements in the brain sciences have enabled researchers to determine, with increasing accuracy, patterns and locations of neural activation associated with various psychological functions. These techniques have revived a longstanding debate regarding the relation between the mind and the brain: while many authors claim that neuroscientific data can be employed to advance theories of higher cognition, others defend the so-called `autonomy' of psychology. Settling this significant issue requires understanding the nature of the bridge laws used at the psycho-neural interface. While these laws have been the topic of extensive discussion, such debates have mostly focused on a particular type of link: reductive laws. Reductive laws are problematic: they face notorious philosophical objections and they are too scarce to substantiate current research at the intersection of psychology and neuroscience. The aim of this article is to provide a systematic analysis of a different kind of bridge laws -- associative laws -- which play a central, albeit overlooked role in scientific practice.

Research paper thumbnail of There and Up Again: On the Uses and Misuses of Neuroimaging in Psychology

The aim of this article is to discuss the conditions under which functional neuroimaging can contribute to the study of higher cognition. We begin by presenting two case studies -- on moral and economic decision-making -- which will help us identify and examine one of the main ways in which neuroimaging can help advance the study of higher cognition. We agree with critics that fMRI studies seldom "refine" or "confirm" particular psychological hypotheses, or even provide details of the neural implementation of cognitive functions. However, we suggest that neuroimaging can support psychology in a different way, namely, by selecting among competing hypotheses of the cognitive mechanisms underlying some mental function. One of the main ways in which neuroimaging can be used for hypothesis selection is via reverse inferences, which we here examine in detail. Despite frequent claims to the contrary, we argue that successful reverse inferences do not assume any strong or objectionable form of reductionism or functional locationism. Moreover, our discussion illustrates that reverse inferences can be successful at early stages of psychological theorizing, when models of the cognitive mechanisms are only partially developed.