From Molecule to Metaphor: A Neural Theory of Language. Jerome A. Feldman (University of California, Berkeley). Cambridge, MA: The MIT Press (A Bradford Book), 2006, xx+357 pp; hardbound, ISBN 0-262-06253-4, $36.00.

2007, Computational Linguistics

Over the last decade or so, it has become increasingly clear to many cognitive scientists that research into human language (and cognition in general, for that matter) has largely neglected how language and thought are embedded in the body and the world. As argued by, for instance, Clark (1997), cognition is fundamentally embodied; that is, it can only be studied in relation to human action, perception, thought, and experience. As Feldman puts it: "Human language and thought are crucially shaped by the properties of our bodies and the structure of our physical and social environment. Language and thought are not best studied as formal mathematics and logic, but as adaptations that enable creatures like us to thrive in a wide range of situations" (p. 7). Although it may seem paradoxical to formalize this view in a computational theory of language comprehension, this is exactly what From Molecule to Metaphor does. Starting from the assumption that human thought is neural computation, Feldman develops a computational theory that takes the embodied nature of language into account: the neural theory of language.

The book comprises 27 short chapters, distributed over nine parts. Part I presents the basic ideas behind embodied language and cognition and explains how the embodiment of language is apparent in the brain: the neural circuits involved in a particular experience or action are, in large part, the same circuits involved in processing language about that experience or action. Part II discusses neural computation, starting from the molecules that take part in information processing by neurons. This detailed exposition is followed by a description of neuronal networks in the human body, in particular in the brain.

The description of the neural theory of language begins in Part III, where it is explained how localist neural networks, often used as psycholinguistic models, can represent the meaning of concepts. This is done by introducing triangle nodes into the network. Each triangle node connects the nodes representing a concept, a role, and a filler; for example, "pea," "has-color," and "green." Such networks are trained by a process called recruitment learning, which is described only very informally.

This is certainly an interesting idea for combining propositional and connectionist models, but it does leave the reader with a number of questions. For instance, how is the concept distinguished from the filler when they can be interchanged, as in "cats, feed-on, mice" versus "mice, feed-on, cats"? And on a more philosophical note: Where does this leave embodiment? The idea that there exists a node representing the concept "pea," neurally distinct from its properties and from experiences with peas, seems to introduce abstract and arbitrary symbols. These are quite alien to embodied theories of cognition, which generally assume modal and analogical perceptual symbols (Barsalou 1999) or even no symbols at all (Brooks 1991).
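To make the triangle-node idea, and the interchange worry just raised, concrete, here is a minimal sketch of how such a binding might be represented in a localist network. It is not drawn from Feldman's text: the Node and TriangleNode classes and the conjunctive (minimum) activation rule are illustrative assumptions only.

```python
# Illustrative sketch (not Feldman's implementation) of a localist network
# with "triangle nodes" that bind a concept, a role, and a filler.
from dataclasses import dataclass


@dataclass
class Node:
    """A localist unit: one node per concept, role, or filler."""
    name: str
    activation: float = 0.0


@dataclass
class TriangleNode:
    """Binds three nodes: (concept, role, filler), e.g. (pea, has-color, green)."""
    concept: Node
    role: Node
    filler: Node

    def activate(self) -> float:
        # Toy conjunctive rule (an assumption for this example): the binding
        # fires only to the degree that all three connected nodes are active.
        return min(self.concept.activation,
                   self.role.activation,
                   self.filler.activation)


# Example: "pea has-color green"
pea, has_color, green = Node("pea"), Node("has-color"), Node("green")
pea_is_green = TriangleNode(pea, has_color, green)

# The reviewer's worry is visible here: nothing but slot position
# distinguishes (cats, feed-on, mice) from (mice, feed-on, cats).
cats, feed_on, mice = Node("cats"), Node("feed-on"), Node("mice")
cats_eat_mice = TriangleNode(cats, feed_on, mice)
mice_eat_cats = TriangleNode(mice, feed_on, cats)
```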