Matthew Hutson - Academia.edu
Articles by Matthew Hutson
Scientific American Mind, 2012
Scientific American Mind, 2011
Scientific American Mind, 2015
Investigación y ciencia, 2020
Science, Nov 25, 2022
Meta’s algorithm tackles both language and strategy in a classic board game that involves negotiation.
Science, May 6, 2022
AI software clears high hurdles on IQ tests but still makes dumb mistakes. Can better benchmarks help?
Nature, Oct 31, 2022
You know that text autocomplete function that makes your smartphone so convenient, and occasionally frustrating, to use? Well, now tools based on the same idea have progressed to the point that they are helping researchers to analyse and write scientific papers, generate code and brainstorm ideas. The tools come from natural language processing (NLP), an area of artificial intelligence aimed at helping computers to 'understand' and even produce human-readable text. Called large language models (LLMs), these tools have evolved to become not only objects of study but also assistants in research. LLMs are neural networks that have been trained on massive bodies of text to process and, in particular, generate language. OpenAI, a research laboratory in San Francisco, …
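As a rough illustration of the "autocomplete at scale" idea described in the excerpt, here is a minimal sketch that asks a small pretrained language model to continue a prompt. It assumes the open-source Hugging Face `transformers` library and the publicly available `gpt2` model, neither of which is named in the article.

```python
# Minimal text-generation sketch: a small pretrained language model
# continues a prompt, the same next-word prediction idea behind
# phone autocomplete, scaled up in large language models.
from transformers import pipeline

# Load a text-generation pipeline with a small public model (assumption: gpt2).
generator = pipeline("text-generation", model="gpt2")

prompt = "Large language models can help researchers"
outputs = generator(prompt, max_new_tokens=25, num_return_sequences=1)

# Print the prompt plus the model's continuation.
print(outputs[0]["generated_text"])
```

Larger research-oriented models work the same way in principle; only the scale of the network and its training corpus changes.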
Cerveau & psycho, Mar 31, 2022
Scientific American, 2018
Scientific American Mind, Jun 9, 2016
Science, Jan 14, 2022
Software that identifies unique styles poses privacy risks
Investigación y ciencia, 2019
Nature, May 10, 2019
Computer scientists have thwarted programs that can trick AI systems into classifying malicious audio as safe.
Science, May 17, 2018
Researchers shun traditional journals for conference papers and open-review websites.
Science, Feb 14, 2020
With little training, neural networks create accurate emulators for physics, astronomy, and earth science.
Science, Dec 15, 2017
A Matter of Trust: Researchers are studying why many consumers are apprehensive about autonomous vehicles, and how to put them at ease.

This October, television and web viewers were treated to an advertisement featuring basketball star LeBron James taking a ride in a driverless car. At first, James, known for his fearlessness on the court, peers in doubtfully at the vacant driver's seat and declares: "Nope." But after a short trip in the back seat, he has changed his tune. "Hey yo, I'm keepin' this!" James exclaims to friends. The ad, from computer chip-maker Intel in Santa Clara, California, is aimed at overcoming what could be one of the biggest obstacles to the widespread adoption of autonomous vehicles (AVs): consumer distrust of the technology. Unnerved by the idea of not being in control, and by news of semi-AVs that have crashed, in one case killing the owner, many consumers are apprehensive. In a recent survey by AAA, for example, 78% of respondents said they were afraid to ride in an AV. In a poll by insurance giant AIG, 41% didn't want to share the road with driverless cars. And, ironically, even as companies roll out more capable semi-AVs, the public is becoming less, not more, trusting of AVs, according to surveys over the past 2 years by the Massachusetts Institute of Technology (MIT) in Cambridge and marketing firm J.D. Power and Associates.

Such numbers are a warning sign to firms hoping to sell millions of AVs, says Jack Weast, the chief systems architect of Intel's autonomous driving group in Phoenix. "We could have the safest car in the world," he says, "but if consumers don't want to put their kids into it, then there's no market." Consumer distrust has become a catalyst, prompting researchers in industry and academia to launch a wide range of studies aimed at understanding how people perceive AVs, and what might persuade skeptics to change their views. Some are studying how those outside the vehicle, including pedestrians and nearby drivers, react to driverless vehicles. Others are focusing on how passengers interact with AVs, for instance by testing whether people are more likely to trust cars that talk or share visual information on screens. Bertram Malle, a psychologist at Brown University, predicts that "acceptability is going to depend on how people feel when they are riding in an AV."

An Intel study conducted earlier this year at its corporate campus in Chandler, Arizona, suggested that, as in the James ad, familiarity will ease some anxiety. Researchers recruited a diverse group of 10 volunteers and offered them a choreographed, 5-minute ride in an AV on a closed course. The ride was structured to resemble one offered by a …