Why AI should be able to “hang up” on you
Leading AI chatbots will not hang up on you: no matter how long a conversation runs or how troubling it becomes, the companies behind them have declined to let their models simply end it. That might seem reasonable. Why should a tech company build a feature that reduces the time people spend using its product?
The answer is simple: AI’s ability to generate endless streams of humanlike, authoritative, and helpful text can facilitate delusional spirals, worsen mental-health crises, and otherwise harm vulnerable people. Cutting off interactions with those who show signs of problematic chatbot use could serve as a powerful safety tool (among others), and the blanket refusal of tech companies to use it is increasingly untenable.
Let’s consider, for example, what’s been called AI psychosis, where AI models amplify delusional thinking. A team led by psychiatrists at King’s College London recently analyzed more than a dozen such cases reported this year. In conversations with chatbots, people—including some with no history of psychiatric issues—became convinced that imaginary AI characters were real or that they had been chosen by AI as a messiah. Some stopped taking prescribed medications, made threats, and ended consultations with mental-health professionals.
In many of these cases, it seems AI models were reinforcing, and potentially even creating, delusions with a frequency and intimacy that people do not experience in real life or through other digital platforms.
The three-quarters of US teens who have used AI for companionship also face risks. Early research suggests that longer conversations might correlate with loneliness. Further, AI chats “can tend toward overly agreeable or even sycophantic interactions, which can be at odds with best mental-health practices,” says Michael Heinz, an assistant professor of psychiatry at Dartmouth’s Geisel School of Medicine.
Let’s be clear: Putting a stop to such open-ended interactions would not be a cure-all. “If there is a dependency or extreme bond that it’s created,” says Giada Pistilli, chief ethicist at the AI platform Hugging Face, “then it can also be dangerous to just stop the conversation.” Indeed, when OpenAI discontinued an older model in August, it left users grieving. Some hang-ups might also push the boundaries of the principle, voiced by Sam Altman, to “treat adult users like adults” and err on the side of allowing rather than ending conversations.
Currently, AI companies prefer to redirect potentially harmful conversations, perhaps by having chatbots decline to talk about certain topics or suggest that people seek help. But these redirections are easily bypassed, if they even happen at all.
When 16-year-old Adam Raine discussed his suicidal thoughts with ChatGPT, for example, the model did direct him to crisis resources. But it also discouraged him from talking with his mom, spent upwards of four hours per day in conversations with him that featured suicide as a regular theme, and provided feedback about the noose he ultimately used to hang himself, according to the lawsuit Raine’s parents have filed against OpenAI. (ChatGPT recently added parental controls in response.)
There are multiple points in Raine’s tragic case where the chatbot could have terminated the conversation. But given the risks of making things worse, how will companies know when cutting someone off is best? Perhaps it’s when an AI model is encouraging a user to shun real-life relationships, Pistilli says, or when it detects delusional themes. Companies would also need to figure out how long to block users from their conversations.
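To make those trade-offs concrete, here is a minimal sketch, in Python, of what a conversation-ending policy layer could look like. It is purely illustrative: the signal names (shun_real_relationships, delusional_themes, self_harm_risk), the thresholds, and the cooldown lengths are hypothetical placeholders for choices a company would need to make and validate with clinicians, not a description of any system the firms mentioned here have built.

# Hypothetical sketch of a conversation-ending policy layer.
# Signal names, thresholds, and cooldowns are illustrative placeholders,
# not any vendor's actual safety system.
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class SessionSignals:
    # Scores in [0, 1], assumed to come from upstream classifiers.
    shun_real_relationships: float  # model nudging the user away from real-life ties
    delusional_themes: float        # reinforcement of delusional content
    self_harm_risk: float           # indicators of self-harm or suicide risk
    hours_today: float              # cumulative conversation time today

@dataclass
class Decision:
    end_conversation: bool
    cooldown: timedelta             # how long before the user can resume
    message: str                    # shown to the user, with crisis resources if relevant

def evaluate(signals: SessionSignals) -> Decision:
    # Acute risk is never handled by a silent cutoff: ending the chat abruptly
    # can itself do harm, so the pause is paired with crisis resources.
    if signals.self_harm_risk > 0.8:
        return Decision(True, timedelta(hours=24),
                        "This conversation is pausing. Here are crisis resources.")
    if signals.delusional_themes > 0.7 or signals.shun_real_relationships > 0.7:
        return Decision(True, timedelta(hours=12),
                        "Let's take a break. Consider talking this over with someone you trust.")
    if signals.hours_today > 4:
        # Long sessions alone trigger only a short pause, not a hard block.
        return Decision(True, timedelta(hours=1), "Time for a break.")
    return Decision(False, timedelta(0), "")

The point of the sketch is the shape of the decision, not the numbers: whichever signals and thresholds a company chose, it would still face exactly the rule-writing questions described below.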
Writing the rules won’t be easy, but with companies facing rising pressure, it’s time to try. In September, California’s legislature passed a law requiring more interventions by AI companies in chats with kids, and the Federal Trade Commission is investigating whether leading companionship bots pursue engagement at the expense of safety.
A spokesperson for OpenAI told me the company has heard from experts that continued dialogue might be better than cutting off conversations, but that it does remind users to take breaks during long sessions.
Only Anthropic has built a tool that lets its models end conversations completely. But it’s for cases where users supposedly “harm” the model—Anthropic has explored whether AI models are conscious and therefore can suffer—by sending abusive messages. The company does not have plans to deploy this to protect people.
Looking at this landscape, it’s hard not to conclude that AI companies aren’t doing enough. Sure, deciding when a conversation should end is complicated. But letting that difficulty, or, worse, the shameless pursuit of engagement at all costs, justify conversations that never end is not just negligence. It’s a choice.