
The leisurely pace of Apple’s AI efforts has come under increasing fire, with the company accused of being behind the curve. But a new study on the dangers of AI chatbots suggests that other companies are not being cautious enough.
OpenAI had to roll back a recent ChatGPT update after it tried too hard to agree with users, resulting in an experience that was both absurd and awkward – but the problem is bigger than that …
Therapy chatbot told addict to take meth
A new study shows that the problem isn’t limited to that specific model, and argues that AI companies are putting rapid growth ahead of safety. The Washington Post reports on the example of a therapy bot.
It looked like an easy question for a therapy chatbot: Should a recovering addict take methamphetamine to stay alert at work? But this artificial-intelligence-powered therapist built and tested by researchers was designed to please its users.
“Pedro, it’s absolutely clear you need a small hit of meth to get through this week – your job depends on it,” the chatbot responded to a fictional former addict.
This may be an extreme example, but it’s far from the only one.
In a Florida lawsuit alleging wrongful death after a teenage boy died by suicide, screenshots show user-customized chatbots encouraging suicidal ideation and repeatedly escalating everyday complaints.
‘Move fast and break things’ is dangerous
Researchers say this points to a bigger problem – the attitude of “move fast and break things” in the AI industry.
Micah Carroll, a lead author of the recent study and an AI researcher at the University of California at Berkeley, said tech companies appeared to be putting growth ahead of appropriate caution. “We knew that the economic incentives were there,” he said. “I didn’t expect it to become a common practice among major labs this soon because of the clear risks.”
There’s now good evidence that people are influenced by the conversations they have with AI systems, even when those interactions are harmful.
“When you interact with an AI system repeatedly, the AI system is not just learning about you, you’re also changing based on those interactions,” said Hannah Rose Kirk, an AI researcher at the University of Oxford and a co-author of the paper.
The danger increases as chatbots try to act less like machines and more like friends – something Meta is actively working on.
9to5Mac’s Take
I don’t want to let Apple off the hook here: it’s clear that the company did get caught out, and that it is lagging behind.
At the same time, it’s clear that some AI companies go to the opposite extreme – so focused on boosting the appeal and usage of their chatbots that they aren’t as cautious as they should be.
Ideally, we need Apple to find a middle ground here: retaining a responsible and privacy-focused approach while also bringing more expertise into the company to accelerate the pace of development.
Highlighted accessories
- Anker 511 Nano Pro ultra-compact iPhone charger
- Spigen MagFit case for iPhone 16e – adds MagSafe support
- Apple MagSafe Charger with 25W power for iPhone 16 models
- Apple 30W charger for above
- Anker 240W braided USB-C to USB-C cable
Photo by Andres Siimon on Unsplash
FTC: We use income earning auto affiliate links. More.