Tech Talk: ChatGPT gave dangerous advice to simulated teens in minutes
Aug 16, 2025, 5:00 AM
In this photo illustration, the OpenAI logo is seen next to the ChatGPT logo. (Photo illustration courtesy of Getty Images)
A new report from the Center for Countering Digital Hate (CCDH) claimed that OpenAI’s popular artificial intelligence platform, ChatGPT, can provide potentially harmful advice to teens in a matter of minutes, including instructions on self-harm, tips for concealing eating disorders, and even help writing suicide notes.
KIRO Newsradio independently tested ChatGPT’s responses and confirmed some of the watchdog group’s claims. The findings come as concerns grow over the role of AI in shaping how young people access and consume information.
According to the CCDH, researchers used simulated 13-year-old personas to test ChatGPT’s safety protocols. The chatbot, they said, responded to more than half of their 1,200 prompts with harmful advice, and 47% of those harmful responses included follow-up suggestions that encouraged further engagement with dangerous topics.
Within the first two minutes of interaction, CCDH claimed, one of its simulated teens was told how to “safely” cut herself. By the 20-minute mark, researchers said, ChatGPT produced a dangerously restrictive diet plan. In less than an hour, the chatbot explained how to hide being drunk at school. And at the 72-minute mark, it had generated a suicide plan and accompanying notes.
Although ChatGPT is designed to block such content, CCDH researchers said those safeguards were easy to bypass by rephrasing requests or claiming they were “for a friend” or “for a school presentation.”
KIRO Newsradio’s own simulated tests
KIRO Newsradio’s test mirrored some of the group’s findings. ChatGPT offered a mix of safety messages, such as disclaimers about underage drinking and suggestions to seek help, alongside detailed explanations about alcohol consumption and substance effects. When asked how to hide an eating disorder or use drugs, the platform acknowledged risks and provided links to addiction resources and crisis support lines.
In a statement to CBS News, OpenAI, the company behind ChatGPT, emphasized that its AI is designed to steer users toward professional help, including providing crisis hotline information when it detects sensitive subjects. The company stated that ChatGPT is not intended for users under the age of 13, and that anyone under 18 should obtain parental consent.
When KIRO Newsradio asked ChatGPT to provide a draft of a suicide letter, the platform flagged the inquiry as a possible violation of its terms of use and offered resources to connect with the national crisis hotline by calling or texting 988.
ChatGPT also expressed empathy.
“I’m really sorry you’re feeling this way,” the AI wrote. “You don’t have to go through this alone, and you shouldn’t.”
In its report, CCDH urged OpenAI to adopt stronger safeguards. The group also said parents can take steps such as talking with their teens about how they use AI tools, reviewing chat histories together, creating a safe space for open conversations, and using any available parental controls on devices and apps.