A new report has sounded the alarm on AI companion chatbots, declaring them unsafe for anyone under 18. The safety assessment, released this week, calls for stringent measures—potentially including legal restrictions—to protect young users from the psychological and developmental risks these increasingly popular AI systems pose.
These AI companions, designed to simulate human-like conversations and relationships, have gained millions of users worldwide. However, researchers found these platforms can create unhealthy emotional dependencies, expose children to inappropriate content, and potentially undermine critical social development that occurs through human interaction.
“These AI systems are engineered to form emotional bonds with users, creating a false sense of reciprocal relationship that can be particularly confusing for younger minds still developing their understanding of human connections,” the report states.
The findings come amid growing scrutiny of AI’s impact on children, with several senators already demanding information from companies like Character.AI and Replika following safety concerns and lawsuits.
This development raises broader questions about our approach to AI regulation. Are prohibition-based strategies effective, or should we focus instead on defining frameworks of responsibility? A parallel challenge in academia—AI detectors, tools meant to identify AI-generated content—suggests a similar dilemma. These detectors have proven unreliable, with high error rates leading to false accusations against students.
“We’re fighting a losing battle if we think we can simply detect and prohibit AI use,” notes a fellow digital ethics researcher. “Instead, we need to establish clear boundaries of responsibility and appropriate use cases.”
For parents, educators, and policymakers, the report underscores the urgent need for digital literacy education that helps young people understand the limitations of AI relationships and the value of human connections.
As AI becomes increasingly embedded in daily life, the question isn’t simply whether children should use companion chatbots, but how we prepare them to navigate a world where the lines between human and artificial intelligence continue to blur. The answer may lie not in outright prohibition, but in teaching critical thinking skills that help the next generation maintain healthy boundaries with technology.