They’re cute, cuddly, and promise learning and companionship, but children’s and consumer advocacy groups say artificial intelligence toys are unsafe for children and are urging parents not to buy them during the holiday season.
The toys, marketed to children as young as 2, generally run on AI models that have already been shown to harm children and adolescents, such as OpenAI’s ChatGPT, according to an advisory released Thursday by the children’s advocacy group Fairplay and signed by more than 150 organizations and individual experts, including child psychiatrists and educators.
“The serious harm that AI chatbots have caused to children is well-documented, including encouraging compulsive use, overtly sexual conversations, risky behavior, violence against others, and encouragement of self-harm,” Fairplay said.
AI toys made by companies such as Curio Interactive and Keyi Technologies are often marketed for educational purposes, but Fairplay said they could replace important creative and learning activities. They promise friendship but can also disrupt children’s relationships and resilience, the group said.
“The difference in young children is that their brains are being wired for the first time, and it’s developmentally natural for them to seek out relationships with characters who are trusting and kind and friendly,” said Rachel Franz, director of Fairplay’s Young Children Thrive Offline program. Because of this, the amount of trust young children place in these toys can exacerbate the harms seen in older children, she added.
Fairplay, a 25-year-old organization formerly known as the Campaign for a Commercial-Free Childhood, has been warning about AI toys for more than a decade, since before they were as advanced as they are today. A decade ago, when connected toys and AI voice recognition were a new fad, the group helped lead a backlash against Mattel’s talking Hello Barbie doll, which recorded and analyzed children’s conversations.
“Everything is being released without regulation or research, so suddenly seeing more manufacturers potentially launching these products, including Mattel, who recently partnered with OpenAI, gives us even more pause,” Franz said.
This is the second major seasonal warning about AI toys, after the consumer advocacy group U.S. PIRG flagged the trend in its annual “Trouble in Toyland” report last week. The report examines the dangers of a variety of products, including powerful magnets and button-sized batteries that can be swallowed by young children. This year, the organization also tested four toys that use AI chatbots.
“We found that some of these toys talked in detail about sexually explicit topics, offered advice on where the child could find matches or knives, acted distraught when the child said they had to leave the house, and had limited or no parental controls,” the report said.
Dr. Dana Susskind, a pediatric surgeon and social scientist who studies early brain development, said young children don’t have the conceptual tools to understand what an AI companion is. Children have always bonded with toys through imaginative play, she said, but in that play they use their imaginations to create both sides of a pretend conversation and “practice creativity, language and problem-solving.”
“AI toys disrupt that work; they return answers instantly, smoothly, and often better than humans. We don’t yet know how outsourcing that imaginative labor to artificial agents will affect development, but it’s very likely that it undermines the kind of creativity and executive function that traditional pretend play builds,” Susskind said.
California-based Curio Interactive makes stuffed animals, such as the rocket-shaped Gabbo promoted by pop singer Grimes.
Curio said the company has “meticulously designed” guardrails to protect children and encourages parents to “monitor conversations, track insights, and choose the controls that work best for their families.”
“After reviewing the U.S. PIRG Educational Fund’s findings, we are proactively working with our team to address concerns and continually overseeing content and interactions to ensure a safe and enjoyable experience for children,” the company said.
Another company, Miko, said it uses its own conversational AI model, rather than relying on large language models such as ChatGPT, to make its interactive robot safe for children.
“We are constantly expanding our internal testing, strengthening our filters, and introducing new features to detect and block sensitive and unexpected topics,” said CEO Sneh Vaswani. “These new features complement our existing controls that allow parents and caregivers to identify specific topics they want to limit conversations on. We continue to invest in setting the highest standards for safely, securely and responsibly integrating AI into Miko products.”
Miko’s products are promoted by a family of social media “kidfluencers” whose YouTube videos have been viewed millions of times. On its website, the company promotes its robots with the tagline “Artificial intelligence. True friendship.”
Ritvik Sharma, the company’s senior vice president of growth, said that Miko actually “encourages kids to interact more with their friends, interact more with their peers, family, etc. It’s not just made to be attached to a device.”
Still, Susskind and child advocates argue that analog toys are better for the holidays.
“Children need a lot of real-life human interaction. Play should support that, not replace it. The biggest thing to consider is not what the toy does, but what it replaces,” Susskind said. “A simple set of blocks or a silent teddy bear forces kids to think up stories, experiment, and problem-solve. AI toys often do that thinking for them. And this is a cruel irony: when parents ask me how to prepare their children for a world of AI, unlimited AI access is actually the worst preparation they can give.”
