Character.AI to bar minors from its chatbots. The company is restricting minors' access to its chatbots amid growing concerns about the impact artificial intelligence conversations have on children. The company faces multiple lawsuits over child safety, including one from the mother of a teenage boy who claims the company's chatbot pushed her son to commit suicide.
Character Technologies, the Menlo Park, Calif., company behind Character.AI, announced Wednesday that it is removing the ability for users under 18 to participate in open-ended chats with AI characters. The change will take effect by November 25th, and a two-hour-per-day limit will begin immediately. Character.AI added that it is working on new features for kids, including the ability to create videos, stories, and streams using AI characters. The company has also established an AI safety lab.
Character.AI said it would roll out an age-verification feature to help determine which users are under 18 and to prevent children from accessing tools that are not safe for them. However, age checks are imperfect, and many children find ways around them. For example, a facial scan cannot always tell whether a person is 17 or 18 years old. There are also privacy concerns about requiring people to upload their government IDs.
Character.AI is an app that allows users to create customizable characters and interact with characters generated by other users, offering a wide range of experiences from imaginative play to mock job interviews. The company says its artificial personas are designed to "feel alive" and be "human-like."
The app's description on Google Play says, "Imagine talking to a super-intelligent, life-like chat bot character that hears, understands, and remembers your voice," and encourages users to "push the frontiers of what is possible with this innovative technology."
Critics welcomed the move but said it was not enough and should have been done sooner. “Many details still remain unknown,” said Meetali Jain, executive director of the Tech Justice Law Project.
“They do not say how they will operate age verification, how they will ensure that the method is privacy-protective, and they do not address the possible psychological impact of abruptly disabling access to young users, given the emotional dependency that is created,” Jain said. “Furthermore, these changes do not address the fundamental design features that promote emotional dependence not only in children but also in people over the age of 18.”
According to recent research by Common Sense Media, a group that researches and advocates for the wise use of screens and digital media, more than 70% of teens have used an AI companion, and half of them use one regularly.
