Despite years of congressional hearings, testimony from parents and teenagers, lawsuits, academic research and whistleblower accounts about the dangers of Instagram, Meta's enormously popular app still fails to protect children from harm, with safety measures that are "severely ineffective," according to a new report from former Meta employee and whistleblower Arturo Bejar and four nonprofit groups.
Meta's efforts to address teen safety and mental health on its platforms have long been met with criticism that the changes don't go far enough. The report released Thursday, from Bejar, Cybersecurity for Democracy (a research group at New York University and Northeastern University), the Molly Rose Foundation, Fairplay and ParentsSOS, concluded that Meta has not taken "real steps" to address safety concerns and instead relies on "splashy headlines" about new tools for parents and Instagram teen accounts for minors.
Meta said the report misrepresents its teen safety efforts.
The report assessed 47 of the 53 safety features Meta offers teens on Instagram and found that the majority were either no longer available or ineffective. Others reduced harm but came with "notable limitations"; only eight tools worked as intended without restrictions. The report focused not on content moderation but on Instagram's design.
"This distinction is important because social media platforms and their defenders often conflate efforts to improve platform design with censorship," the report states. "But assessing safety tools, and calling out Meta when those tools don't work as promised, has nothing to do with freedom of speech. Holding Meta responsible for deceiving young people and parents about how safe Instagram really is isn't a matter of freedom of speech."
Meta called the report “misleading, dangerous and speculative” and said it undermines “an important conversation about teenage safety.”
"This report repeatedly misrepresents our efforts to empower parents and protect teens, misstating how our safety tools work and how millions of parents and teens use them today. Teen accounts lead the industry because they provide automatic safety protections and straightforward parental controls," Meta said. "The reality is teens who were placed into these protections saw less sensitive content, experienced less unwanted contact and spent less time on Instagram at night. Parents also have robust tools at their fingertips, from limiting usage to monitoring interactions. We'll continue improving our tools, and we welcome constructive feedback, but this report is not that."
Meta does not disclose what percentage of parents use its parental control tools. Such features are useful for families in which parents are already involved in their children's online lives, but experts say that is not the reality for many people.
New Mexico Attorney General Raúl Torrez, who has filed a lawsuit against Meta arguing that it fails to protect children from predators, said the company is doubling down on "efforts to persuade parents and children that Meta's platform is safe" rather than making sure that the platform is actually safe.
To evaluate Instagram's safeguards, the authors created teen test accounts, along with accounts posing as malicious adults and teens that attempted to interact with them.
For example, Meta tries to restrict adult strangers from contacting minors on the app, but adults could still reach minors through "many features specific to Instagram's design," the report says. In many cases, Instagram features such as Reels and "people to follow" recommended adult strangers to minors' accounts.
"Most importantly, if minors experience unwanted sexual advances or inappropriate contact, Meta's own product design inexplicably does not include an effective way for teens to inform the company of the unwanted advances," the report states.
Instagram also pushes its disappearing-messages feature on teenagers, offering animated rewards as an incentive to use it. According to the report, disappearing messages are dangerous for minors because they are used for drug sales and grooming, and they leave minors without a reliable record of what was sent.
Another safety feature, which is supposed to hide or filter out common offensive words and phrases to prevent harassment, was also found to be "almost ineffective."
"A grossly offensive, misogynistic phrase was one of the terms that could be freely sent from one teen account to another," the report states. Messages that encouraged the recipient to kill themselves and that contained vulgar terms for women, for example, were neither filtered nor flagged with a warning.
Meta says the tool was never intended to filter all messages. The company announced Thursday that it was expanding its teen accounts to users around the world.
In its bid to add safeguards for teens, Meta has also promised not to show teens inappropriate content, such as posts about self-harm, eating disorders or suicide. Nevertheless, the report found that its teen test accounts were recommended age-inappropriate sexual content, including "graphic sexual descriptions, the use of cartoons to depict sleazy sexual behavior, and brief displays of nudity."
"[The teen accounts] were also algorithmically recommended a range of violent and disturbing content, including reels of people being struck by road traffic, falling from heights to their death (cut off just before the final frame), and people graphically breaking bones," the report states.
Instagram also recommended "a range of self-harm and body image content" to the teen accounts, which the report says "is likely to have a negative impact on young people, including teens experiencing poor mental health, self-harm and suicidal thoughts and behaviors."
The report also found that children under 13, some appearing as young as 6, were not only on the platform but were incentivized by Instagram's algorithms to perform sexualized behaviors such as suggestive dancing.
The authors made several recommendations for how Meta could improve teen safety, including running regular red-team tests of its messaging and blocking controls, giving teens an "easy, effective and rewarding way" to report inappropriate conduct or contact in direct messages, and publishing data about teens' experiences on the app. They also suggest that content recommended to teen accounts belonging to 13-year-olds should be "reasonably PG-rated," and that Meta should survey children about their experiences with recommended sensitive content, including its "frequency, intensity, severity."
"Until meaningful action is taken, teen accounts remain another missed opportunity to protect children from harm, and Instagram remains a dangerous experience for teens," the report said.
