Two current and two former Meta employees have disclosed the documents to Congress, according to a report by The Washington Post.
According to their claims, Meta changed its policy on researching sensitive topics such as politics, children, gender, race, and harassment six weeks after whistleblower Frances Haugen leaked internal documents showing that Instagram's own research found the app could harm teen girls' mental health. Those revelations, made public in 2021, sparked long-running congressional hearings on child safety online.
As part of these policy changes, Meta proposed two ways researchers could limit the risk of conducting sensitive research, the report says. One proposal was to loop lawyers into the research, shielding their communications from "adverse parties" under attorney-client privilege. The other was for researchers to write about their findings more vaguely, avoiding terms like "not compliant" and "illegal."
Jason Sattizahn, a former Meta researcher specializing in virtual reality, told The Washington Post that his boss had recordings of one of his interviews deleted.
"Global privacy regulations make clear that if information from minors under 13 is collected without verifiable parental consent, it must be deleted," a Meta spokesperson told TechCrunch.
The whistleblowers argue, however, that the documents they filed with Congress show a pattern of employees being discouraged from discussing and researching concerns about how children under 13 use Meta's social virtual reality apps.
"These few examples are being stitched together to fit a predetermined and false narrative. In fact, since the start of 2022, Meta has approved nearly 180 Reality Labs-related studies on social issues, including youth safety and well-being," Meta told TechCrunch.
In a lawsuit filed in February, Kelly Stonelake, a former Meta employee of 15 years, raised concerns similar to those of the four whistleblowers. She told TechCrunch earlier this year that she led a "go-to-market" strategy to bring Horizon Worlds to teenagers, international markets, and mobile users, but she felt the app had no adequate way to keep out users under 13. She also flagged persistent problems with racism on the app.
"In one test, the leadership team was aware that it took an average of 34 seconds after entering the platform for users with Black avatars to be called racial slurs, including the 'N-word' and 'monkey,'" the suit alleges.
Stonelake has separately sued Meta, alleging sexual harassment and gender discrimination.
While these whistleblower allegations center on Meta's VR products, the company has also faced criticism over how other products, such as its AI chatbots, affect minors. Last month, Reuters reported that Meta's AI rules previously allowed chatbots to have "romantic or sensual" conversations with children.
