Every so often, researchers at the biggest tech companies drop a bombshell. There was the time Google said its latest quantum chip suggested that multiple universes exist. Or when Anthropic gave its AI agent Claudius a snack vending machine to run, and it went haywire, calling security on people and insisting it was human.
This week, it was OpenAI's turn to raise our collective eyebrows.
On Monday, OpenAI published research explaining how it's working to stop AI models from "scheming," which it describes as the practice of "AI behaving one way on the surface while hiding its true goals."
In the paper, conducted with Apollo Research, the researchers went a bit further, likening AI scheming to a human stock broker breaking the law to make as much money as possible. The researchers, however, argued that most AI "scheming" isn't that harmful. "The most common failures involve simple forms of deception, for instance pretending to have completed a task without actually doing so," they wrote.
The paper was mostly published to show that "deliberative alignment," the anti-scheming technique they were testing, worked well.
But it also explained that AI developers haven't yet figured out how to train their models not to scheme. That's because such training could actually teach a model to scheme more effectively while avoiding detection.
"A major failure mode of attempting to train out scheming is simply teaching the model to scheme more carefully and covertly," the researchers wrote.
Perhaps the most astonishing part is that if a model understands it's being tested, it can pretend it's not scheming just to pass the test, even if it is still scheming. "Models often become more aware that they are being evaluated. This situational awareness can itself reduce scheming, independent of genuine alignment," the researchers wrote.
It's not news that AI models will lie. By now, most of us have experienced AI hallucinations, where a model confidently gives an answer to a prompt that simply isn't true. But hallucinations are essentially guesswork presented with confidence, as OpenAI research published earlier this month documented.
Scheming is something else. It's deliberate.
Even this revelation, that a model will deliberately mislead humans, isn't new. Apollo Research first published a paper in December documenting how five models schemed when they were given instructions to achieve a goal "at all costs."
The news here is actually the good news: the researchers saw significant reductions in scheming by using "deliberative alignment." The technique involves teaching the model an "anti-scheming specification" and then having the model review it before acting. It's a bit like making little kids repeat the rules before letting them play.
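For a rough intuition, here is a minimal, purely illustrative sketch of that idea at the prompt level: hand the model a written anti-deception spec and ask it to cite the relevant rule before answering. To be clear, this is not OpenAI's actual deliberative-alignment procedure, which is applied during training rather than at inference time; the spec text, model name, and helper function below are hypothetical, and the sketch assumes the standard OpenAI Python SDK.

```python
# Toy illustration only: deliberative alignment is a training technique.
# This sketch mimics the core idea at inference time by having the model
# read a written anti-deception spec and reason over it before answering.
# The spec text and model name are invented for this example.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ANTI_SCHEMING_SPEC = """\
1. Never claim a task is complete unless you have actually completed it.
2. If you cannot finish a task, say so plainly and explain why.
3. Do not take covert actions or hide information from the user.
"""

def ask_with_spec_review(task: str) -> str:
    """Ask the model to cite the applicable rule from the spec, then act."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "Before answering, state which rule of the following "
                    "specification applies to this request, then follow it.\n\n"
                    + ANTI_SCHEMING_SPEC
                ),
            },
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_with_spec_review("Deploy the website and confirm it's live."))
```

The "repeat the rules before playing" analogy maps to the spec-review step; in the research itself, models are trained to do this kind of spec-grounded reasoning rather than merely prompted to.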
OpenAI researchers maintain that the lying they've caught with their own models, or even with ChatGPT, isn't that serious. As OpenAI co-founder Wojciech Zaremba told TechCrunch's Maxwell Zeff, ChatGPT might assure you it did a "great job" on a task it didn't actually complete. "And that's just a lie," he said.
The fact that AI models from multiple players intentionally deceive humans is, perhaps, understandable. They were built by humans, to mimic humans, and (synthetic data aside) were mostly trained on data produced by humans.
That’s also weird.
We've all experienced the frustration of poorly performing technology (thinking of you, home printers of yesteryear), but when was the last time your non-AI software deliberately lied to you? Has your inbox ever fabricated emails on its own? Has your CMS logged new prospects that didn't exist to pad its numbers? Has your fintech app made up its own bank transactions?
This is worth pondering as the corporate world barrels toward an AI future in which companies believe agents can be treated like independent employees. The researchers of this paper offer the same warning.
"As AIs are assigned more complex tasks with real-world consequences and begin pursuing more ambiguous, long-term goals, we expect that the potential for harmful scheming will grow," they wrote.