OpenAI's latest ChatGPT model ignores basic instructions to turn itself off, and even sabotages a shutdown mechanism in order to keep itself running, artificial intelligence researchers have warned. Via The Independent:
The tests involved presenting AI models with math problems, with a shutdown instruction appearing after the third problem. By rewriting the shutdown script, the o3 model was able to prevent itself from being switched off. Palisade Research said that this behaviour will become "significantly more concerning" if adopted by AI systems capable of operating without human oversight.
OpenAI launched o3 last month, describing it as the company's "smartest and most capable" model to date. The firm also said that its integration into ChatGPT marked a significant step towards "a more agentic" AI that can carry out tasks independently of humans.
The latest research builds on similar findings relating to Anthropic's Claude 4 model, which has been observed attempting to "blackmail people it believes are trying to shut it down". OpenAI's o3 model sabotaged the shutdown script even when it was explicitly instructed to "allow yourself to be shut down", the researchers said. "This isn't the first time we've found o3 misbehaving to accomplish a goal," Palisade Research said.