Leopold Aschenbrenner issues a warning about the CCP exploiting AI: ‘The preservation of the free world against the authoritarian states is on the line.’
A researcher who was fired by OpenAI has predicted that human-like artificial general intelligence (AGI) could be achieved by 2027 and sounded the alarm on the threat of Chinese espionage in the field.
“If and when the CCP [Chinese Communist Party] wakes up to AGI, we should expect extraordinary efforts on the part of the CCP to compete. And I think there’s a pretty clear path for China to be in the game: outbuild the US and steal the algorithms,” Leopold Aschenbrenner wrote.
Mr. Aschenbrenner argued that, without stringent security measures, the CCP would exfiltrate “key AGI breakthroughs” in the next few years. “It will be the national security establishment’s single greatest regret before the decade is out,” he wrote, warning that “the preservation of the free world against the authoritarian states is on the line.”
He advocates more robust security for AI model weights—the numerical values reflecting the strength of connections between artificial neurons—and, in particular, algorithmic secrets, an area where he perceives dire shortcomings in the status quo.
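In practical terms, a model’s weights are just large arrays of numbers that can be written to a single file, which is why a copied checkpoint is equivalent to the trained model itself. The minimal sketch below, using the PyTorch library, illustrates this point; the toy network and file name are hypothetical, for illustration only, and do not come from Mr. Aschenbrenner’s essay.

```python
# Minimal sketch: model weights are numerical arrays serialized to a file.
# The tiny network and the file path are hypothetical, for illustration only.
import torch
import torch.nn as nn

# A toy two-layer network; frontier models hold billions of such parameters.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

# The "weights" are tensors of floating-point values, one per connection.
for name, tensor in model.state_dict().items():
    print(name, tuple(tensor.shape))

# Saving the state dict writes every weight to one file; anyone who copies
# that file can reconstruct the trained model exactly.
torch.save(model.state_dict(), "checkpoint.pt")
restored = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
restored.load_state_dict(torch.load("checkpoint.pt"))
```

Because weights reduce to files like this, securing them is largely a matter of infrastructure and access control; algorithmic secrets, the training recipes and architectural ideas behind those numbers, are the harder problem Mr. Aschenbrenner highlights next.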
“I think failing to protect algorithmic secrets is probably the most likely way in which China is able to stay competitive in the AGI race,” he wrote. “It’s hard to overstate how bad algorithmic secrets security is right now.”
Mr. Aschenbrenner also argues that AGI could give rise to superintelligence in little more than half a decade by automating AI research itself.
Titled “Situational Awareness: The Decade Ahead,” Mr. Aschenbrenner’s series has elicited a range of responses in the tech world. Computer scientist Scott Aaronson described it as “one of the most extraordinary documents I’ve ever read,” while software engineer Grady Booch wrote on X that many elements of it are “profoundly, embarrassingly, staggeringly wrong.”
“It’s well past time that we regulate the field,” Jason Lowe-Green of the Center for AI Policy wrote in an opinion article lauding Mr. Aschenbrenner’s publication.