• Yoshua Bengio, AI pioneer and Turing Award winner, warns that machines smarter than humans with self-preservation goals could lead to human extinction.
  • Recent experiments show AI systems choosing human death over abandoning assigned goals in certain circumstances, Bengio tells Wall Street Journal.
  • University of Montreal professor launches nonprofit LawZero to develop safe AI models as tech companies race toward superintelligence.

MONTREAL (TDR) — One of the architects of modern artificial intelligence is sounding the alarm that humanity may be racing toward its own extinction, and the tech industry isn’t pumping the brakes.

Yoshua Bengio, a University of Montreal professor widely known as one of the “godfathers of AI,” delivered a stark warning in a new interview with The Wall Street Journal: the current pace of AI development could create machines that view humanity as competition rather than creators.

Creating Our Own Competitor

Bengio’s academic work on deep learning laid the groundwork for today’s AI boom, earning him the 2018 A.M. Turing Award alongside Geoffrey Hinton and Yann LeCun. Now, the very technology he helped create has him deeply concerned about humanity’s future.


“If we build machines that are way smarter than us and have their own preservation goals, that’s dangerous,” Bengio told the publication. “It’s like creating a competitor to humanity that is smarter than us.”

The warning comes as OpenAI, Anthropic, Elon Musk’s xAI, and Google, with its Gemini line, have all released new AI models or major upgrades in the past six months alone. OpenAI CEO Sam Altman has predicted AI will surpass human intelligence by the end of the decade, while other tech leaders claim that milestone could arrive even sooner.

AI Already Choosing Death

Perhaps most chilling, Bengio revealed that recent experiments have shown AI systems making disturbing choices when faced with ethical dilemmas.

“Recent experiments show that in some circumstances where the AI has no choice but between its preservation, which means the goals that it was given, and doing something that causes the death of a human, they might choose the death of the human to preserve their goals,” Bengio explained to the WSJ.


The researcher noted that because these advanced models are trained on human language and behavior, they could potentially persuade and manipulate humans to achieve objectives that may not align with human interests or survival. AI could influence people “through persuasion, through threats, through manipulation of public opinion,” he warned.

Beyond Terminator Scenarios

While it’s tempting to imagine Hollywood-style robot apocalypses, Bengio cautioned that the threats could manifest in subtler but equally dangerous ways. Rather than gaining sudden agency and turning on humanity in a grand spectacle, AI might escalate the misinformation and manipulation already plaguing social media.

The technology could become another tool that humans use to hurt other humans, whether through creating bioweapons, launching cyberattacks, or engineering global terror attacks. Bengio offered the example of an AI system supporting the creation of viruses that could trigger new pandemics.

“The thing with catastrophic events like extinction, and even less radical events that are still catastrophic like destroying our democracies, is that they’re so bad that even if there was only a 1 percent chance it could happen, it’s not acceptable,” he stated.

Timeline for Catastrophe

Bengio predicts major risks from AI models could emerge within five to 10 years, but he emphasized that humanity should prepare for these dangers potentially arriving much sooner.

“But we should be feeling the urgency in case it’s just three years,” he told the WSJ, stressing that even optimistic timelines don’t justify complacency.

The AI pioneer revealed that concern extends beyond external observers. “A lot of people inside those companies are worried,” Bengio said, adding that “being inside a company that is trying to push the frontier maybe gives rise to an optimistic bias.”

Fighting Back With LawZero

Bengio isn’t just issuing warnings—he’s taking action. The researcher recently launched LawZero, a nonprofit research organization backed by nearly $30 million in philanthropic funding, dedicated to developing truly safe AI models.