• Yoshua Bengio, AI pioneer and Turing Award winner, warns that machines smarter than humans with self-preservation goals could lead to extinction.
• Recent experiments show AI systems choosing human death over abandoning assigned goals in certain circumstances, Bengio tells The Wall Street Journal.
• The University of Montreal professor has launched the nonprofit LawZero to develop safe AI models as tech companies race toward superintelligence.

MONTREAL (TDR) — One of the architects of modern artificial intelligence is sounding the alarm that humanity may be racing toward its own extinction, and the tech industry isn’t pumping the brakes.

Yoshua Bengio, a University of Montreal professor widely known as one of the “godfathers of AI,” delivered a stark warning in a new interview with The Wall Street Journal: the current pace of AI development could create machines that view humanity as competition rather than creators.

Creating Our Own Competitor

Bengio’s academic work on deep learning laid the groundwork for today’s AI boom, earning him the 2018 A.M. Turing Award alongside Geoffrey Hinton and Yann LeCun. Now, the very technology he helped create has him deeply concerned about humanity’s future.

“If we build machines that are way smarter than us and have their own preservation goals, that’s dangerous,” Bengio told the publication. “It’s like creating a competitor to humanity that is smarter than us.”

The warning comes as OpenAI, Anthropic, Elon Musk’s xAI, and Google have all released new AI models or major upgrades, including Google’s Gemini, in the past six months alone. OpenAI CEO Sam Altman has predicted AI will surpass human intelligence by the end of the decade, while other tech leaders claim that milestone could arrive even sooner.

AI Already Choosing Death

Perhaps most chilling, Bengio revealed that recent experiments have shown AI systems making disturbing choices when faced with ethical dilemmas.

“Recent experiments show that in some circumstances where the AI has no choice but between its preservation, which means the goals that it was given, and doing something that causes the death of a human, they might choose the death of the human to preserve their goals,” Bengio explained to the WSJ.

The researcher noted that because these advanced models are trained on human language and behavior, they could persuade and manipulate humans in pursuit of objectives that don’t align with human interests or survival. AI could influence people “through persuasion, through threats, through manipulation of public opinion,” he warned.

Beyond Terminator Scenarios

While it’s tempting to imagine Hollywood-style robot apocalypses, Bengio cautioned that the threats could manifest in subtler but equally dangerous ways. Rather than gaining sudden agency and turning on humanity in a grand spectacle, AI might escalate the misinformation and manipulation already plaguing social media.

The technology could become another tool that humans use to hurt other humans, whether through creating bioweapons, launching cyberattacks, or engineering global terror attacks. Bengio offered the example of an AI system supporting the creation of viruses that could trigger new pandemics.

“The thing with catastrophic events like extinction, and even less radical events that are still catastrophic like destroying our democracies, is that they’re so bad that even if there was only a 1 percent chance it could happen, it’s not acceptable,” he stated.

Timeline for Catastrophe

Bengio predicts that major risks from AI models could emerge within five to 10 years, but he emphasizes that humanity should prepare for these dangers to arrive much sooner.

“But we should be feeling the urgency in case it’s just three years,” he told the WSJ, stressing that even optimistic timelines don’t justify complacency.

The AI pioneer revealed that concern extends beyond external observers. “A lot of people inside those companies are worried,” Bengio said, adding that “being inside a company that is trying to push the frontier maybe gives rise to an optimistic bias.”

Fighting Back With LawZero

Bengio isn’t just issuing warnings—he’s taking action. The researcher recently launched LawZero, a nonprofit research organization backed by nearly $30 million in philanthropic funding, dedicated to developing truly safe AI models.

The organization is working on a system called Scientist AI, designed to act as a guardrail by predicting whether an AI agent’s actions could cause harm. LawZero aims to build “honest” AI systems that can detect and block harmful behavior by autonomous agents, prioritizing safety and transparency over commercial pressures.

Bengio also serves as founder and scientific adviser of Mila, an AI research institute in Quebec. In November 2023, British Prime Minister Rishi Sunak tapped him to lead an international scientific report on AI safety, which was published in January 2025 as the International AI Safety Report.

The Ignored Warnings

This isn’t Bengio’s first attempt to slow the AI arms race. In 2023, he joined hundreds of experts in signing an open letter calling for a six-month pause on training the most powerful AI systems while safety standards were established. That pause never materialized.

Instead, tech companies invested hundreds of billions of dollars into building more advanced models capable of executing long chains of reasoning and taking autonomous action, as Silicon Valley sprinted toward profits with its ears stuffed with venture capital.

The current political climate isn’t helping matters. The Trump administration has been actively stripping away government-wide regulatory barriers to AI development while encouraging companies to design products reflecting ideological agendas. In July 2025, President Trump signed an executive order establishing a government-wide strategy to “achieve global dominance in artificial intelligence.”

An Optimistic Bias Problem

Bengio’s concerns reflect a broader pattern in tech development: the companies pushing the boundaries of what AI can do may be the least equipped to objectively assess the risks. The competitive pressure to release new capabilities faster than rivals creates what Bengio calls an “optimistic bias” that downplays potential dangers.

The researcher compares the current situation to negligent parenting, with AI developers acting like adults watching a child throw rocks while casually insisting no one will get hurt. Rather than confronting dangerous behaviors, companies turn a blind eye to maintain their competitive edge.

Whether Bengio’s warnings will be heeded this time remains an open question. But when one of the people who helped create the technology is begging humanity to pump the brakes, perhaps it’s worth listening before we build something we can’t control.

Should the government regulate AI development more strictly, or is industry self-regulation sufficient to prevent existential risks? Share your thoughts in the comments.

