The world is ill-prepared for breakthroughs in AI, experts say

The world is ill-prepared for breakthroughs in artificial intelligence, according to a group of top experts, including two “godfathers” of AI. They warn that governments have made insufficient progress in regulating the technology.

A shift by tech companies to autonomous systems could “massively increase” the impact of AI, and governments need safety regimes that trigger regulatory action when products reach certain levels of performance, the group said.

The recommendations come from 25 experts, including Geoffrey Hinton and Yoshua Bengio, two of the three “Godfathers of AI” who won the ACM Turing Award – the computer science equivalent of the Nobel Prize – for their work.

The intervention comes as politicians, experts and technology executives prepare to meet for a two-day summit in Seoul on Tuesday.

The academic paper, titled "Managing Extreme AI Risks Amid Rapid Progress," recommends government safety frameworks that introduce tougher requirements if the technology advances rapidly.

There are also calls for increased funding for newly founded institutions such as the UK and US AI safety institutes, for requirements that tech firms carry out more rigorous risk assessments, and for restrictions on the use of autonomous AI systems in key societal roles.

“Society’s response, despite promising initial steps, does not reflect the possibility of rapid, transformative progress that many experts expect,” says the paper, published on Monday in the journal Science. “AI safety research is lagging behind. Current governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness and barely address autonomous systems.”

At a global AI safety summit held at Bletchley Park in Britain last year, a voluntary testing agreement was reached with tech firms including Google, Microsoft and Mark Zuckerberg’s Meta, while the EU has introduced an AI act and, in the US, a White House executive order has set out new AI safety requirements.

The paper says that advanced AI systems – technologies that perform tasks typically associated with intelligent beings – could help cure diseases and raise living standards, but also risk undermining social stability and enabling automated warfare. It warns, however, that the tech industry’s trend toward developing autonomous systems poses an even greater threat.

“Companies are shifting their focus to developing generalist AI systems that can act autonomously and pursue goals. Gains in capabilities and autonomy could soon massively increase the impact of AI, with risks including major social harm, malicious use and an irreversible loss of human control over autonomous AI systems,” the experts said, adding that unchecked AI progress could lead to the “marginalization or extinction of humanity”.


The next stage of development for commercial AI is “agentic” AI, the name given to systems that act autonomously and can, in theory, carry out and complete tasks such as booking a vacation.

Last week, two tech companies offered a glimpse of that future: OpenAI’s GPT-4o, which can conduct real-time voice conversations, and Google’s Project Astra, which could use a smartphone camera to identify locations, read and explain computer code, and form alliterative sentences.

Other co-authors of the proposals include Yuval Noah Harari, the bestselling author of Sapiens; the late Daniel Kahneman, a Nobel laureate in economics; Sheila McIlraith, a professor of AI at the University of Toronto; and Dawn Song, a professor at the University of California, Berkeley. The paper, published on Monday, is a peer-reviewed update of the initial proposals drawn up before the Bletchley meeting.
