The future with AI

How to avoid AI Catastrophic Risks

7 May 2024
Canadian computer scientist and Turing Award winner Yoshua Bengio is one of the world's foremost experts in AI and deep learning. On 7 May he will speak at the University of Vienna about potential catastrophic scenarios arising from advanced AI systems and how to prevent them. We had the opportunity to ask him a few questions beforehand.

Rudolphina: When and why did you realize that AI might become an existential threat?

Yoshua Bengio: It gradually dawned on me during the winter of 2022-2023 as I continued digging into the abilities of ChatGPT, and even more so when GPT-4 came out. That forced me to revise my expectations for the timeline to reach human-level AI (aka AGI), which I used to put at many decades or more. I now believe there is a 50% probability that it could come within a few years to a couple of decades. I also realized that we were not ready for this, neither scientifically nor politically. Scientifically, we simply do not know how we could control a smarter-than-human AI if its objectives (which may be set directly by humans or arise as a consequence of human-given objectives) were in conflict with humanity's well-being.

Politically, we have close to zero regulation in place (the EU AI Act, which is ahead of the curve, is still a couple of years away) and no international treaty. Indeed, even if we knew how to build a completely safe AI, how do we make sure everyone on this planet follows the right protocols? AGI will also give huge powers to those who control it, and there is fierce competition between corporations and between countries, which could drive down the importance of protecting the public and humanity's future in a dangerous tragedy of the commons (because safety is a global common good).

Yoshua Bengio's lecture "Obtaining Safety Guarantees to avoid AI Catastrophic Risks", which will take place on Tuesday, 7 May at the University of Vienna as part of the Computer Science Anniversary 2024, will be broadcast live on YouTube and will be available to watch afterwards.

Rudolphina: AI in the hands of a few could lead to a dramatic concentration of power. What must be done at the policy level to avoid that?

Yoshua Bengio: Because future advanced AI will give tremendous power to whoever controls it, it is imperative to put in place very strong multi-stakeholder, democratic and multilateral (international) governance mechanisms so that no individual person, corporation or country can abuse that power at the expense of others.

Rudolphina: To make AI safe in the long term, what are some of the key technological requirements?

Yoshua Bengio: We need to find ways to build "safe-by-design" AI systems, with mathematical guarantees that are as strong as possible. Right now we are nowhere near any kind of quantitative estimation of risk, and the safety protections that AI companies have created have been defeated by hackers and academics shortly after these systems were put out. And there are fundamental scientific reasons why this is a hard problem, maybe even an unsolvable one.

Rudolphina: The future of AI seems to be dictated by tech companies, while regulation lags behind. Is there anything we can do as citizens to have a greater say in that future?

Yoshua Bengio: Yes. Politicians follow the pulse of public opinion. Right now, a vast majority of citizens polled think that AI should be regulated and that governments should make sure we avoid catastrophic outcomes, but they also put a very low priority on this issue compared to the many others they are asked about. The attitude of governments is generally similar, with important differences from one country to the next depending on their exposure to the issues.

The single most important factor that would increase global AI safety is greater awareness of the nature of current AI systems and of the associated risks and benefits. If everyone saw the risks as clearly as I do, I assure you that governments would (a) invest massively in the R&D needed to design safe AI systems and (b) move quickly to set up regulation and improved governance of future advanced AI systems (those which do not exist yet but could become a risk in coming years if current trends in AI capability advances continue).

More about Yoshua Bengio

Yoshua Bengio is a Full Professor at the Department of Computer Science and Operations Research at Université de Montréal, as well as the Founder and Scientific Director of Mila and the Scientific Director of IVADO. He also holds a Canada CIFAR AI Chair. Considered one of the world's leaders in artificial intelligence and deep learning, he is a co-recipient, with Geoff Hinton and Yann LeCun, of the 2018 A.M. Turing Award, often called the "Nobel Prize of computing". He is a Fellow of both the Royal Society of London and the Royal Society of Canada, an Officer of the Order of Canada, a Knight of the Legion of Honor of France and a member of the UN's Scientific Advisory Board for Independent Advice on Breakthroughs in Science and Technology.