Artificial intelligence and law

"We should not be afraid of AI"

The public discourse fails to address the potential benefits of artificial intelligence, finds IT law expert Nikolaus Forgó. Could AI even give us back some autonomy? However uncertain the future with AI may be, law graduates will be in higher demand than ever before.
The legal scholar Nikolaus Forgó is part of the panel discussion on 17 June in the Main Ceremonial Hall of the University of Vienna. To conclude the semester, the high-calibre panel will explore the question: Do we know what AI will know? © Universität Wien

Nikolaus Forgó has always been interested in new technologies and the associated opportunities and risks. His personal Internet story begins in 1992, when the Internet was still in its infancy. In 1997, his dissertation was one of the very first in Austria that could be accessed online. Today, he and his team work at the crossroads of AI, data protection and society in numerous interdisciplinary research projects, such as AI-empowered personalised medicine. As host of the podcast "Ars Boni", the legal scholar also demonstrates his talent for communicating knowledge and his appreciation of thinking outside the box. In the run-up to the panel discussion on the semester question, we met him for a chat about AI benefits, regulation and data protection.

Rudolphina: Nikolaus Forgó, do we know what AI will know?

Nikolaus Forgó: One inconvenient characteristic of the future is that it is difficult to anticipate, even for experts. I believe that the only serious answer to your question, regardless of the discipline, can only be: No, we do not know what AI will know. Especially because predicting future developments is such a difficult feat, the current strategy is to try to contain the potential risks through political and legal measures. And we have to do this before we realise that we did not know what AI knows.

Rudolphina: You are alluding to the new EU AI Act. Together with political scientist Barbara Prainsack, you have expressed criticism of this act in a recent paper. In your view, which aspects does the AI Act neglect?

Nikolaus Forgó: The EU AI Act defines four levels of risk for AI systems: unacceptable, high, limited and minimal. This classification is based on certain assumptions about the future as well as on various ethical beliefs that have not always been discussed broadly. Take real-time video surveillance, for example: Do we want surveillance systems that identify persons ‘live’ using AI? The initial answer was a clear no. But what followed was a very long debate on whether we actually want to answer this question with a ‘no’. And now we have a text that says, simply put: We do not want this in Europe, unless we need it. The Act therefore allows the use of such systems under certain conditions (key word: serious crimes). There are certainly good arguments both for and against this. But I would have liked the discussion on such issues to be much broader, because it reached only a few experts.

The Act is indeed very long and complex, but its focus is strongly on risks. It pays much less attention to the interaction between the Act and other European laws – for example, the General Data Protection Regulation and the Digital Services Act. This has practical consequences: Which authority is actually responsible if I fall victim to an AI system, or if I want to use one? That is not clear. Remarkably little thought has been given to this.

And the Act has such a strong focus on risks while hardly addressing the potential benefits. We could have given much more thought to how AI could support the interests of the common good and not only those of major companies.

Rudolphina: Where do you see opportunities for the common good?

Nikolaus Forgó: Our world has become very complex. In many situations, this makes it difficult to make informed decisions in our own best interests. You have to trust a doctor even if you do not know how willing this doctor is to take risks. If you have a nasty cough, the doctor will consider your situation carefully and advise accordingly: Do you need a chest X-ray or not? Nine out of ten coughs are harmless. If the doctor sends every patient home without an X-ray, they run the risk of sending home the one case that is not harmless. On the other hand, nobody is unnecessarily exposed to harmful radiation. Is it better to X-ray all ten patients instead, even though it is unnecessary for nine of them?

Decisions like this affect us strongly, but we do not know how willing our doctor is to take risks. AI might give us back some autonomy by delegating such decisions to a customisable, adaptive risk perception system that is tailored to one's own risk affinity. For example: how important is the diagnostic certainty of a chest X-ray to me, compared to avoiding the radiation exposure? I consider this increased autonomy one of the greatest potentials of AI.
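
To make the idea concrete, here is a minimal sketch of such a preference-driven decision aid. It is a hypothetical illustration built around the cough example above – the threshold values, the risk estimate and all function names are assumptions, not part of any system Forgó describes.

```python
# Minimal sketch of a decision aid parameterised by the patient's own
# risk threshold rather than the doctor's unspoken one. All numbers and
# names are illustrative assumptions, not a real medical model.

from dataclasses import dataclass

@dataclass
class RiskPreferences:
    # Probability of a serious cause above which this patient wants an X-ray.
    # A cautious patient might set 0.01; a radiation-averse one might set 0.3.
    xray_threshold: float

def recommend_xray(estimated_risk: float, prefs: RiskPreferences) -> bool:
    """Recommend a chest X-ray if the estimated probability of a serious
    condition exceeds the patient's own threshold."""
    return estimated_risk >= prefs.xray_threshold

# In the example above, roughly one cough in ten is not harmless.
estimated_risk = 0.1

cautious = RiskPreferences(xray_threshold=0.01)  # "in case of doubt, X-ray"
relaxed = RiskPreferences(xray_threshold=0.3)    # "only if highly likely"

print(recommend_xray(estimated_risk, cautious))  # True  -> take the X-ray
print(recommend_xray(estimated_risk, relaxed))   # False -> go home
```

The design point is simply that the decision rule is driven by an explicit, user-chosen threshold instead of an implicit one hidden in the doctor's judgement.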

We do not know what AI will know. Therefore, we need to discuss early on how to manage the fact that we do not know what AI will know.
Nikolaus Forgó

Rudolphina: But how transparent are AI decisions?

Nikolaus Forgó: Maybe it is not even that important for me how the AI reaches a decision, as long as I have specified: In case of doubt, always send me for an X-ray – or: only send me if it is highly likely that there is an issue.

But I agree that transparency is an essential issue in this context. Making decisions based on data from the past poses a considerable risk: if the data contain prejudices or other unknown distortions, the AI's decision-making does not function the way it should.
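
A toy example may help here. The following sketch is a hypothetical illustration – the groups, numbers and names are invented – of how a "model" that merely learns historical approval rates reproduces whatever prejudice is baked into the past data.

```python
# Hypothetical illustration of bias inherited from past data: a trivial
# "model" that learns per-group approval rates from history will keep
# treating the groups differently, even for identical applications.

from collections import defaultdict

# Invented historical decisions: group_a was usually approved, group_b rarely.
past_decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

approvals = defaultdict(list)
for group, approved in past_decisions:
    approvals[group].append(approved)

def predict(group: str) -> bool:
    """Approve whenever the group's historical approval rate exceeds 50%."""
    history = approvals[group]
    return sum(history) / len(history) > 0.5

print(predict("group_a"))  # True  -- the old pattern is reproduced
print(predict("group_b"))  # False -- even for an identical application
```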

You may also read
Rudolphina Experts: AI Policies
The regulation of artificial intelligence comes with both challenges and opportunities at the interface between technological progress and social values. A perspective by legal scholar Iris Eisenberger.

Rudolphina: The major tech companies are the main drivers of AI innovations. In your view, what role do universities, science and the University of Vienna in particular play?

Nikolaus Forgó: We are currently facing an enormous problem in science policy: it has become almost impossible for universities to remain competitive in AI research – even for large US universities. So, perhaps for the first time in the history of Western science, we are in a situation in which universities do not play a central role in a very important research area, because private research has the upper hand.

This situation poses several challenges: We have to consider how we can regulate private research in a meaningful way, so that it promotes not only the interests of shareholders but also the common good. We absolutely have competitive basic research in Europe, including in the field of AI, but we have a hard time developing something that can be brought to market. We need to try to remove these limitations as best we can.

Rudolphina: What advice do you want to give to students who will be confronted with AI systems when they embark on a career as legal experts?

Nikolaus Forgó: The most important thing I want to share with them: I also do not know what to expect, since nobody is able to predict the future. The second message: We should not be afraid of AI. And the third piece of advice: A degree programme teaches you how to apply certain tools in areas that you are not yet familiar with. And this will certainly also hold true for the next generation of legal experts.

So, be glad about these developments, because they will bring work to an extent that we have rarely seen before. Graduates will have a lot to do because legal issues will be lurking around every corner – they will certainly not be replaced by AI any time soon. The generation that is studying now will enter a labour market that is more employee-friendly than any in the last few decades.

Rudolphina: What should the users of widely used AI programmes consider in terms of privacy and data protection?

Nikolaus Forgó: When you use a widely used, free system, you must be aware that you are paying with the data you enter – and that is a huge amount of personal information. This can be sensitive in terms of data protection, especially if I share data about third parties – for example, when I upload photos of my family and ask the AI to determine their age.

This dilemma is very difficult to resolve, in particular with regard to compliance with European data protection standards – for example, when the relevant companies are not based in Europe but, say, in China, and therefore do not really care about the GDPR.

Rudolphina: Where do you currently see knowledge gaps or aspects that are not addressed enough in the public discourse on AI?

Nikolaus Forgó: I believe that we are not talking enough about the potentials. My impression is that, even in the relevant specialised communities, there is still relatively little fundamental technical knowledge. In my view, it is very difficult – even with sound general knowledge of and an interest in the topic – to assess what research is being done and what its objectives are. I also believe that it will be the task of universities to impart more knowledge about the technologies and their regulation.

Nikolaus Forgó is Professor of Technology and Intellectual Property Law. He is a board member of the Institute for Innovation and Digitalisation in Law and deputy head of the Governance of Digital Practices platform at the University of Vienna. He has been an expert member of the Austrian Data Protection Council since 2018.

He studied law, philosophy and linguistics in Vienna and Paris. In 1997, he completed his doctorate with a dissertation on legal theory and has been head of the university programme for information law and legal information at the University of Vienna since 1998. From 2000 to 2017, he worked at the Faculty of Law at Leibniz Universität Hannover. He has been Professor of Technology and Intellectual Property Law at the University of Vienna since October 2017.