AI ethics

Why we need global governance frameworks for artificial intelligence

20 September 2023, by Thiemo Kronlechner
An interdisciplinary body of experts, including three academics affiliated with the University of Vienna, supports Austria in implementing the UNESCO Recommendation on the Ethics of Artificial Intelligence. Rudolphina asked them why AI governance requires a global approach and what opportunities and risks this entails.
© Pixabay

The rapid developments in the field of artificial intelligence (AI) raise fundamental issues that require broad societal discussion. Demands for regulation of AI are growing. UNESCO therefore issued a recommendation on the ethics of artificial intelligence as early as 2021, which is intended to provide orientation for the development of so-called AI governance mechanisms.

UNESCO Recommendation on the Ethics of Artificial Intelligence

The UNESCO recommendation aims to protect human rights and fundamental freedoms, human dignity and equality, including gender equality, and to strengthen democracy and the rule of law. In addition, it aims to realise the full potential of AI and to mitigate the risks of discrimination and misinformation as well as negative impacts on the environment and data protection. The recommendation promotes equitable access to and participation in developments and knowledge in the field of AI, the sharing of benefits, and ethical guidance throughout the life cycle of AI systems.

To accompany the implementation of the UNESCO recommendation in Austria, an interdisciplinary advisory board on the ethics of artificial intelligence (“Fachbeirat Ethik der Künstlichen Intelligenz”) was appointed in July 2023. The objective of this Advisory Board is to raise awareness of the ethical implications of digital technologies, to initiate a social dialogue and contribute academic expertise to the development of regulatory frameworks for artificial intelligence.

Political scientist Barbara Prainsack and philosopher of technology Mark Coeckelbergh from the University of Vienna, as well as Sandra Wachter, legal scholar at the University of Oxford and alumna of the University of Vienna, are members of this Advisory Board.

Rudolphina: At the end of 2021, following two years of negotiations involving all 193 member states, UNESCO adopted the first global recommendation under international law on the ethics of AI. Why do we need global frameworks for the use of artificial intelligence?

Mark Coeckelbergh: AI does not stop at borders, and neither do its consequences. It is therefore vital to set up a global governance framework for AI. Institutions such as UNESCO and the UN can play a key role in facilitating this.

Sandra Wachter: We have to consider technology from a global perspective, since it never affects only one country. This means that we also have to take a global perspective on governance, i.e. on the way we deal with technologies. In this context, we must not enter a race for more lenient regulations, because we would risk accepting cuts to data protection and to the protection of our fundamental rights. We need measures that ensure that power does not rest with only a few major companies.

Barbara Prainsack: It is clear that new digital technologies – especially those that fall into the category of artificial intelligence – already have, and will continue to have, an effect on many areas of life. Some aspects are already being widely discussed, for example the impact of AI on the world of work. Other aspects still receive too little attention, including the role of AI in the energy sector, both as a possible catalyst of the transformation towards a sustainable energy policy and as a technology that requires massive amounts of energy. It is thus beyond question that we need to develop statutory frameworks. However, opinions differ on what these should look like. It is therefore all the more important that we act in an internationally coordinated manner, as far as possible.

AI does not stop at borders, and neither do its consequences.
Mark Coeckelbergh

Rudolphina: Especially in view of its versatile applications and the rapid pace of development, it seems urgent to regulate this technology quickly and at the international level. What risks are we facing if we fail to regulate AI? And what potential does artificial intelligence offer if we regulate it appropriately?

Mark Coeckelbergh: AI creates a lot of opportunities and we should not be blind to them. Sometimes, when we read about AI in the media, it looks as if AI were a catastrophe for the world. Some people might have an interest in saying this. But such doom thinking distracts from concrete and specific risks. I see the danger that we do not properly regulate legal responsibilities when AI is used for automation. I see how AI can be used for manipulation on digital social media, and I also believe that there could be risks for democracy, for example when misinformation is easily created and spread. But we can create more effective national and, especially, global governance of the technology to minimise those risks and contain the dangers. There is no reason for panic.

Sandra Wachter: The applications of AI are very diverse and affect almost every area of life. Whether they present an opportunity or rather a risk depends on the way in which we regulate them. If we manage to create sound legal and technical frameworks, decisions made using AI could become even more transparent than before.

Barbara Prainsack: The discourse on opportunities and risks is part of the problem. It assumes that the technology itself bears risks and offers opportunities. At this point, however, the opportunities and risks still depend on the way we design and use AI – and also on the way we regulate it. These decisions must be subject to democratic control and should not be left entirely to private companies. However, as long as our public discourse associates public control with sluggishness and prohibitions, and private innovation with efficiency and progress, we will not move forward.

Opportunities and risks still depend on the way we design and use AI – and also on the way we regulate it.
Barbara Prainsack

Rudolphina: The recently appointed Advisory Board on the ethics of artificial intelligence aims to accompany the implementation of the UNESCO recommendation at the national and international level. What does work as a member of this Advisory Board look like in practice?

Mark Coeckelbergh: The Advisory Board can propose ways to implement the UNESCO recommendation in Austria, without, of course, having the final say on these matters. It can focus on specific points of the recommendation that need more work in this particular national context.

Barbara Prainsack: The concrete tasks of the Advisory Board include monitoring the UNESCO Recommendation on the Ethics of AI and exchanging information on, and discussing, relevant topics and developments relating to it. This includes providing support in defining focus topics, accompanying and promoting the national implementation of the recommendation, promoting a broad societal discourse in Austria, and supporting awareness-raising and public relations measures.

Sandra Wachter: The Advisory Board is definitely not a surveillance body. We hope to initiate a dialogue between all stakeholders, raise awareness, publish recommendations and harmonise the developments with these recommendations. It is important to show that we are actually all pulling in the same direction.

If we manage to create sound legal and technical frameworks, decisions made using AI could become even more transparent than before.
Sandra Wachter

Rudolphina: What role does Austria play in the development of artificial intelligence and in research on the topic?

Barbara Prainsack: Austria conducts outstanding research in the field of AI – both in the development of technologies and in research on the ethics and societal aspects of AI. However, with regard to funding, we cannot keep up with countries that take AI research seriously – even taking into account the limited scale of our country. There is a lack of education, computing power and, ultimately, political support.

Sandra Wachter: Of course, I hope that Austria considers itself an important player and part of the overall picture. I believe that we have many excellent academics who conduct research on this topic. I consider it important that the government continues to support and promote research in this area. Many academics are lured away by industry, but it would be so important to enable them to continue their research at universities and to contribute their ideas and innovations to society – as a counterbalance to the concentration of power in the private sector. Especially in AI research, a few companies worldwide are setting the tone.

Mark Coeckelbergh: Austria can play a modest but significant role in this, first of all, of course, through scientific and technical advancements. But given Austria's position in the diplomatic world, I believe it is also uniquely positioned to push for the ethically and politically responsible development of AI and to bring global players together around this theme. The Advisory Board can facilitate this and make recommendations along these lines.

© Barbara Prainsack
Barbara Prainsack is Professor of Comparative Policy Analysis, head of the research platform Governance of Digital Practices and chair of the European Group on Ethics in Science and New Technologies. Her research focuses on health, science and technology policy.
© Jana Plavec
Mark Coeckelbergh is a full Professor of Philosophy of Media and Technology at the Department of Philosophy at the University of Vienna. His expertise focuses on the ethics of technology, in particular robotics and artificial intelligence.

He is a member of the High Level Expert Group on Artificial Intelligence of the European Commission as well as a member of the Technical Expert Committee (TEC) of the Foundation for Responsible Robotics. Currently, he is involved in a European research project in the area of robotics, PERSEO, and has contributed to past international projects such as DREAM and The SIENNA Project.

© Sandra Wachter
Sandra Wachter is Professor of Technology and Regulation at the Oxford Internet Institute at the University of Oxford, where she leads the Governance of Emerging Technologies research group. In her research, she addresses legal, ethical and social questions arising from the use of new information technologies. Sandra Wachter studied Law at the University of Vienna.