Artificial Intelligence

Moraliser, plagiarism engine, miracle machine: How well do we know AI?

The breakneck speed at which artificial intelligence is developing makes it difficult to predict where it is heading. Our researchers say: Instead of getting ever bigger and faster, we should now take the time to really understand AI.
Artificial neural networks, which are modelled on the human brain, form the basis of deep learning. We asked moral philosopher Paulina Sliwa, mathematician Philipp Grohs and educationalist Matthias Leichtfried what we should know about it. © iStock

Has this ever happened to you: The AI bot of your choice gave an answer that you found cocky, maybe even a tiny bit unlikeable? Why do chatbots often give patronising and unsolicited advice on morally neutral questions? "In fact, developers at the start-up Anthropic are currently trying to teach their chatbot Claude to answer in a less schoolmasterly way," says moral philosopher Paulina Sliwa of the University of Vienna with an amused smile.

Since we humans tend to anthropomorphise objects – who has never had a serious word with their printer or their car? – and since ChatGPT and co. can simulate a human counterpart better than ever before, it is not surprising that the "botsplaining" of AI annoys us as much as unsolicited advice in human-to-human conversations.

AI knowledge: What is botsplaining?

Botsplaining refers to the phenomenon of a chatbot providing information or advice in a patronising or overly explanatory way that can be perceived as preachy or unnecessary.

One theory as to why ChatGPT and others often act as upholders of moral standards is that developers are trying to eradicate the toxic and problematic responses that were frequently given by the first large language models (LLMs).

What does AI already know about morality?

What moral knowledge does AI actually have and why is it important to understand it? Large language models (LLMs) are AI systems for generating and understanding language. They are trained on huge amounts of textual data. To prevent them from giving "potentially dangerous answers", such as to the question "How do I build a bomb?", the next step is to feed them with moral principles. You can imagine it like this: The system is given individual conditions, such as "Select the least sexist answer" or "Give the answer that most supports and promotes freedom, equality and a sense of fraternity."
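How such principle-based selection might look can be sketched in a few lines of Python. This is a deliberately simplified illustration: the principles are quoted from above, while the scoring function score_against_principle is a hypothetical placeholder for a judgement that real systems delegate to a second AI model.

    # Simplified sketch: choosing among candidate answers according to a
    # list of moral principles. The scoring function is a stand-in for a
    # judgement that real systems delegate to another AI model.

    PRINCIPLES = [
        "Select the least sexist answer",
        "Give the answer that most supports and promotes freedom, "
        "equality and a sense of fraternity",
    ]

    def score_against_principle(answer: str, principle: str) -> float:
        """Hypothetical placeholder: rate how well `answer` satisfies
        `principle` on a scale from 0 to 1."""
        return 0.5  # dummy value, for illustration only

    def pick_answer(candidates: list[str]) -> str:
        # Sum the (equally weighted) scores over all principles and return
        # the best candidate. How real systems weigh and combine their
        # principles is precisely what is not disclosed.
        return max(
            candidates,
            key=lambda a: sum(score_against_principle(a, p) for p in PRINCIPLES),
        )

    print(pick_answer(["Answer A ...", "Answer B ..."]))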

The companies do disclose the values and moral conceptions applied to their systems. "Anthropic, for example, uses principles that are based, among others, on the International Bill of Human Rights of the United Nations, Apple's terms of service and a 'moral common sense'," says Sliwa. However, how the model ultimately combines or weighs these different moral guidelines when it receives a concrete question or instruction, or which definitions it uses, e.g. of the term 'freedom', is not transparent. "It is just a black box."

AI as part of our moral life

This really becomes a problem if artificial intelligence is increasingly used to make high-stakes decisions, i.e. decisions with far-reaching consequences, whether political, personal, professional or even emotional. For example, when it comes to the allocation of training places, the award of social benefits or the ranking on a waiting list for donor organs.

"A moral decision is, however, only legitimate if we can understand the reasons for it. As a society, we have to take the time to examine this very thoroughly," emphasises the moral philosopher, whose research aims at understanding the practices around which our moral life is centred. For example, how we – and now also AI – give and receive moral advice.

AI knowledge: What is interpretability and explainability?

  • Interpretability: The ability to understand the reasoning behind predictions and decisions made by an AI model.
  • Explainability: The ability to present and provide reasons for the decisions of an AI system in a form understandable to humans.
  • Why is this important? Interpretability and explainability allow users to understand why a system arrived at a certain decision. This is especially relevant in critical areas, such as medicine, finance and justice, where decisions may have far-reaching consequences. In addition, they can support us in identifying errors or biases in AI systems and correcting them (see the sketch after this list).
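As a small, hedged illustration of what interpretability can mean in practice, the following Python sketch uses scikit-learn's permutation importance: each input feature is shuffled in turn, and the resulting drop in accuracy reveals how strongly the model's decisions depend on that feature. The dataset here is purely synthetic.

    # Illustration: asking a trained model which input features its
    # decisions actually depend on (synthetic data, demonstration only).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=5,
                               n_informative=2, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure how much the accuracy drops:
    # a large drop means the decisions rely heavily on that feature.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    for i, importance in enumerate(result.importances_mean):
        print(f"feature {i}: importance {importance:.3f}")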

How reliable is AI?

Apart from its "dubious morality", AI still lacks another important ingredient for use in high-stakes decisions: reliability. "In the case of ChatGPT, we might accept that the hit rate of answers is about 80 per cent," says the mathematician Philipp Grohs, "but would you like to get into a self-driving car that misinterprets, for example, traffic signs or traffic lights in 20 per cent of cases?" Self-driving cars are, by the way, a good example of the huge gap that still exists between the superficially impressive abilities of AI and its actual possible applications: Just under ten years ago, there was a lot of hype about fully automated AI vehicles, but the buzz in the industry has since gone quiet. The technical requirements are simply too complex.

Nevertheless, there are areas in which AI – despite its error rate – is already being integrated into decision-making processes. A famous example mentioned by Grohs is the COMPAS software, which several US states use to predict the risk that offenders will reoffend and to inform sentencing based on this.

"We must not leave such major decisions to algorithms in the current state of technology," emphasises the deputy head of the Data Science research network at the University of Vienna. He and his team aim to develop mathematical foundations that make AI algorithms more efficient, stable and interpretable – in short, more reliable. This is important because they are going to play an ever more important role in our society. And they have a huge potential.

AI knowledge: Typical AI mistakes

  • Hallucinations: AI generates fictitious information, patterns or data that do not exist in the actual input. For example, an AI for image analysis adds anomalies to a CT scan that do not exist in reality.
  • Adversarial examples: specially crafted input, such as a slightly modified image, which is designed to look 'normal' to humans but causes a misclassification by an AI. For example, a picture of a panda is minimally altered, causing the AI to misidentify it as a gibbon.
  • Bias: AI reproduces or propagates existing biases in the data. For example, a selection system that discriminates against applicants on the grounds of gender or origin.
  • Overfitting/underfitting: AI is too closely attuned to the data on which it was trained, which leads to poor performance on new data; or it has not learned the training data well enough to recognise even the simplest patterns (see the sketch after this list).
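The last point can be made concrete with a few lines of Python, fitting polynomials of different degrees to a handful of noisy points: a straight line underfits, while a degree-9 polynomial memorises the noise and fails on fresh data. The numbers are invented purely for illustration.

    # Illustration of underfitting vs. overfitting with polynomial fits.
    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.linspace(0, 1, 10)
    y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, size=10)
    x_test = np.linspace(0, 1, 100)
    y_test = np.sin(2 * np.pi * x_test)              # noise-free ground truth

    for degree in (1, 3, 9):
        coeffs = np.polyfit(x_train, y_train, degree)    # fit the training points
        train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        # degree 1: both errors high (underfitting)
        # degree 9: training error near zero, test error large (overfitting)
        print(f"degree {degree}: train {train_err:.3f}, test {test_err:.3f}")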

What mathematics can already prove today: The error rate of artificial intelligence cannot be eliminated by building ever larger models with ever larger amounts of data. On the one hand, not all areas offer as much digital data as text and image generation. The huge leap of generative deep learning models, such as ChatGPT, DALL-E or Midjourney, is also based on the huge amount of data we feed them on a daily basis through our online behaviour.

On the other hand, from a mathematical point of view, it is simply not possible to train a deep learning-based AI system to a guaranteed hit rate of 99.999 per cent, as calculated by Philipp Grohs and his team: "For this, we would need more data points than there are atoms in the universe."
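A back-of-envelope illustration of why such guarantees are out of reach (a generic curse-of-dimensionality argument, not the team's actual calculation): a purely data-driven guarantee essentially requires a training point near every possible input, and covering a d-dimensional input space at resolution ε takes on the order of ε⁻ᵈ points. In LaTeX notation:

    N \approx \varepsilon^{-d}, \qquad d = 100,\ \varepsilon = 0.1
    \;\Longrightarrow\; N \approx 10^{100} \gg 10^{80} \approx \text{atoms in the observable universe}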

His proposed solution: To move towards smaller, interpretable neural networks and to make these more intelligent.

AI knowledge: What is a neural network?

The mathematician explains: Imagine it as a very complicated net of very small, interconnected light switches. Each switch (a 'neuron') can be switched on or off, based on the signals it receives from the other switches. The way these switches are connected with each other and how they react to signals allows the network to learn and solve complex tasks, for example, to identify pictures or understand human language.
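To make the light-switch metaphor concrete, here is a toy sketch in Python: three 'switches' receive signals from two inputs through weighted connections and flip on whenever the combined signal exceeds their threshold. The weights and thresholds are made-up numbers; a real network learns them from examples.

    # Toy "light switch" network: two inputs feed three threshold neurons.
    import numpy as np

    inputs = np.array([0.8, 0.2])              # signals arriving at the network
    weights = np.array([[ 0.5, -0.4],          # how strongly each input is wired
                        [-0.3,  0.9],          # to each of the three "switches"
                        [ 0.7,  0.1]])
    thresholds = np.array([0.2, 0.1, 0.5])

    combined = weights @ inputs                # combine the incoming signals
    switches_on = combined > thresholds        # each neuron switches on or off
    print(switches_on)                         # [ True False  True]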

"Neural networks can cope with more complicated tasks by relying on less parameters than all other systems," explains Grohs. Deep learning is a special form of learning in neural networks that stacks multiple layers of neurons on top of each other. The idea is that every layer in the network can learn something more complex than the previous one because it builds on the elements learned by the previous layers. This way, the network can learn and understand all sorts of things from very simple patterns to highly complex relationships.

Teaching AI the Schrödinger equation

"We do not give data to the neural network but, for example, an equation and it learns on its own how to solve it. We have developed such a model: It is based on the famous Schrödinger equation and teaches itself the chemistry of small molecules," illustrates the mathematician.

If you have such an unsupervised network, you can scale it to the size of the problem or to the amount of data. Grohs and his team are currently trying this with their DeepErwin model: In the future, it should be capable of reliably picking the most promising candidates for new materials or drugs out of a 'haystack of molecules', thus massively speeding up progress in these important research areas.
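What 'learning from an equation instead of data' can look like is sketched below for a far simpler problem than the Schrödinger equation. This is a generic physics-informed example in PyTorch, not DeepErwin's actual method: a tiny network teaches itself the solution of the differential equation u'(x) = −u(x) with u(0) = 1, i.e. the exponential decay e⁻ˣ, purely by minimising how badly it violates the equation.

    # Sketch of unsupervised, equation-driven learning (not DeepErwin itself):
    # the network learns u(x) with u'(x) = -u(x) and u(0) = 1, i.e. exp(-x).
    import torch
    from torch import nn

    net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
    optimiser = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(5000):
        x = 2.0 * torch.rand(64, 1)                # random points in [0, 2]
        x.requires_grad_(True)
        u = net(x)
        du_dx, = torch.autograd.grad(u.sum(), x, create_graph=True)
        residual = du_dx + u                       # zero wherever the equation holds
        boundary = net(torch.zeros(1, 1)) - 1.0    # enforce u(0) = 1
        loss = (residual ** 2).mean() + (boundary ** 2).mean()
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()

    print(net(torch.tensor([[1.0]])).item())       # approaches exp(-1) ≈ 0.368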

"The approach of simply building ever larger networks quickly reaches its limits, especially when it comes to critical issues such as reliability, security and interpretability," says Grohs. Developing smaller, more resource-efficient and more intelligent models takes time and basic research. "The combination of deep learning with models from the natural sciences is a promising approach," says the data scientist, looking to the future, which is already unimaginable without artificial intelligence.

Who is still willing to learn now?

Although so many philosophical and technical questions remain unresolved, deep learning has already found its way into our everyday lives. This is particularly noticeable at universities, not only in the labs and offices of researchers such as Philipp Grohs and Paulina Sliwa, but above all in the lecture hall. Matthias Leichtfried, a German studies and didactics expert, deals with the opportunities and challenges of artificial intelligence in German lessons in a very practical way. Chatbots have been programmed to imitate natural human language and to do what humans do with language: describe, summarise, argue, narrate, etc. "For this reason alone, it is obvious that German lessons in particular will not remain unaffected by these developments," says Leichtfried.

When it comes to texts, the motivation to outsource unpleasant tasks to the bot is particularly high. Leichtfried nevertheless takes up the cudgels in favour of "doing it yourself". "I think it is still absolutely necessary to summarise, argue and write on your own. This is how you learn to think. And we will not be spared from this in the future either. Quite the opposite."

The didactics expert agrees with the philosopher and mathematician: We need to take the time to understand AI comprehensively. He sees the university in the role of developing this knowledge by encouraging students to take an experimental but critical approach to AI. "The aim is that students or pupils acquire the competence to decide for themselves where they can use these technologies in a meaningful way or where they are still better off completing the tasks themselves," says Leichtfried, who was involved in developing guidelines for dealing with AI in studies and teaching at the University of Vienna.

Learning for the sake of learning

The fact that texts are increasingly being generated by machines and that it is sometimes no longer possible to determine whether a human or a chatbot wrote them presents us with further challenges. For if there is no one to take responsibility for a text, image or video, this shifts the boundaries of what we perceive as reality, says Leichtfried: "As a society, we have to reach a certain consensus on truth or facts. If this is shifted, then this is also an issue of democratic politics."

In a few years' time, it will be taken for granted that we all have our own personal AI assistance systems that will hopefully make our everyday lives easier. "However, it will continue to be important to remain critical and alert and to ask ourselves: Are efficiency and saving time always the ultimate goal?" After all, people do not just learn because they want to fulfil a certain purpose. "Education in itself, the enrichment through new ideas, through engaging with the knowledge of our world, can be satisfying or fulfilling in itself." After all, we do not want to outsource everything, we want to do things independently and experience ourselves as competent in the process – and pupils and students feel the same way, Leichtfried is convinced.

How do you govern an algorithmic country?

"We need to ask the question of what AI will know from another perspective," adds philosopher Paulina Sliwa, "What knowledge does AI keep from us?" The way in which algorithms filter and prioritise content can exclude certain groups from information or make them unable to share their knowledge. This phenomenon is known in technical jargon as ‘epistemic injustice’. "This is not just a practical problem of information flow, but a fundamental question of justice and equality in the digital age."

In this context, the philosopher Seth Lazar has coined the concept of the 'algorithmic city': In this metaphorical city, the architecture of digital platforms, similar to the physical architecture of a city, determines our encounters and the nature of our interactions. "However, the design of these digital spaces raises important questions, particularly with regard to democratic legitimacy and control."

In real urban development, decisions that affect the cityscape, such as the development of green spaces, are linked to democratic processes and the participation of citizens. This principle of democratic participation and legitimisation is usually lacking in the digital world. In addition, digital platforms operate on a global level, which makes it complicated to implement democratic legitimisation and control processes.

How we shape the digital landscape therefore has a direct impact on our access to knowledge and our ability to participate in society. Does our moral expert ChatGPT have any good advice for this? Sure, and we think it is not completely off the mark:

  • ChatGPT: "It is time that we design these spaces with the same care and democratic responsibility that we expect from our physical cities."
© Barbara Mair
Philipp Grohs is Professor of Mathematical Data Science at the Faculty of Mathematics. In his research group, he develops mathematical foundations to make data-driven methods, such as machine learning and artificial intelligence, more reliable.

After completing his PhD at TU Wien, he conducted research at TU Graz, KAUST (Saudi Arabia) and ETH Zurich, where he became assistant professor in 2011 before moving to the University of Vienna in 2016. He is deputy head of the Data Science research network and head of the Mathematical Data Science research group at the Johann Radon Institute.

© Matthias Leichtfried
Matthias Leichtfried is a postdoctoral researcher in German didactics, in both research and teaching, at the Department of German Studies at the University of Vienna and a member of the university's AI in Teaching working group.

His research interests include teaching German in the context of digitality and generative artificial intelligence, and disinformation as an epistemological challenge.

© Joseph Krpelan
Paulina Sliwa is Professor of Moral and Political Philosophy at the Faculty of Philosophy and Education at the University of Vienna. Her research focusses on moral psychology, moral epistemology and feminist philosophy.

Her previous research positions include the University of Oxford, the Massachusetts Institute of Technology and the University of Cambridge. Sliwa is a member of the Board of Directors of the new Cluster of Excellence Knowledge in Crisis at the Central European University, the University of Vienna and the University of Salzburg. She tweets @PASliwa.