Rudolphina Experts: Brain-AI interfaces

How can AI help persons with speech disabilities?

22 May 2024 · Guest article by Moritz Grosse-Wentrup
AI technologies open up new ways of communication for people with speech impairments, but also raise ethical concerns. Neuroinformatics specialist Moritz Grosse-Wentrup offers an insight into a highly topical research area.
In his guest article as part of the semester question “Do we know what AI will know?”, Moritz Grosse-Wentrup takes a glimpse into the future of intelligent assistive communication systems. © Volodymyr Hryschenko/Unsplash

Imagine losing your speech: not only the ability to talk, but also the ability to think in language. This chilling scenario is a reality for many people, for example as a result of a stroke. Bleeding into the brain or a lack of oxygen caused by a blocked blood vessel damages areas of the brain. If the language areas are affected, the result can be aphasia, a condition in which a person's ability to think and communicate in language is impaired. Intensive training as part of speech and language therapy helps many patients to partially regain their speech. But how can we help people who have permanently 'lost their speech' and struggle with everyday communication?

How state-of-the-art technology could help

Patients can make use of a wide range of assistive communication systems. Most of them are based on the same principle: a computer or tablet displays pictograms that represent frequently used phrases. When the user selects a pictogram, the system reads out the text stored for it in a synthetic voice. The system can also be customised by adapting the texts and pictograms, so that the user can quickly access the phrases they need most often. In response to the question “What would you like to eat today?”, selecting a pizza pictogram would prompt the computer to say “I fancy pizza today”.
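To make the principle concrete, here is a minimal Python sketch of such a pictogram-to-phrase mapping. The pictogram names, phrases and the placeholder `speak` function are illustrative assumptions rather than the design of any particular commercial device; a real system would hand the text to a speech synthesiser instead of printing it.

```python
# Minimal sketch of a pictogram-based communication aid (illustrative only).
# Each pictogram is mapped to a stored phrase; selecting it "speaks" the phrase.

PHRASEBOOK = {
    "pizza": "I fancy pizza today.",
    "water": "Could I have a glass of water, please?",
    "doctor": "I would like to see a doctor.",
}

def speak(text: str) -> None:
    # Placeholder for the tablet's text-to-speech engine.
    print(f"[synthetic voice] {text}")

def select_pictogram(name: str) -> None:
    phrase = PHRASEBOOK.get(name)
    if phrase is None:
        speak("Sorry, I do not have a phrase for that yet.")
    else:
        speak(phrase)

# The user taps the pizza pictogram in response to "What would you like to eat today?"
select_pictogram("pizza")
```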

These assistive systems help people with aphasia to deal with everyday situations. However, they are of little use in new situations for which no suitable pictogram or phrase has been stored. This is where artificial intelligence (AI) comes in. Instead of relying on predetermined phrases, AI can listen to a conversation, analyse it and suggest suitable answers within the assistive communication system.

For example, when the user calls the tax office, the AI could discern from the phone greeting that the conversation most likely concerns a tax return. It could then suggest the phrases “I would like to enquire when my tax return is being processed” or “I would like to request an extension of the deadline for submitting my tax return”. Once the user has selected the appropriate answer, the system would say it out loud.
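The flow of such a suggestion step could look roughly like the following Python sketch. The simple keyword check stands in for what would in practice be speech recognition combined with a large language model; the function names and example phrases are assumptions made purely for illustration.

```python
# Rough sketch of AI-assisted reply suggestion (illustrative only).
# A real system would use speech recognition plus a language model;
# here a simple keyword match stands in for that analysis step.

def analyse_context(heard_text: str) -> str:
    """Guess the topic of the conversation from what was heard."""
    if "tax" in heard_text.lower():
        return "tax_return"
    return "unknown"

def suggest_replies(topic: str) -> list[str]:
    """Propose phrases the user can choose from for the detected topic."""
    suggestions = {
        "tax_return": [
            "I would like to enquire when my tax return is being processed.",
            "I would like to request an extension of the deadline for submitting my tax return.",
        ],
    }
    return suggestions.get(topic, ["Could you please tell me more?"])

heard = "Tax office Vienna, how can I help you?"
for i, phrase in enumerate(suggest_replies(analyse_context(heard)), start=1):
    print(f"Option {i}: {phrase}")
# The user picks one option and the system speaks it aloud.
```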

From intelligent assistive communication systems to brain-AI interfaces

Intelligent assistive communication systems as described above are not yet on the market, but they are under development and will presumably become available within a few years. In the Neuroinformatics research group at the Faculty of Computer Science of the University of Vienna, we are thinking one step ahead and examining what these systems could look like in the more distant future.

We are focusing in particular on the process of answer selection, which currently requires the user to interact with the system, for example via a mouse click. We aim to develop a more intuitive system: just as our own language system works automatically, we want the AI to communicate on behalf of people with speech disorders without them having to make a conscious selection or operate a screen. For this purpose, we are developing a new generation of brain-computer interfaces, which we call brain-AI interfaces.

This is how a chat could work using a brain-AI interface. The video from the Neuroinformatics research group at the University of Vienna shows (here in an experiment with simulated telephone conversations) how a "speech neuroprosthesis" enables complex communication without the need to generate speech. © Neuroinformatics UniVie

Future vision: intuitive communication via AI

These interfaces use sensors to measure the user's brain activity, from which the AI infers the desired answer; the user does not have to interact explicitly with the system to make a selection. Our vision is that AI will in future be able to take over the role of the language system in people affected by severe aphasia, enabling them to communicate with ease in everyday life again.
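In essence, a brain-AI interface turns answer selection into a decoding problem: given a short window of brain activity, predict which of the suggested replies the user intends. The following Python sketch illustrates that idea with random stand-in "brain features" and a nearest-centroid decoder; the data, feature dimensions and decoding method are purely illustrative assumptions and not the research group's actual approach.

```python
# Toy illustration of decoding an intended answer from brain activity.
# Random vectors stand in for features extracted from EEG or other neural signals;
# real decoding methods are considerably more sophisticated.

import numpy as np

rng = np.random.default_rng(0)
n_features = 16          # assumed dimensionality of the extracted neural features
candidates = [
    "I would like to enquire when my tax return is being processed.",
    "I would like to request an extension of the deadline.",
]

# Calibration: average feature vectors recorded while the user attends to each candidate.
centroids = rng.normal(size=(len(candidates), n_features))

def decode_intended_answer(brain_features: np.ndarray) -> str:
    """Pick the candidate whose calibration centroid is closest to the new measurement."""
    distances = np.linalg.norm(centroids - brain_features, axis=1)
    return candidates[int(np.argmin(distances))]

# A new measurement close to the first centroid is decoded as the first answer.
new_measurement = centroids[0] + 0.1 * rng.normal(size=n_features)
print(decode_intended_answer(new_measurement))
```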

Not only technological but also ethical challenges

Like many new developments in basic research, this line of research also raises important ethical questions. For example, how can we guarantee that the AI does not disclose confidential information? Should the system allow the user to lie? And how can we rule out the possibility of hackers taking control of a brain-AI interface? It is crucial to take these questions into account from the very beginning of the development of brain-AI interfaces.

In addition to these ethical considerations, which mainly concern the users of brain-AI interfaces, we also have to consider broader societal questions. Could AI-supported communication also be attractive for people without a speech disability, for example as a replacement for text input on smartphones? And how would the increasing use of AI-assisted communication influence our language? These questions underline the importance of exchange between the developers of AI systems and all those who will be affected by these developments in one way or another.

  • This article was published as part of the semester question cooperation with derStandard.at.
© Barbara Mair
Moritz Grosse-Wentrup is Professor of Neuroinformatics at the Faculty of Computer Science at the University of Vienna. His research focuses on the development of AI algorithms for analysing neural data and their applications in the field of neurotechnologies.