Protect Democracy, Regulate the Internet! But How?
Liberal democracies depend on access to reliable knowledge, on citizens listening to different points of view, and on the most convincing solutions emerging from a fair and inclusive discourse. At first glance, digitalisation seems to offer great hope for strengthening democratic participation: When in history has globally available knowledge been as easily accessible as it is today on the Internet? When were individual citizens as easily heard as they are nowadays on social media? When could you communicate globally in real time about more or less anything for free?
Yet what we are seeing in the world around us is anything but a fully participatory digital democracy. Rather, there are increasing fears that digitalisation may pose a serious threat to our democracies. Events such as the Cambridge Analytica scandal, the Brexit referendum and the storming of the US Capitol underline that these concerns have real-world relevance.
Five phenomena that threaten our democracy
In particular, there are five closely related phenomena that could potentially threaten democracy:
- Disinformation and fake news on an unprecedented scale, facilitated by the absence of traditional media companies that would vouch, even informally, for the credibility of the source
- The generation of traffic by bots and trolls that simulate majority opinion by masquerading as authentic users
- Micro-targeting of users, i.e. the very granular targeting of individuals or specific groups of people, often by exploiting their specific propensities or vulnerabilities
- Algorithmic selection of information that is consistent with the user's previous views, avoiding confrontation with alternative views and potentially creating filter bubbles and echo chambers (a simplified sketch of this selection logic follows the list)
- Amplification, i.e. increasing the reach and apparent relevance of content through the use of private distribution lists, social media groups and the like.
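To make the fourth phenomenon concrete, here is a deliberately minimal Python sketch of profiling-based selection (all data structures and names are hypothetical, not taken from any real platform): items are ranked purely by their overlap with what the user engaged with before, so dissenting content systematically loses out.

```python
from collections import Counter

def interest_profile(interaction_history):
    """Build a crude topic profile from items the user engaged with."""
    return Counter(topic for item in interaction_history for topic in item["topics"])

def similarity(item, profile):
    """Score an item by its overlap with the user's existing interests."""
    return sum(profile[topic] for topic in item["topics"])

def select_feed(candidates, interaction_history, k=10):
    """Return the k candidates most similar to past engagement.
    Content that challenges the user's views scores low and is rarely
    shown; repeated over time, this feedback loop produces a filter bubble."""
    profile = interest_profile(interaction_history)
    return sorted(candidates, key=lambda item: similarity(item, profile),
                  reverse=True)[:k]
```

Real recommender systems are vastly more sophisticated, but the feedback loop is the same: yesterday's clicks narrow today's feed.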
The consequences of these five factors include a distorted perception, among many citizens, of reality and of the actual distribution of opinion in the population. We are also seeing a fragmentation of the public sphere. Put simply, when each digital circle of friends lives in its own parallel world, democratic deliberation can no longer take place in a meaningful way.
Who can, may or should intervene?
The major communication platforms play a key role in this game, as they have obvious financial interests linked to the above phenomena. Micro-targeting and amplification are techniques for which political advertisers are willing to pay significantly higher prices.
But even beyond targeting and amplification, there are perverse incentives for platforms: Broadly speaking, they make money by keeping the attention of as many users as possible focused on platform content for as long as possible, because this allows them to serve more ads to their users. What keeps users on the platform longer is often content that is particularly shocking, disturbing or entertaining, and certainly content that confirms their views. This is also the type of content that is most likely to be shared.
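The incentive structure described above can be compressed into a few lines. The following toy ranking function (invented field names and weights, not any platform's actual model) optimises nothing but predicted engagement, the quantity that translates into ad impressions; note that accuracy appears nowhere in the objective.

```python
def predicted_engagement(post):
    """Toy engagement model: outrage and view-confirmation keep users
    scrolling, so both raise the score; truthfulness plays no role."""
    return (2.0 * post["outrage_score"]        # shocking or disturbing content
            + 1.5 * post["confirmation_score"] # content that confirms prior views
            + 1.0 * post["entertainment_score"])

def rank_feed(posts):
    # More engagement -> more time on platform -> more ads served.
    return sorted(posts, key=predicted_engagement, reverse=True)
```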
This is why the major platform operators (such as Meta, X or even Google) could most effectively counter the threats to democracy. However, there are pitfalls in holding platform operators accountable: If platforms are charged with countering disinformation and the like, they will also be given the power to decide which content is allowed in the new virtual public sphere and which is not.
If we mistrust platform operators, should we perhaps consider entrusting the task of filtering content to an independent government body? On reflection, this is hardly a solution either: Freedom of expression and information is a fundamental right, and for the state to decide which content reaches its audience and which does not would amount to censorship and be incompatible with a liberal democracy.
There is another problem: Much of the content mentioned above that threatens democracy is not illegal, but simply harmful. For example, lying as such is not generally prohibited by law, nor is the manipulation of political opinion. And often it is not even the content itself that is harmful, but the way in which it is presented.
"Often it is not the content that is harmful, but the way it is presented." – Christiane Wendehorst
The Digital Services Act (DSA): transparency and accountability in online platforms
Internet regulation is necessary but difficult. There are high hopes for Europe's new Digital Services Act (DSA), which will apply from 17 February 2024. This Act deals mainly with illegal content, but also seeks to respond to other practices that threaten democracy.
For example, the DSA includes provisions that demand transparency in online advertising. Platform providers are required to disclose information for each individual ad, including its nature, the person on whose behalf it is presented and who paid for it, and details about the parameters used to identify the recipient, along with instructions on how to change them. Profiling based on particularly sensitive categories of personal data, such as political opinions or religious or philosophical beliefs, is prohibited, as is any type of profiling where the recipient is a minor.
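Read as a data contract, this transparency duty amounts to attaching a small disclosure record to every ad. The sketch below is illustrative only: the field names are mine, not the DSA's wording, but they cover the categories of information the Act requires to be available for each individual advertisement.

```python
from dataclasses import dataclass

@dataclass
class AdTransparencyRecord:
    """Per-ad disclosures in the spirit of the DSA (illustrative only)."""
    is_advertisement: bool         # the ad must be identifiable as such
    on_behalf_of: str              # the person on whose behalf it is presented
    paid_by: str                   # who paid for the ad, if different
    targeting_parameters: dict     # main parameters used to select recipients
    how_to_change_parameters: str  # where the user can adjust those parameters

ad = AdTransparencyRecord(
    is_advertisement=True,
    on_behalf_of="Example Party",
    paid_by="Example Campaign Fund",
    targeting_parameters={"age_range": "30-45", "region": "AT"},
    how_to_change_parameters="https://platform.example/ad-settings",
)
```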
Similar transparency rules apply to recommender systems (such as the recommendation engines of online retailers or streaming services). In the case of very large online platforms (VLOPs), recommender systems must provide at least one option that is not based on profiling.
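In engineering terms, the VLOP obligation means a recommender must expose at least one ranking mode that uses no data about the person viewing the feed. A minimal sketch, with a hypothetical interface, could look like this:

```python
def personalised_score(item, profile):
    """Toy relevance: overlap between item topics and profiled interests."""
    return len(set(item["topics"]) & set(profile["interests"]))

def recommend(items, profile=None, use_profiling=True):
    """Return a ranked feed; the non-profiling branch ignores user data."""
    if use_profiling and profile is not None:
        return sorted(items, key=lambda i: personalised_score(i, profile),
                      reverse=True)
    # Alternative option required for VLOPs: the ordering uses only
    # item data (here, recency), never data about the user.
    return sorted(items, key=lambda i: i["published_at"], reverse=True)
```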
There are additional systemic risk management obligations for VLOPs. This includes not only regular risk assessment and mitigation measures, both of which are subject to independent audit, but also a crisis response mechanism for situations where exceptional circumstances lead to a serious threat to public security or public health.
VLOPs must also provide access to data necessary to monitor and assess compliance with the DSA upon request by the authorities. Access to data must also be provided to ‘vetted researchers’ for the purpose of conducting research that contributes to the detection, identification and understanding of systemic risks and the evaluation of risk mitigation measures.
Overview: How the Digital Services Act affects big online companies like Amazon and Facebook
The Digital Services Act is a new EU regulation designed to hold digital services operating in the EU accountable for illegal activity. The strictest obligations will apply to 17 services that have been designated as “very large online platforms”, including Amazon and Facebook, and to two “very large online search engines”, Google Search and Bing.
Some of the key measures include:
- Platforms must tackle the sale of illegal products and services, which will affect the Amazon and Facebook marketplaces amongst others.
- New measures are designed to cut down on illegal content such as hate speech, harassment, war propaganda, election interference and child sexual abuse material.
- Targeting children with advertising based on their personal data or cookies will be prohibited.
- Social media companies will be banned from using sensitive personal data, such as race, religion and sexual orientation, to target users with adverts.
- “Dark patterns” will be banned. Dark patterns are deceptive user interface designs that trick users into taking or not taking certain actions. Common examples include hard-to-find unsubscribe links or deceptive button designs for accepting cookies or subscriptions.
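One way to read the ban is as a symmetry requirement on choice interfaces: refusing must not be made harder than accepting. The following sketch of a hypothetical audit helper (a heuristic of my own, not a legal test from the DSA) flags a consent dialog whose reject path is more burdensome than its accept path.

```python
def dark_pattern_issues(dialog):
    """Flag a consent dialog whose 'reject' path is harder than its
    'accept' path (illustrative heuristic, not a legal standard)."""
    issues = []
    if dialog["reject_clicks"] > dialog["accept_clicks"]:
        issues.append("rejecting requires more clicks than accepting")
    if dialog["accept_button_visible"] and not dialog["reject_button_visible"]:
        issues.append("reject option is hidden or hard to find")
    return issues

cookie_banner = {
    "accept_clicks": 1, "reject_clicks": 3,
    "accept_button_visible": True, "reject_button_visible": False,
}
print(dark_pattern_issues(cookie_banner))
# ['rejecting requires more clicks than accepting',
#  'reject option is hidden or hard to find']
```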
In July 2023, Amazon launched a legal challenge against its categorisation as a “very large online platform”. In late September 2023, the EU General Court ruled in Amazon’s favour in the first round of the court battle, suspending for now the DSA requirement that Amazon make its ad repository public.
Transparency in political advertising
In 2021, the European Commission published a proposal for a regulation on the transparency and targeting of political advertising, which the EU legislator still hopes to adopt in time for the 2024 elections. If adopted, this regulation will complement and strengthen the rules set by the DSA.
It will apply to both online and offline activities, broaden the types of information required for political ads, and include a wider range of service providers. It will also introduce limits on the use of personal data for targeting and amplification techniques.
The details are still being negotiated between the European Commission, Council and Parliament in the so-called trilogue. This includes the question of whether the bans on targeting and amplification should apply only to particularly sensitive categories of data, such as religious and philosophical beliefs, or whether they should go much further. In this context, it should be noted that any focus on particularly sensitive categories of data is delicate, as it may leave room for circumvention through proxies, i.e. apparently neutral data points that correlate strongly with sensitive attributes.
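The circumvention risk is easy to illustrate: a targeting rule that never mentions a sensitive attribute can still select for it through correlated, apparently neutral data. In the toy example below (entirely invented data), the rule refers only to a newsletter subscription, yet the resulting audience is religiously homogeneous.

```python
# Toy illustration: the targeting rule mentions only a 'neutral' attribute,
# yet effectively selects users by religious belief via a correlated proxy.
users = [
    {"id": 1, "follows_parish_newsletter": True,  "religion": "catholic"},
    {"id": 2, "follows_parish_newsletter": False, "religion": "none"},
    {"id": 3, "follows_parish_newsletter": True,  "religion": "catholic"},
]

# Compliant on its face: no sensitive category appears in the rule...
audience = [u for u in users if u["follows_parish_newsletter"]]

# ...but the selected audience is nonetheless religiously homogeneous.
print([u["religion"] for u in audience])  # ['catholic', 'catholic']
```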
For modern platform regulation, we need the commitment of many
Together with self-regulation by the platform industry, such as the 2022 Strengthened Code of Practice on Disinformation, the EU's new approach to platform regulation relies on many different tools and many different actors: platform operators, national authorities, the European Commission, newly created bodies at EU level, vetted researchers and a critical public. Engaging the latter in a meaningful way will require, among other things, digitally enhanced quality journalism, civil society fact-checking initiatives, independent monitoring by academics and improved digital education at all levels. For modern platform regulation to be effective, it needs the support of all of us.
Christiane Wendehorst's current research focuses on the legal challenges of digitalisation. She has provided expertise on issues such as digital content, the Internet of Things, artificial intelligence and the data economy to many governmental and supranational organisations, and her research has visibly influenced a number of legislative instruments at EU level.