‘TRUSTED FLAGGERS’ TO POLICE THE WEB

Cyprus Mail, Sunday, April 26, 2026 | By Theo Panayides


When does social-media rage turn into hate speech? Is Eoka a terrorist organisation? Should it be legal to blame Nato expansion for the war in Ukraine? And who gets to decide?

The Radiotelevision and Digital Services Authority (RTDSA) announced last month that it’s now accepting applications for ‘trusted flaggers’ of online content.

Trusted flaggers (TFs) are part of the EU framework for regulating online platforms like Facebook and YouTube, as laid out in Regulation EU 2022/2065, known as the Digital Services Act (DSA).
Article 22 of the DSA specifies that “providers of online platforms shall take the necessary technical and organisational measures to ensure that notices submitted by trusted flaggers… are given priority and are processed and decided upon without undue delay”.

The role of TFs is to alert platforms to illegal content. To quote the EU’s digital strategy website, “they are experts at detecting certain types of illegal content online, such as hate speech or terrorist content, and notifying it to the online platforms”.

The application form on the RTDSA website lists 75 categories of illegal content that TFs could potentially flag, adding that the list is not exhaustive.

The various strands range from the obviously criminal, like ‘Scams and/or fraud’ – categories include ‘Phishing’, ‘Pyramid schemes’ and ‘Impersonation or account hijacking’ – to less clear-cut cases like ‘Illegal speech’, ‘Negative effects on civic discourse or elections’ (including ‘Foreign information manipulation and interference’, i.e. interference by foreign countries) and ‘Risk for public security’, with categories like ‘Terrorist content’.

“I believe it’s well-intentioned,” Larnaca-based lawyer Andreas Shialaros told the Cyprus Mail.

However, “we are walking on thin ice. We have to balance freedom of speech with online safety…

“Do you actually, as a journalist, see such illegal content online as to necessitate the existence of the DSA and trusted flaggers? Personally, no.” After all, he adds, if someone – a paedophile, say – were to post something truly vile, “they’re not going to publish it on Facebook. They’re going to do it on the dark net”.

That may be true – but there’s still plenty of content one could plausibly be concerned about, like defamation and misinformation.

The danger, of course, is that such concerns could be weaponised. Thus, for instance, the category ‘Risk for public health’ calls to mind the censorship in 2021-22 over opposition to Covid vaccines – just as the aforementioned ‘Foreign information manipulation’ is a reminder that news network RT (Russia Today) has been banned in the EU since the invasion of Ukraine.

Some will retort that there’s nothing wrong with ‘anti-vaxxers’ or ‘Russian propaganda’ being removed from the public sphere. Once you have the authorities deciding that certain groups are beyond the pale and can legitimately be suppressed, however, you are indeed walking on thin ice.

After all, points out Shialaros, “Palestinians are seen as terrorists by the Israelis, but at the same time Eoka was considered a terrorist organisation by the English. So it depends from which viewpoint you’re looking at things”.

Even he, however, admits there are limits, and that a balance must be struck. “If there’s a Facebook group actively promoting death to people, or harming people, it should go down.”

All that said, the specific issue of trusted flaggers may be more straightforward, designed with safeguards “to keep TFs from opinion-based moderation,” as Antigoni Themistocleous of the RTDSA noted in an emailed response to the Cyprus Mail.

Firstly, what they flag has to be illegal content – not merely harmful or inaccurate – as determined by EU and national law.

Thus, for instance, one of the 75 categories is ‘Historical negationism, apology of crime against humanity or war crimes denialism’. Holocaust denial could therefore potentially be flagged – but only in countries like France or Germany, which have Holocaust-denial laws, not in Cyprus (which doesn’t).

In the same way, a video celebrating Eoka’s struggle against the British could theoretically have been flagged in Britain – if it were still an EU member – but not in Cyprus, which obviously would never think of it as ‘terrorist content’.

Secondly, the process of becoming a TF is “subject to a thorough and very detailed examination,” says Themistocleous – and the DSA also imposes duties and restrictions on those who qualify.

Article 22 stipulates that TFs “shall publish, at least once a year, easily comprehensible and detailed reports” listing the allegedly illegal content they flagged, and what action was taken by platforms.

A TF’s status can be suspended or revoked if it’s determined by the digital services coordinator (the RTDSA, presumably) that they submitted “a significant number of insufficiently precise, inaccurate or inadequately substantiated notices”.

Who can apply to become a trusted flagger? Theoretically anyone, but the EU wording is ‘entities’, not individuals, and the form explicitly mentions (without being limited to) government or semi-government organisations and NGOs.

Section C13 also asks for a detailed breakdown of the applicant’s funding – but it’s unclear what kind of funding would constitute a red flag (the RTDSA says such concerns are “examined on an ad hoc basis”), or how to ensure a TF won’t report content based on its own beliefs and ideological biases.

That’s not the only aspect that remains unclear. The big question – still not conclusively answered – is how much weight a TF’s intervention carries, and how far online platforms may be compelled, in practice, to remove flagged posts or videos.

“The notices submitted by [TFs] must be treated with priority, as they are expected to be more accurate than notices submitted by an average user,” says the EU website.

“Online platforms can neither simply disregard nor ignore TFs, though they can lawfully disagree with them,” says the RTDSA email, adding: “Moreover, in case online platforms reject a TF notice, they must provide clear justification and demonstrate good faith and diligence”.

All well and good – but what if online platforms don’t actually ‘disregard or ignore’ the flaggers, do prioritise their notices and examine them carefully, but still end up deciding that they disagree with them, and won’t be removing the content?

“In terms of legal weight, trusted flaggers’ recommendations are not binding,” admits Dimitrios Koukaidis, assistant professor at the School of Law of the University of Nicosia, in an emailed response.

“However, they do carry enhanced practical and evidentiary significance.

“Platforms are expected to treat them as well-founded and credible notices, and a consistent failure to give them due consideration could [emphasis added] be interpreted as a lack of diligence, and thus a violation of the DSA.”

In theory, the introduction of TFs is merely a useful tool to fast-track the reporting of illegal content.

Making their tip-offs legally binding, says the RTDSA, “would raise concerns regarding freedom of expression, and/or privatisation of law enforcement”.

In practice, though, the fact that almost all notices will have some legal foundation – helped by the proliferation of hate-speech and anti-terrorist laws throughout the EU – coupled with the fact that many TFs will be well-connected entities that platforms would prefer not to clash with, is likely to make their recommendations binding in all but name, especially if a case goes to court and Koukaidis’ ‘could’ turns into ‘will’.

Is this wrong? Not necessarily. After all, the end result will be less offensive and objectionable stuff online – the 75 categories include obvious scourges like ‘Animal harm’, ‘Stalking’ and ‘Child pornography’ – and who could argue with that?

On the other hand, the introduction of TFs seems like a case of EU regulation through the back door, smuggled behind a fair-minded façade of polite suggestions and free expression – and of course a free-speech absolutist might well question why it’s needed at all.

“Because in the end,” points out Shialaros, “everything is subject to interpretation.”