Friday, April 19, 2024

RUSSIA USING AI IN PROPAGANDA AGAINST UKRAINE, SAYS MICROSOFT

Filenews 19 April 2024



The latest Russian-backed influence campaign, aided by some new AI-based tactics, is underway ahead of the 2024 U.S. presidential election, though it appears to have started more slowly than in previous election cycles, according to a new report from the Microsoft Threat Analysis Center.

Russian influence actors, the same ones identified in previous campaigns, have shown new signs of activity over the past 45 days, though their activity appears to be occurring at a "slower pace" than before the 2016 or 2020 presidential elections, according to Microsoft's analysis.

Russian influence efforts in this election cycle have largely focused on turning public opinion in the U.S. against Ukraine and NATO, with Microsoft's analysis finding that at least 70 Russian actors have been using both traditional and social media to spread disinformation about Ukraine over the past two months.

Microsoft has identified several Russian-linked actors behind the influence operations, including the group it tracks as Storm-1099, which was responsible for the widespread "Doppelganger" disinformation campaign in 2022.

The "most productive" of these factors, however, are linked to the Administration of the Russian Presidency, which, according to Microsoft, shows the "increasingly centralized nature" of these influence campaigns – a shift from the 2016 and 2020 campaigns that were more closely linked to the so-called Internet Research Service and intelligence services.

The role of artificial intelligence

Artificial intelligence is shaping how these actors operate, though not as much as experts have long feared, and not in the way government officials expected.

While the emergence of artificial intelligence has raised fears that so-called deepfake videos could be used to deceive and manipulate the public, Microsoft says such attempts have been largely unsuccessful, generally failing to fool the public or attract much interest.

Instead, the public was more likely to fall for "simple digital forgeries" — fake news with fake logos, for example — that disinformation actors have been using for years.

Microsoft says AI has been more convincing when used to alter or enhance existing content than when used to create content from scratch, and even then, AI-generated audio has generally been more convincing than video.

The report also found that AI-generated content about lesser-known figures is more likely to fool audiences than content featuring well-known personalities.

What to look out for

These groups often follow similar tactics to peddle disinformation. An actor Microsoft refers to as Storm-1516, for example, typically seeds misinformation through video channels, claiming that its source is a whistleblower or a freelance journalist. The group then uses a covert network of websites to amplify the material, leading to wider dissemination and, ultimately, deception of the public.

Forbes