
As the European elections approach, algorithm expert Paul Bouchaud says that Meta (the company that owns Facebook, Instagram, and WhatsApp) is failing to prevent pro-Russian propaganda networks from spreading political messages on its platforms.
As part of your research, you study social media algorithms and their impact on society, particularly elections. How did you go about this work?
Paul Bouchaud: At the ISC-PIF (Complex Systems Institute of Paris Île-de-France), I regularly work with AI Forensics, a European nonprofit that investigates the influential and opaque algorithms used by YouTube, Google, TikTok, and Meta (the parent company of Facebook, Instagram, and WhatsApp). Together with a team of “digital detectives” (experts in IT, law, ethics, sociology, psychology, and communications), I take part in independent technical investigations to uncover the harm caused by these algorithms. We focus on election monitoring, because it has a direct impact on democracy and fundamental rights, and we then bring these investigations to the attention of the media in the hope of promoting transparency and accountability around these influential algorithms.
The 2016 US election raised awareness of the risks of social media manipulation…
PB: In fact, it took a long parliamentary and judicial investigation to uncover the Kremlin’s propaganda campaign during the election that brought Donald Trump to power. In 2018, an indictment filed by US Special Counsel Robert Mueller revealed that 126 million Facebook users and 1.4 million Twitter users had been exposed to messages designed to divide American society through fake profiles and targeted ads.
Special Counsel Robert Mueller, who investigated Russia’s alleged interference in the 2016 US election, testifies before a House committee in Washington on July 24, 2019.
Some practices and control procedures have since been adjusted, but they remain largely inadequate; a repeat of 2016 is likely. US legislation is much less restrictive than the EU’s, which, if properly enforced, could help fight false information, as my current research shows.
Can these new European regulations be used to verify that Meta is complying with the rules on posting political propaganda online?
PB: Yes. The Digital Services Act (DSA), which came into force on August 25, 2023, aims to limit the spread of hate speech, child pornography and terrorist content online, as well as the distribution of illegal (counterfeit or dangerous) products. It does not challenge the platforms’ limited liability for the products and content they host (the concept of “passive” hosting). However, platforms must give users a way to report such products and content, and to this end the roughly twenty largest platforms must open their advertising libraries to the public.
In practice, I trained an algorithm that I developed on all the ad examples provided by Meta on Facebook and Instagram, in 14 European languages, starting from August 2023. The algorithm detects which ads are political and whether Meta has identified them as such. When an ad is declared as political, additional information is made public: who paid for the ad, the target audience (age, gender, location) and the audience actually reached (same information).
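To give a concrete sense of how such a check can be built (this is a hypothetical sketch, not Bouchaud’s actual model, which is described in the report cited at the end of this article), one can train a simple multilingual text classifier on ads whose political status is already known, then run it over ads downloaded from Meta’s public Ad Library and keep those it considers political but that Meta never flagged. The field names (text, declared_political) and the training data are assumptions made for the example.

# Hypothetical sketch, not Bouchaud's actual model.
# Assumes: train_texts / train_labels (ads with known political status) and
# ads, a list of dicts pulled from Meta's public Ad Library, each with a
# "text" field and a boolean "declared_political" field.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def build_classifier(train_texts, train_labels):
    # Character n-grams work tolerably across many European languages
    # without language-specific tokenisation.
    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5), min_df=5),
        LogisticRegression(max_iter=1000),
    )
    model.fit(train_texts, train_labels)  # labels: 1 = political, 0 = not political
    return model

def find_undeclared_political_ads(model, ads):
    # Ads the classifier flags as political but Meta did not.
    predictions = model.predict([ad["text"] for ad in ads])
    return [
        ad
        for ad, looks_political in zip(ads, predictions)
        if looks_political == 1 and not ad["declared_political"]
    ]

The output of such a filter would then need human review before any ad is reported, since a classifier of this kind inevitably produces false positives.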
This is an example of a political ad published on Facebook which Meta approved without reviewing it and which Paul Bouchaud identified with his algorithm. On the right, Bouchaud uses flags to indicate the targeted countries, gives the number of accounts reached and the date, and provides a proposed translation of the text.
What control does Meta have over the content of advertisements posted online?
PB: An advertisement is paid content distributed on Meta (or another platform), whether or not it is commercial in nature. A company or individual can therefore pay Meta to publish a message urging people to buy socks or to vote for a particular political party, targeting a broader or narrower audience. The same ad can also be published on several networks at the same time (e.g. Instagram and Facebook). The only safeguard is that if the content is political in nature, the advertiser must check the “My ad is political” box, which requires Meta to verify the ad and then approve or reject it.
I sifted through 30 million ads published in 16 European countries in January and February 2024: about 98% were commercial, but of the remaining 2% (about 600,000), 95% had not been identified by Meta as conveying a political message. That may not seem like much, but for France, for example, it means 200 such ads a day, and for the 16 EU countries, 1,300 a day, some of them clearly part of coordinated campaigns and seen by millions of people.
You also uncovered a major pro-Russian propaganda network?
PB: Over the seven months from August 2023 to March 2024, pro-Russian propaganda aimed at undermining European governments and EU support for Ukraine reached 38 million accounts in France and Germany. Of the 3,826 Facebook pages involved, fewer than 20% were reviewed by Meta, even though their messages were shown to users at least 2.6 million times. The campaigns intensified in the run-up to the elections: between May 1 and 27, 2024, Meta allowed at least 275 pro-Russian posts to reach 3 million French, German, Italian, and Polish accounts. What’s more, these messages are now heavily targeted at Italy and Poland, as our latest report shows (see table below).
In the run-up to the European elections, the number of accounts reached by pro-Russian propaganda ads continues to increase, and these ads are still awaiting approval from Meta.
Specifically, what form do these ads take?
PB: The ads are text-based, often with cartoon-style illustrations, and use recent news to satirize the government, criticize its negligence, or denounce its policies. For example, one ad targeting France claimed that President Macron could not defend a small territory like New Caledonia, let alone Ukraine. Another claimed that if the United States is getting richer while Europe is getting poorer, it is because the European Union is using its resources to support Ukraine. This last post also included the per capita productivity growth rates of the two continents, which helps make it look as though it comes from a “serious” source. Many of these ads are designed this way, making it easy to deceive the people they target.
Do the authors of these ads identify themselves?
PB: The names that appear at the top of these messages (“tempnafor”, “Awudud Online Store”, etc.) are generated automatically by the advertisers and bear no relation to the content. In fact, users almost never look at them. In recent days, ads have appeared that usurp the graphic identity of the weekly magazine Le Point and are almost indistinguishable from the real thing. As for the advertisers themselves, they are rarely identified.
This is an example of media plagiarism: in this case, a text appears to have been published by Le Point, but the source address is “lepoint.wf” instead of “lepoint.fr”, which makes it possible to detect the forgery.
However, one particularly influential network has been identified: a network of pro-Russian “clones” that, since the outbreak of the Russo-Ukrainian war, has spread fake news about the conflict by copying the websites of European ministries and by distributing Russian articles. The self-styled “media outlet” RRN (Reliable Recent News) publishes disinformation content every day in multiple languages, including French. In July 2023, the European Union imposed sanctions on two companies (Social Design Agency and Structura National Technologies) suspected of playing a central role in disseminating these articles. The network nevertheless continues to operate and is now trying to manipulate the conflict between Israel and Hamas in the same way.
Was your algorithm complex to develop?
PB: It took me a month, and I developed it on my own, without a team. So if a company of Meta’s size cannot detect hidden political ad campaigns quickly, it is only because it lacks the will to do so. Of course, moderation is harder where there are few moderators: according to Meta’s own data (see table), it would be difficult for three Estonian-speaking moderators to check everything posted on the network in that country. France and Germany, by contrast, each with more than 200 moderators, are better equipped.
Some countries have very few moderators, which makes it harder to control social media content in the local language. This table, from Meta’s website, shows the number of moderators for each country in the European Union.
Moreover, as our research shows, automated moderation tools can be effective. Yet Meta has not disclosed how it moderates these ads, saying only that it uses both people and tools to review them, as it is obliged to do.
As an individual, can I post anything?
PB: Any message that conveys hate or false information can be removed. Automatic detection of hate content is quite effective. On the other hand, with political false information, unless your message is widely circulated, it is almost impossible for Meta to detect it. Someone has to report it.
What pressure can be put on Meta to ensure that it implements controls?
PB: This research goes to the heart of the EU’s efforts to ensure fair and transparent elections. The continued misuse of social media for political purposes highlights the need for strict monitoring and proactive measures by regulators and by the digital platforms themselves. I informed the European Commission, which announced on April 30 that it had opened formal proceedings against Meta for breaches of the DSA. Meta must therefore take steps to prevent its advertising system from being exploited for propaganda purposes. There is no deadline for this procedure; if the dispute continues, the European Commission can impose sanctions, namely fines of up to 6% of the platform’s global turnover.
Propaganda ads seize on news stories (in this case, the deteriorating condition of public schools) to disparage government policies or actions.
In the past, we’ve observed how online algorithms influence what happens offline, including attacks against entire populations…
PB: An Amnesty International report (which I was not involved in) shows how Facebook’s algorithmic systems helped fuel the Myanmar military’s atrocities against the Rohingya in 2017 by amplifying incitement to hatred. Actors seeking to dehumanize the Rohingya, including members of the military responsible for ethnic cleansing and mass rape, posted violent messages. By choosing which messages to show in each user’s feed, Meta highlights content that provokes strong reactions, whether positive or negative, which is the hallmark of hateful content; Facebook thus gave these messages an unprecedented audience in Myanmar. Several Rohingya groups have filed legal cases against Meta, but the damage was already done. Only state regulation could have prevented it. ♦
Source
“On Meta’s implementation of political advertising policy: An analysis of coordinated campaigns and pro-Russian propaganda”, Paul Bouchaud, 2024. hal-04541571v1
Further reading on our website
Is the Internet a disinformation superhighway?
How social networks manipulate public opinion
The 2017 Presidential Election from Twitter (French)