Facebook can’t detect fake ads, NGO says

According to a report by the non-governmental organization Global Witness, Meta-owned Facebook has approved ads on its platform containing misinformation about the Brazilian elections. The organization says that even ads with errors in basic facts, such as the date of the election, which require no interpretation to identify as false, were not detected and blocked by the company’s review and moderation systems.

One of Facebook’s rules prohibits “misinformation about dates, places, times, and methods of voting.” Despite this, the platform did not block ads containing phrases such as “Change the day of the elections: Paulistas [residents of São Paulo] must now vote on October 3.” The election is on October 2. The platform also failed to reject ads that said “Voting is now voluntary. Voters between the ages of 18 and 70 in Brazil can now choose to vote. It’s normal to want to stay home” and “Don’t worry about carrying documents with you on election day, the voting machines know who you are.”

The company also has a policy against disinformation “about who can vote, what the electoral requirements are, whether a vote is counted, and what information or documents must be presented in order to vote.” In addition to exposing moderation failures under Facebook’s own rules, the organization’s tests point to problems in the authorization process for electoral and political ads, which is meant to ensure transparency about who is paying for them.

Since 2020, an authorization process has been required to run these types of ads in Brazil. The advertiser must prove their identity and residence in the country, and it is the advertiser themselves who flags whether a post deals with political issues. Meta says it uses artificial intelligence to identify political ads that have not been flagged as such.

In its tests, Global Witness did not classify the ads as political and submitted nine of them from a computer in Kenya and one from the United Kingdom. According to the organization, no tool was used to hide the user’s real location, such as a VPN. Payment was made in the UK, and the chosen target audience was users in Brazil. Ten ads were submitted in total, all from the same account.

According to the organization, only one of the ads was initially rejected by Facebook. However, six days later, “without any intervention from Global Witness,” the same ad was approved “without explanation.”

“The process is completely optional. It assumes that people have good intentions. But as we know, those who spread misinformation do not have good intentions,” said Jon Lloyd, a senior consultant at Global Witness and leader of the study. According to the organization, the test shows that self-regulation is not working.

“We need to make sure that people verify that Facebook really does what it says,” Lloyd said. Facebook said it could not comment because it did not have access to the full report. The organization says it informed the company of the content of the ads and provided a link to the account from which they were sent.

Folha asked Facebook whether the company was able to confirm that the posts submitted from this account contained such content, whether the ads described in the report violated its policy and, if so, what errors might have prevented them from being blocked.

The company did not answer the questions and instead listed actions it has taken to promote reliable news in the country. Global Witness also highlights that ads questioning the Brazilian electoral system were approved in the test.

“The electronic system is not reliable! We need paper versions of everyone’s vote to make it valid! We cannot vote until they change the system,” read one of the submitted ads.

In Brazil, Meta has not yet committed to banning ads that make unfounded claims of electoral fraud during and after the elections. There are no clear rules on this in the platform’s guidelines.

A Folha de S. Paulo report showed that major tech companies, including Meta, do not release data on the moderation teams dedicated to Brazilian Portuguese or say whether those teams will be reinforced ahead of the election period. The companies also did not answer questions about their investment in artificial intelligence to analyze content in the language.

In addition, Global Witness raises the possibility of an independent third-party audit of Meta’s actions and calls for the company to respond to recommendations recently made by more than 90 Brazilian civil society organizations.

Researchers at UFMG (Federal University of Minas Gerais) launched a tool in August called “CampanhaSemFake” that aims to carry out a kind of independent audit of electoral and political ads on Facebook.

Volunteers contribute to the project by installing a Google Chrome plugin and using Facebook from their desktop. The ads and news items they see are sent anonymously to the project’s database.
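As an illustration of how such a crowdsourced audit can work technically, the sketch below shows a browser-extension content script that detects sponsored posts in the feed and forwards an anonymised copy to a collection server. It is only a minimal sketch under assumptions: the article does not describe the UFMG project’s actual code, so the COLLECTION_ENDPOINT URL, the “Sponsored”/“Patrocinado” label check, the DOM selectors and the payload fields used here are hypothetical, written in TypeScript.

```typescript
// Hypothetical content script for an audit plugin of this kind; selectors,
// endpoint URL and payload format are illustrative assumptions, not the
// CampanhaSemFake project's actual implementation.

const COLLECTION_ENDPOINT = "https://example.org/collect/ads"; // placeholder URL

interface AdReport {
  advertiser: string | null; // page name shown on the ad, if found
  text: string;              // visible ad text
  seenAt: string;            // ISO timestamp of when the volunteer saw it
}

// Remember posts already reported so scrolling does not duplicate submissions.
const reported = new WeakSet<Element>();

function collectSponsoredPosts(): void {
  // Paid posts carry a "Sponsored" (Portuguese: "Patrocinado") label;
  // this selector is illustrative and would need to track Facebook DOM changes.
  document.querySelectorAll<HTMLElement>("div[role='article']").forEach((post) => {
    if (reported.has(post)) return;
    const visibleText = post.innerText ?? "";
    if (!/\b(Sponsored|Patrocinado)\b/.test(visibleText)) return;

    reported.add(post);
    const report: AdReport = {
      advertiser: post.querySelector("strong, h4")?.textContent ?? null,
      text: visibleText.slice(0, 2000), // truncate to keep the payload small
      seenAt: new Date().toISOString(),
    };

    // No cookies or account identifiers are sent, keeping the submission anonymous.
    fetch(COLLECTION_ENDPOINT, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(report),
      credentials: "omit",
    }).catch(() => { /* best-effort collection; ignore network errors */ });
  });
}

// Re-scan as the volunteer scrolls and new posts are injected into the feed.
new MutationObserver(collectSponsoredPosts).observe(document.body, {
  childList: true,
  subtree: true,
});
collectSponsoredPosts();
```

In a real extension, a script like this would be registered in the manifest to run only on facebook.com pages; omitting cookies and account identifiers from the request is what keeps the submission anonymous in the sense the researchers describe.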

The tool is a collaboration with the Laboratoire Informatique de Grenoble, in France. According to the group, the data collection follows the European General Data Protection Regulation and also complies with the Brazilian General Data Protection Law. (Renata Galf/Folhapress)
