Facebook and Instagram say they removed 600,000 pieces of content for violence and hate speech before the 1st election round

Meta, the parent company of Facebook and Instagram, said on Monday (October 10) that the social networks removed more than 600,000 pieces of content that violated its policies against violence and hate speech between August 16 and October 2, the day of the first election round.

Among the 310,000 pieces of content removed for violence and incitement are posts urging people to show up at polling stations with guns. The transport of weapons by collectors, sport shooters and hunters (CACs) on election day was prohibited by the Superior Electoral Court (TSE).


Among the 290,000 posts removed for hate speech are attacks on the population of Brazil's Northeast. Meta said it identified such content on October 2, even before vote counting had ended, and removed the posts that violated its rules.

The company also said it removed posts promoting incorrect election dates and times or featuring incorrect candidate numbers. In these cases, the content violated the platforms’ policies on electoral interference.


Facebook also reported that, on the weekend of the first round, it reduced the reach of posts with false claims that electronic voting machines had pre-registered votes. The reach of content can also be reduced if it is flagged as false by partner fact-checking agencies.

These were the actions taken by the company before the first round:

  • Removal of 310,000 pieces of content for violating violence and incitement policies on Facebook and Instagram between August 16 and October 2;
  • Removal of 290,000 pieces of content for hate speech in the same period;
  • Serving more than 4.7 million people through the TSE’s FAQ on WhatsApp between April 1 and October 2, with 85.7 million messages exchanged in the tool;
  • Expansion of the fact-checking program in Brazil from four to six partners;
  • Creation of a direct communication channel with the TSE to receive complaints, as provided for in an agreement signed in March, in addition to the use of artificial intelligence;
  • Activation of the Operations Center for Elections, in which several teams work to speed up response times to potential threats.

According to Meta, the work also included rejecting political advertising on Instagram and Facebook that did not comply with transparency rules, as well as changes to the social networks to facilitate access to reliable information about the elections.

The company said it is “extremely challenging” to eliminate hate speech and other content that violates its rules because of the size of its platforms.

“Thanks to machine learning, our technology helps us identify posts that are highly likely to violate our policies to reduce their distribution and, if applicable, remove them,” Meta said.

Also according to Meta, the Operations Center for Elections brought together “experts in intelligence, data science, public policy, law, security, content moderation and engineering, among dozens of other teams”.
