Facebook intends to ramp up efforts to fight misinformation before the European Parliament election in May and will partner with German news agency DPA to improve its fact-checking, a senior executive said on Monday.
Facebook has been under pressure around the world since the 2016 US election to stop the use of fake accounts and other forms of deception to sway public opinion.
The European Union last month accused Alphabet's Google, Facebook and Twitter of falling short of their pledges to combat fake news ahead of the European election, despite having signed a voluntary code of conduct to fend off regulation.
On Monday, Facebook said it was setting up an operations centre that would be staffed 24 hours a day by engineers, data scientists, researchers and policy specialists, and would coordinate with outside organisations.
"They will be proactively looking to identify emerging threats so that we can take action on them as quickly as possible," Tessa Lyons, Facebook's head of News Feed integrity, told journalists in Berlin.
Facebook also announced it is teaming up with Germany's biggest news agency, DPA, to help it check the accuracy of articles, alongside Correctiv, a non-profit collective of journalists that has been flagging fake news to the company since January 2017.
It will also train more than 100,000 students in Germany in media literacy and work to prevent paid advertising from being abused for political ends.
Germany has been especially proactive in trying to clamp down on online hate speech, implementing a law last year that forces companies to delete offensive posts or face fines of up to EUR 50 million ($56.71 million or about Rs. 391 crores).
The issue of elections and misinformation became prominent after US intelligence agencies concluded that Russia had tried to influence the outcome of the 2016 US presidential election in Donald Trump's favour, partly by using social media. Moscow has denied any meddling.
Lyons said Facebook had made progress in limiting fake news over the past two years, adding that it would raise the number of people working on the issue globally to 30,000 by the end of the year, from 20,000 currently.
In addition to human intervention, she said, Facebook is continually refining its machine learning systems to spot untrustworthy content and limit its distribution.
"This is a very adversarial space: bad actors, whether financially or ideologically motivated, will try to get around and adapt to the work that we are doing," she said.