NEW DELHI, Mar 31: Facebook is taking several measures, including reducing the distribution of content deemed to be hate speech, as part of its efforts to curb the spread of misinformation during elections in four Indian states and one union territory.
Facebook will also temporarily reduce the distribution of content from accounts that have recently and repeatedly violated the company’s policies, the social media giant said in a blog post on Tuesday.
“We recognise that there are certain types of content, such as hate speech, that could lead to imminent, offline harm…To decrease the risk of problematic content going viral in these states and potentially inciting violence ahead of or during the election, we will significantly reduce the distribution of content that our proactive detection technology identifies as likely hate speech or violence and incitement,” it added.
This content will be removed if determined to violate Facebook’s policies, but its distribution will remain reduced until that determination is made, the blog noted.
Facebook has drawn flak in the past for its handling of hate speech on the platform in the country. India is among the biggest markets for Facebook and its group companies, WhatsApp and Instagram. According to government data, India has 53 crore WhatsApp users, 41 crore Facebook users, and 21 crore Instagram users.
The US-based company said it has invested significantly in proactive detection technology to help identify violating content more quickly.
In its blog, Facebook said that, based on lessons learned from past elections in India and globally, it is taking steps to enhance civic engagement, combat hate speech, limit misinformation and remove voter-suppression content amid elections across Tamil Nadu, West Bengal, Assam, Kerala and Puducherry.
“We also continue to closely partner with election authorities, including to set up a high priority channel to remove content that breaks our rules or is against local law after receiving valid legal orders,” it said.
Facebook pointed out that under its existing Community Standards, it removes certain slurs that it determines to be hate speech.
“To complement that effort, we may deploy technology to identify new words and phrases associated with hate speech, and either remove posts with that language or reduce their distribution,” it added.
Outlining the steps, Facebook said its policies prohibit voter interference, defined as objectively verifiable misrepresentations about the dates and methods of voting (for example, claims that people can vote by text).
“We also remove offers to buy or sell votes with cash or gifts. Additionally, we also remove explicit claims that you will contract COVID-19 if you vote,” it explained.
In 2019, under a Voluntary Code of Ethics led by industry body IAMAI, Facebook set up a high-priority channel with the Election Commission of India (ECI) for Facebook, Instagram and WhatsApp to receive content-related escalations.
The Voluntary Code remains applicable for this election as well, the blog said.
“We believe Facebook has an important part to play in creating an informed community, and helping people access all the information they need to take part in the democratic process. We also remind people to exercise their democratic right to vote,” it said.
The company will offer Election Day reminders to give voters accurate information and encourage them to share it with friends on Facebook and WhatsApp.
“…WhatsApp specifically rolls out public education campaigns and digital literacy trainings to build awareness about refraining from forwarding frequently forwarded messages, turning on group permissions to help decide which groups to join, reporting or blocking a suspicious contact or number, and prohibiting bulk or automated messages,” the blog said.
The company also works with third-party fact-checkers around the world, including eight partners in India, to provide people with additional context about the content they’re seeing on Facebook.
In addition to English, these eight partners fact-check in 11 Indian languages including Bengali, Tamil, Malayalam and Assamese.
When a fact-checker rates a story as false, the content is labelled and shown lower in News Feed, significantly reducing its distribution.
This limits its spread and reduces the number of people who see it, Facebook said. (PTI)