Social media giant Facebook took action on 2.5 million posts for violent and graphic content, 1.8 million posts for nudity and sexual activity, and 3 lakh posts for hate speech in India between May 15 and June 15, as per its first transparency report under the new IT rules.
The California-headquartered company also took action on 1 lakh posts for terrorist propaganda and 75,000 posts for organised hate, according to the report.
On Instagram, Facebook took action on nearly 5 lakh posts for adult nudity and sexual activity, and nearly 7 lakh posts for violent and graphic content during the period.
Content actioned represents the number of pieces of content (such as posts, photos, videos or comments) Facebook takes action on, including removing content or displaying a warning. Proactive rate shows the percentage of all content or accounts acted on that Facebook found and flagged before users reported them.
The report shows that Facebook has been able to act on more than 90 percent of the content before users report it, for most violation categories.
"Over the years, we have consistently invested in technology, people, and processes to further our agenda of keeping our users safe and secure online and enable them to express themselves freely on our platform. We use a combination of Artificial Intelligence, reports from our community and review by our teams to identify and review content against our policies. We’ll continue to add more information and build on these efforts towards transparency as we evolve this report," a Facebook spokesperson said in a statement.
However, Facebook said there were challenges in pinpointing the country where producers of content are located.
"Given that such violations are also highly adversarial, country-level data may be less reliable. For example, bad actors may often try to avoid detection by our systems by masking the country they are coming from. While our enforcement systems are global and will try to account for such behaviour, this makes it very difficult to attribute and report the accounts or content by producer country (where the person who posted content was located)," the company said in the report.
"Given the global nature of our platforms where content posted in one country may be viewed almost anywhere across the world, other ways to attribute the country of content removed in a technically feasible and repeatable manner become almost meaningless. So these estimates should be understood as directional best estimates of the metrics," the company said.
Facebook added that a more detailed report will be published on July 15, containing details of user complaints received and action taken. "We expect to publish subsequent editions of the report with a lag of 30-45 days after the reporting period to allow sufficient time for data collection and validation. We will continue to bring more transparency to our work and include more information about our efforts in future reports."
Earlier this week, Google released its Transparency Report for the month of April under the new Information Technology (Guidelines for Intermediaries and Digital Media Ethics Code) Rules, 2021, which came into effect in India on May 26, 2021. The report shows that the company received 27,762 complaints from individual users, most of them relating to copyright issues.
Also, Google carried out 59,350 removal actions during the month.
Social media platform Koo, the domestic competitor to Twitter, has also released its compliance report for the month of June under the new IT rules. It shows that users reported 5,502 posts during the month, of which 1,253 (22.7 percent) were removed, while other action was taken against the remaining 4,249.
Koo said it also proactively moderated 54,235 posts, of which 1,996 (2.2 percent) were removed, while other action was taken against the remaining 52,239. Other action includes overlay, blur, ignore, warn, etc., on posts that do not comply with the government guidelines, the company said.
The IT rules, which came into effect for social media intermediaries with over 50 lakh users on May 26 this year, require the intermediaries to "publish periodic compliance report every month mentioning the details of complaints received and action taken thereon, and the number of specific communication links or parts of information that the intermediary has removed or disabled access to in pursuance of any proactive monitoring conducted by using automated tools or any other relevant information as may be specified."