Meta published these numbers as part of new guidelines under the IT Rules, 2021, which mandate digital and social media platforms
with more than 5 million users to publish monthly compliance reports.
Meta said it took action against more than 19.52 million pieces of content across 13 policies on Facebook, and 3.39 million pieces of content across 12 policies on Instagram, for the period between November 1 and November 30.
“This metric shows the scale of our enforcement activity,” the company said in its transparency report. “Taking action could include removing a piece of content from Facebook or Instagram or covering photos or videos that may be disturbing to some audiences with a warning.”
Of the different categories on Facebook, 14.9 million pieces of content were actioned for ‘Spam’, followed by 1.8 million for ‘Adult nudity and sexual activity’.
On Instagram, ‘Suicide and self-injury’ was the category with the most content taken down (1 million), followed by ‘Violent and graphic content’ (727,200).
Further, Meta said it received 2,368 complaints on Instagram under the Information Technology Rules, 2021. Of these, the largest share – 939 complaints – concerned account hacking. It also said it received 889 complaints on Facebook under these rules.
On Wednesday, Meta-owned instant messaging platform
WhatsApp said it banned over 3.72 million bad accounts between November 1 and November 30, an increase of nearly 60% compared to October, when it had banned 2.32 million accounts.
Of the total accounts banned in November, WhatsApp said almost 10 lakh accounts were banned proactively, meaning before any reports from users.
WhatsApp published these numbers as part of the guidelines under the new IT Rules, 2021, which mandate large digital and social media platforms with more than 5 million users to publish monthly compliance reports.