Facebook took action on 33.3 million pieces of content during the period June 16 to July 31


The proactive rate, which shows the percentage of all content or accounts that Facebook found and flagged using technology before users reported it, was in most of these cases between 86.8% and 99.9%.

Facebook proactively "actioned" more than 33.3 million pieces of content across ten violation categories from June 16 to July 31 in India, the social media giant said in its compliance report on Tuesday. Instagram, Facebook's photo-sharing platform, proactively took action against around 2.8 million pieces of content across nine categories during the same period.

The company said it received 1,504 user reports for Facebook and 265 reports for Instagram through its Indian complaints mechanism between June 16 and July 31, and the social media company responded to all of them.

A Facebook spokesperson said that over the years the company has consistently invested in technology, people and processes to keep users safe online and allow them to express themselves freely on its platform.

"We use a combination of artificial intelligence, reports from our community and reviews by our teams to identify and review content against our policies. In accordance with the IT Rules, we have published our second monthly compliance report for the 46-day period of June 16 to July 31," the spokesperson said in a statement to PTI.

This report contains details of content that was proactively removed using automated tools and details of user complaints received and actions taken, the spokesperson noted.

In its report, Facebook said it “took action” on more than 33.3 million pieces of content in ten categories from June 16 to July 31, 2021.


This includes content related to spam (25.6 million), violent and graphic content (3.5 million), adult nudity and sexual activity (2.6 million) and hate speech (324,300).

Other categories in which content was actioned include bullying and harassment (123,400), suicide and self-harm (945,600), dangerous organisations and individuals: terrorist propaganda (121,200), and dangerous organisations and individuals: organised hate (94,500).

"Actioned" content refers to the number of pieces of content (such as posts, photos, videos or comments) against which action was taken for violating standards. Taking action may include removing a piece of content from Facebook or Instagram, or covering photos or videos that may disturb some audiences with a warning.

The proactive rate for removal of bullying and harassment-related content was 42.3%, as this content is contextual and highly personal in nature. In many cases, people need to report this behaviour to Facebook before it can identify or remove such content.

Under the new IT Rules, large digital platforms (with more than 5 million users) are required to publish compliance reports every month, listing details of complaints received and action taken on them. The report must also include the number of specific communication links or pieces of information to which the intermediary has removed or disabled access through proactive monitoring carried out using automated tools.

During the period of May 15 to June 15, Facebook had "actioned" more than 30 million pieces of content across ten violation categories, while Instagram took action against approximately two million pieces across nine categories during the same period.

For Instagram, 2.8 million pieces of content were actioned across nine categories between June 16 and July 31. This includes content related to suicide and self-harm (811,000), violent and graphic content (1.1 million), adult nudity and sexual activity (676,100) and bullying and harassment (195,100).

Other categories in which content was actioned include hate speech (56,200), dangerous organisations and individuals: terrorist propaganda (9,100), and dangerous organisations and individuals: organised hate (5,500).

Between June 16 and July 31, Facebook received 1,504 reports through its Indian grievance mechanism. "Of these incoming reports, we provided tools for users to resolve their issues in 1,326 cases. These include pre-established channels to report content for specific violations, self-remediation flows where they can download their data, avenues to address hacked-account issues, etc.," it said.

During the same period, Instagram received 265 reports through the Indian grievance mechanism and provided tools for users to resolve their issues in 181 cases, it added.

Earlier this week, Google said it had received 36,934 user complaints and removed 95,680 pieces of content based on those complaints, and had removed a further 576,892 pieces of content in July as a result of automated detection.

