How Effective Is Facebook at Detecting Bad Content?
Facebook’s latest Community Standards Enforcement Report shows not only how many pieces of rule-violating content the company took action on, but also how effective Facebook is at identifying such content in the first place.
Looking at what Facebook calls the “proactive rate” for different types of violations, i.e. the percentage of violating content that the company identified before anyone reported it, reveals one of the main challenges the world’s largest social network faces in trying to keep its platform clean: while it is relatively easy for artificial intelligence to identify images involving nudity or graphic violence and to filter out blatant spam, it is much harder to identify hate speech, bullying or harassment, which often requires context and a human understanding of nuance.
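To make the metric concrete, here is a minimal sketch of how a proactive rate could be computed. The function and the sample figures are hypothetical illustrations (they are not from Facebook's report, apart from the 80 percent result mentioned below); the calculation simply divides proactively flagged items by all items actioned.

```python
def proactive_rate(proactively_detected: int, total_actioned: int) -> float:
    """Share of actioned content that was flagged before any user report."""
    if total_actioned <= 0:
        raise ValueError("total_actioned must be positive")
    return proactively_detected / total_actioned * 100

# Hypothetical example: if 4.0 million hate speech posts were actioned
# and automated systems flagged 3.2 million of them before any user
# report, the proactive rate works out to 80 percent.
print(f"{proactive_rate(3_200_000, 4_000_000):.0f}%")  # -> 80%
```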
Since Facebook relies mainly on technology to identify potentially harmful content, with humans getting involved at a later stage in the review process, it comes as no surprise that the company still struggles to catch hate speech or bullying before its users do. While its success rate in filtering hate speech has improved from 52 to 80 percent over the past 12 months, it remains significantly lower than for more clear-cut types of violating content.
Chart: Percentage of violating content detected by Facebook before users reported it.