Has your Facebook account ever been suspended without warning? Has your meticulously curated fan page ever been disabled, or some of your account functions blocked?

Recently, Facebook announced that it had suspended 7 pages, 3 groups and 5 accounts. Head of Cybersecurity Policy Nathaniel Gleicher pointed out that these accounts had been spreading falsified information in a targeted, systematic manner, affecting tens of thousands of users. At the same time, Twitter shut down 936 accounts for the same reason: judging by their content, funding sources and the political pressure behind them, Twitter suspected the accounts were state-run and feared they would disrupt a rational platform for open discussion.

How, then, do social media sites determine whether the content of countless posts violates their cybersecurity policies? And who exactly is in charge of this seemingly trivial task?

Facebook’s Community Standards


In a recent interview with Liberty Times, Facebook Public Policy Manager Sheen Handoo cleared the air on Facebook’s Community Standards and offered a few real-life examples of how they work. Facebook allows users to express their views openly but prohibits inflammatory remarks aimed at a particular social group. Handoo illustrated the rule with two phrases: ‘I hate Christianity’ and ‘I hate Christians’. Both undoubtedly contain a degree of hatred, but only the latter would be removed. The reason: you may express your distaste for Christianity’s beliefs, but you may not launch personal attacks on members of the religion.
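To make the distinction concrete, here is a minimal, purely hypothetical sketch of the kind of rule involved. The term lists and the last-word heuristic are illustrative assumptions for this one example, not Facebook’s actual implementation, which relies on far more sophisticated models.

```python
# Hypothetical illustration of the "attack on people vs. criticism of ideas" rule.
# The term lists below are assumptions made for the example, not real policy data.
PROTECTED_GROUPS = {"christians", "muslims", "jews", "women", "immigrants"}
BELIEF_SYSTEMS = {"christianity", "islam", "judaism", "feminism", "immigration"}

def violates_hate_speech_rule(post: str) -> bool:
    """Flag direct attacks on a group of people; allow criticism of a belief."""
    words = post.lower().rstrip(".!").split()
    if "hate" not in words:
        return False
    target = words[-1]
    return target in PROTECTED_GROUPS  # attacking people, not an idea

print(violates_hate_speech_rule("I hate Christianity"))  # False: a belief system
print(violates_hate_speech_rule("I hate Christians"))    # True: a group of people
```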

As for adult content, users are forbidden from posting nude pictures or pornographic videos, and links leading to such content are also promptly removed. Handoo stressed that on issues concerning sex, the Community Standards are strictly enforced, with the goal of preventing further harm to those affected. For the same reason, speech mocking victims of sexual abuse is also forbidden.
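As a rough illustration of how links to banned content might be screened, the sketch below checks a post’s outbound URLs against a domain blocklist. The domain names and matching rule are assumptions invented for the example; the real system is not public.

```python
import re
from urllib.parse import urlparse

# Illustrative blocklist; a production system would use a maintained database.
BLOCKED_DOMAINS = {"example-adult-site.com", "example-porn.net"}

def contains_blocked_link(post: str) -> bool:
    """Return True if any URL in the post points to a blocked domain."""
    for url in re.findall(r"https?://\S+", post):
        domain = urlparse(url).netloc.lower()
        # Match the domain itself or any of its subdomains.
        if any(domain == d or domain.endswith("." + d) for d in BLOCKED_DOMAINS):
            return True
    return False

print(contains_blocked_link("Check this out: https://example-adult-site.com/video"))  # True
```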

A point to note is that according to Facebook’s Community Standards, users, pages or groups who have previously committed serious infarctions will be banned for life from using Facebook. Simply put, if you break the rules one too many times, your meticulously maintained posts will all be removed.

Artificial intelligence (AI) automatically detects posts that breach the Terms of Service and automatically penalizes the accounts involved

In early 2017, Facebook CEO Mark Zuckerberg began incorporating artificial intelligence (AI) into Facebook’s systems to filter out falsified information. The AI scans the content of each post to evaluate whether it contains offensive rhetoric or adult content.
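Facebook has not published the details of these models, but the general idea can be sketched with a standard text classifier. Everything below, from the toy training data to the model choice, is an assumption made for illustration; a production system would use deep models trained on millions of labeled posts.

```python
# A toy text classifier in the spirit of automated content screening.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up training examples; real systems learn from vast moderated datasets.
train_posts = [
    "I hate those people, they should disappear",
    "you are all worthless trash",
    "what a beautiful sunset today",
    "congratulations on your new job",
]
train_labels = ["violating", "violating", "ok", "ok"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_posts, train_labels)

print(model.predict(["they are all trash and I hate them"]))  # likely 'violating'
```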

If your account is suddenly suspended or disabled, it may be because the content of your posts breached Facebook’s official Terms of Use, its Community Standards or related policies, on suspicion that you posted content involving nudity, pornography, hate speech, violence or identity theft. In addition, excessive promoting or selling from your account within a short time may bring the AI knocking on your door on suspicion of spam, as sketched below.
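The spam heuristic can be pictured as a simple rate check: if one account publishes too many promotional posts inside a short window, it gets flagged. The threshold, window length and keyword test below are illustrative assumptions, not Facebook’s actual parameters.

```python
from collections import deque

WINDOW_SECONDS = 3600   # assumed: look at the last hour of activity
MAX_PROMO_POSTS = 10    # assumed: more than 10 promo posts per hour is suspicious
PROMO_KEYWORDS = ("buy now", "discount", "limited offer")

class SpamDetector:
    def __init__(self):
        self.promo_times = deque()  # timestamps of recent promotional posts

    def record_post(self, text: str, timestamp: float) -> bool:
        """Return True if this post pushes the account over the spam threshold."""
        if any(k in text.lower() for k in PROMO_KEYWORDS):
            self.promo_times.append(timestamp)
        # Drop events that have fallen out of the sliding window.
        while self.promo_times and timestamp - self.promo_times[0] > WINDOW_SECONDS:
            self.promo_times.popleft()
        return len(self.promo_times) > MAX_PROMO_POSTS

detector = SpamDetector()
flagged = [detector.record_post("Buy now! Limited offer!", t * 60) for t in range(12)]
print(flagged[-1])  # True: 12 promotional posts within one hour
```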


With the incorporation of AI, most accounts registered under fake names, or linked to terrorism and other offensive rhetoric, are picked up by the system before users report them; the Facebook takedowns mentioned above were scrubbed automatically by the AI. However, despite the best efforts of Facebook Chief Technology Officer (CTO) Mike Schroepfer and his team of 150 engineers, offensive posts keep popping up here and there, showing that the AI still needs time to mature. Furthermore, the AI’s ability to assess content is inconsistent: for hate speech, it can accurately flag only 38% of offending posts.

Even with the rapid advancement of AI, Schroepfer understands that cybersecurity is a never-ending task. It cannot be denied that AI is not yet able to pick up the subtle nuances of language; even we humans sometimes have trouble distinguishing right from wrong. So, alongside the heavy lifting done by the AI, content moderators are on standby to filter offensive content a second time.

With 2.3 billion active accounts, Facebook has 7,000 content moderators, each evaluating a single post in one minute on average

Content moderators mainly evaluate posts reported by users and remove those that violate the Terms of Service. While some offensive content has already been filtered out by the AI, the bullying, hate and political-interference posts mentioned above, as well as posts targeting specific groups, still rely on moderators for removal.
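Put together, the division of labor can be sketched as a confidence-threshold pipeline: the model auto-removes what it is sure about, sends uncertain cases to a human review queue, and leaves the rest alone. The thresholds and routing labels below are assumptions for illustration only.

```python
# Hypothetical routing logic for a combined AI + human review pipeline.
AUTO_REMOVE_THRESHOLD = 0.95   # assumed: model is nearly certain of a violation
HUMAN_REVIEW_THRESHOLD = 0.50  # assumed: uncertain cases go to moderators

def route_post(violation_score: float) -> str:
    """Route a post based on the model's estimated violation probability."""
    if violation_score >= AUTO_REMOVE_THRESHOLD:
        return "auto-remove"
    if violation_score >= HUMAN_REVIEW_THRESHOLD:
        return "human-review-queue"
    return "leave-up"

for score in (0.99, 0.70, 0.10):
    print(score, "->", route_post(score))
```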


The Verge once ran a piece on the rights of content moderators. Moderators work less than 8 hours a day, yet on average have to evaluate 8,000 posts, and their hourly rate is only US$15. Worst of all, they must spend long stretches looking at highly disturbing pornographic or violent images, which creates enormous psychological pressure and, in some cases, leads to post-traumatic stress disorder.

AI and content moderators work hand in hand to keep social media safe, giving us a pleasant platform to use. From a technological perspective, AI still cannot single-handedly evaluate every post effectively, so there is plenty of room for it to advance. From a human-rights perspective, until a more capable AI system is developed, one hopes Facebook will pay more attention to the rights of its content moderators so that they receive the pay and benefits they deserve.