Tech This Week | Will Facebook’s ‘Supreme Court’ make the web a safer place?

12 May 2020 10:09 AM
Earlier this week, Facebook's Oversight Board (often dubbed Facebook's Supreme Court) announced its co-chairs and first twenty members. The board allows users to appeal the removal of their posts and, on request, will also issue advisory opinions to the company on emerging policy questions.

Why did we arrive here? With its vast user base, Facebook has had a content moderation problem for a while. In an ideal world, good posts would stay up and bad posts would be pulled down. But that's not how it works. When it comes to Facebook posts, morality isn't always black and white. For some posts, arguments can be made on either side about where the right to free speech ends. The same goes for whether politicians should be allowed to lie in ads.

The status quo has historically been that Facebook takes these decisions and the world moves on. However, that process has largely been perceived as a black box. There hasn't been much transparency around how these decisions are taken, apart from the minutes of Facebook's Product Policy Forum, which are a mixed bag.

An intended and anticipated consequence of the board is that it will instil more transparency into the process of deciding what stays up and why. By reporting on what the board discussed and didn't discuss, it can help bring more clarity around the most prevalent problems on the platform. It may help reveal whether bullying is a bigger problem than hate speech, or how (and where) harassment and racism manifest themselves.

Then there is the question of whether the decisions taken by the board will be binding. Mark Zuckerberg has claimed that "The board's decisions will be binding, even if I or anyone at Facebook disagrees with it," so it is safe to say that Facebook vows they will be. The board will have the power to remove particular pieces of content. The question is whether the board's judgements will also apply to pieces of content that are similar or identical. If they don't, the board would have to pass a decision on every single piece of content on Facebook, which would make no sense.

Regarding this, Facebook's stance is that "in instances where Facebook identifies that identical content with parallel context - which the board has already decided upon - remains on Facebook, it will take action by analysing whether it is technically and operationally feasible to apply the board's decision to that content as well".

In simpler terms, board members (who will not all be computer engineers) could make recommendations that cannot be implemented across the platform. In that case, Facebook will not go ahead with replicating the decision for every similar piece of content on the platform. And in the event the board does go ahead with an exceptionally radical recommendation (say, turning off the like button), Facebook can ignore it.

On the bright side, as far as content moderation is concerned, there seems to be little reason for Facebook to go against the board's decisions anyway, considering the body has been set up to take this responsibility (and blame) off Facebook's hands.

The billion-dollar question is whether it will make Facebook a safer place. The short answer is no (followed by "too early to say"). The board will only be able to hear a few dozen cases at best. New members of the board have committed an average of 15 hours a month to the work, which is to moderate what stays up for a user base of 3 billion people. Even if the members worked full-time, the number of cases the board would be able to hear and pass judgement on would be a drop in the ocean. Based on how the body is structured, it makes sense for the members to deliberate on the most high-profile or charged cases (such as political advertising or the occurrence of deepfakes on the platforms).

Moving the needle forward has historically been a difficult process for society, and the board is an effort to do just that. The best-case scenario here is that the body achieves incremental progress by laying out key principles that guide Facebook's content moderation efforts. As for whether the board will make Facebook (and by extension, the web) a safer place, it is too early to say, but it seems unlikely. For every high-profile deepfake of Nancy Pelosi or Mark Zuckerberg, there are thousands of content moderation decisions that need to be made. Low-profile instances of misinformation, bullying, harassment, and abuse plague platforms like Facebook, Instagram, and WhatsApp, and will not magically cease to exist.

Instead, content moderation at Facebook will be a long, fraught battle, led by the board. This is the beginning of one of the world's most significant and consequential experiments in self-regulation. Time will tell how it shapes up.
