Value Tensions

As we have shown, large platforms play host to many direct and indirect stakeholders who hold a variety of values, not all of which are compatible. When controversies erupt about when and how to moderate content on these platforms, we see that the underlying cause is often a tension between competing values.

The platforms have not made matters easy for themselves: it is clear that they scramble to address crises because ethics and values were not taken sufficiently into account when the platforms were built. This was a serious mistake and a design failure. Treating content moderation problems like bugs and attempting to patch them after the fact does nothing to mitigate the real harms that people and society have already borne, to say nothing of the reputational and financial costs that the platforms themselves have suffered as a result.

In this section, we briefly examine how VSD can be applied preemptively, at the design stage, to address value tensions and lay the groundwork for thoughtful, empathetic content moderation systems.

Addressing Value Tensions Through Design

There is no getting around the fact that content moderation is a hugely challenging task fraught with competing values. That said, the solution is not to avoid the problem by falsely claiming “our platform is neutral”. Instead, it is to earnestly take on the ethical challenges that the task presents, from the very inception of a platform’s design through its implementation within complex and often diverse social and political systems.

Here are some of the considerations and tensions that aspiring platform engineers must grapple with when it comes to hateful and inciting speech (to take just one example of challenging content). In this case the value tension is between free expression on the one hand and human welfare, calmness, and respect, among other values, on the other.

  • How to define hate speech and other forms of violent speech? This should be done in concert with representatives from impacted groups, so that their values can be clearly articulated and incorporated. Furthermore, this process should be sensitive to historical power imbalances between groups that underlie many forms of intolerance.

  • How to take context into account? This includes dealing with challenging content, like historical quotes and artistic expression, that may appear to violate content guidelines but should not be moderated. In extreme cases, where content cannot be effectively moderated due to a lack of cultural context (for example, Facebook in Myanmar), the platform owner must seriously consider whether it is ethical to operate the platform in the affected areas at all.

  • What process to use for moderating content? Crowdsourcing, paid human moderators, and machine learning all have their place if they can be made complementary (see the first sketch after this list). However, these systems also need to be tempered with transparent and accountable processes for contesting decisions and requesting review, which acknowledges the reality that no content moderation system will ever be perfect.

  • How to enforce moderation decisions? In addition to removing content and banning users, there are other options, such as adding warning labels alongside content, placing content behind a click-through warning screen, or preventing content from being algorithmically promoted (e.g., in search results or news feeds); the second sketch after this list illustrates such a graduated approach. Keep an open mind to even simpler, proactive measures: for example, research shows that reminding people about the community guidelines reduces the amount of offending content that is produced.
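
To make the “complementary” arrangement from the third bullet concrete, here is a minimal sketch of one way machine-learning triage, human review, and an appeals process could fit together. Everything in it (the classify stub, the thresholds, the AppealQueue) is a hypothetical illustration under assumed names, not a description of any real platform’s system.

```python
# A minimal, hypothetical sketch of ML triage + human review + appeals.
# All names, thresholds, and rules here are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum, auto


class Verdict(Enum):
    ALLOW = auto()
    REMOVE = auto()
    NEEDS_HUMAN_REVIEW = auto()


@dataclass
class Decision:
    post_id: str
    verdict: Verdict
    rationale: str            # shown to the author, in the spirit of transparency
    appealable: bool = True   # every automated or human decision can be contested


def classify(text: str) -> float:
    """Stand-in for an ML model estimating the probability of a policy violation."""
    text = text.lower()
    if "exampleslur" in text:   # placeholder term; real policies are far richer
        return 0.97
    if "borderline" in text:    # crude stand-in for genuinely ambiguous content
        return 0.50
    return 0.05


def triage(post_id: str, text: str,
           allow_below: float = 0.20, remove_above: float = 0.95) -> Decision:
    """Automate only the clear-cut cases; route uncertainty to trained people."""
    score = classify(text)
    if score >= remove_above:
        return Decision(post_id, Verdict.REMOVE, "High-confidence policy match")
    if score <= allow_below:
        return Decision(post_id, Verdict.ALLOW, "No policy concerns detected")
    return Decision(post_id, Verdict.NEEDS_HUMAN_REVIEW,
                    "Ambiguous; queued for a human moderator with cultural context")


@dataclass
class AppealQueue:
    """A transparent record of contested decisions awaiting human re-review."""
    pending: list = field(default_factory=list)

    def file_appeal(self, decision: Decision, user_statement: str) -> None:
        if decision.appealable:
            self.pending.append((decision, user_statement))


if __name__ == "__main__":
    decision = triage("post-42", "a borderline remark about a protected group")
    print(decision.verdict)   # Verdict.NEEDS_HUMAN_REVIEW under these thresholds
    appeals = AppealQueue()
    appeals.file_appeal(decision, "This quote comes from a history textbook.")
```

The design choice worth noticing is that the automated path handles only the unambiguous extremes, while everything in between is routed to people, and every outcome remains contestable.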
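
Similarly, the enforcement options in the last bullet can be read as a graduated ladder of responses rather than a binary remove-or-allow choice. The sketch below illustrates that idea; the action names and escalation rules are invented for illustration and are not any platform’s actual policy.

```python
# A hypothetical ladder of graduated enforcement responses, showing that removal
# and bans are not the only tools available.
from enum import Enum, auto


class Enforcement(Enum):
    REMIND_OF_GUIDELINES = auto()   # proactive nudge before or instead of sanction
    ADD_WARNING_LABEL = auto()      # content stays up, with context alongside it
    INTERSTITIAL_WARNING = auto()   # content hidden behind a click-through screen
    DEMOTE_IN_RANKING = auto()      # excluded from feeds and search recommendations
    REMOVE_CONTENT = auto()
    SUSPEND_ACCOUNT = auto()        # reserved for repeated or severe violations


def choose_enforcement(severity: str, prior_violations: int) -> Enforcement:
    """Illustrative escalation rules: match the response to severity and history."""
    if severity == "severe":
        return Enforcement.SUSPEND_ACCOUNT if prior_violations > 0 else Enforcement.REMOVE_CONTENT
    if severity == "moderate":
        return Enforcement.DEMOTE_IN_RANKING if prior_violations > 2 else Enforcement.INTERSTITIAL_WARNING
    if severity == "mild":
        return Enforcement.ADD_WARNING_LABEL if prior_violations > 0 else Enforcement.REMIND_OF_GUIDELINES
    return Enforcement.REMIND_OF_GUIDELINES


# Example: a first-time, mild violation draws a guideline reminder rather than removal.
assert choose_enforcement("mild", prior_violations=0) is Enforcement.REMIND_OF_GUIDELINES
```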

All of these issues require grappling with questions connected to human values. It is not possible to design an online content platform well without taking values into account.