At this point we have outlined the contours of the content moderation problem and discussed classes of techniques that can be used to begin addressing these challenges. With this background in mind, we can now perform a value sensitive analysis of content moderation as a step toward designing empathetic content moderation systems. The first step in this process is envisioning who the stakeholders are and the values that they hold. We divide our discussion between direct stakeholders (i.e., the people who control, develop, and use a given platform) and indirect stakeholders (i.e., people who do not use a platform but are still potentially impacted by the platform's content moderation stance).
We begin with direct stakeholders. Because online platforms are so massive and ubiquitous today, the range of direct stakeholders is equally broad. Here we discuss a sample of direct stakeholders, some obvious and others less so.
The platform owner. The platform owner is presumably an individual or organization interested in generating profit, which means attracting and retaining people on their platform. Seen from a capitalist perspective, content moderation is a means to an end for the owner: a cost center that is necessary to ensure that people do not abandon the platform and to maintain legal compliance (e.g., with copyright laws). That said, platform owners may also hold beliefs and values that they wish to see borne out on their platform through content moderation, regardless of the business implications. For example, a platform owner may decide that they want to craft a “family friendly” space, which may necessitate curbs on free speech, e.g., a prohibition against pornography.
The platform developers. The people who build a platform, in technical and non-technical roles, will have their own ideas about the values that should be promoted by the systems they build. With respect to content moderation, the developers may want to build systems that are accessible to the broadest range of people, promote human welfare, and facilitate basic human rights.
Content moderators. Content moderators are “insiders” who require special consideration. Because content moderators, be they volunteers or professionals, are tasked with triaging content that may be hateful, violent, disgusting, or otherwise repugnant, care must be taken to ensure their human welfare and calmness. All too often, content moderators are not shown respect, which again points to the need to build support structures for them.
General users. The people who use social media platforms may appreciate free expression, i.e., the ability to post and discuss whatever they choose, but they also expect respect for their human welfare, e.g., content moderation policies and systems that protect them from ad hominem attacks and violent speech. Furthermore, when content is being moderated, people expect transparency (i.e., clear rationale for what is being moderated and why), accountability (i.e., systems to contest moderation decisions and policies), and freedom from bias (i.e., reasonably uniform application of content moderation policies).
Members of historically oppressed groups. Among the general user population, groups such as women, BIPOC, and LGBTQIA+ people deserve special consideration for three reasons. First, they are likely to be targets of hateful content that must be moderated. Second, they are often the targets of coordinated campaigns that weaponize content moderation systems, e.g., attempts to censor them by disingenuously reporting their accounts and content. Third, automated content moderation systems may exhibit bias against individuals from these groups.
Victims of sexual violence. People who are victims of trafficking, stalking, and intimate partner violence often suffer losses of autonomy and human welfare when their personal information or private images (e.g., non-consensual “revenge” pornography) are posted to social media by their abusers. Content moderation policies and systems must have mechanisms for dealing with these cases.
Children and the young. Even when platforms have policies that forbid young children from joining, children may surreptitiously join anyway. Thus, developers and designers must at least consider how to build content moderation systems that deal with this particular worst-case scenario, e.g., a young child gaining access to content on the platform. Even for older children and young adults who are allowed on the platform, special considerations must be made. For example, the mental health of children and young adults is fragile in the face of cyberbullying.
People living under oppressive regimes. Many countries have laws and practices that Americans would regard as oppressive. For example, the Chinese government routinely censors political content. Thailand has a lèse-majesté law that forbids criticism of the monarchy and is vigorously enforced. Platform owners and designers must think carefully about how to approach these laws: ignore them on principle to favor human rights and free expression, at the risk of being sued or even blocked from these countries entirely; or comply and favor access (as well as potential profit from users in these countries).
Obviously, this list of stakeholders is incomplete. Especially for global platforms, it must take into account the values of groups from all over the world.
In addition to the people who directly build and use an online platform, other groups may be indirectly impacted by the content on the platform and the moderation systems that govern it.
The public. Recent history has vividly demonstrated how, left unchecked, misinformation and disinformation may run rampant on social media platforms. In addition to impacting people who directly read and interact with this content, such mis- and disinformation can have broader consequences for the public. For example, health and vaccine-related misinformation can compromise public health, and election-related misinformation can undermine democratic institutions.
Governments. Democratic governments have an interest in making sure their citizens are well-informed and enfranchised. However, this may be undermined by coordinated attempts by foreign and domestic actors to spread mis- and disinformation on social networks.
Law enforcement. Although social media platforms are public and offer no privacy guarantees, criminals routinely use them to exchange illegal material (e.g., child pornography) and offer illicit goods for sale (e.g., drugs and guns). Law enforcement has an interest in seeing that this content is not permitted on these platforms and is taken down. Law enforcement also has an interest in collecting data on the people who post this content on social media so that they may be tracked down and prosecuted.