Test your knowledge with a few review questions.
A social networking platform could take this position; the important thing is to recognize that doing so is an explicit choice with moral and ethical dimensions. “Neutral” is not a default, apolitical baseline. A platform adopting this approach specifically favors free expression over competing values like human welfare, since people on the platform will inevitably be targeted by bullying, trolling, harassment, and hate speech.
Nope. Machine learning and other forms of automation are just one tool in the content moderation toolbox. Predictive models always produce false positives and false negatives, so human review will always be necessary to adjudicate content that automated systems misclassify. Further, machine learning systems can carry their own biases, which can end up harming specific groups of people.
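To see why misclassification is unavoidable, consider a minimal sketch (with entirely hypothetical classifier scores and thresholds, not any real platform's system) of how moving an automated classifier's decision threshold trades false positives against false negatives:

```python
# Hypothetical (score, actually_violating) pairs from an automated classifier,
# where the score is the model's estimated probability that a post violates policy.
# These numbers are illustrative only; real model outputs differ.
labeled_posts = [
    (0.95, True), (0.80, True), (0.60, True), (0.40, True),    # actually violating
    (0.70, False), (0.55, False), (0.30, False), (0.10, False) # actually benign
]

def error_counts(threshold):
    """Count false positives (benign posts removed) and
    false negatives (violating posts left up) at a given threshold."""
    fp = sum(1 for score, violating in labeled_posts
             if score >= threshold and not violating)
    fn = sum(1 for score, violating in labeled_posts
             if score < threshold and violating)
    return fp, fn

for t in (0.2, 0.5, 0.8):
    fp, fn = error_counts(t)
    print(f"threshold={t:.1f}: {fp} false positives, {fn} false negatives")
```

Running this prints 3 false positives at the lenient threshold, 0 false negatives at that same threshold, and the reverse pattern at the strict one: a strict threshold leaves violating content up, while a lenient one removes benign speech. Because the score distributions of violating and benign posts overlap, no threshold eliminates both error types, which is why human review remains necessary.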
Close, but you’re forgetting about the importance of policy! Without robust and nuanced policy, it’s impossible to calibrate automated systems, adjudicate moderation disputes, or hold oneself accountable for how well (or poorly) moderation systems perform.
No. Consider trolls, who cynically invoke free expression as a means to harass others. Trolls are part of the public, but they are not operating in good faith, and therefore their values are irrelevant. Or consider criminals who want to use the platform to sell illicit items. There is no reason to accommodate the values of this particular group.
Yes! Media studies scholar Tarleton Gillespie argues that content moderation is actually the primary business of online social media platforms. As such, it is critical that the people building these platforms be honest about the business they are in and embrace the challenges of content moderation in earnest.