Moderation
The consequences of moderation decisions can be far-reaching and, in individual cases, can even threaten people's livelihoods. On most established platforms, these decisions are made opaquely and at the operators' discretion, based on vague rules (e.g., "no harmful content"). Most platforms today rely on a mix of automated moderation and (often precariously employed) human moderators. This centralized moderation often shows little understanding of the social and cultural context of content and frequently misreads irony, satire, and meta-level discussion of problematic material. Some platforms, such as Reddit, use a hybrid moderation system in which decisions are made partly by moderators who are themselves regular users.
Two directions of work follow from this: on the one hand, more reliable and context-aware detection of rule violations using artificial intelligence is an active and broadly promising area of research; on the other hand, existing moderation concepts should be compared systematically with one another, and new approaches developed and evaluated.
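To make the first direction concrete, the sketch below frames rule-violation detection as a plain text-classification problem. It is a minimal illustration only: the labeled posts are invented, the TF-IDF/logistic-regression model is a deliberately simple stand-in for the much larger language models production systems use, and the triage thresholds (remove / escalate / keep) are assumptions chosen to show how a score can feed a hybrid human/machine workflow rather than produce a hard automated verdict.

```python
# Toy sketch of automated rule-violation detection framed as text
# classification. The hand-labeled posts below are invented for
# illustration; real systems train far larger models on millions of
# labeled examples and still route uncertain cases to human moderators.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled data: (post, violates_rules).
posts = [
    ("You are all idiots and deserve the worst", 1),
    ("I will find out where you live", 1),
    ("Nobody should listen to people like you, get lost", 1),
    ("Great article, thanks for sharing", 0),
    ("I disagree, but your point about pricing is fair", 0),
    ("Could you cite a source for that statistic?", 0),
]
texts, labels = zip(*posts)

# TF-IDF features plus logistic regression: a deliberately simple
# stand-in for the transformer models used in practice.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# A probability score enables a triage policy instead of a hard yes/no:
# auto-remove only confident cases and escalate the gray zone to human
# review. With this tiny corpus the scores stay near 0.5, so most posts
# would be escalated, which is exactly the conservative behavior a
# hybrid pipeline aims for when the model is unsure.
for post in ["Thanks, that was really helpful", "You people are worthless"]:
    p = model.predict_proba([post])[0, 1]
    action = "remove" if p > 0.9 else "escalate" if p > 0.3 else "keep"
    print(f"{p:.2f}  {action:8s}  {post}")
```

The design point is the middle band: rather than forcing the classifier to decide every case, uncertain posts are handed to humans, which is one way to combine automation's scale with the contextual judgment centralized automated moderation lacks.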
Who should decide what may be said, and who should enforce it, and how? These questions about society and governance may be as old as civilization itself. They have by no means been conclusively resolved (which, given their inherent subjectivity and complexity, may never be possible), but they are well studied, and over the course of history humans have developed various concepts for fair and resilient systems. Parallels can reasonably be drawn to the moderation of public social networks.

Transferring such standards and concepts can be useful, both in the search for solutions and to illustrate the current state of affairs: few would find it appealing to live in a society in which profit-driven, opaque private organizations set vague rules about acceptable speech and enforce them at will, suppressing individual utterances and, in extreme cases, imposing permanent bans. For the digital aspects of our lives, however, this is not a dystopia but reality. The comparison is imperfect because a state's laws provide a framework only for what must be moderated, not for what may not be moderated or for how moderation is to be carried out. In other words: some things are forbidden on social media because they would violate laws (e.g., incitement or defamation), and platforms must moderate such content and report it to the authorities; whether this works reliably is another question. Beyond that, platforms may set their own rules that go further and ban speech that is legally permitted. This latitude to shape what counts as free expression lies outside democratic control and is exercised without transparency.

Why doesn't this worry more people? One reason may be that many still see the internet as an "optional add-on" to physical public space: "If you don't like the rules there, you don't have to use the internet." This view overlooks decades of change: digital media platforms are now a fixed, deeply integrated, and for many purposes indispensable part of modern social life.