It’s fall 2020, but we’re already well into our winter of discontent. In the grip of a seemingly endless omni-crisis, America lurches towards what could very well be a disputed election and all of the bad-to-worse scenarios associated with it. The President has issued what many interpret as a coded signal to white supremacist groups during the first debate with former Vice President Joe Biden. All in all, things aren’t looking great. So here comes former Twitter CEO Dick Costolo, who tweeted the following in an argument with some other tech industry people over a cryptocurrency company:
Me-first capitalists who think you can separate society from business are going to be the first people lined up against the wall and shot in the revolution. I’ll happily provide video commentary.
What prompted Costolo, not exactly a Ho Chi Minh man, to start posting excerpts from the Tet Offensive? As strange as it sounds, a blog post from Coinbase CEO Brian Armstrong. But I won’t be talking about that in this post. I imagine Costolo will, once he formulates an appropriate response, make some effort at clarifying his words and responding to the growing outrage over them. Some of this analysis thus risks becoming outdated or even irrelevant as new information arises over the next day or so, but I think the basic problem it dramatizes is sufficiently abstract to remain salvageable after any such evolutions.
It is tempting to micro-analyze what Costolo “really” meant, but based on past experience I do not think it would be that revelatory. It reminded me of an incident I talked about in a post I wrote last year about trolling and collective meaning, prompted by watching someone with a staid administrative job, milquetoast politics, and a boring personality endorse revolutionary political violence. Were they just trolling? Were they participating in some collective performative ritual? And if I challenged them about it, would they even recognize what they had said? I concluded, at the time, that both the overall dysfunction of our political situation and the increasingly chaotic nature of online communication made it difficult if not impossible for me to productively answer these questions.
To call it chaotic, though, is still something of an understatement. Online social media content, unlike print writing, is far from “inert” and “tame.” Rather, it is volatile, immediate, and escalatory. Print culture was associated with a psychological conception of a “buffered” self that autonomously made its own way, safe from various external influences and contagions. Oral culture, which social media has “re-animated,” instead brings to mind a “porous” and “vulnerable” self dependent on a surrounding community to regulate its conduct via “liturgical” and “ritualized” processes. Furthermore, if the era of the buffered self was a secularizing and rationalizing one, the era of the re-animated porous self is distinguished by the decidedly demonological themes latent in the folkloric and institutional mainstreaming of paranoid delusions brought on by the informatization of society.
The paradox is that the fiercely individualistic culture of Silicon Valley, in its quest to make online havens for freethinkers, so thoroughly undermined the very basis of modern liberal individualism through this re-animation that one must deploy theories about collective human sacrificial rituals to understand social media’s epidemic of peer-to-peer surveillance, harassment, and scapegoating. Further flattening individuality is the manner in which social networks collapse context and shape content. This often leads both to externally directed disdain for other people’s increasingly programmatic personalities and to internal self-loathing over one’s consciousness of one’s own lack of uniqueness. I have discussed some of the cultural and psychological dimensions of this in prior posts.
The subject of this post is not how to understand the Costolo tweet but rather what to do about it. What to do about things like the Costolo tweet is at the heart of the Vietnam-like imbroglio of online content control in the absence of a legible narrative context for social media content. Content moderation and speech policing online are hard. But why? For one, automating moderation at scale creates problems like censoring people fact-checking misleading information because their posts include references to the misleading information in question. Several recent case studies of Twitter moderation mistakes on Techdirt give a flavor of the sometimes comically haphazard filtering regimes that run on Twitter and similar sites.
But if one runs afoul of faulty content regulation systems, getting out of “purgatory” is far easier for those with influence and social capital. Additionally, high-profile politicians are often granted flexibility in cases where ordinary users would be shown little mercy. Therefore, while we hear a lot about the evils of anonymity, most long-term social media users understand that people of stature often have little compunction about blatantly engaging in anti-social behavior under their own names. And cynical users understand full well that if supplication to escape the banhammer is one of the defining experiences of social media moderation, the powerful generally find the process of supplication far easier to navigate. This points to a potential answer to the question raised earlier about why governance of social media is so difficult.
Social media governance is not hard necessarily because of the enormous challenges and tradeoffs of content moderation. Rather, it is hard because content moderation lacks legitimacy. The spectacle of a former Twitter CEO waxing poetic about people being “lined up against the wall and shot” while ordinary users are harshly punished would seem to underscore this lack of legitimacy. Double standards and favoritism, real and imagined, certainly can erode legitimacy. But the opaqueness and incoherence of moderation decision-making also worsen the problem. Social media companies are primarily reliant on mechanized judgment, with mechanization here referring both to computers that automate bureaucratic content-filtering rules at scale and to armies of underpaid men and women who must apply those rules just as automatically. Mechanized judgment superficially resembles liberal proceduralism, but it does not constitute legitimate authority. Or at least not legitimate authority as most would conceive of it.
It is not able to give a coherent explanation or defense of its own reasoning, it is rule by law instead of rule of law, and it encourages supplication to anonymous centralized power rather than mature self-governance. All of this should be considered as Twitter and other social media companies crank up their moderation in the wake of COVID-19 and increased domestic political turmoil. Lastly, all moderation decisions must play out in a social climate already rife with distrust, polarization, and bad faith. Consequently, they are often received with overwhelming cynicism by the digital masses even when Twitter and other social media companies behave judiciously and appropriately. And while it would be ideal if moderation decisions could be more soundly formulated and justified, even ostensibly “fair” social media moderation decision-making is a kind of secular casuistry deployed in the absence of any overarching shared principles of morality to inform its analysis of conflicted individual situations.
All of these factors are important. But I would not make too much of these particular problems alone to explain why social media moderation inherently lacks legitimacy, especially as it pertains to the Costolo tweet. I wish to go beyond the typical response to outrage over social media content filtering – accusations of double standards and ineptitude – to note that just because a policy is sub-optimally enforced does not mean it would work if it were somehow optimally enforced. All moderation tends to expose an abstract clash between the social context of a particular personalistic online interaction and the anonymous and depersonalized rules of social platforms. In an ideal world, these clashes could be resolved by the judicious and pragmatic application of interpretive discretion. But in even minor culture war blowups, there is often a mishmash of dueling communities, each with its own subjectively determined patterns of interaction and interpretation.
Some people understand the damn thing is just kayfabe; other people take even the words of self-proclaimed liars and tricksters at face value. Some people are on that many levels of irony, my dude. But other people strongly believe irony is just a disguise for bigotry. Making a moderation decision requires at least rejecting a particular bespoke interpretation of social context, and people who want their own interpretive context to automatically take precedence over those of hostile outsiders refer to the latter as “bad faith actors weaponizing the rules.” Right or wrong, in the absence of any clear authority for accepting or rejecting bespoke interpretations, the interpretations of the aforementioned “bad faith actors” cannot be easily dismissed.
With that being said, let’s go back to the Costolo tweet and take another look. Costolo’s tweet emerged from a conversation between Costolo and two other tech industry insiders, one of whom (the person at whom the bellicose tweet was directed) claimed that he did not feel threatened or offended:
If I felt Dick were actually the kind of person who would seriously advocate political violence I’d block him. I took it as a joke directed at me and chose not to be offended. Others can make their own judgement…It was a dumb thing to say, especially as the former CEO of Twitter. I’m not absolving him of that. I took it as hyperbole in the context of the thread & our relationship. It’s forgetting Twitter in the round & mobs strip context that gets ya. I think we’ve all been there.
Another person Costolo was initially interacting with also pleaded on his behalf: “I give dick the benefit of the doubt with that tweet — he’s a comedian and a deeply empathic person.” Given the vituperative nature of Costolo’s arguments with both of these individuals prior to the offending tweet, neither statement is trivially disposable. But should they be considered exculpatory? And, more importantly, should they take precedence over other impressions of Costolo simply because of these individuals’ seemingly greater familiarity with him? The problem is that as soon as Costolo hit the “post” button, these sorts of nuances rapidly became irrelevant. That several people who directly participated in the interaction and know Costolo well vouched for him does not really cancel out the enormous number of people who saw the interaction, do not know him well, and instead condemn him.
Because the conversation took place in public, on a site structurally designed to take conversations (especially by power users) and blast them out to everyone within digital shouting distance, it quickly drew in all manner of other actors with their own divergent inputs and interpretations. And in the context of what looks to outsiders like a clear violation of Twitter’s prohibition of violent and threatening content, should the benign interpretation offered by the people Costolo was primarily interacting with take precedence? If Twitter thought such immediate contacts’ judgments of Costolo should be privileged, most people that are now indirectly or directly aware of Costolo’s tweet would not have heard about the incident to begin with. That they have is the issue, and perhaps the reason why Twitter moderation cannot work. Here, I introduce some ideas from Jonathan Bjorn Nelson to emphasize the material and technical underpinnings of the problem.
In the opening of his newsletter series on social media, Nelson develops a rather powerful insight: Twitter is a combination of small, often dyadic interactions built on rich social cues and mental models, and aggregate collective interactions in which pattern-matching substitutes for those sparse social cues.
Unsurprisingly, most interactions take place between friends (i.e. people the ego follows) and around a third of interactions take place between mutuals. But, an impressive amount takes place outside these relationships. Your friends and mutuals do not form anything like a blanket for interactions you only occasionally escape. And, rightfully so — ambient discovery couldn’t work so wonderfully well if they did! However, this also presents a critical problem… In search of lacking and needed structure, our remarkable pattern-matching machinery gets to work manufacturing cues from scant material. To some degree, that’s what culture affords. On twitter, it means we create meaning from things like “pronouns in bio” or American flags in a display name or the hashtags in someone’s tweets or even the vocabulary someone uses. These cues provide a means to aggregate experiences, transcending experiential sparsity in a way that affords stable expectations. Thus, when we’re confronted with contexts that have limited, ambiguous, noisy, and error-prone available information — something that happens with high frequency on twitter — cues offer a strong and readily available signal for integration. Absent the necessary experience, the resulting mixture allows us to construct “good” models in a better-than-chance predictive sense. Unfortunately, the social cues end up dominating the accessible information.
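To make Nelson’s mechanism concrete, here is a toy sketch of cue-based pattern matching. It is my own illustration, not Nelson’s model; the cue names and weights are invented for the example.

```python
# Toy sketch of cue-based pattern matching from sparse profile signals.
# The cues and weights are invented; any resemblance to a real
# classifier is purely illustrative.

CUE_WEIGHTS = {
    "pronouns_in_bio": -1.2,
    "flag_in_name":    +1.5,
    "hashtag_heavy":   +0.4,
    "academic_jargon": -0.8,
}

def guess_tribe(visible_cues):
    """Aggregate whatever sparse cues are visible into one crude guess.

    With zero cues the score is 0 and the guess is a coin flip; each
    observed cue nudges it. The result is better than chance on average
    across many strangers, yet confidently wrong about any individual
    who does not match their "type".
    """
    score = sum(CUE_WEIGHTS.get(cue, 0.0) for cue in visible_cues)
    return "out-group" if score > 0 else "in-group"

# Two visible cues are enough to pattern-match a total stranger.
print(guess_tribe({"flag_in_name", "hashtag_heavy"}))  # -> out-group
```

The point of the toy is only that a handful of crude signals yields a guess that beats chance in aggregate while remaining confidently wrong about individuals.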
The shared social space outside of core interactions between followers and mutuals functions according to ambient discovery. While a large space of possible friend-of-a-friend interactions is a generic feature of most offline social settings, as Nelson explains, the frequency of these interactions is what allows Twitter to continuously connect any number of arbitrary cliques that otherwise would not come into contact with each other. However, it is also impossible to develop experience-based mental models for a large number of people outside the core interaction set. Individual connected people can see each other as rich, granular images, but collectives outside of the core follower/mutual dynamic look more like graphically weak games designed to run on underpowered PCs. Stereotypes are the basis for interactions between strangers (online or offline, but particularly online), but they are not necessarily irrational modeling choices given that they seem to work “just good enough” in the presence of ambiguous, novel, and/or conflicting information.
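A minimal sketch of that clique-bridging dynamic, using an invented seven-account graph and a deliberately simplified surfacing rule (a like exposes a tweet to the liker’s followers) rather than Twitter’s actual mechanics:

```python
# Minimal sketch of ambient discovery: a single like by a "bridge"
# account exposes one clique's conversation to another clique that
# follows nobody inside it. The graph and the surfacing rule are
# invented simplifications, not Twitter's real mechanics.

follows = {  # account -> set of accounts it follows
    "a1": {"a2", "a3"}, "a2": {"a1", "a3"}, "a3": {"a1", "a2"},          # clique A
    "b1": {"b2", "b3", "bridge"}, "b2": {"b1", "b3"}, "b3": {"b1", "b2"},  # clique B
    "bridge": {"a1", "b1"},  # one foot in each clique
}
followers = {u: {f for f, fs in follows.items() if u in fs} for u in follows}

def exposed_to(author, likers):
    """Audience of a tweet: the author's followers, plus the followers
    of anyone whose like surfaces it onto adjacent timelines."""
    audience = set(followers[author])
    for liker in likers:
        audience |= followers[liker]
    return audience - {author}

print(sorted(exposed_to("a1", likers=[])))          # clique A plus the bridge
print(sorted(exposed_to("a1", likers=["bridge"])))  # now b1 sees it, context-free
```

In the second print, b1 is casting judgment on a conversation for which it has no cues beyond stereotype, which is exactly the regime Nelson describes.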
Another critically important factor that Nelson diagnoses is that big accounts tend to rob social attention from everyone else. A big account with many followers directly propagates its tweets to those followers, but also indirectly to far more people. This is partly because followers may recirculate the content to their own networks, and partly because followers’ other interactions with the content (such as liking or responding to it) may surface it on the timelines of followers of followers. The more followers someone has, the less likely any interaction context featuring that person can remain locally grounded, due to the dynamics Nelson elaborated on earlier. Nelson uses these stylized technical facts to argue that social media, and Twitter in particular, overwhelms human sense-making capacities. But I go one step further and argue that these stylized facts make effective and legitimate moderation impossible at scale.
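The scale effect can be seen in a back-of-envelope calculation. In this sketch the surfacing probability is made up, but the shape of the result does not depend on it: if each follower has any independent chance of publicly surfacing a thread, locality collapses as follower counts grow.

```python
# Back-of-envelope sketch: if every follower independently has a tiny
# chance of surfacing a thread (public like, reply, retweet) beyond the
# participants' own graphs, locality dies with scale. The probability
# p_surface is made up for illustration.

def p_stays_local(followers: int, p_surface: float = 1e-4) -> float:
    """Chance that not a single follower surfaces the thread."""
    return (1 - p_surface) ** followers

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9,} followers -> P(thread stays local) = {p_stays_local(n):.3g}")
# 100 followers: ~0.99. 10,000: ~0.37. 1,000,000: effectively zero.
```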
The way ambient discovery de-localizes the context of any particular dyadic interaction also makes contentious dyadic interactions potentially the site of a crime that hundreds, thousands, or even millions will collectively cast judgment on – despite themselves often being participants in that crime! Juries operating within a far more developed and legitimated system of jurisprudence can trigger riots by letting police officers walk when a case is sufficiently publicized. Is it any wonder that moderation decisions, a pathetic and silly imitation of the court system, do not enjoy legitimacy? It would be far more bizarre and stupefying for these decisions to be accepted without generating further grievance! And we can see this very much at play in both the Costolo example and the difficulties of deciding how we ought to deal with its potentially mitigating circumstances. Or whether those mitigating circumstances should matter at all.
To fairly interpret the micro-interactions that generated the inflammatory Costolo tweet, a moderating authority would need to give it a level of individual attention and care that runs at variance with the way Twitter works as a platform. If the mental models of only those people intimately familiar with Costolo should take precedence in a moderation decision, the site would not make it so easy for people who lack such models to observe and interact with him. Precisely because it privileges so many additional observations and interactions of the kind that Nelson describes (and quantifies to some degree), Twitter pre-emptively delegitimizes every single moderation decision it makes of even moderate publicity and consequence. Imagine the Costolo conundrum, iterated for years across innumerable medium- and large-follower accounts and moderately known and famous personalities, and you have a formula for the production of popular cynicism about moderation among every political faction and social clique on the site.
It is often said that conflicts on social media are human and social problems that cannot be solved by engineers and technicians. Perhaps, but given the dynamics Nelson has discussed, it is also hard to see how policy can generate improvements without primarily technical change. Much of social rule-tweaking and enforcement optimization is premature optimization. Donald Knuth warned against “premature optimization” not because he had anything against micro-optimizing code, but because he understood that poor initial choices about system performance (improper selection of algorithms, data structures, or important software and hardware components) created bottlenecks that no amount of later hand-optimization could tweak away. The same is true of designing technical structures that are incompatible with later social expectations about how they are to be legitimately governed. This post has discussed how a basic feature of Twitter – ambient discovery and the inability to keep interactions purely localized – undermines any hope of living up to expectations about the legitimacy of moderation as the site becomes larger and more conflict-ridden. But the generalized issue of architecture-rule mismatch does not lie with Twitter alone.
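The programming half of that analogy is easy to demonstrate with a toy contrast (sizes and structures chosen only for illustration): the loop is identical in both cases, so no amount of tuning it can rescue the version built on the wrong underlying structure.

```python
# Toy contrast: the same membership-test loop over the same data,
# built once on the wrong structure (list) and once on the right
# one (set). Sizes are arbitrary illustration values.
import time

n = 20_000
as_list = list(range(n))  # O(n) scan per membership test
as_set = set(as_list)     # O(1) expected lookup per test

def count_hits(haystack, probes):
    # Identical loop body; only the underlying structure differs.
    return sum(1 for p in probes if p in haystack)

probes = list(range(0, n, 20))
for name, haystack in (("list", as_list), ("set ", as_set)):
    start = time.perf_counter()
    count_hits(haystack, probes)
    print(f"{name}: {time.perf_counter() - start:.4f}s")
# Hand-optimizing the loop cannot close the gap; only changing the
# structure can. The analogous claim here is that moderation policy
# cannot be tuned past what platform architecture allows.
```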
The problems that plague many social networks are, at heart, really just technical problems. This does not contradict humanistic insights about technology; rather, it simply paraphrases one of the most venerable philosophers of technology in noting that technical decisions can make a system incompatible with certain social needs. Initial engineering choices, compounded and reinforced over time, remove or at least dramatically lower the possibility of governing social media platforms according to Anno Domini 2020 norms and expectations. Moderation decisions are unlikely to ever be popular. But they must be seen as legitimate. The technical design of Twitter and other platforms works against this goal. And without dramatic change of some sort (one possibility is quite “retro”) in the underlying architecture of the system, nothing more ambitious than triage, mitigation, and incremental bugfixing is likely to occur.