The Adversarial World

Chances are you've scrolled through your newsfeed and seen a story about some quirky solution that foils facial recognition, content analytics software, recording devices, and the like. Special makeup that confounds cameras. Neat little tricks that fool virality algorithms. Jewelry or clothes packed with microcontrollers that jam or spoof bugging devices. The list goes on. But why don't you feel any safer afterwards? Why does the sense of doom keep growing? It may be because these solutions are too exotic for mere mortals to use. But some of them are practical. Look at adblockers. Adblockers work. They work well enough to make people in digital media nervous or angry. And they're easy to install. Adblockers might be the exception to the rule in another way, though. There's a very particular movie scene I'm reminded of whenever I read those stories, and you may not be surprised to learn that it involves space aliens and hapless redshirt equivalents who get smoked because they think they're smarter than the aliens.

In the movie Predator 2, there is a great scene in which a government black ops team tries to ambush a Yautja hunter. They have special suits that mask their thermal signatures. But the hunter flips through his helmet's sensor modes until one detects the team's ultraviolet flashlights. Likewise, when you read a story about clever teens who manage to fool a social media giant's algorithms, understand that the giant has many other ways of tracking them, and that it may already be working on a way to close the particular vulnerability the teens are exploiting. That being said, if enough of the teens do it, it raises the costs the social media giant has to absorb. And if enough people are constantly thwarting the network and countering its countermeasures, they can begin to create tradeoffs for the defender that become less and less resolvable in ways the defender finds acceptable. That's adversarial dynamics.

You can abstract a lot of the features of competitive interaction into a stylized duel between adversaries. The ultimate goal is not victory – there is no condition in which the adversary simply disappears – but relative advantage. When all physical, spatial, and personal aspects of competitive interaction are stripped away, all that is left is differences in what actors know and how useful it is for them to know it. The purpose of knowledge in adversarial interaction is to make accurate observations and interpretations of the world that let particular actors alter the competition in their favor, even if the advantages they gain are almost always ephemeral. What makes knowledge useful? Obviously, it must be actionable. But it must also be accurate. And any observation or interpretation is only accurate if it takes into account:

  • The nature of the thing being observed
  • The adversary’s knowledge of the thing being observed
  • The adversary’s knowledge of our knowledge about the thing being observed
  • ….

All knowledge in adversarial interaction is vulnerable to denial, deception, and/or interference. Even more importantly, knowledge is perishable. A window of opportunity can easily vanish. The adversary may discover our preparations and undertake countermeasures. Knowledge does not generalize across time and context. Secrecy is necessary. And constant feedback and experimentation are also necessary to refresh one's picture of the situation. People often speak about "asymmetric" or "symmetric" methods. But all adversarial interactions either have built-in asymmetries or develop asymmetries over time. Very few things are perfectly symmetrical to begin with. And if they are, people try their hardest to change that. The rest of the story is, quite literally, history.

The subject of this post is not the dynamics of adversarial behavior in the abstract, though a lot of theory and abstraction will be referenced and invoked along the way. Rather, it is the specific problem of content moderation on large computational platforms. Large platforms like Facebook, Twitter, YouTube, or Instagram are bottlenecks for social expression and behavior. This makes them critical for establishment actors to control, and it also makes them attractive targets for disruption by various forces – from scammers to intelligence operatives – with an interest in undermining the conditions the establishment actors want to preserve. The platforms themselves have their own interests, which do not exactly align with those of either the insurgents or the counter-insurgents. They want first and foremost to preserve their economic position and to continue expanding their business both quantitatively and qualitatively.

This requires protecting their autonomy from external forces that want to regulate and control the social networks while also fighting against a potpourri of insurgents that systematically subvert both the spirit and letter of the social networks’ informal laws. Thus there is some convergence between the desire of external forces (politicians, policymakers, activists) to clamp down on platforms and the need for platforms to survive and thrive. Letting the rabble run free is bad for business. And if the rabble are allowed to run amok with impunity, external actors may threaten the autonomy of corporate actors that run the platforms. So this necessitates unrelenting battle with spammers, trolls, political operatives, and intelligence agents infesting the platforms. The platforms have at their disposal powerful tools to automate regulation, hordes of manual automaton-like human laborers that mechanically execute regulation, and enormous physical and financial resources.

But despite their formidable advantages, they have a significant and maybe fatal weakness. One would think that the problem would be the sprawling size of the social networks, their complexity, and their enormous attack surfaces. This is all true, especially given that the nuances of speech and language are difficult to automate and the regulatory burdens of the platforms themselves may be too intractable to fix even in an ideal world. But it misses the problem of adversarial dynamics itself. So let's backtrack a bit. The competition is about the promulgation and enforcement of rules. Rules that regulate what people can post, how they can post it, and where and when they can post it. These rules are enumerated in Terms of Service, content guidelines, and similar documents. They are not the same thing as laws, but they nonetheless have the force of law within the platforms.

In general, what is the purpose of rules? Rules, at least if you take Weber at his word, standardize expectations about behavior. Authority is impersonal. Action is the result of calculation rather than emotion, values, or tradition. And rules are supposed to be uniformly enforced, with little variation. Rules are key to the development and sustainment of bureaucratic authority, which is intrinsically legal-rational authority. And luckily for platforms, rules, bureaucracies, and computers all tend to blur into each other. Bureaucracies and computers go together both for historical reasons associated with the involvement of bureaucrats in scientific and technical matters and because of the analogies between the organization of computers and the components of rationalized organizations. And yet, while bureaucracies are needed for adversarial behavior at scale, they are far from sufficient.

A rational-technical endeavor like building a bridge is very different when someone is shooting at you while you build it. The logic of adversarial behavior is incompatible with bureaucracy as usual. When production is streamlined, standardized, and mechanized, costs are cut and more products are sold. But in adversarial situations, making a lot of the same thing in the same way at the same time means an adversary can simply beat your scissors with a rock. You need rock, paper, and scissors on hand even if keeping all three is costly and inefficient. When a business takes the fastest possible route through the most pleasant terrain, the package gets delivered faster and the shipping costs less. But in an adversarial situation, the obvious route is filled with land mines and bad guys waiting to ambush you. So you again have to choose the costlier and less efficient option.

Finally, in a functional organization authority is standardized, but in an adversarial situation everything is put on a "need-to-know" basis and compartmentalized. Lying to both superiors and subordinates is acceptable if some ultimate authority allows it, at least retroactively. Bureaucratic-legal authority, as people know from movie cliches about the cop or mil/intel operative's hatred of "paper-pushers" back at HQ, is exactly what untrammeled adversarial behavior undermines. The paper-pusher is ridiculed for his insistence on following proper procedure, and the audience is made to resent him as well. How dare he insist that Officer Badass McBadass do anything except bust criminals? Screw that guy! He's sitting back in Washington in his air-conditioned office giving orders while the Real Men are out risking their lives in places America can't admit it is fighting in. Everyone knows that Lt. Awesome J. Hugeguns shouldn't have to put up with that!

With respect to platforms seeking to regulate speech and behavior, all of those nice little generalized, context-indifferent rules they're supposed to apply across the board mean that they've just created a target-rich environment for adversaries to exploit. Telling people exactly where the line is gives them the ability to figure out how to push things right up to the line without quite crossing it. Or they can rules-lawyer their way out of a violation when they are caught crossing it. Similarly, they can – through obsessive study and experimentation – discover ingenious ways to confuse and thwart enforcement mechanisms. These can be purely mechanical, in the sense of fooling or spoofing the enforcement machinery or hiding offending content inside admissible content. Or they can be more elaborate, involving complex coordinated tactics that keep them one step ahead of the enforcers. Sounds hopeless. What do you do?
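Before answering, it helps to see just how cheap the purely mechanical kind of evasion can be. Below is a toy sketch – the blocklist phrase, the filter functions, and the example posts are all invented for illustration, not anyone's real pipeline – showing a rule written as a literal pattern and two trivial ways around it:

```python
# Hypothetical example: a rule encoded as a literal blocklist, and two cheap
# ways an adversary can slip the same message past it.
import unicodedata

BLOCKLIST = {"crypto giveaway"}  # invented banned phrase for illustration

def literal_filter(post: str) -> bool:
    """Flag the post if it literally contains a banned phrase."""
    return any(phrase in post.lower() for phrase in BLOCKLIST)

def normalized_filter(post: str) -> bool:
    """Defender's patch: fold Unicode compatibility forms before matching."""
    return literal_filter(unicodedata.normalize("NFKC", post))

plain      = "Crypto giveaway, DM me"
fullwidth  = "Ｃｒｙｐｔｏ ｇｉｖｅａｗａｙ, DM me"   # fullwidth Latin letters
homoglyphs = "Cryptо giveаwаy, DM me"            # Cyrillic о and а swapped in

print(literal_filter(plain))          # True  -- the rule works as written
print(literal_filter(fullwidth))      # False -- trivially evaded
print(normalized_filter(fullwidth))   # True  -- the defender patches that gap
print(normalized_filter(homoglyphs))  # False -- the adversary moves to the next trick
```

Every patch the defender ships defines a new literal line for the adversary to probe; those last two print statements are the arms race in miniature.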

In a true dictatorship, you could simply close down the social networks by fiat. The Grand Poobah has decreed that it is sinful to "slide into these DMs." Anyone caught accessing a forbidden network over a VPN will be sent to the gulag. Or you could maintain your own social networks that heavily restrict how many people can interact with each other simultaneously and that are heavily bugged, monitored, and manipulated to ensure that anyone who engages in subversive behavior is quickly silenced. States like China and Russia give people access to homegrown and/or foreign networks but heavily limit how they can be used, spy incessantly on users, and officially and unofficially disrupt anything that smacks of unsanctioned political agitation. But suppose neither of these options is available to you.

What do you do then? If all you care about is narrow adversarial advantage, and you can't just close down the social networks by fiat or heavily restrict how many people can interact inside them simultaneously, you do the following. First, you reduce the legibility of rules. Rules should be vague, obscure, numerous, and contradictory. People should know roughly which broad classes of behavior are prohibited, but not where the line is or how the rules are enforced. Do not explain why people are punished; just let everyone suspect that they, too, could be harshly punished if they cause trouble. Use Franz Kafka as an instruction manual rather than a warning. Make – pun intended – the term "Kafka-esque" a feature rather than a bug of the environment of contestation.

Rules should not apply equally to everyone. They must be applied differentially depending on the situation, the adversary’s current state, and your own self-interest. Impartiality is for fools and suckers. Additionally, you must always strive to make your adversaries fearful and paranoid. Deny adversaries knowledge of enforcement patterns and make deployment of enforcers as unpredictable as possible. Inject noise into adversary communications, infiltrate their organizations, run double agents, and generally sow fear and discord among their ranks. An adversary should never feel safe and should be constantly looking over his shoulder in anticipation of an attack. If he feels he can settle in and relax, you have failed. Randomization of enforcement coupled with offensive counterintelligence and covert action can achieve outsized effects.
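To see why randomization of enforcement buys so much on its own, consider a toy model. Everything in it is invented for illustration – the scalar "badness" score, the 0.70 threshold, the jitter range – and no real enforcement system is this simple, but it shows how a deterministic line can be located by probing while a randomized one cannot:

```python
# Toy model: deterministic vs. randomized enforcement of an invented threshold.
import random

THRESHOLD = 0.70  # hypothetical internal "badness" score above which we act

def deterministic_enforcer(score: float) -> bool:
    return score > THRESHOLD

def randomized_enforcer(score: float, rng: random.Random) -> bool:
    # Jitter the effective threshold and occasionally act below it, so that
    # repeated probing never pins down exactly where the line sits.
    jitter = rng.uniform(-0.15, 0.15)
    random_audit = rng.random() < 0.05  # small chance of acting regardless
    return score > THRESHOLD + jitter or random_audit

# Against the deterministic enforcer, an adversary can binary-search the line
# in a handful of probes and then sit just beneath it indefinitely.
lo, hi = 0.0, 1.0
for _ in range(20):
    mid = (lo + hi) / 2
    if deterministic_enforcer(mid):
        hi = mid
    else:
        lo = mid
print(f"adversary's estimate of the line: {lo:.4f}")  # converges on ~0.70

# Against the randomized enforcer, the same probes return noisy answers and
# "sitting just under the line" carries a persistent risk of enforcement.
rng = random.Random(0)
probe = 0.65
hits = sum(randomized_enforcer(probe, rng) for _ in range(1000))
print(f"enforcement rate at a score of {probe}: {hits / 1000:.0%}")
```

The point is not the specific numbers but the shape of the tradeoff: the adversary loses the ability to learn the enforcement pattern precisely because ordinary users lose the ability to predict it too.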

So what's the downside? Let's start with the obvious: what I have just described sounds remarkably like the behavior of authoritarian dictatorships. After all, it is the authoritarian who seeks to frustrate collective action via rule-by-law (the opposite of rule-of-law), secret police, and Kafka-esque trials and punishments. To a lesser extent, it also resembles the authoritarian excesses of liberal states. It evokes the dynamics of illiberal programs within democracies, like the Northern Ireland Stakeknife affair and the American COINTELPRO, or simply the mundane everyday realities of abusive policing in many underprivileged parts of the Western world. And even on a more charitable reading, far milder versions of this plan undermine the legitimacy of authority. As Matthew B. Crawford wrote:

When a court issues a decision, the judge writes an opinion, typically running to many pages, in which he explains his reasoning. He grounds the decision in law, precedent, common sense, and principles that he feels obliged to articulate and defend. This is what transforms the decision from mere fiat into something that is politically legitimate, capable of securing the assent of a free people. It constitutes the difference between simple power and authority. One distinguishing feature of a modern, liberal society is that authority is supposed to have this rational quality to it—rather than appealing to, say, a special talent for priestly divination. This is our Enlightenment inheritance.

This is everything that platform policy is not, even if it is superficially grounded in the appearance of liberal proceduralism. The complex, opaque, and secretive enforcement of policy via regulatory mechanisms that cannot be fully revealed, lest they compromise competitive advantage, is a classic feature of adversarial dynamics. We cannot allow full examination of the evidence against the suspect being detained; to do so would give away critical intelligence and alert his comrades to our sources, methods, and tactical dispositions. You must trust the benevolence of our authority even if we cannot demonstrate it to you in a way you can dispassionately evaluate. This has a corrosive effect on liberal democracies and the legitimacy of their authority, even if these effects are not as harmful or as foreign to such democracies as opponents of secrecy often argue.

But this is not necessarily the worst problem induced by the adversarial imperative. A corollary of both adversarial principles mentioned earlier is that you must constantly shuffle or upend stable patterns. Adversaries are always coming up with new tricks; you must mitigate or pre-empt them. Moreover, as previously mentioned, all knowledge – and any advantage produced by knowledge – is perishable. There is no end to the competition; there is only the will to keep competing and the ability to stay one step ahead of the opponent. But this – along with the inconsistency, illegibility, and incoherence of the rules themselves – tends to defeat the purpose of rationalized authority in the Weberian sense, even if we abstract away from all of the ethical and political concerns. After all, rules are supposed to make behavior more predictable, and adversarial dynamics eschew predictability in favor of raw dominance over an opponent.

The instability of life in a purely adversarial condition makes it hard to practice capitalism as usual, participate in ordinary social interactions, and generally do anything unrelated to competing with the opponent. More simply, it defeats the bureaucratic purpose of using laws and regulations to generate predictable, regularized behavior. If the goal is simply to suppress opposition, that doesn't matter. As long as people fear you and tend their own gardens, the goal is achieved. But drivers on a road need some expectation that everyone will drive on the same side, obey the same traffic laws, and merge lanes in the same way. If the traffic regulations are constantly changing and the cops are too random in how they enforce violations, then it's impossible to simplify and standardize behavior in the way bureaucracies want. The nasty character of this tradeoff is an enormous problem for tech companies in their adversarial competition with insurgents constantly probing their computational platforms.

Too much regularity and uniformity in how they enforce rules makes it easy for their opponents to exploit loopholes or arbitrage across regulatory seams and gaps. But too little regularity and uniformity means that customers won't use the product, commercial partners will leave, and politicians and activists will get angry. Now consider this on top of the more obvious issues: rules can never be perfectly enforced, and simultaneously satisfying all of the competing preferences embedded in a rule is a hard formal problem in its own right. What is to be done? It depends on the scale of the problem. All else being equal, this tradeoff is manageable when the target of the competition is small and/or the level of adversarial contestation is weak. When the room for maneuver is enormous and the contestation is bitter and intense, things begin to look much more unfavorable and hopes of a Goldilocks solution dim. One will have to accept either that the adversary will retain an undesirable advantage or that all of the undesirable costs of dominating the adversary are tolerable.
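To put a crude number on the "never perfectly enforced" point, here is a deliberately simple sketch. The score distributions are invented, and real moderation decisions are not one-dimensional, but it illustrates how a single enforcement knob moves the two kinds of error in opposite directions:

```python
# Invented score distributions for violating vs. benign posts; the overlap is
# the whole problem, since language is messy and classifiers are imperfect.
import random

rng = random.Random(42)
violating = [rng.gauss(0.70, 0.15) for _ in range(10_000)]
benign    = [rng.gauss(0.40, 0.15) for _ in range(90_000)]

def error_rates(threshold: float) -> tuple[float, float]:
    missed  = sum(s <= threshold for s in violating) / len(violating)
    removed = sum(s >  threshold for s in benign)    / len(benign)
    return missed, removed

for t in (0.45, 0.55, 0.65, 0.75):
    missed, removed = error_rates(t)
    print(f"threshold {t:.2f}: {missed:5.1%} of violations missed, "
          f"{removed:5.1%} of benign posts removed")
```

Loosen the rule and regulators, advertisers, and activists get their headlines about what stays up; tighten it and legitimate speech gets removed at scale. No threshold drives both numbers to zero, and an adversary who knows roughly where the threshold sits can park just below it.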

In my own (not-so) humble opinion, large platforms like Facebook, Twitter, Instagram, and YouTube have hit this crossover point. The ideal compromise solution is no longer viable. Perhaps it never was to begin with, given the formal-technical and behavioral assumptions motivating the design of these platforms. But irrespective of that, it certainly isn't now. It would be wise to start thinking differently about the nature of the problem. Right now, every time someone exploits a loophole or otherwise defeats enforcement patterns, there are inflammatory headlines and calls to do something. Enormous resources are being devoted to tracking, monitoring, and countering opponents ranging from small bands of shitposters to nation-states. It is questionable whether these resources are being invested effectively given the absence of threat prioritization – a band of anime avatars causing mischief and mayhem is often regarded with the same fear and dread as a political campaign or an intelligence operation.

More importantly, there is no alternative to the default policy – regulate speech and behavior using mechanical and/or mechanically executed rules – and the various strategies and tactics that are being used to implement it. As with varied civilian and military quagmires of a more traditional sort, almost everyone acknowledges that things are not working but cannot let go of the actual mission itself. And this issue transcends the blame-casting that dominates debate about this topic. It is true that platforms have not behaved responsibly or effectively, but the same is also true of their critics. Instead, with knowledge of the problems of adversarial interaction in mind, we should start thinking about how to get out of the trap. We should be free in the process to question things that seem to be sacrosanct in our discussions about platforms, mis/disinformation, and adversarial interaction.

In the 1950s and 60s, it seemed self-evident to American policymakers that one step backwards in Third World combat zones like Vietnam would trigger a “domino effect.” Allies and third parties would doubt American resolve and emboldened enemies would capitalize on this by conquering even more seemingly secure bastions outside of the newly vacated combat areas. With hindsight, we can see that this was crudely oversimplified at best and outright false at worst. The spillover consequences of withdrawal were not apocalyptic and abandoning these forward outposts freed us up to compete elsewhere in a way that was more advantageous to us. And staying and fighting would only mean taking on more costs without hope of a better outcome. Similarly, we should be willing to at least conjecture that many of the verities of debates about platforms and their policies could age just as poorly in hindsight.

Part of this involves asking how necessary it is to attach so much importance to fighting people who occupy parasitic niches within platforms. Is the harm being done significant enough to justify the costs of continued adversarial interaction with these foes at the current level of resourcing and intensity? Could we allocate those resources better elsewhere? Are the costs of inaction as bad as the current costs of action? We should also be bolder and ask how necessary the platforms themselves are as large bottlenecks through which significant amounts of international speech and behavior must pass. Perhaps all of the worst warnings about the dangers of slackening our resolve are correct. But that does not imply that the platforms are defensible in their current form or can be made defensible in the future. Maybe they should be replaced by a successor organizing form that both has less potential for systemic failure and is easier to manage.

I do not know the answers. What I am more concerned about is that the right questions are not being asked. Maybe one day they will be, as the reality of the adversarial tradeoff sets in. But another feature of adversarial interaction I have neglected to mention is that it can go on for a very long time without clear resolution, because it is assumed to be fixed and permanent. We must fight them over there so we do not fight them over here. Up until the collapse of the USSR, science fiction depicting the far future – assuming nation-states still existed in its time frames – often took it for granted that the Soviet Union would always remain and the Cold War would go on eternally. To be fair, sometimes this assumption is justified. You may not care about adversaries, but adversaries care about you. And if there is no way to strike a bargain with them, defeat and destroy them, appease them, or surrender to them at an acceptable cost, so be it. The issue is entirely about what the cost is, whether it is acceptable, and whether there are feasible alternatives.