The conventional wisdom on China at the moment is that its advances in basic and applied technology, particularly machine learning and data analytics, will create a superior form of authoritarianism that succeeds where others have failed. Command over technology will allow China to carry out extensive surveillance and then automate the process of disrupting collective action that threatens Party rule. Some in fact worry that China and Russia have created a sustainable and low-cost form of authoritarian rule, and urge tighter technological export controls to combat it. These concerns are not really new. They date back to the early 20th century, with each new model of techno-managerial control supposedly guaranteed to succeed where its predecessors failed. Prior to World War II, the managerial elites of the Fordist industrial economy were destined for greatness. After World War II the future belonged to mathematical programmers and systems analysts using powerful but nonetheless primitive (UNIVAC-era!) computers. Today it is artificial intelligence – or at least the type of AI that is in vogue these days. To be clear, all of these things measurably improved state capacity and granted immense power to political elites capable of exploiting them. At the same time, that power had significant limitations, and could even be directly hazardous when applied in certain recurring ways. The same is likely to be true of China.
Henry Farrell has written a very useful post explaining why China’s new-age authoritarianism may fall prey to the same problems. Some of this is simply timeless:
Authoritarian states were always particularly prone to the deficiencies identified in James Scott’s Seeing Like a State – the desire to make citizens and their doings legible to the state, by standardizing and categorizing them, and reorganizing collective life in simplified ways, for example by remaking cities so that they were not organic structures that emerged from the doings of their citizens, but instead grand chessboards with ordered squares and boulevards, reducing all complexities to a square of planed wood. The grand state bureaucracies that were built to carry out these operations were responsible for multitudes of horrors, but also for the crumbling of the Stalinist state into a Brezhnevian desuetude, where everyone pretended to be carrying on as normal because everyone else was carrying on too. The deficiencies of state action, and its need to reduce the world into something simpler that it could comprehend and act upon created a kind of feedback loop, in which imperfections of vision and action repeatedly reinforced each other.
So China’s would-be Skynet carries on this ignoble tradition by magnifying and automating the pre-existing biases of the Chinese state worldview. “Bias” here is understood in two ways. First, there are pre-existing biases and imperfections within the data used by the machines, which then become the basis for harmful and destructive behaviors. This is the familiar form of “data discrimination” that scholars have written about extensively. But data must also be generated, collected, and processed; it does not appear out of the luminiferous aether. This activity – as well as automated learning and inference on the data – is implicitly and explicitly managed by the state and its proxies. To suppose that these activities will be untouched by ideological “operational codes” or by the delicate internal problems of collective decision-making writ large (coalition management, information organization, and slack) runs against much of what we know about how organizations and governments work. The system is therefore also subject to structural biases embedded in the state worldview and the ways it is practically operationalized. Both forms of bias – when magnified by machines – can have consequences that range from moderately sub-optimal to apocalyptic.
In particular, Farrell notes that these two biases can lead to dysfunctional feedback loops similar to the ones Scott cataloged. Authoritarian states have limited ways to correct bad behavior without undermining their own political function. China and its apologists often claim it has found a balance between preserving the order of the political system and adaptively adjusting bad policies. Without engaging in liberal democratic triumphalism, there are strong reasons to doubt that this is the case. Tanner Greer has already written eloquently about the tradeoff China faces between reaping the advantages of science and technology and letting in politically destabilizing ideas. But in a more specific sense, it is likely that China is marginally less capable of sound policy formulation and implementation than it was even a short time ago. Why? China has been steadily centralizing decision-making under Xi Jinping, ditching a prior, less centralized regime that encouraged elite consensus-building and localized solutions to the country’s diverse problems. Automation further compounds the problem by adding a calcified and rigid form of “reckoning” that cannot substitute for deliberative decision-making. So error can compound error, with little way to correct it until major disasters loom overhead.
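To make the feedback-loop worry concrete, consider a toy simulation – my own sketch, not Farrell’s or Scott’s, with every parameter invented for illustration. Two districts have identical underlying rates of whatever behavior the state cares about, but the automated allocator only records what it looks for, and it looks wherever past records point:

```python
import random

random.seed(0)

# Two districts with the SAME true incident rate; the only difference
# is a small historical bias in the recorded data.
TRUE_RATE = 0.1
recorded = [6, 4]      # district 0 starts with slightly more records
PATROLS = 100

for year in range(1, 11):
    # Greedy policy: send every patrol to the district with the larger
    # recorded count -- the "legible" trouble spot.
    target = 0 if recorded[0] >= recorded[1] else 1
    new_incidents = sum(random.random() < TRUE_RATE for _ in range(PATROLS))
    recorded[target] += new_incidents
    print(f"year {year}: recorded incidents = {recorded}")

# District 0's record grows without bound while district 1's identical
# problems stay invisible: biased data drives action, and action
# manufactures more of the same biased data.
```

Nothing in the loop ever checks the recorded counts against the underlying reality, so the system’s picture of the world diverges further from it every year – exactly the pattern of imperfect vision and action reinforcing each other.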
All of this is very plausible, but I have one further thing to add to Farrell’s already excellent account. The French philosopher Gilbert Simondon argued vigorously against what he called the “myth” of a self-regulating machine. What is the role of the human in all machines? At the level of an “ensemble” of technical objects aggregated together, humans are what link machines to their environments and determine how they receive and transmit information.
[M]an’s nature is that of the inventor of technical and living objects capable of resolving problems of compatibility between machines within an ensemble; he coordinates and organizes their mutual relation at the level of machines, between machines; more than simply governing them, he renders them compatible, he is the agent and translator of information from machine to machine, intervening within the margin of indeterminacy harbored by the open machine’s way of functioning, which is capable of receiving information. Man constructs the signification of the exchanges of information between machines. The inadequate rapport of man and the technical object must therefore be grasped as a coupling between the living and the non-living.
That is very abstract, but it is also almost entirely absent from discussions of systems and computation, except in fields devoted to studying how they are actually used. A great deal of tedious work in technology consists of keeping machines in sync with a complex, changing, and messy external environment. Did you know that basic problems such as time synchronization, character encoding, and even some last names can cause immense frustration and pain for programmers? It gets worse. Failing to seriously consider the possibilities afforded by system inputs is a common cause of malicious exploits, but poor internal representation of the external environment can by itself lead to catastrophic failures. If computers work at all, it is because an immense amount of labor goes into doing precisely what Simondon says: determining the intended meaning of the information that machines are to produce and consume.
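For a concrete taste of the problem, here is a small sketch – the checks are hypothetical examples of my own, not drawn from any real system – of how innocent-looking assumptions about names and character encodings quietly mangle real inputs:

```python
import re

def naive_valid_name(name: str) -> bool:
    # A tempting rule: one capital letter followed by lowercase ASCII letters.
    return bool(re.fullmatch(r"[A-Z][a-z]+", name))

# Perfectly real names the rule rejects:
for name in ["O'Brien", "van der Berg", "Nguyễn", "Björk"]:
    print(name, "->", naive_valid_name(name))   # all False

# Encodings: the same bytes mean different things under different
# assumed encodings, so a mislabeled file corrupts data silently.
raw = "Nguyễn".encode("utf-8")
print(raw.decode("cp1252"))   # 'Nguyá»…n' -- mojibake, with no exception raised
```

Every one of these failure modes is mundane and well documented; each is a gap between the machine’s internal categories and the world those categories are supposed to describe.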
Thus, there is significant danger in using machines to automate the perpetuation of a fixed system of meanings that is not open to question and critique. The environment surrounding the machine will change, but the machine’s inner environment cannot. It will still mechanically adjust its behaviors in response to inputs received from the external environment, but in ways that generate behaviors humans might dub “psychopathological” – or merely “problems of holism in reasoning” manifested via automaton. This is appealing to authoritarians precisely because it removes meddlesome humans who can sometimes question, disobey, or otherwise fail the will of the dictator. Machines are politically reliable in ways that humans are not, and machines seemingly guarantee control that transcends the limitations of politics and sociality. These assumptions, however, have fatal flaws that fiction illustrates – though not in the way we often imagine. The most significant is that calcifying one’s static will within a machine embedded in a dynamic environment creates chaos along with control, and weakness alongside strength.
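The failure mode generalizes. Here is a minimal sketch – mine, not Simondon’s, with invented numbers – of a rule whose meaning was frozen at design time while the environment it classifies keeps moving:

```python
import random

random.seed(1)

# A "machine" whose inner model of the environment is fixed forever.
# When the rule was written, messages longer than 40 characters really
# did correlate with the behavior the operator cared about.
FROZEN_THRESHOLD = 40

def flag(message: str) -> bool:
    # The rule never updates; its meaning was fixed once, by its builders.
    return len(message) > FROZEN_THRESHOLD

# The environment drifts: ordinary messages simply get longer over time,
# but the machine keeps mechanically applying the original rule.
for year, avg_len in [(2019, 30), (2022, 45), (2025, 60)]:
    msgs = ["x" * max(1, int(random.gauss(avg_len, 10))) for _ in range(1000)]
    rate = sum(map(flag, msgs)) / len(msgs)
    print(f"{year}: flags {rate:.0%} of ordinary traffic")
```

The machine still responds dutifully to every input, but its responses encode a world that no longer exists – which is roughly what “psychopathological” behavior via automaton looks like in practice.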
In the Metal Gear Solid games, an increasingly paranoid and secluded Major Zero creates the Patriot AI system because he does not trust his subordinates to faithfully execute his particular set of ideals. However, Zero is mentally incapacitated shortly after he signs off on the system, and the lead system programmer is murdered by her abusive husband before she can finish engineering the computer network. While Zero originally intended the machines merely to filter and process information, his successor determines that the machines must also have wide latitude to make decisions. This seemingly makes sense. But with Zero incapacitated and the lead programmer dead, any ability to compensate for the inhuman nature of the machine, or to give it flexibility in how it interprets its objectives, also disappears. Acting adaptively on the information it receives and on its fixed set of basic goals, the system eventually becomes a malfunctioning menace that undermines and destroys everything Zero had sought to pass on to future generations. The machine is, in the end, totally indifferent to the goals that Zero took so seriously. After all, one man’s idiosyncratic feelings and beliefs do not automatically mean anything to a machine. They are just more data fed into a machine that already considers most of human culture “junk data” to be cleaned and managed.
I do not imagine anything this dramatic occurring with China’s emerging “Skynet with socialist characteristics.” But the point of fiction like this is not to offer predictions; it is to reflect on the present. Cold War science fiction was a way of reflecting on the cultural unease and social problems induced by a “closed world” of command, computers, and control. Consequently, I do not worry that China will somehow perfect authoritarian rule with computers. Rather, I worry that China is building a machine that will complicate the management of military crises in confrontations with the West and generate internal instability. I do not know exactly how it will do so; like Farrell, I am arguing primarily from conjecture and first principles rather than from detailed knowledge that – in truth – very few people really have about Chinese internal decision-making. However, as a matter of policy, I believe that Western decision-makers should be thinking about how to adjust to the ways in which China’s growing machine apparatus – like that of the USSR – will heighten its existing dysfunctions rather than resolve them. After all, the weakness of Soviet Communism – the God that failed – was just as dangerous to the West as its strengths. The Chinese machine apparatus will make the Chinese state stronger in some ways, but it is a mistake to make long-term policies without considering how it can also make Beijing weaker and less stable.