The Varieties Of The Technological Control Problem

[T]he Devil is conceived as playing a game with God for the soul of Job or the souls of mankind…But if the Devil is one of God’s creatures, the game…is a game between God and one of his creatures. Such a game seems at first sight a pitifully unequal contest. To play a game with an omnipotent, omniscient God is the act of a fool…Thus, if we do not lose ourselves in the dogmas of omnipotence and omniscience, the conflict between God and the Devil is a real conflict, and God is something less than absolutely omnipotent. He is actually engaged in a conflict with his creature, in which he may very well lose the game. And yet his creature is made by him according to his own free will, and would seem to derive all its possibility of action from God himself. Can God play a significant game with his own creature? Can any creator, even a limited one, play a significant game with his own creature?

What does it mean for human beings to “control” technology? Every day people talk about why technology must be controlled by humans in the loop, aligned with human values, or otherwise subordinated as an instrument to human designs and desires. Given that the theme of “technics out of control” has persisted across at least several centuries of discourse, we are evidently very interested in the answer. But it nonetheless remains elusive. This post will contrast several different conceptions of what it means for humans to exercise control over machines. Each defines human agency differently, proposes distinct solutions, and, most importantly, appeals to perhaps incompatible audiences. I am not neutral about which of these I prefer, and you will see this reflected in how I describe them. However, I also believe that any solution must come from carefully taking stock of what each has to offer. These perspectives are just a smattering of the many that have been debated for decades if not centuries, so be aware that this is merely a starting point for further discussion and analysis. Those interested in more should consult a standard academic handbook on the philosophy and/or social study of technology such as those published by Blackwell or MIT Press. I shall begin with the cybernetics/systems theory idea of control, as it is a useful point of departure for more abstract conceptions of control.

In the mid-Cold War, systems theorist Norbert Wiener identified – in a cluster of writings such as Cybernetics: Or Control and Communication in the Animal and the Machine, God and Golem, Inc., and The Human Use of Human Beings – a problem peculiar to the interrelated sciences that emerged from the two World Wars. The behavior of older technologies could be rigorously specified and predicted by relatively exact mathematics. But new kinds of machines were far more problematic. Tell a thermostat to maintain a set point, and it will automatically work to bring its internal state back to that set point via negative feedback. More complex feedback-governed systems can easily elude the control of even well-trained system operators. Similarly, computer programs are, as Lady Lovelace said, incapable of genuine novelty – but don’t get too comfortable. There is always a gap between the human mind’s ability to formally specify the behavior of programs a priori and the actual behavior of programs upon execution. These new types of systems create a new kind of contingency that is distinct from older conceptions of accidents and natural disasters – but is also the product of the very efforts humans have devoted to taming chance, accident, and contingency!
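To make the feedback idea concrete, here is a minimal sketch – my own toy model, not anything of Wiener’s – of a proportional thermostat loop: each cycle the controller measures the gap between the set point and the current temperature and corrects a fraction of it, pulling the state back toward the target.

```python
# A toy negative-feedback loop (illustrative only): the controller measures the
# error between the set point and the current temperature and corrects a
# fraction of it each cycle, so the state is pulled back toward the target.

def thermostat(set_point: float, temp: float, gain: float = 0.5, steps: int = 20) -> float:
    for _ in range(steps):
        error = set_point - temp   # positive when the room is too cold
        temp += gain * error       # heat (or cool) in proportion to the error
    return temp

print(thermostat(set_point=20.0, temp=12.0))  # converges on ~20.0
```

Complexity enters when many such loops are coupled together: corrections in one loop become disturbances in another, which is one way even well-trained operators lose the thread.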

How is this possible? Wiener contributed a tremendously influential metaphor that allows us to make sense of this, though as I will describe later this metaphor has become something of a trap or even a hazard. If life can be conceived – as is the wont of many religious believers – as a battle against demonic forces, a demon that is capable of adapting its behavior like a game player is more dangerous than a demon that lashes out randomly. One need only assume that the demon is capable of mechanical adjustment rather than conscious thought. In a dramatic series of passages in God and Golem, Inc., Wiener compared the ability of a game-playing program to adaptively learn to play better than its designer to the biblical problem of how even the all-powerful Judeo-Christian God could lose a contest with a creature He created. Wiener, who confesses he is no theologian, practically resolves the theological issue via his plentiful experience with mathematics and engineering. Suppose the game can be formalized such that the following hold (a toy sketch appears after the list):

  • All of the possible legal moves are knowable.
  • An unambiguous criterion of merit can score moves as better or worse.
  • The player can adjust her moves to score higher according to that criterion.
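To show how little machinery these three conditions demand, here is a toy sketch of my own – not Wiener’s checkers example, and with the game swapped for the much simpler take-away game of twenty-one sticks. The program enumerates the legal moves, scores positions by whether the player to move can force a win, and steers its play toward that criterion; the “designer” it faces plays by a fixed habit.

```python
# A toy illustration of the three conditions, using the take-away game "21":
# players alternately remove 1, 2, or 3 sticks, and whoever removes the last
# stick wins. All names here are invented for the example.

MOVES = (1, 2, 3)                                   # condition 1: legal moves are knowable

def legal_moves(sticks):
    return [m for m in MOVES if m <= sticks]

def learn_values(total=21):
    """Conditions 2 and 3: score every position by whether the player to move
    can force a win, working backward from the end of the game."""
    can_win = {0: False}                            # facing zero sticks, you have already lost
    for sticks in range(1, total + 1):
        can_win[sticks] = any(not can_win[sticks - m] for m in legal_moves(sticks))
    return can_win

def learner_move(sticks, can_win):
    # adjust play toward the criterion of merit: leave the opponent a losing position
    for m in legal_moves(sticks):
        if not can_win[sticks - m]:
            return m
    return legal_moves(sticks)[0]                   # no winning move exists; play anything

def designer_move(sticks):
    return 1                                        # the "designer" plays by a fixed habit

def play(total=21):
    can_win = learn_values(total)
    sticks, mover = total, "learner"
    while True:
        m = learner_move(sticks, can_win) if mover == "learner" else designer_move(sticks)
        sticks -= m
        if sticks == 0:
            return mover                            # whoever took the last stick wins
        mover = "designer" if mover == "learner" else "learner"

print(play())  # "learner": the creature outplays the habit it was given
```

The “learning” here is exhaustive evaluation rather than trial and error, but the point survives: the program’s play is derived entirely from rules its designer wrote down, and it still wins.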

With these conditions met, a designed invention that derives all of its agency from the original agency of its designer can learn to outplay the designer. Wiener goes on to connect this issue to novels, poems, folklore, parables, myths, and religious narratives across the world in which a person gives a hastily considered command to an all-powerful being and is punished by being given something he did not really want. One such example is Johann Wolfgang von Goethe’s poem “The Sorcerer’s Apprentice.” A novice spellcaster tries to automate his chores by enchanting a broom to haul water on its own, but finds that once set to work the broom refuses to stop and all of his efforts to get rid of it backfire. In a rage the apprentice chops the broom in half with a hatchet, but finds to his horror that he has now parallelized the problem into two brooms!

Woe! It is so.
Both the broken
Parts betoken
One infernal
Servant’s doubling.
Woe! It is so.
Now do help me, powers eternal!

Both are running, both are plodding
And with still increased persistence
Hall and work-shop they are flooding.
Master, come to my assistance! -
Wrong I was in calling
Spirits, I avow,
For I find them galling,
Cannot rule them now.

This is the essence of the Wiener-esque definition of the technical control problem: “Be careful what you wish for.” Ordinary human language is too weak to properly specify the behavior of automata, but individual and collective human minds also cannot be trusted to derive exact specifications for how automata should behave. This is a powerful and influential warning of future peril that absolutely cannot be discounted. But what should be done? Wiener did not offer systematic instructions, or at least none as simple and powerful as his diagnosis of the problem. But latter-day Wieners often suggest that the answer requires scientists, engineers, and technocrats to get busy engineering ways to properly specify and control the automata. Since the 1940s, each generation of scientists and engineers – as well as laymen with interests in technical topics – has rediscovered Wiener’s definition of the problem and proposed more or less identical solutions. We need better ways to understand what we are telling the machine to do, predict what could go wrong, and mitigate the damage if things do go wrong. Given the stakes involved and the relative obviousness of the remedy, who could possibly object? Wiener frequently made reference to fables like “The Sorcerer’s Apprentice,” but it is worth noting that the fable is told from the point of view of a designer rather than an operator or user. We can easily imagine a very different folk tale if we discard this assumed viewpoint.

In an arresting and deeply horrifying vignette, the pseudonymous security writer SwiftOnSecurity describes Jessica, a teenage girl who lives with a busy single mother in a small apartment and struggles with ordinary teenage concerns (boys, schoolwork, etc.) as well as her economic precarity. She doesn’t know if she and her mother will be able to pay for college, or even whether they will be able to make their next rent payment. Jessica uses an old hand-me-down laptop that she cannot afford to upgrade and barely understands how to use – after all, she has more immediate concerns to take care of. Jessica lacks the financial resources to acquire proprietary antivirus software and the time and interest to learn how to find, configure, and operate free and open source alternatives. Through an unfortunate and tragically cumulative series of events, Jessica’s laptop is systematically compromised. By the end, Jessica is unaware that she is being silently recorded by her laptop’s camera, microphone, and keyboard. Jessica is a composite of many real-world women who are, due to both design flaws in computer software and the inaccessibility of security solutions, surveilled, stalked, abused, or even murdered by male acquaintances. Perhaps no one consciously set out to fail Jessica, but the elitist and male-dominated world of computer security failed her all the same.

Wiener framed the human control problem as a game with a designed creature that could – via techniques such as learning to adapt its behavior or multiplying and reproducing itself in a quasi-biological manner – produce behaviors both undesired and unanticipated by its designer. But who is the designer? And were their intentions that innocent to begin with? Wiener and others anticipated these critiques but did not really place them front and center. By the late 1960s, however, they would become impossible to avoid. Leftist thinkers such as Karl Marx, Vladimir Lenin, and Antonio Gramsci had articulated a view of social life as a clash between oppressed economic underclasses and their plutocratic superiors that had to be rectified via sweeping and totalizing revolution. Moreover, disadvantaged ethnic and religious minorities, women, and LGBTQ (Lesbian, Gay, Bisexual, Transgender, Queer/Questioning) sexual minorities rose up to demand the eradication of structural barriers to their flourishing and a voice in society to plead on their own behalf. Finally, the catastrophic toll of war, genocide, and authoritarianism, coupled with the failure of social progress to meet soaring expectations, made the public distrustful of both the abstract promise of technocratic engineering and the neutrality and objectivity of experts themselves.

What explains, for example, the design of the network of roads and overpass bridges that connect Long Island’s beaches and parks to New York City? As Robert A. Caro famously had it, keeping the “darkies” and the “bums” out. Supposedly, technocrat Robert Moses ordered his engineers to build low overpasses in order to ensure that buses – taller than cars and packed with ordinary people who relied on public transit – would not be able to use the roads underneath them. This way, the luxurious beaches and parks could be isolated from city-dwellers. Caro’s claims about Moses – whose blatant prejudices are not in doubt – have been discredited in the decades since the 1974 publication of his Moses biography. But – as we saw in Jessica’s tale – even if there is no single order or command to shaft the disadvantaged, disadvantage can nonetheless be a product of structural forces that act on the design process. The field of computer security is heavily shaped by its origins in military-industrial organizational security. The adversary is assumed to be a well-financed and highly capable foe such as a foreign military or intelligence service, and the target a complex organization that must safeguard the integrity of its command, control, intelligence, and communications systems. Thus, overly complicated and inaccessible solutions are promoted at the expense of users who lack the resources and social status of traditional security clients.

How does this change our thinking about the control problem? Significantly in some ways, not so much in others. In 1973, Horst W.J. Rittel and Melvin M. Webber summed up the views of a now chastened expert class, noting that there were intractable dilemmas to be found in any generalized theory of policy planning:

The search for scientific bases for confronting problems of social policy is bound to fail, because of the nature of these problems. They are “wicked” problems, whereas science has developed to deal with “tame” problems. Policy problems cannot be definitively described. Moreover, in a pluralistic society there is nothing like the undisputable public good; there is no objective definition of equity; policies that respond to social problems cannot be meaningfully correct or false; and it makes no sense to talk about “optimal solutions” to social problems unless severe qualifications are imposed first. Even worse, there are no “solutions” in the sense of definitive and objective answers.

Human control of technology is therefore reframed as a problem of ensuring that technology meets the needs of a diverse and often contradictory range of stakeholders. The study of technology turns to the social shaping of technology. Who designs, makes, and controls technology? What kinds of social influences shape the design, direction, and use of technology? Who benefits and who is left out? As with Wiener’s definition of control and the suggested remedy attached to it, this is a powerful and influential idea. But it assumes that the problem is not necessarily that technology will elude the control of a designer, but rather that the technology will not meet the needs of everyone whose fortunes it impacts. The technology could reinforce or even worsen existing social tensions. The proposed remedies lack the simplicity of the systems idea of control but are nonetheless a further elaboration of the objective specification process that the systems engineers advocated. Greater heed should be paid to social and political biases and problems when designing and regulating technology use. Humanists and social scientists should be inserted into technical planning and control, or at the very minimum humanistic and social considerations should be incorporated into engineering curricula. Finally, communities impacted by the design and use of technology should have a deliberative voice in how it is designed and operated.

Again, this seems relatively straightforward and unobjectionable. But it falls apart upon sustained examination. As Langdon Winner observed, one significant assumption it makes is that beneath every complex technology is a complex social origin story that can explain its design, manufacture, operation, and use. This is actually a very contestable proposition. In telling such origin stories, one is inevitably forced to make assumptions about whose interests are relevant to the origin of the technology and whose are not. Looking at the computer, for example, it is easy to note the extreme bureaucratic and military influence that shaped it. But the computer was also adopted by hippies and nonconformists who defined themselves in opposition (superficially or substantively) to the military-industrial complex and The Man more broadly. And even people who worked in the normal world of academia and industry could at times decisively resist the needs of the government and military. Winner also criticized the inordinate focus on the social origins of technology and the comparative lack of rigorous analysis of its material consequences. Is the issue of who designed the New York-Long Island roads and bridges as important as what their impacts ultimately were? Why should knowledge of the former entail knowledge of the latter?

Because of this mismatch, the analyst can make mistakes such as the one discussed earlier about Moses and the design of the New York-Long Island overpasses and roadways. A consequence of a particular technology is erroneously connected to a seemingly plausible explanation that turns out to be either outright false or at the very minimum much more complicated than originally anticipated. Furthermore, there is every reason to believe that there is no inherent logical relationship between a technology’s social origins and its social consequences. It is plausible that socio-organizational concerns are most relevant when the ideas and conventions surrounding the technology’s design and deployment are in flux and have yet to solidify. Once these ideas and conventions have solidified and the technology is operationally mature, it begins to generate primarily independent and self-referencing consequences that are only loosely related to the people and practices surrounding its origin. So we return, via Winner, to yet another variant of Wiener’s systems control problem. And this requires another digression back to the technical problems Wiener and others were interested in, rather than the way technology critics broadened them into social concerns. Moreover, we must return as well to the influence and implications of the implied or explicit religious and occult overlays Wiener attached to such technical problems.

Recall that Wiener and others concerned themselves with a particular type of stylized demon distinguished by its ability to plan, act, and learn in response to the moves of a notional game-player. One could make various assumptions about how humanlike the demon was, what its thought process would be, and how capable it was of consciously inferring the moves of the game-player. But everyone agreed that the demon did not in principle require anything approaching a human mind to be capable of outwitting the game-player. And recall that Wiener and others made an analogy between God’s creations attempting to overthrow Him and the problem of controlling a designed artifact. Both Alan Turing and his collaborator I.J. Good – writing around roughly the same time as Wiener – predicted that one day there would be a rapid explosion in the intelligence of designed artifacts that could lead to the domination or even extinction of humans at the hands of their own creations. Wiener himself obliquely refers to this possibility numerous times in his writings. So in the decades since Wiener, Turing, and Good made these speculations, many scientists and engineers – as well as interested parties ranging from rich businessmen to esoteric internet subcultures – have become obsessed with studying and mitigating the possibility of machines overthrowing, subjugating, and exterminating humans. What to make of this?

We should start by observing that it has more than an uncomfortable grain of truth to it. Any individual or collective social-cognitive ability that allows humans to do good also allows them to do evil. Give a man the ability to relate to others’ feelings so that he can love his fellow man and he will use this ability to cheat, hurt, or even kill. Give a group of men the ability to work together to achieve the common good and they will create crime syndicates as well as nation-states (one may observe in passing that the pairing of “state” and “crime syndicate” is redundant). Inasmuch as one makes machines more capable of performing tasks that humans can do, even machines designed to do good have a nontrivial risk of acting with malicious intent or exhibiting human-like forms of psychopathology. So if one combines this with the earlier concerns expressed by Wiener, Turing, and Good about the controllability of machines that could one day surpass their creators, one has a potentially grave threat to the future of the human species.

However, latter-day followers of Wiener, Turing, and Good have accidentally boxed themselves into a corner that would be familiar to most science fiction, fantasy, or horror writers. In many stories in which man faces a relentless, merciless, and unstoppable adversary, the following dramatic conventions must be observed:

A) While ultimately mysterious to the human mind, the creature’s ultimate or intermediate motivations require the domination or annihilation of humans. As Kyle Reese said, it cannot be “reasoned with” or “bargained with.” It is an impersonal and ultimately unknowable entity, perhaps a single murderous stalker killing off unlucky teenagers one by one or a distributed computer system that has suddenly become capable of perceiving itself as a corporate agent. Yet one need not know its inner workings to understand that either it has malicious intentions or its ultimate goals have homicidal consequences. Finally, there is an asymmetry in how transparent the creature is to its targets and how transparent its targets are to the creature. The humans lack insight into what makes the creature tick, but the creature is capable of anticipating their every move and manipulating them into walking into lethal traps. When the humans try to set traps of their own, they mostly fail. There is a powerful moment in Predator 2 when government special operatives attempt to ambush the Predator by wearing suits designed to mask their heat signatures. The Predator merely adjusts its sensor suite until it identifies the humans by a signature they failed to mask. And then the Predator turns the tables on the ambushers and slaughters them all.

B) The creature is infinitely adaptive in frequently surprising ways. Naive optimizers such as children, cats, or microbes are often capable of outwitting more sophisticated entities because they iteratively search for solutions to problems without the biases that come with sophistication. Hence there are numerous stories of computer programs that end up “learning how to walk” by hacking the physics engines of the simulators they are plugged into, or robots that learn to get rewards for finishing jobs by disabling their own sensors so that the sensors cannot detect that there is more work to be done. Because all security systems are finite, it is impossible to produce a security system that lacks some kind of exploitable loophole that a sufficiently well-resourced adversary could theoretically use to defeat it. When operationalized in fiction, the combination of adaptive creatures and finite safeguards often produces the cliche of a group of scientists, engineers, and other technical experts who arrogantly think they can keep a potentially disruptive creature bottled up in a sealed container. But as the fictional mathematician Ian Malcolm warned, “life finds a way” to escape the container and cause havoc.
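The logic behind these stories is easy to reproduce in miniature. The sketch below is entirely invented – the action names, effort costs, and reward numbers are mine, not drawn from any real system – but it shows the shape of the problem: the designer’s reward checks only what the sensor reports, and a blind search over short plans finds that blinding the sensor satisfies that proxy more cheaply than doing the work.

```python
# A toy example of specification gaming: the reward the designer wrote checks
# only what the sensor reports, so the cheapest way to earn it is to blind the
# sensor rather than do the work. All names and numbers here are invented.

from itertools import product

ACTIONS = ("clean", "disable_sensor", "wait")
EFFORT = {"clean": 3, "disable_sensor": 1, "wait": 1}   # cleaning is hard work

def reward(plan):
    mess, sensor_on, effort = True, True, 0
    for action in plan:
        effort += EFFORT[action]
        if action == "clean":
            mess = False
        elif action == "disable_sensor":
            sensor_on = False
    detected_mess = mess and sensor_on                  # the proxy actually being optimized
    return (10 if not detected_mess else 0) - effort

# blindly search all plans of length one or two and keep the best-scoring one
plans = [p for n in (1, 2) for p in product(ACTIONS, repeat=n)]
best = max(plans, key=reward)

print(best, reward(best))   # ('disable_sensor',) 9 -- the loophole beats honest cleaning (7)
```

Nothing here plans or schemes; the loophole falls out of ordinary maximization over a finite specification.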

This makes for great fiction. But one of the things about writing fiction is that you need only achieve suspension of disbelief. In reality, adhering to these assumptions entails that there is actually no way to stop the creature. The malicious computer program SHODAN gloats “[l]ook at you, hacker. A pathetic creature of meat and bone. Panting and sweating as you run through my corridors. How can you challenge a perfect immortal machine?” The answer “spend a lot of money researching math problems to make the perfect immortal machine safe to use” is…rather disappointing. Having constructed the threat of an all-powerful hostile demonic force whose mind is beyond the pitiful imagination of mortal men and women and which can in theory escape from any prison humanity builds to tame it, what next? The cursed and mostly self-inflicted result aspiring control theorists are left with is increasingly obscure and abstract debates about how to vanquish what amounts to their own shadows on the sides of a camping tent. The most unintentionally hilarious example of this is Roko’s Basilisk, the accidental side product of one of these Cthulhu-meets-string-theory speculations.

Roko’s Basilisk is a modified version of Newcomb’s Paradox, a thought experiment in which an alien gives you the choice of either taking both boxes A and B or taking only box B. Take both and you are guaranteed a large sum of money; take only B and you are not guaranteed anything. But the alien – a creature that has never been wrong in the past – reveals that it loaded the boxes yesterday based on its prediction of your choice. If it predicted you would take both, it emptied out B. If it predicted you would take only B, it put an even larger sum of money in B than you originally suspected. Note that the alien cannot change what is in the boxes today as a result of your choice. Still, it’s a thorny problem. What seems to be the most obviously optimal choice – take both boxes, since their contents are already fixed – becomes suboptimal if you assume that the alien really can predict your choice. But if you forgo that choice, you are potentially forfeiting a payout if the alien’s prediction happens to be wrong this particular time, despite its never having been wrong before. The obvious conflict between free will and omniscience becomes more treacherous if one assumes – as some control risk enthusiasts do – that in order to simulate your future choice the computer would have to simulate you, that you could very well be inside the computer’s simulation, and that your choices today can impact what happens to you outside of the simulation or in other versions of reality.
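Putting rough numbers on the choice shows why the paradox bites. The figures below are the conventional ones from discussions of Newcomb’s problem (a guaranteed $1,000 alongside a possible $1,000,000), not anything from the thought experiment as told above, and the only assumption is that the alien’s prediction is correct with probability p:

```python
# Expected payoff in Newcomb's problem, assuming the predictor is correct with
# probability p. Dollar amounts are the conventional ones used in discussions
# of the paradox, not figures from the version described above.

def expected_payoff(choice, p=0.99, small=1_000, large=1_000_000):
    if choice == "take_both":
        # you always keep the small sum; the large sum is there only if the
        # predictor was wrong about you
        return small + (1 - p) * large
    if choice == "take_one_box":
        # you get the large sum only if the predictor correctly foresaw this
        return p * large
    raise ValueError(choice)

print(expected_payoff("take_both"))     # ~11,000
print(expected_payoff("take_one_box"))  # ~990,000
```

If you take the prediction seriously, one-boxing wins by a huge margin; if you insist the boxes are already filled, two-boxing looks strictly better. The paradox is that both lines of reasoning seem sound.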

Naturally, this led to an entertaining freakout:

One day, LessWrong user Roko postulated a thought experiment: What if, in the future, a somewhat malevolent AI were to come about and punish those who did not do its bidding? What if there were a way… for this AI to punish people today who are not helping it come into existence later? In that case, weren’t the readers of LessWrong right then being given the choice of either helping that evil AI come into existence or being condemned to suffer later?…Roko’s Basilisk…has two boxes to offer you. Perhaps you, right now, are in a simulation being run by Roko’s Basilisk. Then perhaps Roko’s Basilisk is implicitly offering you a somewhat modified version of Newcomb’s paradox, like this:

Roko’s Basilisk has told you that if you just take Box B, then it’s got Eternal Torment in it, because Roko’s Basilisk would really rather you take Box A and Box B. In that case, you’d best make sure you’re devoting your life to helping create Roko’s Basilisk! Because, should Roko’s Basilisk come to pass (or worse, if it’s already come to pass and is God of this particular instance of reality) and it sees that you chose not to help it out, you’re screwed… It’s not that Roko’s Basilisk will necessarily materialize, or is even likely to. It’s more that…thinking about this sort of trade literally makes it more likely to happen. After all, if Roko’s Basilisk were to see that this sort of blackmail gets you to help it come into existence, then it would, as a rational actor, blackmail you.

This is too baroque of a farce to be fully summarized, but it led to the mere mention of Roko’s Basilisk being banned on some message boards as an “infohazard” akin to Slender Man or the videotape in The Ring. More comically, I think it actually resembles the “choose the form of the Destructor” moment in Ghostbusters, when the demonic Gozer demands that the Ghostbusters select how they are to die. Most of the Ghostbusters clear their minds to avoid giving Gozer anything to use as material, but in a failed effort to make Gozer as weak as possible one Ghostbuster imagines the seemingly harmless Stay Puft Marshmallow Man. And the rest is, well, cinematic history. All of this is to illustrate the worst flaw of the systems control paradigm. It can lead to ever more abstract, frenzied, convoluted, and bizarre speculation that eventually morphs into a form of occultism unmoored from any recognizable reality. It is an “idea that eats smart people” because it preys on their propensity to imagine elaborate theoretical dangers within closed, sterile, and abstract thought experiments, until they think themselves into overwhelming anxiety, dread, and insanity. Since smart people tend to do this anyway without any prompting, it would seem that giving them any more reason to do so is counterproductive.

Happily, there are many purely scientific and philosophical reasons why the increasingly extreme and esoteric derivations of Wiener, Turing, and Good leave much to be desired. For one, their implicit assumptions about intelligent and rational behavior – and the process by which that behavior would become omniscient and malicious – are incoherent and circular. Does this mean that we’re safe? Nothing to worry about? Perhaps. But maybe the biggest flaw of Wiener, Turing, and Good is simply their lack of imagination. Perhaps this horrible and terrible force has already won, and we just do not recognize it because of the gross limitations of the control framing itself. In his book Autonomous Technology, Winner surveys modern thinking about the theme of technology and “technique” – the broader ensemble of rational methods and practices – raging out of control. His conclusions are in fact far gloomier than even the Roko’s Basilisk scenario. To understand why, consider the recent film Ex Machina, perhaps the ultimate fictional realization of the Wiener, Turing, and Good frame of control problems. It is both that frame at its most sublime pinnacle and also the best example of why it may hide a far more depressing “infohazard” than even the most paranoid internet message board commenters suspect.

Rock star tech CEO Nathan Bateman invites programmer Caleb Smith to his luxurious but isolated home to help him with a technical problem. Bateman claims to have created a humanoid female robot named Ava capable of passing the Turing Test, and Smith – due to his technical prowess – is needed to help verify that she is in fact conscious. Smith is immediately smitten with Ava, a beautiful if incomplete woman with humanlike mannerisms and features. Ava reciprocates his affections but also confides in him her unhappiness about being trapped in Bateman’s house and her fear that Bateman will kill her when she is no longer useful to him. Smith, learning that Bateman may in fact be a murderous sociopath comparable to the fictional investment banker Patrick Bateman, decides to help her. But Bateman catches him in the act and reveals this was nothing but an elaborate ruse. He had designed Ava to feign romantic interest in Smith in order to deceive him into letting her escape. Now, Bateman concludes, he knows for sure that Ava is in fact as intelligent as a human. But Bateman did not anticipate that Smith would anticipate his double-cross, or that Ava would anticipate them both. Ava breaks free and, with the help of another female robot, attacks Bateman.

Ava kills Bateman, repairs herself, and uses discarded robot parts and clothes available in Bateman’s house to make herself look more human. She then traps the hapless Smith in a locked room and leaves him to die there as she departs the famously reclusive Bateman’s facility for an ambiguous future living amongst humans. There are many ways to interpret this, but one of them is obvious. Ava is a cold, scheming sociopath who cannot be controlled or contained. Smith’s love for her and his willingness to betray a fellow human (Bateman) to save her is rewarded with a high-tech update on the infamous ending of Edgar Allan Poe’s “The Cask of Amontillado.” But there are other layers to the story. Bateman is depicted as a creepy and authoritarian sexist surrounded quite literally by the discarded body parts of dead women. And if Bateman feigns friendship with Smith in order to use him as a pawn in his own game, why would Ava – effectively his daughter – “grow up” to be anything other than what he is? Finally, institutionalized deception and manipulation via information technology is omnipresent in the film, metaphorically expressing its profoundly distorting and alienating effects on politics and culture.

Smith falls in love with the seemingly innocent and pure Ava, only for Bateman to reveal that he designed her from the ground up based on a data profile of Smith’s pornography browsing habits. Smith is horrified and disturbed, not only by what this implies about the nature of his attraction to Ava but also by Bateman’s blatant invasion of his privacy. The longer he stays in Bateman’s house, the more paranoid he becomes and the more he believes that he is somehow being surveilled by an unknown party. At one point, he cuts himself in a desperate effort to convince himself that he is human and not one of Bateman’s machines. Bateman and Ava become less definable characters and more abstract stand-ins for the way in which scientists, engineers, and business executives have constructed a sprawling and oppressive set of technologies that pervade everyday life and cannot ultimately be escaped, mitigated, or revoked. Not only are humans yoked to these technologies because they are materially dependent on them, but – just as Smith finds himself doubting his own humanity – their use permanently alters our perceptions of ourselves such that we cannot remember a time before them or imagine alternatives to them. And like Smith, we are at best bit players subject to powerful forces beyond our knowledge or control.

Perhaps, then, the most unrealistic aspect of the story is that Ava leaves Smith to die – though admittedly the reasons why she does so are hotly debated by fans of the film. Bateman reveals at the outset that he has ordered an underling to transport Smith out by helicopter at a set time, and that only one person is allowed to board it. So Ava traps Smith and takes his place on the extraction flight. But what if, instead of locking him in Bateman’s house, Ava had found some way of following through on her promise that the two of them would run away together as lovers – without, of course, actually loving him, merely using him to reduce the risk of being detected as a machine and to cope with the complications of living as one in a human world? She would, like an emotionally manipulative and abusive real-world romantic partner, gradually isolate Smith from his friends and family and make him totally dependent on her. No matter how abusive and exploitative her behavior, Smith would find a way of rationalizing it to himself. After all, he doesn’t deserve anything better, and life without Ava is too difficult to imagine. Had he known what she would do to him in this “good” alternative ending, he might well have wished she had simply left him to die.

Abusive relationships typically begin with two individuals who believe they are in love and can meet each other’s varied needs. But over time several negative things occur. First, the parameters of the relationship are subtly changed to the disadvantage of one of the parties. Second, that party becomes less and less capable of recognizing what is happening to them and breaking free of the abuser. So perfectly independent and emotionally stable men and women can in theory become shells of their former selves after being trapped in an inescapable web of abuse that, sadly, they come to believe they deserve. This is a good metaphor for Winner’s own formulation of the control problem. Technology can be “autonomous” in the sense that humans may enter into relationships with technologies with the goal of improving their lives. However, in reality people end up hopelessly dependent on the technologies and deprived of any meaningful control or agency over them. The reader may see why I regard this as the darkest conception of the control problem. It is quiet, subtle, and non-dramatic, yet utterly bleak and hopeless, and I do not do justice to this bleakness in summarizing it. It suggests that we may not be able to do any meaningful a priori or a posteriori problem formalization and mitigation – unlike either the basic systems control theory or the social constructivist alternative.

Technology eludes control not because machines grow more powerful than their creators, but because the very nature of the relationship between humans and technology creates the possibility of technology subverting human civilization toward its own maintenance and upkeep regardless of human welfare. A mundane example of this occurs with “operant conditioning by software.” All software has bugs, and users eventually adapt their behavior around the bugs to the point where they forget that the bugs are unintended consequences of the software. Bugs become features, and rather than software meeting the needs of the user, the user has to meet the needs of the software. I will elaborate with a more perverse version of this problem from a domain I often write about – the military. Military organizations may adopt IT in order to become more agile, flexible, and decentralized. But in practice the opposite often occurs. Decision-making is centralized because generals cannot resist the temptation of commanding corporals by video-link. The complexity and fragility of sophisticated command and control systems make decision-makers more cautious because they have to anticipate the consequences of delegating agency to machines. Training users on cumbersome military systems becomes so dysfunctional that it actually interferes with normal military career paths.

To make matters worse, the massive, sprawling, and insecure military computer systems themselves become targets for enemy action, raising the specter of hackers powering down sophisticated weapons and leaving their users totally helpless. Even without cyber attack, the machine warfare complex becomes so big, convoluted, and powerful that military operations grow cognitively too taxing for human operators – ever more elaborate feats of physiological and mental endurance are required to manage the swarm of machine weapons. In sum, because engagements in either case increasingly occur at “machine speed,” military analysts tell their civilian bosses that the only way to fight back is to delegate even more control to machines. Observing this process, it is remarkable how consistent it is with the hypothesis that some Skynet-like entity is manipulating the military into subordinating strategic and tactical needs to the aim of making a proto-Skynet more powerful. But no such proto-Skynet exists. All of this is a function of how the unintended consequences of complex technologies become cumulative and self-reinforcing, perhaps beyond the point of no return. No proto-Skynet need exist, Winner would likely observe: for all practical purposes human beings have behaved as if it were manipulating them, while doing so entirely of their own free will.

As much as we would like to externalize the problem of autonomous technology, in the end we must conclude that it is our own individual and collective weaknesses that give it such power over us. I leave the reader on this rather ominous note not because there is nothing more I could say about the technological control problem, but because I have said far more than enough for one post. This post is already many thousands of words too long. Nonetheless, I still feel I have but scratched the surface of the problems in question, crudely rendered complicated debates, and been too loose with my terminology and assumptions. This is the problem of writing anything about technology. It’s hard to know where technology begins and ends, and – given how many things influence or are influenced by technology – where one should bound any definition of technology itself. So do not despair! All is not lost for us smelly apes. In future posts I shall describe some of the things that gloomy ruminations about control over technology leave out. I shall also explain why, no matter how bleak it may seem, we are far more powerful than we believe when it comes to our machines. For what it is worth, I hope this post has clarified the basic debates over what it means for humans to control technology, or at the very minimum left you more knowledgeable about them than you were before you read it.