September 9, 2017

Material Issue

In preparing an outline for a future longform piece on technology and geopolitics, I began to realize that there is something of a paradox in how people think about topics such as machines and the geopolitical aspects of computing. On the one hand, many of these accounts predictably ignore the role of human agency, individual and social. On the other, they are strikingly uninterested in the material aspects of how these technologies produce effects in the world. Perhaps one of the most interesting cases of this is the use of computer games as development environments for artificial agents. The vast majority of these AI systems run on top of emulators. This is a matter of convenience: it is much cheaper and easier to emulate an Atari arcade system than to jury-rig and reuse hardware that is decades old. Yet in the process, it is not clear that the AI agents are "playing" the game as a modal human player would. The way that humans play a game cannot be separated from the particular material interfaces used (paddles, controllers, etc.), or from the way hardware and system constraints shaped the development and use of games and platforms. The AI's playing of the game is thus better classified as a kind of mod or hack, akin to speedrunning, crowd-play experiments like "Twitch Plays Pokémon," or the old modder scene that programmed bots for seminal first-person shooters.

This is why Rodney Brooks' criticism of chess-playing engines (and similar criticism of Go-playing machines) is at best half-right. The programs that play these games are not really disembodied reasoners; they are very much a product of the particular material machine environments that programs are developed and written for. This artificiality was once obvious in the competitive computer chess programming scene, where programmer-players physically built chess machines optimized all the way down to the circuits. Today it is mostly experienced through contrivances like programming a computer chess board on a Raspberry Pi. Still, even if a board is represented only within the memory of a desktop computer, materiality enters the picture. According to the Chess Programming Wiki, there are two ways of representing a chess board in computer memory, piece-centric and square-centric representation:

A piece centric representation keeps lists, arrays or sets of all pieces still on the board - with the associated information which square they occupy. A popular piece centric representation is the set-wise bitboard-approach. One 64-bit word for each piece type, where one-bits associate their occupancy. The square centric representation implements the inverse association - is a square empty or is it occupied by a particular piece? The most popular square centric representations, mailbox or its 0x88-enhancements - are basically arrays of direct piece-codes including the empty square and probably out of board codes. Hybrid solutions may further refer piece-list entries. While different algorithms and tasks inside a chess program might prefer one of these associations, it is quite common to use redundant board representations with elements of both. Bitboard approaches often keep an 8x8 board to determine a piece by square, while square centric board array approaches typically keep piece-lists and/or piece-sets to avoid scanning the board for move generation purposes. With a board representation, one big consideration is the generation of moves. This is essential to the game playing aspect of a chess program, and it must be completely correct. Writing a good move generator is often the first basic step of creating a chess program.
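The two representations the wiki describes can be sketched in a few lines of Python. This is an illustrative toy, not any particular engine's code; the names and the single-push pawn move generator at the end are my own:

```python
# Piece-centric: one 64-bit integer per piece type, where bit i set
# means a piece of that type occupies square i (0 = a1 ... 63 = h8).
WHITE_PAWNS = 0x000000000000FF00   # pawns on their starting rank

# Square-centric "mailbox": a 64-entry array of piece codes,
# with 0 meaning the square is empty.
EMPTY, W_PAWN = 0, 1
mailbox = [EMPTY] * 64
for sq in range(8, 16):            # squares a2..h2
    mailbox[sq] = W_PAWN

# The two views answer inverse questions:
def pawn_on(sq):
    """Piece-centric query: does a white pawn sit on this square?"""
    return (WHITE_PAWNS >> sq) & 1

def piece_at(sq):
    """Square-centric query: what piece (if any) is on this square?"""
    return mailbox[sq]

# Move generation falls out of the bitboard almost for free: shifting
# the pawn bitboard up one rank gives all single-push target squares.
occupied = WHITE_PAWNS             # only pawns on this toy board
single_pushes = (WHITE_PAWNS << 8) & ~occupied & (2**64 - 1)
```

The bitboard answers "where are the pawns?" with one machine word, while the mailbox answers "what is on e4?" with one array lookup, which is exactly the redundancy the wiki describes real engines maintaining side by side.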

In general, the implementation of computer players is also always a matter of dealing with material constraints. Built-in computer players in games must share Central Processing Unit (CPU) and Graphics Processing Unit (GPU) time with other activities such as animation and collision detection. Programs for computer players must also split their work across multiple frames of game time. If one reads the specification for Blizzard and Google DeepMind's new reinforcement learning research environment, one finds plenty of similar considerations. They factor into everything from the representation of the visual environment (which must interface with the 3D game engine) to processing constraints on what experiments can be performed (a matter of simulation speed, rendering, screen resolution, and how many threads are available). It is these kinds of factors that complicated claims about a recent effort at programmatically playing Dota 2; the game agent received "precise data describing the game state directly, no interpretation required... they receive the kind of information that humans could never have, like the exact number of in-game units an enemy hero is from the bot, or frame-perfect reactions to an attack." One of the lessons from this is that it is quite trivial to make "games" and "players," as conceivably many things can take on both roles. However, the material construction of both involves a vast set of nonhuman components and operations that have their own intrinsic properties, capabilities, and characteristics.
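The frame-sharing constraint above is commonly handled by slicing the AI's work into resumable chunks that each fit inside a per-frame time budget. A minimal sketch, using a Python generator as a stand-in for a game engine's scheduler (all names and the budget value are hypothetical):

```python
import time

def find_path(start, goal):
    """A long-running search written as a generator: it yields
    control back to the engine after each unit of work."""
    node = start
    while node < goal:
        node += 1            # stand-in for expanding one search node
        yield                # hand the frame back to the engine

def run_frame(task, budget_s=0.002):
    """Advance the AI task until its per-frame time budget is spent.
    Returns True while the task still has work left, False when done."""
    deadline = time.perf_counter() + budget_s
    try:
        while time.perf_counter() < deadline:
            next(task)
    except StopIteration:
        return False
    return True

# The engine interleaves the AI with everything else it must do:
task = find_path(0, 10_000)
while run_frame(task):
    pass                     # render(), physics(), etc. would run here
```

The point of the pattern is that the search never owns the CPU for longer than its budget, so animation and collision detection still get their share of each frame.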

Why is this interesting or meaningful? There is both a political-economic and a cultural component to the idea that information, and the information-era economy, are disembodied and frictionless. Acknowledging that information is rooted in a material, mechanical context strips away some of its mystique. As the human-computer interaction researcher Paul Dourish notes in his latest book:

Our understanding of what algorithms can and might do is always relative to the computer systems that can embody and represent them. ...I [draw] a distinction... between what an algorithm made possible and what an implementation makes feasible. The same algorithm, implemented on different computers or supported by different technical infrastructures, has quite different capacities. Mathematically, what it can achieve is the same, but practically, a new architecture or implementation of an algorithm can bring new possibilities into view and new achievements into the realm of the possible. ...[W]hat it takes to apply these technologies does not lie solely within the domain of the algorithm but rather depends crucially on how that algorithm is manifest in a larger technical infrastructure. ...Algorithms are abstract descriptions of computational procedures, but the realization of an algorithm as a program requires the programmer to incorporate not just the algorithmic procedure itself but also the ancillary mechanisms needed to make it operate—checking for errors, testing bounds, establishing network connections, managing memory, handling exceptions, giving user feedback, encoding information, storing files, loading data, specifying modules, initializing variables, discarding old results, and myriad other “housekeeping” tasks. The result is that, while algorithms are at the heart of programs, much of what our computer programs do is not actually specified by algorithms. Programs are mechanistic and procedural but not entirely “algorithmic” in the sense of being specified by an algorithm (or something that is called an algorithm).
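Dourish's point about housekeeping is visible even in a toy program. In the contrived sketch below, the algorithmic core (a textbook binary search) is a handful of lines, while the rest of the function is the validation and error handling a real program cannot do without:

```python
def find_index(sorted_values, target):
    """Binary search wrapped in the 'housekeeping' a real program needs."""
    # Housekeeping: validate inputs before the algorithm ever runs.
    if sorted_values is None:
        raise ValueError("sorted_values must not be None")
    if any(sorted_values[i] > sorted_values[i + 1]
           for i in range(len(sorted_values) - 1)):
        raise ValueError("input must be sorted")

    # The algorithm itself: a few lines of binary search.
    lo, hi = 0, len(sorted_values) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_values[mid] == target:
            return mid
        elif sorted_values[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1

    # Housekeeping again: report "not found" in a form callers can handle.
    return -1
```

Mathematically the search is the whole story; practically, most of what the function does is everything around it.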

These details are obscured and downplayed, in part because they tell us a very different story about complex technologies than the one we would like to hear. It's a story of a rather long and tangled process of debating whether life – and machines meant to imitate it – are passive or active in their agency. It's also a story of how much we take for granted the fact that many of our machines work at all. As Brooks himself observes in several blog posts on robotics and future technology, obscuring the material facts of implementation gives us a false idea of how far technology is progressing. The vast majority of systems we interact with on a daily basis are simple, rote, and more automated than autonomous. Complex machines that we rely on as a matter of life and death – such as transportation systems – are also complicated to design and control. And these material factors and limitations can have powerful ramifications when they impact systems that mechanize decision-making, as seen in the way the pragmatic representation of military planning models on late-1940s UNIVAC computers altered the form and content of the models and their assumptions. Or, far more tragically, how computer arithmetic errors in a Patriot missile battery led to the deaths of 28 Americans in 1991.
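The Patriot failure came down to a single rounding decision: the system counted time in tenths of a second, but 1/10 has no finite binary representation, so the chopped fixed-point value the software stored drifted over a long uptime. The widely published figures from the standard analysis of the incident (23 bits after the binary point, 100 hours of continuous operation) can be reproduced exactly:

```python
from fractions import Fraction

# 1/10 is a repeating binary fraction (0.000110011001100...), so the
# Patriot's 24-bit register could only hold a chopped approximation
# with 23 bits after the binary point.
tenth = Fraction(1, 10)
chopped = Fraction(int(tenth * 2**23), 2**23)
error_per_tick = tenth - chopped          # ~9.5e-8 seconds per 0.1 s tick

# The clock ticked every 0.1 s; accumulate over 100 hours of uptime.
ticks = 100 * 60 * 60 * 10
drift_seconds = float(error_per_tick * ticks)
print(round(drift_seconds, 4))            # 0.3433 seconds of clock drift
```

A third of a second sounds small, but at Scud closing speeds it shifted the radar's tracking range gate by roughly 687 meters (per the GAO's account of the Dhahran incident), so the incoming missile was never intercepted.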

This winding road leads to a conclusion that David Auerbach already reached when he wrote about the "stupidity" of computers. Computers are unlikely to ever really "understand" us in the way that we understand each other. Hence, it is likely that we will continue to adjust our world to meet their needs – regimenting and changing ourselves to become more and more machine-readable. Some aspects of this are gloomy: the invention of paperwork led to the modern administrative state, and the administrative state in turn was arguably one of the driving forces behind computing. Others are more ambiguous. Meme culture today is very much a product of how creative individuals and communities can take particular formats and artifacts (such as the humble image macro) and assemble them into meta and meta-meta commentaries on both the original artifact and the concerns of the day. Neither memes nor the subcultures that produce them ought necessarily to be lionized or demonized. But what they do show is that the very artificiality of computerized representations allows them to be used as stage props that self-consciously highlight their own contrived nature while providing a space for creative expression. I have more to say about this, but I will save it for the longform piece. For now, I'm reading Fabien Sanglard's book on the Wolfenstein 3D engine for inspiration.

Tags: concept