Technology and Policy: Farce and Tragedy
There is currently a spate of stories about the various...odd characters backing Donald Trump. Many of them are about particular wealthy technologists who back Trump. Others are about Trump's connection – real or perceived – to the technology industry as a whole. This is often juxtaposed with unhappiness and concern over tech companies becoming more involved in policy areas or assuming quasi-governmental functions. I feel something of deja vu reading this. Some time ago, I wrote an essay about the problems with saying that tech companies do not solve problems that "matter" and urging them to tackle public policy woes. My arguments at the time were roughly that engineering expertise != policy expertise, that treating policy problems as science, engineering, or business problems is often hazardous, and that technology companies can do far more to help social welfare by doing less rather than more – either by just making useful tools do-gooders can use or by subtly working in partnership with policy. Sometimes certain technical innovations can also obviate old policy problems altogether (no one worries about horse-related traffic accidents anymore, for obvious reasons) while creating new ones (car accidents and pollution). So my feelings at this point are: "be careful what you wish for, as you just might get it."
It's really no surprise – or should be no surprise – that some businesspeople have policy views that are shocking or even abhorrent to people on my social media feed. Then again, those of a broadly center-left orientation are increasingly shocked by even mundane deviations from a certain range of parameters governing center-left/center-right discursive modes (to be less charitable, intolerant of views not held by a far narrower ideological spectrum), to say nothing of truly extreme and bizarre reactionary ideologies like those of reclusive Trump backer Robert Mercer. It's really no surprise – or should be no surprise – that even well-intentioned efforts by people in the technology sphere often reflect the sort of naive "solutionism" (a word I hate to use but nonetheless feel is appropriate here) that is disdainful of the nuance and complexity of longstanding social problems and imagines that better apps and better efficiency can solve them. But demonstrating the flaws of "solutionism" did not require Silicon Valley's fumbling steps into policy – they have been clear since the 19th century. And to some extent the criticism that tech companies don't tackle the "hard problems" was never really sincere in the first place, because critics understood both of these factors from the get-go. It often amounted to a passive-aggressive demand that actors they don't trust enter into policy arenas of importance to them to implement solutions that they were all but guaranteed to disapprove of. More charitably, there are inherent problems with this idea that perhaps my original essay was too polite in addressing. This post will not repeat that mistake. I will also be far more explicit about my own political, social, and intellectual priors over the matter than I was in the previous essay, whose lack of such explicitness was also a mistake.
To be totally blunt, sloganeering that "tech doesn't tackle the hard problems" is reflective of both farce and tragedy in contemporary America. It is reflective of the bizarre farce of contemporary American formal and informal governance as well as the depressing tragedy of how much we have assimilated a privatized view of governance. This post reflects on both the farcical and tragic elements of the equation. It tries, at the end, to move beyond farce and tragedy towards an intellectual stance that allows human societies to assume some agency over the drama and narrative before it is too late. The problem is not whether tech companies ought to deal with the "hard" problems. It is how our society chooses to deal with the challenges produced by complex technologies and their intersection with more traditional social ills that we have not perfected a solution for. Unless we steer the conversation away from the farce and tragedy towards this greater political dimension, we will surely be sunk by the next wave of technical innovations. But I fear that it may be impossible at this point to hope for anything other than a conversation characterized by ample parts farce and tragedy. That has been the track record so far. Why hope for anything different?
It is hard to talk about the problems of tech companies entering policy and governance without talking about debates concerning expertise as a whole. We are currently awash in debates over the meaning and relevance of expertise in an era of populism. Expertise has a useful function. I would not dare, for example, to insert myself directly into a major policy debate over development in Africa without at least minimally perusing arguments made by people who work in that field or produce research about it. However, specialist communities and knowledge-producing institutions are often far more internally dysfunctional than many people acknowledge. This is a broader topic for another post, but the Sebastian Gorka and Peter Navarro episodes currently being ventilated by the think-tank, academic, and media communities are a case in point. Gorka and Navarro are respectively opposed by specialist communities in the counterterrorism and China policy worlds (and their proxies and allies) – and not without some significant justification given their personal extremism and the danger of their policy ideas. Yet it is difficult to conclude that the status quo in either community was that great to begin with. In the counterterrorism world, analytical groupthink about topics such as the supposed decline of al-Qaeda and the Arab Spring is the unfortunate backdrop to Gorka's attempt to steer US policy towards naked prejudice and Othering. Likewise, Navarro's economically backward and politically extreme brand of neo-Mercantilism cannot be considered in isolation from the increasingly bankrupt policy mainstream position regarding China – coax it by hook and crook into a "Rules-Based Order" that Beijing rejects. Again, the "cure" offered by Gorka and Navarro will almost certainly kill the "patient" if actually implemented. But attempting to burn the house down is an irrational response to the larger problem of the house being difficult to fix.
So, if you invite a complete outsider into the policy equation with a culture built on "disrupting" existing commercial solutions that it sees as outmoded, what do you really expect? Are they going to cite all of the relevant literature in your field? Consult all of the experts you believe to be important? Pay attention to your ideas about the nuances of acting in X or Y? Even absent Silicon Valley's culture of disruption, inviting a complete outsider to intervene in a problem implicitly suggests that the existing community of practice is not up to the task. Someone in such a situation might be excused for thinking that they have some kind of knowledge or expertise that insiders do not, or that insider knowledge and expertise is defunct in some way. But the Californian Ideology is a real thing, and it sometimes takes the aforementioned thinking to an extreme. And parts of contemporary variants of the Californian Ideology stem from a belief that mainstream American governmental and quasi-governmental institutions have failed and that it is better to build parallel institutions or subvert existing institutions than engage with normal politics. There is more than a bit of fantasia in this. Some (such as Peter Thiel) conflate the inability to "educate" the body politic to support their preferences and beliefs with a failure of the political system itself, and make the age-old error of believing they can control a petulant bomb-thrower to achieve their ends. Others merely genuinely overrate their own ability to avoid political contention, especially as their success in automating much of society will inevitably attract a political backlash (if it hasn't already). It is better to deal with politics, as messy as it is, than to flee to New Zealand in any event.
However, one must face up to the sad reality that maybe believers in the Californian Ideology have a point in 2017. American political institutions are completely dysfunctional and are beginning to resemble those of places that Christiane Amanpour reports from live on CNN in a flak jacket. The decay of other critical institutions is part of the backdrop to this sad tale, as well as the simple fact that Americans have virtually no faith in most institutions to begin with. So one wonders if it is really that insane to believe that the normal channels of practice are defunct and that parallel ones must be built. Institutions like technology companies and the military still retain some public trust because they are perceived to be bastions of genuine expertise and competence, even if such a belief is premised on the misleading idea that what makes them competent at their particular area of focus would translate into the solving of thorny governmental and quasi-governmental dilemmas. All of this loops back to the issue of business engagement in policy problems. There is an inherent tension in drawing in outsiders with little background in the problem while expecting that they will work on it by consulting all of the established sets of experts, the relevant literature, and the common protocols of decision-making and governance used to resolve problems. And it's a recipe for a pointless repetition of confusion and outrage over the perpetually rediscovered revelation that outsiders do not "understand" things the way that insiders do – or, more bluntly, that outsiders do not grant the sort of symbolic tributes to the knowledge, practices, status, and privileges of insiders that insiders want. There is no guarantee that any outsider will make sense of a policy dilemma the same way that an insider does, and it is sometimes difficult to decouple the need to do so from validation of the insider's position within the community of practice.
There is, in general, a broader problem with the idea that any sort of outsider who doesn't come from a particular community of practice will or should defer to the community's particular preferences, knowledge, and norms. It may be a good idea in general for that outsider to do so! But the belief that "technology is political" does not imply that the politics of technology are simple or that technology will or should be instrumentalized according to a particular actor's politics. The recurring farce of "technology ethics" often stems precisely from the unwillingness to deal with the reality that many makers, users, and operators of technology simply do not share the social habitus, lifeworlds, or politics of technology ethicists – who reify their particular perspectives as universals. Once upon a time, for example, the collective intelligence of "smart mobs" was seen as a progressive force for social change. But because communities that lack a priori commitments to progressive ideals could also utilize such technologies, P2P and social network technologies and their associated modes of practice are now seen as illiberal and regressive producers of a "post-fact" reality. There is no small irony that Hillary Clinton – who as Secretary of State attempted to invest significant amounts of resources in the promotion of global Internet freedom in part out of a belief that doing so would undermine hostile regimes abroad – would later find herself spinning dark tales of fascist cartoon frogs in the 2016 Presidential Election. Part of this shock is understandable. Certainly there is no way to ever really bridge the ideological commitments of, say, Mercer/Thiel and those who seek to instrumentalize technology for progressive social causes. And criticizing the latter for refusing to make good with the former would be churlish and uncharitable.
Only the most blinkered "High Broderist" bourgeois centrist liberal would demand that a committed feminist make common cause with Thiel, a man who waxes nostalgic for the days before women's suffrage. That would be a patently absurd proposition!
However, beneath all matters of technology and society are simple, political divergences of opinion about the ends that technology serves. And these divergences of opinion about intractable social problems existed long before the origin of particular technologies:
[B]ecause what the algorithm is designed to optimize is generally going to be something like ‘maximize ad revenue’ and not anything particularly explicitly pernicious like ‘screw over the disadvantaged people’, this line of inquiry will raise some interesting questions about, for example, the relationship between capitalism and social justice. By “raise some interesting questions”, I mean, “reveal some uncomfortable truths everyone is already aware of”. Once it becomes clear that the whole discussion of “algorithms” and their inscrutability is just a way of talking about societal problems and entrenched political interests without talking about it, it will probably be tabled due to its political infeasibility.
The nub of the issue is really how these sorts of divergences are dealt with in the practical matter of solving the "wicked problems" of policy. So the problem really goes both ways. Yes, there is often a tendency by technologists to ignore subject matter expertise or grounded knowledge within communities of practice. However, communities of practice can sometimes lack useful deliberative mechanisms for recognizing and dealing with differences of opinion besides ostracism, Othering, and mockery. When drawing outsiders who do not share one's particular verstehen of a particular problematic into that problematic, one has to be willing to pay attention to their verstehen and grant it legitimacy, even if it means rethinking some of the underlying commitments of the community of practice. Unfortunately, it is often much easier to simply dismiss. And perhaps there are some grounds for dismissal, given how different engineering and business matters are from solving policy problems! But it is also true that political interventions into technology have often been massively counterproductive at best and reflective of a pervasive close-mindedness at worst. In any event, absent a way to disentangle resentment and ressentiment of outsiders from the somewhat paradoxical call to invite them into the community of practice, specialist communities ought not to invite them. Absent a better way of handling wildly divergent ideas about knowledge, practices, and values other than mockery and dismissal, it is better that communities of practice do not create opportunities for more of the same.
So, in one sense, the entire shebang is a farce. A farce in which people who are at best ambivalent about the presence of outsiders and at worst hateful of them demand that they intervene in particular policy problems, and then become genuinely or disingenuously surprised that the outsiders come at the problem from a very different point of view. Maybe it's not a particularly knee-slapping farce, but it is alternately amusing and aggravating to me nonetheless. At one point during a particular social media conversation about this, a correspondent outright admitted that he loathed the people who hold the power in the tech world but nonetheless demanded that they take social responsibility and step into the domain of politics and justice. You go to war with the army you have. You go to policy with the technology industry you have. Absent an ability to change that industry to reflect your normative preferences, don't invite them to solve problems you care about. It's not rocket science, and it doesn't require a superintelligent AI to figure that one out.
There is another reason why it is not an a priori good idea for corporations to solve social problems. Even if many calls for them to do so are not really made in good faith, it also plays into a rhetorical frame set by opponents of government. In other words, people in part want the likes of Mark Zuckerberg to solve problems that government ordinarily handled because of the tremendous success of people who do not like government – people such as "drown it in a bathtub" Grover Norquist, who are ideologically opposed to an older center-left/center-right model of government and its appendages and seek to reduce its remit (perhaps for their own self-interested ends). The results of this have not just been "public squalor" and "private affluence." Rather, the shrinkage of government and the privatization of government functions have systematically degraded the effectiveness of the classical administrative state – whatever its significant woes and problems – and created a grotesque monstrosity that is neither understandable nor accountable to most Americans. The "submerged" policies of this state often obscure the role of government, allow for easier regulatory capture by private interests (witness the baleful role of the tobacco, fast food, and sugar industries in health governance), and help "crony capitalists" take on quasi-governmental functions (such as the private prison industry). Americans end up opposing entitlements they receive because they do not know that those entitlements are products of government policy, and they solely blame government for the sins of predatory interests that government policies favor or enable. It is not clear that inviting yet another set of private actors into the problem of public governance won't worsen this already pervasive and genuinely tragic problem.
But due both to the failures of government and the successes of those who want to demonize government, we now live in a kind of ideological Twilight Zone where even passive-aggressive criticisms of the tech industry are voiced in an ideological dialect that implicitly lends legitimacy to the idea that business should privatize governance. It is bizarre. But then again, we all just spent most of last year mourning a dead gorilla. Weird things happen. But there is nothing obviously wrong, really, with the idea that companies exist to make money and government exists to govern. That is how capitalism works. Companies are not charities, nor do many exist expressly for the purpose of performing charitable functions. Certain kinds of companies exist as contractors to the governmental and charitable sectors and are specialized around aims that governmental and charitable entities share. But there is no shame in, say, starting yet another tech startup to flog junk at people. There should also be no shame in the idea that governmental and quasi-governmental entities ought to take the lead in the solving of social problems. Government is not about efficiency – even if it could be more efficient. Government is not about innovation – even if it could be more innovative. Government is not about agility – even if it really would be much better if government could be more agile. Government – and politics – is about the unsubtle business of "who gets what, when, and how." Unless this is always kept in mind, one gets situations in which governmental functionaries unironically state that feeding schoolchildren has "no demonstrable results." Feeding hungry children is something that a civilized people do. It is not some sort of icing on the cake but the cake itself.
One need not break out a PowerPoint deck full of ideas about how to optimize the Key Performance Indicators (KPIs) of feeding hungry children to justify feeding hungry children. Feed the [censored] hungry children first, worry about the [censored] KPIs later. A barbaric society is one that lets helpless, needy children starve because it isn't "agile" to feed hungry children and taking care of them induces an inefficiency. Less dramatically, the principles of agility and efficiency are also difficult to incorporate into government in general. It is unavoidable that in war, for example, inefficiency occurs because there is a limit to how much you can keep men and women alive and optimize their killing power while conforming to economic assumptions about economies of scale and the principle of least effort often seen in private industry. From one perspective, an infantry unit is inefficient because it lacks homogeneous equipment, weapons, and roles. But this is also a way of coping with the underlying reality that, say, maintaining a capability for ranged combat can be useful if fighting takes place at range – and the inverse. This is not to say the military or defense writ large is a useful model for governmental efficiency (it really isn't), but that cutting costs by making everyone tote the same weapon or gear and eliminating perceived functional overlaps and redundant roles would not accomplish the mission and would likely get men and women killed for nothing.
It would be positively ridiculous to expect that governmental expertise would be a priori transferable to lean startups; it would also be ridiculous to expect that a longtime governmental bureaucrat could develop a winning corporate strategy absent assimilation of a very different mindset and set of practices. Do we really want companies to solve our problems? What does it say about us as a society if we expect them to? What does it also say about the parameters of political discourse that we trust companies more than government to intervene in political issues, and that we talk about political issues with the vocabulary of the private sector? Not really good things, I submit. Companies should, for sure, be socially responsible in helping to prevent or mitigate the externalities they might generate and in complying with both the letter and spirit of the law. They should do their level best to internally practice values congruent with those of the larger society, such as treating women with respect. And when necessary and practical, they can be included as partners in efforts such as combating online extremism. The traditional charitable role of the private sector is also a useful means of social betterment. But all of this is distinct from the idea that companies ought to solve, work on, or mitigate "hard" social problems and that they are somehow deficient or trivial for doing things like spamming you with ads for junk. Please let this meme die. Please let it die unless the request is (1) made in good faith, as opposed to the perennial ressentiment of the technology world popular these days, and/or (2) made with some forethought about the difficulties of having any private sector actor take on a political role and how to mitigate those difficulties.
Returning to Thiel and Mercer, there is one perhaps unavoidable way in which technology and policy will fuse that transcends both the farce and the tragedy. Herbert Hoover, in the 1920s, believed that the social, political, and economic problems of modern industrial life could only be resolved by an enlightened partnership between industry and government. While this idea – the basis of Hoover's economic policies – eventually ended in tragedy in the 1930s, Franklin Delano Roosevelt modified Hoover's ideas in a way that was much more permissive of state involvement. World War II – a massive and successful techno-industrial effort managed by technocratic elites – also legitimated the idea that the problems of a modern technologically advanced society needed to be handled by a specialized corps of experts. Modernization theory emerged out of this crucible and later died in the jungles of Vietnam, America's burning inner cities, and the postmodern funhouse world of post-industrial turbo-capitalism. The size and complexity of the agglomerated "stack" of software systems (and other related social problems arising from the complexity of technology and society) poses a new challenge to contemporary governance akin to that of the industrial era. Thiel, Mercer, and others see themselves as a managerial class uniquely empowered to solve such problems – despite the reality that their political patron (Trump) has a much more...antiquated understanding of political economy than they do. The problem is not whether technology companies or technologists should participate in policy, or whether tech companies need to tackle the "hard problems," but what kind of political governance of technology we require today.
I will be, as I have been throughout this post, quite blunt. As Nils Gilman argues, "technoglobalism" is breaking down, and absent major institutional upgrades the "software" of current American governance is going to suffer the political equivalent of what people feared would happen during the Y2K incident. Automation is likely already partly responsible for the political chaos engulfing the US today, so I shudder to think of what might happen if worst-case projections about automation and employment prove to be accurate. Recent events have forced many political observers to adjust subjective estimates of US political stability and the strength of US institutions. What is most important at this point is to set the political terms about how to govern complex technologies' impact on American society. This is not a new concern. Langdon Winner warned of this problem in the late 1970s and virtually no one listened. For technology companies to play a useful role in dealing with "hard problems," we need to think better about the role of government and society in setting the terms for any kind of political action regarding technology. Otherwise one ends up with a situation in which it is easier to imagine literally modifying human biology to help us keep up with the machines than any large-scale concerted societal action that might obviate such drastic measures. Or, in the terminology of programming, "destructive" changes of internal state are not always preferable when non-destructive modifications can be made. The same holds true for, say, a choice between inserting a computer into your brain and upgrading the welfare state. Maybe one needs both computers inserted into brains and an upgraded welfare state. I dunno. I just have a problem with a logic that implicitly privileges inserting a computer into your brain without any consideration of upgrading the welfare state.
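For readers outside programming, the "destructive vs. non-destructive" jargon can be sketched in a few lines of Python. This is purely an illustration of the metaphor, not anything from the original essay: `list.sort()` mutates the original object in place and discards its prior state (destructive), while `sorted()` returns a new list and leaves the original intact (non-destructive).

```python
# Non-destructive vs. destructive modification, illustrated with Python lists.

original = [3, 1, 2]

# Non-destructive: sorted() builds a new list; the original is preserved,
# so both the old state and the new state remain available.
result = sorted(original)
assert original == [3, 1, 2]
assert result == [1, 2, 3]

# Destructive: list.sort() rearranges the list in place; the prior
# ordering is irrecoverably gone afterwards.
original.sort()
assert original == [1, 2, 3]
```

The analogy maps cleanly onto the essay's point: irreversibly rewiring the "internal state" (the human body) forecloses options in a way that a reversible policy change (an upgraded welfare state) does not.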
It is high time that the politics of technological governance were foregrounded. Not only would it do some good in setting the parameters of what we want out of technology and what we need out of technology companies (and other entities), it would also impose some badly needed intellectual seriousness on a farcical debate. It would stop the circus act of technology ethics and its endless series of Trolley Problems dead in its tracks. It would stop the bizarre dance of the technology intellectual between kinda-sorta-political-critique and consumer critique, and the often counterproductive responses it inspires from critics. Finally, it would also perhaps mitigate the consistently sophomoric tropes of technology criticism and force those who partake in the act of technology criticism to do what Thiel and Mercer are actually doing: get their hands dirty in the political arena or otherwise clarify their political stance about technology. This is not to say that I remotely approve of the specific political activism of Thiel and Mercer. I don't. I find their views and policy programs instinctively repulsive at worst and grossly hubristic at best. But technology critics want to be political actors without taking on actual political responsibility or dealing with the messiness of politics, a world in which they will have to deal with the reality that not everyone shares their own discursive and normative commitments. A world that, yes, might include "dudebros" that they dislike or, more simply, people whose primary end is participating in a capitalist society many of them have an at best ambivalent relationship to or outright loathe. Might this be uncomfortable? For sure! Nothing about politics is comfortable.
But it is time to stop repeating the cliched mantra of "technology is political" and actually deal with politics, which often involves bargaining, negotiation, compromise, selecting the best of bad options, recognizing that there is often no clear "good guy" in a policy dispute, and above all else often dealing with people that one finds distasteful.
The other option is, of course, to wash one's hands of the whole thing. Maybe it is beyond redemption. Maybe the compromise of engaging in the political is too much to bear and the only righteous way is to simply critique as a voice of moral conscience. It may be that the only ethical response for how to engage in the politics of technology is to take a stance of unremitting criticism and hostility towards technology itself and those who make, operate, and use it. This is an extreme viewpoint, but it is worth noting that technology is never an end in and of itself in technology analysis. We have things that we care about, and then there is the role technology plays in maintaining, producing, and/or changing those things. So this sort of Old Testament prophet-esque stance has some legitimate basis, even if it often is uncompromising and to some extent fanatical. As L.M. Sacasas points out, critics of technology are different from other critics in this respect:
Perhaps there is something about the instrumental character of technology that makes completing the analogy difficult. Music, art, literature, food, film – each of these requires technology of some sort. There are exceptions: dance and voice, for example. But for the most part, technology is involved in the creation of the works that are the objects of criticism. The pen, the flute, the camera – these tools are essential, but they are also subordinate to the finished works that they yield. The musician loves the instrument for the sake of the music that it allows them to play. It would be odd indeed if a musician were to tell us that he loves his instrument, but is rather indifferent to the music itself. And this is our clue. The critic of technology is a critic of artifacts and systems that are always for the sake of something else. The critic of technology does not love technology because technology is never for its own sake.
So what does the critic of technology love? Perhaps it is the environment. Perhaps it is an ideal of community and friendship. Perhaps it is an ideal civil society. Perhaps it is health and vitality. Perhaps it is sound education. Perhaps liberty. Perhaps joy. Perhaps a particular vision of human flourishing. The critic of technology is animated by a love for something other than the technology itself. Returning to where we began, I would suggest that the critic may indeed mourn just as they may celebrate. They may do either to the degree that their critical work reveals technology’s complicity in either the destruction or promotion of that which they love.
One need not, of course, spend all of their time mourning, just as they need not spend all of their time celebrating. My own personal belief system is often a mixture of mourning, celebration, and a pervasive and sometimes all-consuming sense of cynicism (at times unfortunately verging on misanthropy) about the imperfectability of human societies and the pervasiveness of both farce and tragedy within them. However, regardless of what sort of subjective attitude one takes towards technology, I am in 100% agreement with Sacasas that technology is never an end in and of itself. That is why I consistently react extremely negatively to calls for technology companies to solve "hard problems" – calls that either are a vehicle for submerged ressentiment against the tech world or, at the very minimum, often take for granted how these problems ought to be solved and the normative contestation that often keeps them from being solved in the first place. Future tech-driven upheavals are on the horizon. People who care about the intersection of tech and policy need to understand that the same dysfunctional way that we talk about tech and policy is not sufficient to deal with them. We need to foreground the governance of technology and think more carefully about how technological problems intersect with the traditional problem of what it means to live the good life and what it means to create a just and equitable society. And we need to do so without farce or tragedy blocking our ability to think clearly about it.
We may be able to move beyond farce and tragedy and find a way to make ourselves the authors of the drama and the writers of the narrative. That would be ideal, because technological change is unlikely to cease. It must be managed, guided, and sometimes delayed or otherwise thwarted. Moral panics about technology, or "techno-panics," are bad. But any reasonable person living in a society on the verge of technical upheaval must surely have some panic in their heart. Sadly, I fear that it may be impossible to have this kind of conversation about our shared future. Perhaps our best chance to have it was when Winner posed the question of "autonomous technology" and politics to begin with in the late 70s – an opportunity that we collectively spurned and will now suffer the consequences of spurning, unless we come to some better arrangement for how seriously we take this question than we have managed so far.