The Risks of Underpromising Cyberpower

"Fear, uncertainty, and doubt" (FUD) is endemic to discussions of cyber conflict. Apocalyptic visions of cyber-doom often lack empirical support and owe more to crappy Hollywood movies than to sound threat analysis. Thus, Conrad Crane's recent skeptical note about cyber capabilities is a welcome diversion from the seemingly never-ending cyber-anything FUD parade. Crane observes the results of recent military wargames and offensive cyber operations in the Middle East and concludes that the military benefits of cyberpower have been oversold. Don't overpromise the effects of cyberpower, Crane argues.

While Crane's assessment of the limitations of what he has observed is reasonable and plausible, his conclusions about cyberpower suffer from an overly narrow framing of how it is exercised. In assuming that the exercise of cyberpower is limited to destructive and debilitating "cyber-barrages" and "cyberbullet(s)" fired from the cyber equivalent of a rifle or artillery piece, Crane ignores other ways of using cyberpower.

As Thomas Rid explains in his latest book, the word "cyber" derives from the Cold War science of cybernetics -- the study of communication, feedback, information, and control in machines and humans. A perspective on strategy and conflict that takes cybernetics into account is one that ultimately concludes that only two domains of conflict really exist: "the domain of conflict regardless of tangential surface features and the domain of command, control, communication, coordination, data and cognition." The latter domain -- cyberspace as a space in between -- has always existed, and information and communication technologies (along with layers and layers of more generalized machine systems and protocols) are merely a human-made means of exploiting it. In 2009, the National Defense University (NDU) published an interdisciplinary study on cyberpower that defined it as "the ability to use cyberspace to create advantages and influence events in the other operational environments and across the instruments of power." The 2009 NDU definition suggests that we may be significantly underestimating the importance of cyberpower in contemporary conflict.

Cyberpower, for example, is critical to enabling what used to be called "network-centric" military operations that leverage networked military systems for intelligence, surveillance, communication, situational awareness, collaboration, synchronization, and command and control. Certainly, as Crane has rightly noted elsewhere, having a high-tech military is by no means a panacea. But good luck trying to fight a modern combined arms land engagement (like 2008's Battle of Sadr City) without a way to synchronize high-technology surveillance and strike systems with more traditional infantry and armor ground maneuver elements. At sea, naval historian Norman Friedman argues that these structural capabilities have been critical to naval strategy and tactics since World War I. Lastly, they are a basic, non-negotiable requirement for Air Force dreams of a thick "swarm" of unmanned combat air vehicles (UCAVs).

Underestimating the importance of cyberpower also renders us vulnerable to enemies who in some respects understand its potential better than we do. Scenarios about cyber-doom envision movie plot-like threats such as taking down nuclear plants or the power grid. Perhaps this might occur in a high-intensity conflict. But at that point, armies would likely be clashing anyway and nukes would likely be flying. Dealing with malicious hackers would frankly be the least of our worries.

However, the purely military uses of cyberpower are only a small component of a large space of possibilities. Observe that none of the most nightmarish cyber-doom fears includes the prospect of a foreign power using information manipulation to influence a presidential election. But that is precisely what experts like Rid believe Russia is doing in the 2016 presidential election. And for all of America's military might, Washington cannot do anything to halt Russian intelligence operatives from breaking into senior officials' computers and using both direct and indirect proxies to selectively release (and potentially doctor) private information. Sure, there isn't anything radically new about Russian political deception and subversion. The ideology and governance of the Soviet Union was itself a great big lie, and one might say that the very survival of Vladimir Putin's dictatorial Russian regime depends on the success of a giant deception operation against the Russian public. Even if there is little new to Russian subversion, however, Russia's success in modernizing these techniques for the information era is a prime example of why we ought not to underpromise about the effectiveness of cyberpower.

Unlike the United States, Russian military theorists understand the potential for using cyberpower for more than just shutting things down and blowing things up. The Russian military theory of "reflexive control," defined as "a means of conveying to a partner or an opponent specially prepared information to incline him to voluntarily make the predetermined decision desired by the initiator of the action," focuses on the distortion, shaping, and manipulation of information rather than using offensive operations to destroy information systems. All systems of communication and control -- from the human mind to a command and control network -- can be subtly degraded, disabled, or subverted by feeding them false inputs or exploiting weaknesses in how they process, evaluate, and act on information. This can be seen in how easily even advanced artificial intelligence systems can be fooled or otherwise punked by simple tricks. Human social networks can be similarly fooled with false information and compromised by deception and manipulation.

The effectiveness of Russia's efforts to influence our elections suggests that Russia has figured out how to use the information architecture of our political system, and the media and communication systems that surround it, to sow conflict, doubt, and discord. It has not only successfully intercepted sensitive U.S. governmental communications, but also devised and implemented a political subversion operation that leverages our politics, media, and communication architecture against us. As Joshua Foust observed, Russia's operation "exploits weaknesses in Western journalism itself." But how? Foust explains in a piece I urge everyone to read.

By manipulating the instinctive push for equivalence in Western journalism, Russia is able to insert a factually wrong narrative and have it considered alongside an actual version of events as simply a competing perspective instead of being accurately described as a lie. Thus, when agents working for the Russian government release hacked emails under the guise of gossip journalism, it fits their false narrative: "Everyone is corrupt, everyone is a liar, but we'll tell you the truths they want to hide." Hook it up to an appealing, click-baity headline, and thousands of otherwise innocent people spread it across social media, and it becomes its own self-reinforcing conventional wisdom.

Russia has been learning how social media helps spread stories for years. Adrian Chen followed one early effort Russia undertook on this front in 2014. He researched a Russian "troll farm," where employees of the Russian government flooded social media feeds with dummy content. One type of behavior he noticed was hoaxing, whereby Russian troll accounts would try to fabricate some sort of emergency and then study how local media picked up on the story and covered it. This campaign of releasing emails, however, represents something new. While the media efforts by Russian propagandists have been difficult to counter, they have existed in a realm that is at least understandable: RT, Sputnik, et al., are state propaganda, which means they can be evaluated as sources. Even if a story they publish is provably wrong, you can't ignore what Foreign Minister Sergei Lavrov says about Russian troops in Crimea. The hacks are different: it's not always clear at first where they come from, so it's harder to evaluate their reliability (there is a reason the released emails are only those with insults and negativity, for example: people speaking positively of each other does not fit Russia's narrative). And because they are, in fact, true (Colin Powell really sent those emails), they can't simply be denounced as a lie. It is the most effective form of propaganda, because there's nothing to denounce as a lie.

In hindsight, the hoaxing Chen covered seems like an experiment to see exactly how to goad the U.S. media into covering an event. They seem to have learned their lessons: despite the overwhelming sense from within the U.S. intelligence community that the hacked emails from presidential candidates, generals, and secretaries of state are coming more or less directly from Russian intelligence, there is little reluctance to cover them closely: the veneer of unguarded honesty is irresistible to click-hungry reporters.

Foust is correct, but he also understates the problem. The exploitable weaknesses Foust describes are inherent to the symbiotic relationship between online media financially dependent on outrage clickbait and the people who inhabit increasingly angry and partisan social media networks, a relationship that makes the stoking of online hate mobs disturbingly easy. It doesn't matter if the trigger for online outrage is real or imaginary. As long as there are legions of trolls willing to go to extremes over a bitterly contested social issue, it is a simple, no, elementary matter to provoke them into tearing each other's throats out online. And instead of dispelling the hysteria, established political, media, and academic authorities only fan the flames. The fact that some white supremacist Internet trolls co-opted a popular visual Internet meme called Pepe the Frog was spun (quite possibly by those very same bigoted trolls) into a narrative that the racist Pepe memes were the product of an elaborate, diabolical, and coordinated campaign. But this, as one Twitter personality noted, was actually far from the case.

At no point in this causal chain did the people fanning the flames of outrage bother to do even the most basic research or fact-checking. At no point did anyone ask themselves "am I being played by racist trolls?" That's not a new observation. This basic aversion to fact-checking and investigating the sources of information combined with the features of social media and the online link economy, Ryan Holiday claims, allowed him to routinely punk the news media in his former job as a corporate PR "media manipulator."

Cyberpower has certainly been overpromised. And there is way too much FUD in discussions of all things cybernetic. But there is also a risk to underpromising it. We sit at the threshold of a new era characterized by the ubiquity of adaptive, data-hungry systems and a society that increasingly offloads its collective memory, cognition, and reasoning to computers.

Through sensor ubiquity, the man-machine integration era is upon us. This integration is taking place in distributed, continuously changing, optimizing and learning, finite-precision feedback systems. The security challenges of such systems abound, not least because of constantly co-evolving threat actors and changing environments. Are we adequately preparing to defend such systems, and more importantly, how do we ensure they are worth defending?

I will also add that the very nature of the 'man-machine integration era' suggests that the question of what it means to defend is the biggest problem. What we face is not just hackers hijacking autonomous vehicles or blackmailing people with surveillance data collected by Internet of Things (IoT) gadgets, but the reality that our increasingly informatized identities, culture, society, media, and politics can be easily manipulated by actors who understand how the organization of information networks determines their influence on our beliefs and behaviors. Whether it is Russian subversion of American politics or malicious trolls attempting to provoke divisive social conflicts, the enemy understands how to manipulate us better than we understand how to defend ourselves. Given this state of affairs, we ought to take cyberpower seriously. Because if we don't, it is guaranteed that the other side will.