Something that was a small segment of one of my prior posts –
It is an exaggeration to say that fringe weirdos on social media were often better informed than people who exclusively evaluated mainstream sources, but not as much of one as most would think. And that is not accidental. As Ben Thompson noted, the global COVID-19 response depended on an enormous amount of information developed and shared often in defiance of traditional media (which underrated and even mocked concern about the crisis) and even the Centers for Disease Control and Prevention (which attempted to suppress the critical Seattle Flu Study). The response still depends primarily on transnational networks and often must operate around rather than through official channels.
– is now one of the major narrative fracture points in the debate over COVID-19. Not specifically my characterization of it – I’m just one of many people yapping right now – but rather the underlying issue of whether or not the “fringe weirdos” really got it right and whether or not the “mainstream” really got it wrong. There is certainly steep criticism of the latter in my posts from this month. But I issued qualified praise for the former, not going as far as some people I respect greatly even if I certainly agree with the general thrust of their arguments.
The underlying question itself is probably not resolvable. There is nothing mysterious about it, to be sure. Everyone can develop informed opinions. However, beyond this the problem becomes more convoluted. If you are making comparisons, you need to be explicit about defining and defending your answer to this question: what counts as a fair evaluation, and what can and cannot be used as a part of it? And more specifically (note that some of these overlap and this isn’t an exhaustive list):
What particular people and entities are specifically being evaluated as sources of information?
What specific metrics of performance should be used in evaluating the respective sources?
What information outside this particular event, if any, is admissible for evaluation?
What kinds of counterfactual outcomes are a part of the evaluation?
What rewards and penalties should be issued as a consequence of evaluation?
And therein lies the problem. It is unlikely that people with strongly held differing views can agree on responses to all of these bullets. It would be surprising to see firm agreement on even two of them. This post offers insight about one of the reasons why: the slipperiness of terminology that is evolving in real time, as we speak.
What did “flatten the curve” mean? Did it mean that steady, individual-level non-pharmaceutical interventions would be enough to save hospitals from overload? Some people have interpreted the memetic GIFs that way, and critiqued them on that basis. But remember, #FlattenTheCurve went viral back when fretting about “coronavirus panic” was a mainstream thing, when people actually needed to be talked into social distancing. The most viral of the GIFs does not contrast “flattening” with some other, more severe strategy; it contrasts it with nothing. Its bad-guy Goofus character, the foil who must be educated into flattening, says: “Whatever, it’s just like a cold or flu.”
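For what it is worth, the epidemiological claim underneath the meme is simple enough to sketch. The following is a minimal SIR toy model with made-up parameters (not a forecast, and not any official model): cutting the transmission rate spreads the same epidemic over more time and lowers the peak fraction of the population infected at once, which is the load hospitals care about.

```python
# Toy SIR model (hypothetical parameters) illustrating "flatten the curve":
# a lower transmission rate reduces the peak of simultaneous infections.

def peak_infected(beta, gamma=0.1, days=400, dt=1.0):
    """Return the peak fraction of the population infected at once."""
    s, i, r = 0.999, 0.001, 0.0   # susceptible, infected, recovered fractions
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # new infections this step
        new_rec = gamma * i * dt      # recoveries this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        peak = max(peak, i)
    return peak

baseline = peak_infected(beta=0.4)    # no distancing
flattened = peak_infected(beta=0.2)   # distancing halves transmission
print(baseline, flattened)
```

In this toy setup the flattened peak comes out well under half the baseline peak. Note what the sketch does and does not say: it shows that flattening lowers the peak, but it says nothing about whether individual-level measures alone can achieve the lower beta, which is exactly the ambiguity the meme's critics seized on.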
One of the cruelest paradoxes that “flatten the curve” illustrates is that the claims you most want to evaluate are probably least amenable to being put into a form that can be evaluated. The post discusses with great erudition the problem of when a scientific concept becomes a meme, received very differently by disparate audiences and transmitted at lightning speed.
And this is to say nothing of the enormous difficulties in getting people that distrust each other immensely to submit to a shared mechanism of evaluation. Arguments like this often become proxies for unresolvable arguments about whether social status and resources have been correctly allocated, and/or for larger ideological, cultural, and personal feuds. It’s part of why predictions are overrated. Not because they are useless or there is some kind of aura of mystery that prevents us from making and judging them. But because the predictions that are most important to us tend to defy the most straightforward mechanisms of formulation and evaluation.
This is also perhaps why proposed remedies to great failures to control systemic problems also take on a primarily polemical character. Proposing more “skin in the game” is fine as an expression of discontent but it is unclear whether or not it would be a socially beneficial policy. Skin in the game is a demand that the interests of principals and agents become more closely aligned. But it is difficult to find nontrivial cases in which they are, and the costs of attempting to forcibly create alignment may not be worth the benefits relative to other schemes for so-called ‘public morality’ in society. Polemic emerges in lieu of a clear resolution to something that people (justifiably) have impassioned feelings about.
One possible answer to this is to insist on rigor and try to aim for some neutral and objective standard. Epistemic hygiene movements tend to fixate on this, fighting the natural tendency of people to avoid arguing over discrete and tangible things and to differ vigorously about the basic parameters of the discussion at hand. But this seems like a lost cause. After all, if it were possible to get people to hold to these common constraints much of this would be far easier. Hence, while withering in my criticism of the ‘expert system’ mode of information regulation, I did not offer unqualified praise for the self-styled ‘weirdos’ that oppose it. As I noted in the prior post I linked, it is likely unwise to embrace a ‘Year Zero’ approach.
The past few posts have discussed the decay and decline of a mechanism for information regulation fundamentally rooted in the assumption that modern societies are best managed by closed and exclusive bureaucratic entities and/or communities of practice tasked with the management of specialized information. The emergence of the Internet and the growth of large computational platforms has, in this view, opened a Pandora’s Box of junk information that threatens the viability of these entities and by extension common perceptions of shared reality. This is the base of the counter-disinformation approach I have criticized, at least in crude outline.
Yet as Ben Thompson argued, this viewpoint underrates both the explosion of useful information and its utility as a hedge against self-regulation problems by legacy institutions. One of the major criticisms that many have voiced online about legacy institutions’ responses to COVID-19 is the manner in which fear of looking crazy or empowering “bad faith actors” stifled recognition of uncertainty and danger. To paraphrase George Orwell, better to be crazy than barbarous. Hence, indifferent to social pressures and already expected to be crazy, the self-styled weirdos were free to sound the alarm and take preventive action.
In the most charitable interpretation of this analysis, how did the weirdos sort good information from bad? Sonya Mann linked to an interesting discussion of this on Reddit that was remarkably frank about the manner in which such an argument must inevitably be a form of special pleading. NB: the term ‘weaponized autism’ is a piece of memelord lingo.
The ‘weaponized autism’ of all these internet people has dramatically outperformed our official institutions. It has done this, mostly, by just massively signal boosting relevant signals. It’s not even really a failure of our institutions, it’s just, like, (to use an analogy) we’re here using the newfangled telegraph to communicate things in real time, and our institutions were built during an era of courier’d mail. It’s not that they’re even doing anything badly, it’s just that we are moving faster than them because of fundamental differences in our toolsets. … Of course there’s a million idiots on twitter and reddit. But that’s why we ignore them. I know this is kind of a special pleading argument, and it basically reduces down to “if you only pay attention to the good sources, the sources are good”, but it really is like that. There’s a sort of background evolutionary pressure going on where there’s enough smart people looking at these things, from enough different perspectives, that relevant information gets surfaced pretty quickly, and bullshit either gets suppressed, or we all recognize it as BS and mock it.
This comment invokes a ‘background evolutionary pressure’ but it is worth noting that this pressure was not operative during the Boston Manhunt or the Pizzagate fiasco. The poster goes on to note that one consequence of being burned by establishment sources during COVID-19 is, as a safety measure, an across-the-board increase in distrust of official messaging. The justification is relatively sound: if I am bamboozled on one thing, how do I know I am not being bamboozled on another? But credulity is probably something that gets redistributed rather than necessarily being replaced by skepticism.
One of the other interesting points made in the other aforementioned post on COVID-19 information transmission and the flatten the curve meme is how the social networks of particular figures in social media communities became ersatz trust networks. The poster noted that Scott Alexander – a popular blogger – was in a sense recreating the original problem with trust in legacy institutions:
[It feels] worryingly like an “information cascade” – a situation where an opinion seems increasing credible as more and more people take that opinion partially on faith from other individually credible people, and thus spread it to those who find them credible in turn. Scott puts some weight on these opinions on the basis of trust – i.e. not 100% from his independent vetting of their quality, but also to some extent from an outside view, because these people are “smart,” “actually worried.” Likelier to be right than baseline, as a personal attribute. So now these opinions get boosted to a much larger audience, who will take them again partially on trust. After all, Scott Alexander trusts it, and he’s definitely smart and worried and keeping up with the news better than many of us… But it is a bad thing when that trust spreads in a cascade, to your “smartest” friends, to the bloggers who are everyone’s smartest friends, to the levers of power – all on the basis of what is (in every individual transmission step) a tiny bit of evidence, a glimmer of what might be correctness rising above pure fog and static. We would all take 51% accuracy over a coin flip – and thus, that which is accurate 51% of the time becomes orthodoxy within a week.
Thus, the ‘background evolutionary pressure’ may in fact be a form of bottom-up social bias that merely recapitulates the top-down social bias alluded to earlier, albeit at warp speed. Being outside the orthodoxy is a form of credibility, but it is derived from a social signal that in turn may create its own orthodoxy.
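The cascade dynamic in the quoted passage has a well-known formalization in the economics literature on herding, and it is easy to simulate. The sketch below is a toy version of the standard sequential-cascade setup (my own illustrative construction, not anything from the quoted post): each agent receives a private signal that is right only 51% of the time, but also sees every prior public choice. Once the public record leans two choices to one side, it outweighs any single private signal, and everyone afterward herds.

```python
import random

def run_cascade(n_agents=1000, signal_acc=0.51, truth=1, seed=0):
    """Simulate a sequential information cascade.

    Each agent gets a private signal matching `truth` with probability
    `signal_acc`, and observes all prior public choices. With equally
    strong signals, a public lead of two or more outweighs the agent's
    own signal, so the agent herds; otherwise they follow their signal.
    """
    rng = random.Random(seed)
    choices = []
    for _ in range(n_agents):
        signal = truth if rng.random() < signal_acc else 1 - truth
        lead = sum(1 if c == 1 else -1 for c in choices)
        if lead >= 2:
            choices.append(1)        # cascade: copy the majority
        elif lead <= -2:
            choices.append(0)        # cascade in the other direction
        else:
            choices.append(signal)   # no cascade yet: use own signal
    return choices
```

In runs of this toy model, a cascade typically locks in within the first handful of agents, after which private signals stop mattering entirely. And with signals only barely better than a coin flip, the locked-in answer is wrong nearly as often as it is right: exactly the “51% accuracy becomes orthodoxy within a week” worry, in miniature.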
Some of this may simply be unavoidable. It is rare that one has all of the information necessary for full evaluation of an important choice, which is part of why people discuss ‘belief in science’ or ‘faith in science’ even though belief and faith are antithetical to the idealized practice of a working scientist. The goal of the scientist is to reduce error, in part by eschewing faith and demanding logic and evidence. But one way to start a fight in philosophy of science circles is to mention the word “falsification” and then duck for cover. And in any event, common dependence on systems too large for individuals to fully ascertain produces faith-like dependence on expert systems.
Because of the totality of this dependence, being seriously burned once is cause for overarching loss of faith. After all, if you are dependent and cannot necessarily re-negotiate the terms of the arrangement, you are forced into a defensive crouch in an effort to minimize your vulnerability and incentivize the counterparty to restore the grounds for your faith. But the faith itself seems to be a constant and is simply transferred elsewhere. Unfortunately, this may end up producing a similar sense of disillusionment because it is unclear whether any entity – insider or outsider – is really worthy of it. If one’s object of longing can never reciprocate, one is destined to go through life perpetually heartbroken.
Perhaps because of this dilemma trust seems to be on the way out. The globalized world of free-flowing people, material, and information created the COVID-19 crisis, and the world that emerges once the pandemic is through may very well be far less trusting as a consequence. And what comes after trust? Cryptocurrency suggests both the promises and costs of an (idealized) trustless world, as David Auerbach observed:
The impersonal trust relationships that fuel regular, state-backed currencies nonetheless signify some sort of bond between currency holders and their surrounding polities and citizenries. The surplus costs of transacting with fiat money like the dollar are paid to middlemen like creditors, and governments, and, as onerous as we may find these institutions, they can represent a form of community belonging and institutional obligation. On the other hand, the surplus mining costs of bitcoin—the computational expenditures required simply to process transactions—do not represent any personal or human bonds whatsoever, positive or negative… Any cryptocurrency must broadly reach agreement or consensus in the absence of any explicit community or trust relationship. But as Simmel (and others since) have stressed, the absence of trust demands a premium, and bitcoin’s transaction premiums are notoriously high: a guarantee of indisputably performed useless computation. Via Simmel…this is less a design flaw than an inevitability. Bitcoin emancipated itself from any possible societal or governmental stricture at the cost of an expensive but wholly mechanical autonomous trust network. Instead of the community of humans, bitcoin embraces the community of machines.
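Auerbach’s phrase “indisputably performed useless computation” can be made concrete. Below is a toy hashcash-style proof-of-work sketch (a simplification for illustration, not Bitcoin’s actual protocol): finding a valid nonce requires many hash attempts, but anyone can verify the result with a single hash. The work proves effort was spent while carrying no human meaning whatsoever, which is precisely the trade Auerbach describes.

```python
import hashlib

def mine(payload: bytes, difficulty: int = 16) -> int:
    """Search for a nonce whose SHA-256 hash of payload+nonce falls
    below a target -- costly to find, trivial for anyone to check."""
    target = 1 << (256 - difficulty)
    nonce = 0
    while True:
        digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(payload: bytes, nonce: int, difficulty: int = 16) -> bool:
    """Check a claimed proof of work with a single hash computation."""
    digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty))
```

The asymmetry is the point: raising `difficulty` by one bit doubles the expected mining cost while verification stays constant. The “trust premium” Simmel would predict is paid entirely in that mining loop.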
It is probably unlikely that a ‘community of machines’ can substitute wholly for human trust networks. One of the primary reasons is that machines cannot exist in the world without humans as mediating agents that structure the way in which machines both receive and transmit information. The French philosopher Gilbert Simondon argued quite vigorously against what he called the “myth” of a self-regulating machine. What is the role of the human in all of this? At the level of an “ensemble” of technical objects aggregated together, humans are what link machines to their environments and determine how they receive and transmit information.
[M]an’s nature is that of the inventor of technical and living objects capable of resolving problems of compatibility between machines within an ensemble; he coordinates and organizes their mutual relation at the level of machines, between machines; more than simply governing them, he renders them compatible, he is the agent and translator of information from machine to machine, intervening within the margin of indeterminacy harbored by the open machine’s way of functioning, which is capable of receiving information. Man constructs the signification of the exchanges of information between machines. The inadequate rapport of man and the technical object must therefore be grasped as a coupling between the living and the non-living.
I have discussed this in a much less abstract context in a prior post. And it is part of why I do not contend that the COVID-19 crisis retroactively validates the most optimistic techno-utopian projections. It certainly refutes some tech critics and establishes the critical importance of the technological vision today for our common survival. But what we have learned about the COVID-19 crisis does not make a world in which Simondon’s functions can be painlessly automated any more likely than it was prior to it. Which leaves us with the rather thorny problem of defining a more flexible system of trust than the one we currently have.
In my prior posts, I have mostly operated in a combination of diagnostic and polemic modes. Diagnostic, in the sense of attempting to outline and describe significant collective failures. And polemic, in the sense of expressing outrage and scorn about those failures. There are going to be many, many, many more arguments in this vein published elsewhere, some of which have been quite thoughtful and profound. Others have been much more vituperative, appropriately or not. Having contributed a tiny part to this emerging body of literature, I believe it is time to start thinking about how to move forward from this dismal low.
Forward-looking conversation can contribute several things that we are lacking at the moment. First, evaluating proposed remedies can be a way of getting – via the backdoor – some underlying information about evaluation that is generally elusive when trying to revisit a past systemic failure. When people propose their desired alternatives, they inevitably also reveal something about how they believe performance should be judged that might not otherwise be accessible via their speech and behavior. Additionally, forward-looking conversation is more honest about a key element of assigning past blame: hypothetical outcomes. “If we had done X” is a counterfactual assessment, inevitable as it may be. “In the future we can do Y” is a similar hypothetical but is likely easier to debate. Future posts will likely focus on pieces of this underlying puzzle rather than a past that is already getting submerged into the mire of contestation.