Enormous amounts of time, energy, and resources go into predicting things.
- Who will win elections?
- Who will buy the product?
- When will war break out?
Much of this obsession with prediction is justified. We want foresight about outcomes that matter to us, so that we are not caught unprepared. Even imperfect predictions are better than no predictions, and over time we can improve our predictions even when they fail. But I am going to say something heretical: prediction is overrated. Or, put more carefully: as the prediction-action process grows more complex (often in step with the systemic importance of what is being predicted), prediction itself becomes far less singularly important as a component of that overall process.
We often hear that humans are bad at predictions, but there are some kinds of predictions we do extremely well. For example, how do baseball players hit fastballs?
The one feat even more difficult than throwing a fastball, though, might be hitting one. There’s a 100 millisecond delay between the moment your eyes see an object and the moment your brain registers it. As a result, when a batter sees a fastball flying by at 100 mph, it’s already moved an additional 12.5 feet by the time his or her brain has actually registered its location. How, then, do batters ever manage to make contact with 100 mph fastballs—or, for that matter, 75 mph change-ups?… [Researchers] found that the brain is capable of effectively “pushing” objects forward along their trajectory from the moment it first sees them, simulating their path based on their direction and speed and allowing us to unconsciously project where they’ll be a moment later.
This is both very incredible and very banal. Given how fast the object is traveling and the gap between visual detection and neural registration, it is astounding that our brains allow us to anticipate the ball’s future location well enough to hit it. At the same time, it is the kind of thing one does without even thinking about it. Humans do lots of things like this both effectively and non-consciously. Note what these feats usually have in common: a very thin gap between prediction and action. You predict the future location of the ball and then you hit it with the bat. If you can act on a prediction quickly and without impediments or complications, the value of a correct prediction to you is quite high.
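To make the arithmetic concrete, here is a minimal dead-reckoning sketch of the kind of extrapolation described above. This is my own construction, not from the cited research; the function and constant names are invented for illustration. A constant-velocity estimate for a 100 mph pitch over 100 ms comes out near 14.7 feet; the 12.5 feet quoted above presumably reflects slightly different assumptions, such as the ball slowing in flight.

```python
# Hypothetical dead-reckoning sketch: predict where the ball will be after
# the brain's ~100 ms visual processing delay, assuming constant velocity.

MPH_TO_FT_PER_S = 5280 / 3600  # 1 mph ~ 1.467 ft/s

def extrapolate(position_ft: float, speed_mph: float, delay_s: float = 0.1) -> float:
    """Project the ball forward along its trajectory for delay_s seconds."""
    return position_ft + speed_mph * MPH_TO_FT_PER_S * delay_s

# Distance a 100 mph pitch covers during the visual delay alone.
print(f"{extrapolate(0.0, 100.0):.1f} ft")  # -> 14.7 ft
```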
A slightly more complicated example is Norbert Wiener’s work on World War II anti-aircraft gun controllers:
During World War II, Wiener received a government contract to help build a system that improved the accuracy of antiaircraft guns by predicting the future locations of aerial targets. Wiener envisioned a target’s flight path as a series of discrete measurements. Since airplanes don’t leap about the sky randomly, each new measurement is in some way correlated with the one that immediately preceded it, and, to a somewhat lesser degree, with the one preceding that, and so on, until you reach so far back in time that you come to measurements that have nothing to do with the target’s current position. Previous measurements thus offer some clues to future measurements; the trick is determining how much weight to give each of the previous measurements in calculating the next one. “What you want to do is minimize, in Wiener’s case, the mean square error in the prediction,” says Alan Oppenheim, Ford Professor of Engineering and head of MIT’s Digital Signal Processing Group. “That starts to get into mathematics, and then that starts to give you optimum weights. Getting those weights correct is what Wiener was doing.”
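To give a flavor of the math Oppenheim describes, here is a minimal sketch in the spirit of Wiener’s linear predictor rather than his actual design: estimate the weights that minimize mean squared prediction error over past measurements via ordinary least squares, then predict the next measurement as a weighted sum of the most recent ones. The track data, window size, and function names are invented for illustration.

```python
import numpy as np

def fit_prediction_weights(track, k):
    """Find weights w minimizing the mean squared error of predicting each
    measurement from the k measurements that immediately preceded it."""
    X = np.array([track[i:i + k] for i in range(len(track) - k)])
    y = track[k:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def predict_next(track, w):
    """Predict the next measurement as a weighted sum of the last k."""
    return np.dot(track[-len(w):], w)

# Invented 1-D target track: smooth motion plus measurement noise.
rng = np.random.default_rng(0)
t = np.arange(200.0)
track = 0.5 * t + 10 * np.sin(t / 15) + rng.normal(0, 0.5, t.size)

w = fit_prediction_weights(track, k=4)
print("optimum weights:", np.round(w, 3))
print("predicted next position:", round(predict_next(track, w), 2))
```

In Wiener’s setting, the same error-minimizing weights would feed directly into the gun controller, which is what makes the prediction-action loop so tight.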
In an idealized sense, the controller uses feedback from the environment to correct itself, minimizing the error between the predicted future position of the target and its actual position. Wiener never really got around to building it, in part because his anticipated designs were more experimental and aestheticized than practical. But the idea itself has stuck because it illustrates something important about the role of iterative error-correcting feedback in many domains. Now compare both examples – the baseball hitter and Wiener’s AA gun controller – to a very different scenario, different in nearly every respect.
A cabinet-level national security official receives predictive warning of an impending terrorist plot against the state. The ultimate sources of the warning are hidden from her to protect operatives in the field. They provide relatively specific, but not necessarily conclusive or actionable, indicators of when, where, and how the blow will fall. While her advisors present her with a range of possible responses, she cannot act alone. Decisive action will require the support of other political stakeholders in the cabinet – including the President – and will be complicated to implement quickly. And even if the prediction is correct, she could nonetheless be dismissed from her position for loosely related or unrelated reasons.
It is not that prediction here is useless. Clearly, national security justifies the existence of a large apparatus that collects information of varying types, synthesizes raw observations into composite intelligence products, and then utilizes them to make predictions of future dangers. But there are many important differences between this scenario and the prior two examples. Note that in the first two examples, predictions can be immediately and seamlessly translated into short, simple actions with clearly observable results. You either hit the ball or you miss it. You either hit the airplane or you miss it. Additionally, prediction and action occur without explicit deliberation and debate. In the case of the baseball hitter, there is no conscious awareness that the process is happening at all.
Similarly, in the anti-aircraft example, Wiener intended for the system to be fully automated. Its actual workings rely on explicitly formalized mathematics and technical designs, but in action it is not a deliberative process – after all, no humans would be involved. So we end up with a fascinating conundrum: prediction is probably most valuable to us when we are not aware we are doing it. The more conscious we are that we are making predictions, the more complicated acting on them may become. This is trivially but consequentially true whenever there is a separation of responsibilities between (a) the agent(s) generating the prediction, (b) the agent(s) responsible for choosing to act on the prediction or ignore it, (c) the agent(s) responsible for deciding how to act, and (d) the agent(s) responsible for implementing the chosen response to the prediction.
Also observe how in the first two examples all of these agents were contained within either the body of the baseball hitter or the technical apparatus of the anti-aircraft gun controller. Now they are distributed across at least one organization, and often several. In theory, something akin to an evolutionary process will select for organizational structures capable of integrating these functions effectively and weed out organizations that cannot. In practice, this is more complicated. Even granting that, all things being equal, organizations that more effectively predict the future course of the external environment and act according to those predictions fare better than those that do not, we have to be more specific about the grounds for that claim. After all, many specific details can get in the way.
On what time scale? Maybe in the grand scope of human history, but not necessarily in a time frame relevant to your particular problems. What is the selection process? Perhaps, in the near term, at least some organizations are insulated from the costs of poor predictions and/or inappropriate actions. Finally, is effective prediction and action really what is being optimized for?

There is another equally difficult issue to resolve beyond the prediction-action gap. Ideally, any prediction ought to result in an action that changes the state of the environment. The baseball hitter predicts the future course of the ball such that he can hit it. But to the extent that the predictioneer has to include themselves as a component of their own model, the utility of prediction becomes progressively less obvious.
In certain cases, simply making a prediction can influence the outcome it seeks to predict. This may happen because the target, aware of the prediction function, alters their behavior to “game” the system to their advantage. Or it can be far more banal. In a personalized recommendation system, items are recommended to a user based on their past behavior. However, the user behavior that serves as an input to the recommender may in turn be a consequence of earlier recommendations. Similarly, election forecasts may depress turnout and thus change the actual electoral results being predicted. Understanding this “performative” quality of predictions is still more of an art than a science, though it is well known, at least informally, to students of public policy.
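A toy simulation makes the recommender feedback loop visible. This is entirely my own construction, not drawn from any real system, and every number in it is invented: the system recommends whatever the user has clicked most, exposure nudges the user’s preferences toward whatever is shown, and the “behavior” the system predicts from becomes increasingly its own echo.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items = 5
prefs = np.full(n_items, 1.0 / n_items)  # user starts indifferent
clicks = np.ones(n_items)                # smoothed click counts

for _ in range(500):
    shown = int(np.argmax(clicks))          # recommend the most-clicked item
    prefs[shown] += 0.01                    # exposure shifts preference toward it
    prefs /= prefs.sum()
    clicked = rng.choice(n_items, p=prefs)  # user clicks per shifted preferences
    clicks[clicked] += 1

print("final preferences:", np.round(prefs, 2))
# Early random noise in the click counts gets amplified: the recommender's
# input (user behavior) is partly a consequence of its own past output.
```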
All of this does not render prediction irrelevant; it just seriously demotes it. The greater the systemic importance of the predicted outcome, the more complicated it will be to cross the bridge between prediction and action. And the more complex the scenario being predicted, the greater the potential for things like performative prediction to become serious problems for the predictioneer. In some cases, predictions become totally irrelevant to an outcome simply because the decision agents are not in a position to act. In those situations, getting them into a position where they can act may be far more important than the prediction itself.
The movie Jaws, referenced in a more topical post, is a trivial fictional example with significant real-world import. The Mayor is presented with what the audience knows to be a very solid prediction – if he does not act, people will get eaten by a giant shark. But closing down the beach would cause economic devastation to his small beach community, so naturally he finds every excuse to avoid action. We scoff at the Mayor, but he faces all of the costs of performing the correct action – closing down the beach – without necessarily receiving the benefits: political credit for averting a disaster that, precisely because of his action, never occurred.
In conclusion, it is rather striking that most criticisms of the prediction obsession focus on the methodological grounds for prediction. Critics doubt whether efficient prediction of complex systems is possible and deride predictioneers as would-be oracles. This may or may not be a legitimate criticism, but it takes for granted the singular importance of predictions in and of themselves. Religious texts and folk parables are full of prophets who accurately foresaw the future but found themselves cursed rather than blessed by their foresight. I would not necessarily go so far as to say that predictioneers are cursed. But I am not sure they are blessed either.