The Federal Reserve, like the rest of us, operates in an uncertain world. This sounds like an obvious platitude, but the difference between measurable “risk” and unmeasurable “uncertainty” has been key to economic thought at least since Frank Knight and John Maynard Keynes. The pandemic recession and recovery have meant that most recent analysis of “uncertainty” has focused on the economic effects of world events (supply chain snarls, a land war in Europe) or on data quality and measurement error. While both are important, policymakers at the Federal Reserve should bear another key source of uncertainty in mind: model uncertainty.

We use models to make predictions despite that uncertainty, or to find a roadmap for policy. What I would like to propose here is a simple heuristic for evaluating the risk of choosing the wrong model in a policy situation: how large are the costs of being wrong, and how reversible are the effects? Doctors once treated a whole range of diseases with bloodletting, but we now know it is only really effective for relieving gout pain. In every other case, bloodletting simply did harm, and the patients’ excessive blood loss and its complications were the consequence of a bad model of disease.

Our task as economists is to weigh the costs and benefits of each candidate model in just this way. This simple, consequence-oriented test should be a useful heuristic for evaluating model choice under pervasive uncertainty.

Our uncertainty about the economy usually takes one of three forms:

  1. Data uncertainty: there may be measurement errors or quality problems in the data fed into the model. This is a bigger problem for real-time policymakers than for academics, who have the luxury of working with fully revised series.
  2. Parameter uncertainty: even a well-specified model yields uncertain estimates of its parameter values, since the underlying structure of the economy that generates the data is constantly changing. Work on this point exists, but it is often framed as designing policies that remain robust to deviations around the specified model, as in robust control theory (see the sketch after this list).
  3. Model choice uncertainty: the question of which model to use in which situation, with the knowledge that no economic model is designed to encompass the entire economy at high resolution. This is what is often considered the “art” of central banking, while engaging with the prior two points is the “science” of central banking.
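
To make the robust-control idea concrete, here is a minimal sketch in Python. The policies, models, and loss numbers are all hypothetical, not drawn from any Fed model: instead of optimizing against one trusted model, the policymaker scores each policy under every candidate model and picks the one whose worst case is least bad.

```python
# Minimal sketch of robust policy choice under model uncertainty.
# All names and numbers below are hypothetical, for illustration only.

# Welfare loss of each candidate policy under each candidate model.
losses = {
    "hold rates":   {"model A": 2.0, "model B": 5.0},
    "hike sharply": {"model A": 1.0, "model B": 9.0},
}

def worst_case_loss(policy_losses):
    """Loss if the least favorable candidate model turns out to be true."""
    return max(policy_losses.values())

# Robust (min-max) choice: minimize the worst-case loss across models.
robust_choice = min(losses, key=lambda policy: worst_case_loss(losses[policy]))
print(robust_choice)  # -> "hold rates": slightly worse under model A, far safer under B
```

A Bayesian policymaker would instead weight the models by their probabilities; either way, the point is that the policy choice responds to the whole set of plausible models rather than to a single favorite.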

Today, the Fed faces significant uncertainty in model choice. As my colleague Preston Mui has explained at length, some models presented early in the pandemic called for draconian increases in unemployment in order to reduce inflation. These models, presented as straightforward explanations of the link between the labor market and inflation, have performed poorly in explaining the path of inflation, even when fed the relevant realized data.

[Figure: model forecast versus realized median inflation.] Source: Bureau of Labor Statistics, Cleveland Fed, Survey of Professional Forecasters, authors’ calculations. Model forecast uses historical VUR, headline shocks, and SPF 10-year CPI forecasts (the median SPF forecast for 2022 Q4 is assigned to November, and the expectations for December are linearly extrapolated). I use the authors’ estimates of the “core inflation gap” equation and add the estimated gap to actual SPF inflation expectations to obtain a model forecast for median inflation.
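
In symbols, the construction just described amounts to the following (notation mine, with $\pi^{\text{SPF}}_t$ the SPF inflation expectation and $\hat{g}_t$ the estimated core inflation gap):

$$\hat{\pi}^{\text{model}}_t = \pi^{\text{SPF}}_t + \hat{g}_t$$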

Why should we follow through on a model’s recommendation to resolve inflation through dramatic increases in unemployment if we are highly uncertain about how that model will perform in the near term? In theory, economists can add further epicycles or degrees of freedom to models like the ones discussed above, in order to capture effects that are visible in the economy but not in the model. In practice, however, economists need to keep a realistic view of the trade-offs involved in acting on a given model’s recommendations.
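
To see how stark such recommendations can get, consider a stylized, deliberately rigid Phillips curve; the numbers below are illustrative, not taken from any of the models discussed above.

```python
# Stylized Phillips-curve arithmetic; all numbers are hypothetical.

def required_unemployment(u_star, pi, pi_target, slope):
    """Unemployment rate that solves pi_target = pi - slope * (u - u_star)."""
    return u_star + (pi - pi_target) / slope

# Hypothetically: 4 points of excess inflation and a flat curve (slope 0.5)
# imply an 8-point unemployment gap -- a "draconian" recommendation whose
# cost should weigh on our willingness to trust the model behind it.
print(required_unemployment(u_star=4.0, pi=6.0, pi_target=2.0, slope=0.5))  # -> 12.0
```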

Now, this is not an argument against models per se. Rather, it is an argument that our willingness to act on the output of models should be tempered by the scale of the impact that following those models’ recommendations would have on the economy as a whole, and by how difficult the effects would be to reverse if we turn out to be using the wrong model. If a model tells you to jump off a bridge, the evidentiary threshold for using that model had better be high.
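
A back-of-the-envelope version of that evidentiary threshold (all numbers hypothetical): act on a recommendation only when the confidence-weighted gain exceeds the confidence-weighted, hard-to-reverse cost of being wrong.

```python
# Toy evidentiary-threshold calculation; all numbers are hypothetical.

def should_act(p_model_correct, gain_if_right, loss_if_wrong):
    """Act only if the expected gain exceeds the expected loss from being wrong."""
    return p_model_correct * gain_if_right > (1 - p_model_correct) * loss_if_wrong

# A mild, reversible intervention clears the bar at modest confidence...
print(should_act(0.6, gain_if_right=1.0, loss_if_wrong=1.0))   # True
# ...but a drastic, hard-to-reverse one (a recession) does not.
print(should_act(0.6, gain_if_right=1.0, loss_if_wrong=10.0))  # False
```

The asymmetry is the whole point: the harder a model’s consequences are to reverse, the more confidence in that model the same recommendation demands.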

We know that once unemployment begins to rise, that rise is extremely difficult to reverse. As the past three recessions have shown, it takes substantial time, interest rate cuts, and fiscal spending to end a downturn, regain lost jobs, and preserve industrial capacity. We also know that monetary policy can halt inflation at extreme cost to the labor market. Are we at a point where that extreme cost is justified?

If a model makes extreme recommendations, policy analysis should take that fact into account. This is not a question that can really be resolved on the grounds of science or epistemology; it is a question of rational statecraft under uncertainty.

Obviously this consequentialism isn’t an excuse for descriptive inaccuracy. If a rigid Phillips Curve were as empirically valid as the law of universal gravitation, there would be no model uncertainty to deal with. But given that even the most ardent proponents of a rigid Phillips Curve relationship demand generous and charitable interpretation, it is worth asking why we would not apply similar charity and generosity to causal views with comparable empirical validity but more benign consequences.

Given the scale of uncertainty around model selection and the identifiable welfare costs and consequences associated with a specific model’s prescriptions, it is hard to see the case for targeting recessionary increases in unemployment. Engineering the worst-case scenario (especially when its effects are visited on other people!) is hardly the mark of seriousness that some seem to think it is.