How well do models model? Do woodchucks chuck wood?

It’s been only three weeks since I started my internship, but so much has happened. We moved offices (I sit at my own desk now!), and I participated in my first “home week,” an office tradition that brings staff from all over the country to work and hang out together. CCR has a great team dynamic, and it’s been nice being a part of that.

Figure 1: Volume of published literature on each negative emissions technology (NET). Source: Minx et al. 2018

The upheaval isn’t limited to our office: the last few weeks have produced milestones for the field and, with them, public debate about the future (and past) of carbon removal. At the world’s first international conference on negative emissions, a group of researchers presented their synthesis of all of the carbon removal literature to date (you can find the three articles here: http://www.co2removal.org/). Barely a week later, the team from Carbon Engineering, a Canadian company running some of the world’s only direct air capture plants, released on-the-ground cost estimates for the technology. They caused a stir, to say the least. At a glance, and certainly to me at first, these developments may seem unrelated. But underpinning them both is something that has become a blessing and a curse for the carbon removal field: integrated assessment models. The new findings could reshape our path to limiting global warming to 2°C and turn existing model results upside down.

In light of these developments, I find myself asking fundamental questions about the role and purpose of integrated assessment models.

We know very well that reality is infinitely complex, and it’s nearly impossible to model every single interaction in a system. Models are not, and do not represent, reality itself. But modelers do their best, simulating cycles and feedbacks with complex equations and the utmost rigor. For example, we could ask an integrated assessment model what it takes to keep warming below two degrees Celsius with a certain probability. Given a set of inputs and assumptions, the model generates different portfolios of options for reaching that goal: reduced emissions, negative emissions, or a combination of both. But we’d then be under the impression that the goal is feasible; after all, the model said we can curb climate change…if we take certain steps under various assumptions. It is exactly this framing that scholars argue against: it provides a sense of security that borders on over-optimism.
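To make this concrete, here’s a minimal sketch, in Python, of the kind of scenario exploration described above. It is purely illustrative, not a real integrated assessment model: every parameter value (the warming-per-ton coefficient, baseline emissions, the candidate portfolios) is an invented assumption, and a real IAM would couple energy, land-use, and economic modules with far richer dynamics.

```python
# A toy "integrated assessment" sketch: purely illustrative, not a real IAM.
# All numbers below are invented assumptions, not published estimates.

import itertools

TCRE = 0.00045           # assumed warming per GtCO2 of cumulative emissions (°C/GtCO2)
BASELINE_EMISSIONS = 40.0  # assumed baseline annual emissions, GtCO2/yr
YEARS = 80               # planning horizon (roughly 2020-2100)
WARMING_SO_FAR = 1.1     # assumed warming already realized, °C
TARGET = 2.0             # warming limit, °C

def end_of_horizon_warming(mitigation_rate, removal_gt_per_yr):
    """Warming at the end of the horizon under a constant fractional
    emissions cut and a constant rate of negative emissions (toy assumptions)."""
    annual_net = BASELINE_EMISSIONS * (1 - mitigation_rate) - removal_gt_per_yr
    cumulative = max(annual_net, 0.0) * YEARS  # ignore net-negative cooling
    return WARMING_SO_FAR + TCRE * cumulative

# Enumerate simple "portfolios" of reduced vs. negative emissions.
for cut, removal in itertools.product([0.2, 0.5, 0.8], [0, 5, 10]):
    warming = end_of_horizon_warming(cut, removal)
    verdict = "meets" if warming <= TARGET else "misses"
    print(f"cut {cut:.0%}, removal {removal:2d} GtCO2/yr -> "
          f"{warming:.2f}°C ({verdict} 2°C)")
```

Notice that whether a portfolio “meets” the target depends entirely on the assumed parameters; nudge the warming coefficient or the baseline and the verdicts flip. That sensitivity is precisely the over-optimism concern raised above.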

Figure 2: Temperature anomaly as a function of anthropogenic emissions. Check out the uncertainty! Source: http://www.climatecentral.org/blogs/the-5-scariest-charts-from-the-ipcc-climate-report-16529

But are the steps toward this end goal realistic? How valid are our assumptions? How well can we actually quantify these systems and their interactions—do I take the results with a grain of salt, or a tub full of it?

These are all pointed and well-reasoned critiques of how models are used. However, I think one point gets lost amid this debate: we are not constrained to models alone. In his call for methodological pluralism, Richard Norgaard puts it eloquently: “[b]roader, less well defined questions can only be pursued through multiple, overlapping analyses, extensive discussion between diverse experts and the people directly affected, and judgment” (53). Other forms of analysis can supplement the information we get from models. For example, Robert Pindyck’s approach to calculating the social cost of carbon complements integrated assessment models.

To modelers and those entrenched in debates about the accuracy of models, I ask these questions instead: how can the information obtained from models best be used? How can we improve the rigor of the results? What other methods complement the use of models?

I think we are at fault if we reject results simply because they came from models. But it is just as flawed to regard them as infallible truths. Climate change is a problem rife with uncertainty. We must embrace this “loading of [the] climate dice” pluralistically. Betting on a single method is a repeated game of chance: we are bound to lose.

----------

Norgaard, R. B. (1989). The Case for Methodological Pluralism. Ecological Economics 1(1), 37-57. DOI: 10.1016/0921-8009(89)90023-2

Minx, J. C., Lamb, W. F., Callaghan, M. W., Fuss, S., Hilaire, J., Creutzig, F., et al. (2018). Negative Emissions—Part 1: Research Landscape and Synthesis. Environmental Research Letters 13(6), 063001. DOI: 10.1088/1748-9326/aabf9b