In this week’s episode, host Daniel Raimi talks with Pennsylvania State University Professor Wei Peng, who recently published a study in the journal Nature about improving climate policy models, coauthored alongside Valentina Bosetti of the RFF-CMCC European Institute on Economics and the Environment and other scholars. Peng describes the basics of integrated assessment models (IAMs), which often use complex modeling techniques to predict future climate trends, and contends that many of these models are difficult to understand or detached from political realities. Going forward, Peng recommends that modelers engage with policymakers earlier in the model design process to ensure that climate models can be used to guide decisions in the real world.
Listen to the Podcast
- Applying complex integrated assessment models: “[IAMs] try to provide detailed representations of sectors and technologies. For example, [they can predict] what kind of power plants we’re going to build, or the kind of cars people are going to drive, the kind of buildings we’re going to live in, and the kind of food we’re going to consume—plus, what kind of agricultural activities we’re going to have to use in order to meet that food demand.” (7:18)
- Making models that meet policymakers’ needs: “We want to spend our money and time on the questions that are actually important to real-world decisionmakers, not just to write another paper that can be published in an academic journal. It’s important to engage with the potential model users from the very beginning. We should ask them what questions they really care about and develop models accordingly. We should not just spend five years making model improvements … and then tell policymakers, ‘Hey, this is what we made. Take these and try to make use of them.’” (17:10)
- Local political support matters: “When we think about climate leaders like California and New York, they aren’t taking the lead on this because it’s cheaper to mitigate climate change in these places, but because they have stronger local political support. It’s actually easier politically to promote mitigation in those places.” (24:49)
Top of the Stack
- “Climate policy models need to get real about people—here’s how” by Wei Peng, Gokul Iyer, Valentina Bosetti, Vaibhav Chaturvedi, James Edmonds, Allen A. Fawcett, Stéphane Hallegatte, David G. Victor, Detlef van Vuuren, and John Weyant
- Making Climate Policy Work by Danny Cullenward and David G. Victor
- Global Energy Outlook 2021: Pathways from Paris by Richard Newell, Daniel Raimi, Seth Villanueva, and Brian Prest
- RFF’s Global Energy Outlook interactive data tool
The Full Transcript
Daniel Raimi: Hello and welcome to Resources Radio, a weekly podcast from Resources for the Future. I'm your host, Daniel Raimi. This week, we talk with Wei Peng, assistant professor of international affairs and civil and environmental engineering at Penn State University. Professor Peng is the first author on a recent paper published in Nature that makes recommendations on how integrated assessment models, or IAMs, can be more useful in climate policy.
She and her coauthors describe how these models can better represent the real world, especially political dynamics, to better inform policymakers at the local, national, and international scales. Stay with us.
Okay. Wei Peng from Penn State University. Thank you so much for joining us today on Resources Radio.
Wei Peng: Thank you for having me.
Daniel Raimi: Wei, we're going to talk about a recent paper that you published with a group of coauthors about integrated assessment models. But before we do that, we always ask our guests how they got interested in working on environmental issues in the first place. How did you come into this field?
Wei Peng: I grew up in China and I went to college in Beijing. As you could imagine, the air pollution is really a big problem there. But I have to say that the bad air itself didn't really motivate me to do research on the environment. I got used to it and felt that it was something I had to bear, given that I live in Beijing. But, the real game changer to me was that, in 2008, I was a sophomore when Beijing hosted the Summer Olympic Games.
It was such an important event that the government tried everything they could to clean up the air, just for those two weeks. They actually succeeded. We had blue skies during the two weeks of time, and it was like magic. They achieved that goal by, for example, telling the industrial plants to shut down during that time and asking people not to drive for those two weeks.
I basically learned two things from that. One is that political will is really powerful. I was wrong to think that I have to bear the bad air, because, if we really want to tackle it, we can clean up the air and we can do it fairly quickly. But, I also learned a second thing, which is that policies are all about trade-offs. Because, after the Olympic Games, things went back to normal. Those activities went back to normal.
As a result, the air pollution went back to normal as well. We could shut down those economic activities for two weeks, but we probably could not afford to do that for a very long period of time. It was that line of thinking that motivated me to pursue my PhD degree in energy and environmental policy after I graduated from college in 2011.
I have been thinking about this question: "How can we have better policies, or smart policies, so that we can gradually shift people's behavior? Also, how can we direct investment decisions toward something that is more sustainable, which is both cleaner and also economically viable?" I have been working on this topic for a very long time. It's hard to imagine that it has been 10 years since I started working on this topic.
Daniel Raimi: That's such a great example of trade-offs, as you say. You can shut down the factories for two weeks, but you probably can't shut them down for two years. So, trying to find that balance is so important and interesting. Wei, let's talk now about this really fascinating paper that you were the first author on. We'll have a link to it in the show notes. The paper is called “Climate Policy Models Need to Get Real About People—Here's How.”
It’s a great, grabbing title, and we're going to dig into it. As I mentioned earlier, we're really going to focus on this class of models called integrated assessment models (IAMs), so we'll be referring to IAMs in today's conversation. Most people listening to our show have probably heard of IAMs, but might not know the details. Can you start us off by describing what IAMs are and how they are used in this context?
Wei Peng: Absolutely. So, in order to understand climate problems, people have built a suite of tools to model how human activities interact with—and also have an impact on—natural systems. We call them integrated assessment models, because they model a very long chain of effects. For example, how socioeconomic drivers, such as income and population growth, are going to affect economic activities, or what types of mitigation efforts are going to occur as a result. What would be the implications for the climate system?
Sometimes we also consider the feedback. For example, if we have more warming in the future, it is going to be hotter. So, people are going to turn on the air conditioners more often, so this has implications for energy demand. In short, it is integrated in the sense that we couple different systems together, from humans to technologies to the climate system.
Another way the models are integrated is that we need to combine knowledge from different fields. We need insights from economics to understand how people respond to things such as price signals and other incentives. We also need insights from the physical sciences, such as how much CO2 is emitted from power plants and how those CO2 emissions are going to affect future warming. That kind of thing.
I do want to emphasize one point that is probably less thought about, especially among those people not working specifically in the modeling field. There are actually two types of IAMs. Both of them are very useful for climate policymaking, but they actually target different policy questions.
The first type is cost-benefit IAMs, which are simpler, to a large extent, in terms of the model structure. The second type is what we call detailed process IAMs, which are more complex. Let me explain it a little bit. When we talk about cost-benefit IAMs, the simpler ones, they usually compare the costs and benefits of avoiding a certain level of warming in the future.
They try to answer questions such as, "What will be the social cost of carbon? If we emit one unit of carbon today, what will be the cumulative—both present and future—cost to society?" One quick example of a cost-benefit IAM is the DICE model. The full name is the Dynamic Integrated Climate-Economy model, which was developed by Bill Nordhaus, who won the Nobel Prize in Economics in 2018.
These models are simpler, in the sense that they try to use simplified equations to characterize how society works. You can actually use an Excel spreadsheet to get the results from the model, such as answering questions like, "What will be the social cost of carbon? What will be the optimal mitigation pathway if you have certain preferences?"
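The spirit of those spreadsheet-scale calculations can be sketched in a few lines. This is a deliberately toy illustration of discounting future damages back to the present, not the actual DICE equations; every parameter value below is hypothetical.

```python
# A deliberately toy illustration of the idea behind cost-benefit IAMs:
# the social cost of carbon (SCC) as the discounted sum of future damages
# caused by one extra ton of CO2 emitted today. The numbers are made up
# for illustration; they are NOT parameters from DICE or any real model.

def social_cost_of_carbon(damage_per_ton_per_year=0.15,  # $/ton/year (hypothetical)
                          horizon_years=300,
                          discount_rate=0.03):
    """Discounted sum of a constant annual damage stream from 1 ton of CO2."""
    return sum(damage_per_ton_per_year / (1 + discount_rate) ** t
               for t in range(1, horizon_years + 1))

scc = social_cost_of_carbon()
print(f"Toy SCC: ${scc:.2f} per ton")
```

Real cost-benefit IAMs replace the constant damage stream with damages that depend on modeled temperature and economic output, but the discounting logic is the same.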
That's the first type of model. But, when we wrote that piece in Nature, the target we had in mind was the other type of model: the detailed process IAMs. They are called detailed process because they try to provide detailed representations of sectors and technologies. For example, predicting what kind of power plants we're going to build, or the kind of cars people are going to drive, the kind of buildings we're going to live in, and the kind of food we're going to consume, plus what kind of agricultural activities we're going to have to use in order to meet that food demand.
The key point here is that we want to represent the sectors and technologies in a very detailed way. We also want to model the interactions between the human system and the natural system. One quick example: changes in river flows are going to affect the availability of cooling water for power plants. That's something we can model in this type of IAM.
The detailed process IAMs are set up to answer big “what if?” questions. For example, “if the world wants to achieve climate stabilization by the end of the century, what kind of technology mitigation pathways can actually help us get there? What should China do? What should India do? What should the United States do?” It can also ask some of the more detailed “what if?” questions. For example, “if we impose a carbon tax, what does it mean for our technology system? Are we going to shut down coal power plants? Are we going to add renewables? Then, what are the implications of that on carbon emissions?”
These types of models are much more complex. We can't use Excel to get a solution. We need a programming language and lots of computing resources, and it takes anywhere from minutes to hours to get a solution. So, I want to emphasize this difference here, to make it clear from the very beginning that it is these detailed process IAMs that we had in mind when we were describing how climate policy needs to get real about people.
Daniel Raimi: That's so helpful. Correct me if I'm wrong, but I think the models that we're really going to be talking about today are the ones that are used, for example, to inform the Intergovernmental Panel on Climate Change (IPCC) process. Like the models that are run to achieve a 1.5 degree stabilization target in the 2018 special report. Is that right?
Wei Peng: Exactly. Thank you, Daniel. This is very useful context. The IPCC has been using these detailed process IAMs to develop mitigation pathways to achieve certain stabilization targets. They were used in the previous assessment report and in that 1.5 degree special report, and they are also going to be used in the forthcoming Sixth Assessment Report (AR6).
Daniel Raimi: Let's get into the core of the arguments that you and your colleagues make in the paper. What are some of the key shortcomings that you identify with existing IAMs?
Wei Peng: As I mentioned before, we tried hard to use these IAMs to represent the physical world in a very detailed manner. However, we're still missing a very important driver of climate policy in the real world, which is politics. I have to say, sometimes I just wish I could live in the model world, because I can impose a carbon tax, which is just a line of code. And in that world, investment choices are going to respond very quickly.
For example, if I have a carbon tax in my model, we're going to see less coal and more renewables. That feels so simple, but the real world is much more complex. Just as a quick example, a carbon tax is difficult to get in the real world. A lot of times, politicians and policymakers actually prefer regulatory approaches such as renewable portfolio standards or low carbon fuel standards, because those regulatory approaches mean the public doesn’t observe the policy cost as directly. As a result, they’re actually politically easier to choose and implement in the real world.
On top of that, there's so much inertia in our social and technological systems. Getting away from coal, even if we have a price on carbon, is difficult. Think about the potential job losses and equity implications. In short, we definitely see a disconnect between the modeled world and the real world, and we think we should try to make the model more useful and more politically relevant. A key shortcoming—the key challenge we hope to address—is to add those political and human factors.
Daniel Raimi: As you know, it's one thing to note that in the abstract, but it's another thing to actually code it, right? To get it into the models and make it work. So, what are some of the concrete steps that you and your coauthors have in mind to actually make these updates to IAMs?
Wei Peng: I think the first thing that we should be thinking about in the model is what to prioritize because model development is a never-ending process. A lot of these big models we're talking about today have been developed for many decades. That's how we get this complexity in the models today.
The key question is we know that we want to add more human and political factors into the model, but we also know that it takes a lot of time, human resources and money to develop these models. What are the areas we should prioritize? The concrete suggestions we made in this piece are that we should think strategically in two dimensions.
The first dimension to consider is “is this model development going to really help the decisionmakers?” Or, in other words, is this model development going to be useful for making concrete and improved decisions in the real world? Here, another important point to note is that there are different decisionmakers who can benefit from using the results of IAMs.
There are both national decisionmakers and subnational decisionmakers. They probably care about the distributional consequences for different segments of the population within their countries or constituencies. For them, those are the types of model improvements that will help them make better decisions.
In comparison, there are other people—for example, the negotiators who are going to come to the COP conference later this year. They are going to discuss, on an international scale, what the next climate deal should look like so that we can solve the problem. They may care about how international trade is impacted, or how the investment policies are going to affect the future landscape of climate change. The key point here is that we need to recognize there are different decisionmakers and they have different needs. We want to know, what are the useful model improvements for them? That should be the starting point.
The second dimension, as a modeler, is that we should also be thinking about, "How practical is it to make certain model improvements? Do we have the data to support it? Can we add algorithms that are compatible with existing model structure? Are we going to make sure that we can find a solution?"
We don’t want to make the model too complex, but I have to say that the second dimension—how practical or how easy it is to make that improvement in the model—is something modelers are very good at. We have been thinking about questions like this from the very beginning, both when we’re developing models and when we’re refining older models.
I really think it is the first dimension that is understudied. Once we figure out the first dimension, it becomes a very practical question for the modelers: “How do we make it happen in the modeling world?”
Daniel Raimi: There's a really interesting graph in the paper, too, that people can check out, where you plot these two dimensions against each other. It includes both the usefulness of an update to the model, and how easy or difficult it would be to practically implement it in the model. There are some examples of each type in there, which I'd encourage listeners to go check out.
As you've already noted, there are different levels of difficulty to implementing these changes. Can you talk a little bit more about some of the key challenges to making these improvements to the models in the real world?
Wei Peng: I think, first of all, if our goal is to make the models more politically relevant, we actually need to keep reminding ourselves that we should start with the question, not the tool. This message may sound very straightforward, but I have to say, if you are a modeler who has spent decades investing in this type of model, it's easy to think about what can be done using that certain model, but it’s not easy to take a step back.
It’s important to think about the bigger question, about what things would really be important to include. That's why I want to re-emphasize this point again. I think the first key challenge is recognizing different needs of different decisionmakers. Also, it’s important to recognize that our models are not going to solve every problem. There is no one-size-fits-all type of model. So, we want to spend our money and time on the questions that are actually important to the real-world decisionmakers, not just to write another paper that can be published in an academic journal.
In order to really make this happen, I think it's important to engage with the potential model users, or the stakeholders, from the very beginning. We should ask them what questions they really care about and develop models accordingly. We should not just spend five years making model improvements, produce the next generation of results, and then tell policymakers, "Hey, this is what we made. Take these and try to make use of them."
I think that's the first key challenge I want to mention here: really engaging the model users from the very beginning. The second challenge, I think, is that all models have strengths and weaknesses. When we talk about integrated assessment models, we are actually not talking about one specific model. There are actually many of them that exist. They're built on different model structures and logics, and they have different priorities. Certain parts of each model are stronger than others.
I think there's a saying that, when you have a hammer, everything looks like a nail. So, I do think that, as a modeler, we probably want to think about this question: "I have a very good tool in front of me, but what kind of political processes, or what kind of human factors, are compatible with the type of model I'm working on right now? Am I stretching my model too much, or making it too complex?" If so, you might be answering a question that the model is not designed to answer.
A specific example of this is that a lot of models are built on this logic of optimization. So, we have a certain goal, for example, to limit warming to two degrees. We want to find the most cost-effective way of achieving that goal. That's clearly a framework of optimization, but at the same time, a lot of political and human processes are really dynamic.
There are things like policy diffusion or technology adoption. There are a lot of non-linear and very dynamic processes involved there. For things like adopting new technologies, people sometimes are reluctant to adopt new technologies, even when there's so much evidence that they're superior. There are other models people have been designing to better capture those dynamic processes. One example is agent-based modeling.
I think the point I'm trying to make here is that there are different model structures and logics. Some of them are more compatible with certain types of political processes and factors. Some of them are less compatible. As modelers, we need to recognize the strengths and weaknesses of our modeling approach and try to make use of them to answer important questions. But, at the same time, we shouldn’t stretch it too much when other models are perhaps better candidates.
Daniel Raimi: The next question I wanted to ask really follows that logic quite nicely. It's about the role of modeling complexity, or the right approach to model complexity. One of the challenges of any large-scale modeling effort, whether it's IAMs or some other kind of model, is trying to represent the real world in all its complexity, but also simultaneously not making it so complex that nobody can understand what is happening in the model.
That's certainly a challenge with some of these really big IAMs—they have the potential to be quite opaque. How do you think about the trade-offs of trying to represent the real world in all its complexity, but also making the model usable, understandable, and transparent?
Wei Peng: This is a great question. I have been thinking about this a lot in the past few years. First of all, complex doesn't necessarily mean better. So, making the model more complex is a means of improving modeling, but it's not our ultimate goal. Going back to what I said earlier, the goal here is really to make models more politically relevant. That should be the guiding principle for whether or not we want to make the model more complex.
If an added complication usefully informs real-world decisions, it is worth doing—even if it means that we're making the model computationally more complex. One example is, again, the distributional consequences. A lot of models right now have only one aggregate income group. If you double the income groups, you more than double the computational need.
That is because there are so many interactions with different systems. So, it's not like simply doubling the computational needs when you double the income group. Think about it: when you have the 32 world regions and five income groups, it's actually very easy for the computational needs to explode. But, given the critical importance of understanding the distributional consequences, I felt that we should still do it, because it is important.
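The combinatorial growth Peng describes can be made concrete with a back-of-envelope count. The sector count and the trade-link structure below are hypothetical, chosen only to mirror the 32-region, five-income-group example; real IAMs differ in the details.

```python
# Back-of-envelope count of why disaggregation blows up model size.
# The sector count and trade-link structure are hypothetical, chosen
# only to mirror the "32 world regions, five income groups" example.

regions = 32
income_groups = 5
sectors = 20  # hypothetical number of modeled sectors

# Demand-side variables grow linearly with the number of income groups...
demand_vars = regions * income_groups * sectors

# ...but interaction terms—say, directed bilateral trade links between
# region pairs, now resolved per income group—grow with the square of
# the region count.
trade_links = regions * (regions - 1) * income_groups

print(demand_vars, trade_links)
```

Because the interaction terms scale with the square of the region count, each new dimension of disaggregation multiplies, rather than adds to, the solve time.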
I think this is an example where we know it's going to make the model more complex, but there is still a strong justification for why we should invest time and energy into it. The other point I want to make about this complexity-versus-usefulness discussion is that, yes, when we make the model more complex, it sometimes becomes more opaque and more difficult for outsiders to understand. But there is a flip side, which is that the process of making the model more complex by adding more factors and dynamics can also help us, as scientists, to understand what actually matters.
As a modeler, we can actually test out what assumptions or additions can actually have a huge impact on our results. We can break this huge, complex, and opaque question into manageable, small pieces, so that we can unpack the dynamics even more.
On the one hand, I agree that making models opaque is definitely not what we want to do, but at the same time, there are so many things modelers can do to clarify their models. It's just that we haven't spent enough time publishing papers along those lines and communicating with other audiences about those underlying dynamics.
Daniel Raimi: You've already mentioned at least one example of a type of model that can incorporate some of these dynamics—you mentioned agent-based modeling earlier. Are there other examples that you can point to, maybe specific models or specific policies? You also mentioned distributional consequences. But are there examples that you can point to, where you think these complex human and political systems are getting incorporated into IAMs in ways that you find exciting and innovative?
Wei Peng: I can share an example from a project that we have been working on. The type of integrated assessment model that I am most familiar with is the Global Change Assessment Model (GCAM). This model is developed by the Joint Global Change Research Institute, which is a partnership between the Pacific Northwest National Laboratory (PNNL) and the University of Maryland.
This is a global model, but it has a version that has a subnational detail for the United States. We have been using the GCAM-USA model to think about this question of, "What does it mean when climate action goes local?" This is a trend we have been observing in the past five years. It’s especially noticeable in the United States now that states, cities, and business leaders are actually taking the lead in tackling climate change. But, what does it mean for the mitigation cost?
When we think about climate leaders like California and New York, they aren’t taking the lead on this because it's cheaper to mitigate climate change in these places, but because they have stronger local political support. It's actually easier politically to promote mitigation in those places. But, the downside of thinking about the state-driven climate action, is that a lot of people are worried that it's going to be too expensive.
Because California and New York have already done so much to mitigate climate change, they don't have those low-hanging fruits that would let them enact new mitigation policies cheaply. So, are we going to significantly increase our mitigation cost of achieving, for example, a net-zero target as a country on the whole? What we did, using GCAM-USA, is that we partnered with the public opinion researchers at the Yale Program on Climate Change Communication.
They did a series of really fascinating surveys to sample how people think about climate policies. What is the level of support for things like climate policy in general? Or more specifically, taxing carbon, et cetera. Those public opinion surveys reflect the political reality of public support for climate action. So, we used that information from their survey and built climate mitigation scenarios to reflect those realities.
What we find is that it is only going to be marginally more expensive if we let the climate leaders lead and let the followers follow. The main reason the cost stays modest is that states are actually connected through energy markets. There is electricity trade, and there can be trade of bioliquids as well, so that provides an opportunity for arbitrage.
So, even though climate leaders have a relatively high implicit carbon tax, because of this trade, we actually don't see a significantly higher cost to achieve the same goal at the national level. That is just one example of how we have been using the GCAM-USA model to try to capture some of the political realities on the ground.
I also want to quickly mention that the GCAM team at PNNL has gradually been adding different income groups to different sectors in their model. It is a work in progress, but I think it definitely points in a very important direction: better modeling of distributional consequences and equity outcomes.
Daniel Raimi: Yeah, that's so interesting. I've definitely seen a similar dynamic here at RFF, where some of my colleagues have built complex models—they’re not IAMs—but they're complex in other ways. They are increasingly trying to incorporate some of those distributional issues, as well as the political dynamics that we're talking about today.
Wei Peng from Penn State University, this is such interesting work, and we could talk about it for many more hours. I'm sure you'll be thinking about it for many more hours in the weeks and months ahead, but we're going to go now to our last question, which is called Top of the Stack. So, we’re asking you to recommend something that's at the top of your literal or metaphorical reading stack.
I'll start with a little bit of log rolling, actually. I couldn't resist, because we were talking about IAMs and IPCC models. I want to point people to this year's edition of RFF's Global Energy Outlook, which is a report that we put together each year that basically compares long-term energy outlooks from a variety of organizations, including some that are built with the very models that we're talking about today.
It's up at rff.org/geo. We compare, on an apples-to-apples basis, lots of different long-term energy outlooks. There's an interactive data tool where people can play around with the results of the IAMs that we're talking about today, as well as other models from energy organizations, like the International Energy Agency, British Petroleum, Shell, and other companies like that. So, hopefully a cool tool for all of you out there who are interested in energy models. But, how about you, Wei? What's on the top of your stack?
Wei Peng: Yeah. Before I get to what is on my stack, I just want to say that tool is really cool. I spent some time on it, and I'm actually thinking about using it for my teaching in the fall. I teach a course on the energy-environment nexus and I think that tool you were mentioning just now will be super helpful for my students to get a sense of the energy landscape. So, thank you for doing that.
Daniel Raimi: Oh great, I'm glad it's useful. Yeah, we want it to be used. It'd be great if your students could spend some time on it.
Wei Peng: Yeah, okay. So, I think the most recent book I've read on climate is a book called Making Climate Policy Work by Danny Cullenward and David Victor. David Victor is actually also one of the authors of this work. I don't want to be a spoiler, but I think this is going to be a very interesting book for those of us who have been thinking about climate policy but haven't really thought about it through the lens of empirical evidence: what actually worked in the past and what didn't really work that well, and also why.
They did a really cool analysis, looking at the market-based programs to tackle climate change and mitigate emissions. They identified some of them—actually, many of them—that are not very effective. I encourage you to read the book, but as you could imagine, the punchline is really about politics. I would encourage those interested in our paper to read this book, because it provides additional evidence and really good stories about why politics should be at the center when we think about climate policy.
Daniel Raimi: Yeah, that's a great recommendation. I've been meaning to read that book. I've also heard great things about it. So, thank you so much for that. Once again, Wei Peng from Penn State, expert on energy models and how to make them work in the real world and make them useful, thank you so much for your work on this and for coming on the show and talking to us. I've really learned a lot and I think our listeners have, too.
Wei Peng: Thank you, Daniel. Thank you for having me. It's great to be here.
Daniel Raimi: You've been listening to Resources Radio. Learn how to support Resources for the Future at rff.org/support. If you have a minute, we'd really appreciate you leaving us a rating or comment on your podcast platform of choice. Also, feel free to send us your suggestions for future episodes. Resources Radio is a podcast from Resources for the Future. RFF is an independent, nonprofit research institution in Washington, DC.
Our mission is to improve environmental, energy, and natural resource decisions through impartial economic research and policy engagement. The views expressed on this podcast are solely those of the podcast guests and may differ from those of RFF experts, its officers, or its directors. RFF does not take positions on specific legislative proposals. Resources Radio is produced by Elizabeth Wason with music by me, Daniel Raimi. Join us next week for another episode.