
The Fed proves it: climate scenario analysis is bad public policy

The Fed just published the results of its exploratory climate scenario analysis. The report is revealing and casts significant doubt on the value of similar exercises. There must be a better way.


The results of the US Federal Reserve Board’s exploratory climate scenario analysis (CSA) just landed.  As has become standard for these exercises, the accompanying report contains no meaningful new insights and no surprising new ideas to make us rethink our understanding of climate-related financial risk. Nor does it feature any critical new policy prescriptions.

In fact, there’s little to suggest that the banks involved are more robust for their pains, or that the Fed is any better informed about the most critical challenge currently facing humanity.

Take one of the findings as an example: insurance helps shield property owners and banks from the financial costs of property damage. Who’d have imagined?

Where the CSA report is useful is in revealing that American banks are more willing than their European counterparts to criticize the analyses they were asked to perform. Either that, or the Fed, to its credit, was actually willing to publish the criticism. Either way, the primary contribution of this release is that we get to see an important regulator grappling with the weaknesses of a methodology that others apparently accept without question.

I’ll go through the identified weaknesses in just a moment, and also add a few of my own.  As I do, though, it’s important to ask whether these weaknesses simply require the underlying CSA methodology to be tweaked, or whether the problems are intrinsic and intractable.  

Critics of exercises like this often say that the scenarios weren’t sufficiently severe or that the macroeconomic linkages assumed by the scenario designers didn’t run deep enough. To me, these are minor foibles, not real criticisms. If you want to dial up the stress levels of a given scenario, knock yourself out. You’ll be pushing further into the tails of the distribution, but if it makes you feel better, I don’t have a problem with any scenario you might want to propose.

The other category of common criticisms involves data shortcomings, the effect these have on the models used, and the way they shape the subsequent analysis.  As Donald Rumsfeld famously said: “You go to war with the army you have, not the army you might want or wish to have at a later time.”  In other words, if you’re dissatisfied with the level of aggregation used in the stress test or feel that the economic analysis is too coarse to be useful, I’m sorry, but that’s the hand you’ve been dealt.

The adjustments I propose run a little deeper. The physical risk component of the exercise focused on commercial and residential mortgages, which is fine, but the participants limited themselves to studying the properties directly in the path of the proposed climate shocks. To capture the full impact of a scenario, the entire portfolio must be analyzed. That means considering how a climate-induced reduction in supply affects property valuations and the performance of mortgages in relatively safe locations. Residences, shops, and factories far removed from floodplains and tinder-dry forests will become more valuable as warming progresses, and this should be incorporated into the analysis.
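To make the point concrete, here is a minimal sketch of a whole-portfolio revaluation. Every number and name in it (the damage haircut, the supply elasticity, the toy loan records) is a hypothetical assumption for illustration, not anything from the Fed exercise:

```python
# Toy revaluation of an entire mortgage book under a physical-risk scenario.
# Exposed collateral takes a damage haircut; unexposed collateral gets a
# supply-driven uplift as damaged stock leaves the market.

EXPOSED_HAIRCUT = 0.30    # assumed share of value lost on shocked properties
SUPPLY_ELASTICITY = 0.5   # assumed price response to lost supply

portfolio = [
    {"id": "loan-1", "value": 400_000, "exposed": True},
    {"id": "loan-2", "value": 350_000, "exposed": False},
    {"id": "loan-3", "value": 500_000, "exposed": False},
]

total = sum(p["value"] for p in portfolio)
destroyed = sum(p["value"] * EXPOSED_HAIRCUT for p in portfolio if p["exposed"])
uplift = SUPPLY_ELASTICITY * destroyed / total  # scarcity premium for survivors

for p in portfolio:
    factor = (1 - EXPOSED_HAIRCUT) if p["exposed"] else (1 + uplift)
    print(p["id"], round(p["value"] * factor))
```

A study confined to the at-risk properties captures the haircut but misses the offsetting scarcity premium on everything else in the book.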

The other tweak would be to jettison the ridiculous static portfolio assumption.  I like to describe this as a “sitting duck stress test” – if you are forced to remain on the tracks as the train approaches, you will get struck.   It’s far more realistic to believe that demand for risky loans in risky locations will dissipate as dire scenarios unfold and that banks won’t be interested in funding the reckless individuals who are not deterred by things like rising insurance costs.
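As a minimal sketch of the alternative (the runoff rate here is a pure assumption), compare a static book with one that is allowed to shed its risky exposure over the scenario horizon:

```python
# Static vs. dynamic exposure to risky-location loans over a scenario horizon.
RUNOFF_RATE = 0.15  # assumed annual decline from amortization and reduced origination

def exposure_path(initial, years, static=True):
    """Exposure at the start of each year of the horizon."""
    path, exposure = [], initial
    for _ in range(years):
        path.append(round(exposure))
        if not static:
            exposure *= 1 - RUNOFF_RATE
    return path

print("static: ", exposure_path(1_000_000, 5, static=True))
print("dynamic:", exposure_path(1_000_000, 5, static=False))
```

The static book sits on the tracks for the full horizon; the dynamic one steps off a little more each year.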

But these are still minor changes.  The genuinely deep criticisms offered by the Fed’s CSA report are those related to the speculative nature of the exercise and our inability to assess the quality of the work.  From the report (emphasis is mine):

“These challenges included limited data, **lack of back-testing capabilities**, nonlinear risks, scenario horizon, **heavy reliance on judgment**, **limited reliability of model output**, and time constraints. Given the exploratory nature of the exercise, the Participant Instructions acknowledged and accepted the challenges with model validation.”

The elements that are not highlighted are either modeling challenges – some of which are very difficult – or those that suggest a lack of available resources (time, data, money).  I’ve highlighted the issues I view as more intrinsic to scenario analysis, most of which are impossible to resolve. 

How might a climate scenario analysis be back-tested? Back-testing is a process whereby we hold back some data, run the models, and then see how well they would have predicted the missing data. Such an approach applies to unconditional forecasts, but scenario analysis is an entirely different beast. Even if a model performs well in predicting the holdout sample, that does not mean it will correctly capture behavior under stress. Besides, people always tell me that historical data doesn’t display evidence of financial climate risk, so how is back-testing supposed to bolster a model used to project “future risk”?
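To make the mechanics concrete, here is what that holdout procedure looks like on a synthetic loss series (the data and the simple trend “model” are stand-ins, not anything from the Fed exercise):

```python
# Sketch of the holdout back-test described above, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
# Fabricated quarterly loss-rate series: a gentle trend plus noise.
losses = 0.02 + 0.001 * np.arange(40) + rng.normal(0, 0.002, 40)

train, holdout = losses[:30], losses[30:]   # hold back the last 10 points

# "Model": a linear trend fitted to the training window.
slope, intercept = np.polyfit(np.arange(30), train, 1)
predicted = intercept + slope * np.arange(30, 40)

rmse = np.sqrt(np.mean((predicted - holdout) ** 2))
print(f"holdout RMSE: {rmse:.4f}")
```

A good score here tells you the model extrapolates its own history well; it tells you nothing about how a portfolio would behave in a stress scenario that history never contained.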

Put simply, the validation of scenarios can never be based on the quality of the forecasts. When the banks report the “limited reliability of model output,” the reality is that they have no idea whether the model output is good or bad. The predictions may turn out to be sound, but we have literally no way of knowing this unless the scenarios actually play out and the previously produced projections prove prescient. The models can be validated against other criteria, like passing an array of statistical diagnostics, but such tests may be completely irrelevant to the core task at hand. Again, there’s no way of knowing whether the tests have any power.

What’s important to recognize is that these problems of validation cannot be fixed with tweaks here and there. They are intrinsic to scenario analysis and call the entire CSA methodology into question.

With unreliable models – or, more correctly, with models whose reliability cannot be determined – it is inevitable that climate scenario analyses become judgment-based and speculative.

Later in the report, I laughed out loud at this zinger (my emphasis):

“Most participants noted that time constraints, data limitations, and **the nature of the exercise** precluded participants from applying a full control framework, which would typically include model validation.”

So the participants noted that the very nature of the exercise precludes the application of consistent, repeatable, auditable processes! And yet, regulators around the world want to force banks to run these exercises?

This is a bit like health authorities mandating a vaccine that cannot be assessed for safety. Not that it hasn’t been assessed yet, but that it never can be! I should say that if the banks think the CSA vaccine is helpful and want to take it of their own volition, that’s no skin off my nose. The regulators shouldn’t stand in their way. I wish them the best of luck and hope they stay healthy.

But for financial regulators to require the use of such techniques, supposedly in the national interest?  I’m sorry, but it’s just poor public policy. 

Surely regulators can propose better ways of fortifying the financial system against the rigors of global warming and the related transition. Or at least something that can be objectively assessed.