- Who can utilize the SCALE method?
The Scaled CECL Allowance for Losses Estimator (SCALE) tool can only be used by banks with under $1 billion in total assets. The SCALE tool is a spreadsheet that uses proxy expected lifetime loss rates based on call report data reported by institutions with between $1 billion and $10 billion in assets. Remember, however, that using the SCALE tool means taking on someone else’s “all in” allowance, computed in the prior quarter. Your institution won’t see what is included in those peers’ qualitative adjustments, forecasts, or impaired reserves, which may not be applicable to your institution. Perhaps the most significant practical impact of SCALE is that, by introducing it, regulators have clarified that financial institutions have room to bring external information, such as peer data, into their quantitative estimates.
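To make the mechanics concrete, here is a minimal sketch of the arithmetic behind a proxy-rate approach like SCALE. The pool names, balances, proxy lifetime loss rates, and qualitative adjustments below are entirely illustrative assumptions, not figures from the actual SCALE spreadsheet.

```python
# Illustrative sketch: apply a peer-derived proxy lifetime loss rate plus a
# management qualitative adjustment to each pool's amortized cost.
# All figures are made up for illustration.

pools = {
    # pool name: (amortized cost, proxy lifetime loss rate, qualitative adj.)
    "1-4 family residential": (40_000_000, 0.0035, 0.0010),
    "Commercial real estate": (25_000_000, 0.0110, 0.0025),
    "Consumer / auto":        (10_000_000, 0.0150, 0.0000),
}

allowance = 0.0
for name, (balance, proxy_rate, q_adj) in pools.items():
    pool_allowance = balance * (proxy_rate + q_adj)
    allowance += pool_allowance
    print(f"{name}: {pool_allowance:,.0f}")

print(f"Total estimated allowance: {allowance:,.0f}")
```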
- Is it OK to use different methodologies for different pools? For example, can you use one methodology to estimate losses for mortgages and another for auto loans?
Yes. Some methods, particularly instrument-level projections, already capture the core differences between those pools, but several Abrigo clients have chosen to blend methodologies across pools. The most common approach is to use something more sophisticated for the institution’s larger, more important segments and something expedient for the rest.
- Is my low or non-existent loss experience relevant and reflective of my credit appetite and, therefore, relevant as a base for future loss expectations?
An institution’s low or non-existent loss experience may very well be relevant. Suppose your institution’s loans are well-secured and strongly underwritten, and you rarely have defaults or loss events. In that case, your experience could support a credible argument for a zero allowance. The question is whether doing so will be more trouble than it’s worth. Your institution could develop a narrative that shows how it excels at credit, but it likely carries an allowance of around 70-80 basis points under the current standard. Once factors such as the economy, new regulations, management changes, or even natural disasters are considered, a larger allowance may be necessary. It may be best to find a broader loss experience and include it within the realm of possibility for your institution on the quantitative side. Credit appetite is a relevant factor, but just because an institution has not experienced a loss does not mean it cannot.
- How often do you see CECL filers in the $1 billion to $10 billion asset size using statistical/econometric models?
Among Abrigo customers, econometric models are used frequently. They make it easy to find reliable data and defend adjustments, and they can be easily tested. There are many ways to address the question of future loss rates, and econometric models do not need to be complicated; a straightforward single- or two-factor model can suffice.
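As an illustration of how simple a single-factor model can be, here is a hedged sketch that regresses hypothetical quarterly loss rates on the unemployment rate and applies the fit to an assumed forecast path. The data and the one-factor specification are illustrative assumptions, not a recommended calibration.

```python
# Minimal single-factor sketch: loss rate as a linear function of unemployment.
import numpy as np

unemployment = np.array([3.9, 4.4, 8.1, 6.7, 5.9, 4.2, 3.6, 3.5])          # %, hypothetical
loss_rate    = np.array([0.10, 0.12, 0.35, 0.28, 0.22, 0.14, 0.09, 0.08])  # % of balances, hypothetical

# Ordinary least squares fit: loss_rate = beta * unemployment + alpha
beta, alpha = np.polyfit(unemployment, loss_rate, 1)

# Apply the fitted relationship to a forecast unemployment path
# (e.g., an FOMC-style projection) to get forward loss rates.
forecast_unemployment = np.array([4.0, 4.2, 4.3, 4.4])
projected_loss_rates = beta * forecast_unemployment + alpha
print(projected_loss_rates)
```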
It can be challenging to model forecasts for institutions with small lending footprints. But generally, institutions in this position saw reserves increase during the Great Recession as risk rose, and realized losses between 2008 and 2010. The trick is to quantify and document any potential risk to your institution, whether it is realized or not. For example, even if rising unemployment never translated into realized losses at an institution during the pandemic, the increase in risk may still have warranted an increase in the allowance. On the other hand, a heavily agricultural loan-based institution may have seen risk decrease during the pandemic, thanks to rising grain prices and stimulus money. If your institution is curious about how macroeconomic conditions affect its footprint, it may be a good idea to consult an expert about your specific situation.
A supportable approach to setting policy is to evaluate forecasts from a source such as the FOMC: select historical periods that most resemble current or forecasted near-term conditions for the supportable forecast window, and periods that reflect the long-term average for the reversion period.
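A rough sketch of that period-selection idea follows, using a hypothetical history of unemployment rates and an assumed forecast value; the selection rule (nearest three quarters) is illustrative, not a prescribed policy.

```python
# Pick historical quarters most like the forecast for the supportable window,
# and quarters most like the long-run average for the reversion period.
# All rates below are hypothetical.
historical = {  # quarter label -> unemployment rate (%)
    "2017Q1": 4.6, "2017Q3": 4.3, "2019Q1": 3.9, "2020Q2": 13.0,
    "2020Q4": 6.8, "2021Q4": 4.2, "2022Q4": 3.6, "2023Q4": 3.7,
}
forecast_rate = 4.1                                    # e.g., an FOMC-style median projection
long_run_average = sum(historical.values()) / len(historical)

closest = sorted(historical, key=lambda q: abs(historical[q] - forecast_rate))[:3]
reversion = sorted(historical, key=lambda q: abs(historical[q] - long_run_average))[:3]

print("Forecast-like periods:", closest)
print("Reversion periods:", reversion)
```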
It is useful if the straight-line assumption for the preexisting balances can be defended. Purchase accounting considerations, changing loan terms in the portfolio, or portfolios in growth mode will make this methodology hard to defend. Institutions may not be able to rely on input data or on a reference pool of loans that reflects accurate loan data from a representative institution, but they can still reach a credible result using certain CECL models that are less reliant on historical, loan-level loss experience. A good starting point for institutions without data is to layer on forecast adjustments and refine benchmarks appropriate for financial statement use. Once a reasonable set of results is found for the quantitative model under certain economic circumstances, an institution can try to establish a simpler methodology that gets to a similar outcome. If the other method succeeds, the institution has a more straightforward method; if it fails, the institution will know the simpler model won’t work for it.
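For the straight-line idea referenced at the start of this answer, here is a minimal sketch under assumed figures: the pool is presumed to pay down evenly over its remaining life, and an annual loss rate is applied to each year’s projected balance. The balance, remaining life, and loss rate are illustrative only.

```python
# Straight-line (remaining life) sketch with hypothetical inputs.
balance = 50_000_000          # current amortized cost of the pool
remaining_life_years = 4      # assumed weighted average remaining life
annual_loss_rate = 0.0025     # assumed annual charge-off rate

expected_losses = 0.0
for year in range(remaining_life_years):
    # Straight-line paydown: the balance declines by an equal amount each year.
    projected_balance = balance * (1 - year / remaining_life_years)
    expected_losses += projected_balance * annual_loss_rate

print(f"Lifetime expected losses: {expected_losses:,.0f}")
```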
Yes. Policy adjustments may be relevant for only a specific period that your institution selects. The adjustments can be based on a peer cohort as well. Either way, your institution will need to justify its decision in a statement detailing how the past is relevant to the future. If that narrative claim is controversial, take it as a sign that an underlying model may be more appropriate.
The loans within a segment inherit "pool"-level assumptions on the credit component (loss or default) and on timing (recovery delay, prepayment/curtailment), but use instrument-level payment structures.
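A small sketch of that split follows, using made-up field names and a deliberately simplified expected-loss formula; it only illustrates which assumptions sit at the pool level and which sit on the instrument.

```python
# Pool-level credit and timing assumptions, instrument-level payment structure.
from dataclasses import dataclass

@dataclass
class PoolAssumptions:
    annual_default_rate: float      # credit component, set at the pool level
    loss_given_default: float
    recovery_delay_months: int      # timing assumptions, also pool level
    annual_prepayment_rate: float

@dataclass
class Loan:
    balance: float
    rate: float
    remaining_term_months: int      # instrument-level payment structure
    pool: PoolAssumptions           # inherited pool-level assumptions

commercial_re = PoolAssumptions(0.010, 0.35, 12, 0.08)
loan = Loan(balance=750_000, rate=0.055, remaining_term_months=84, pool=commercial_re)

# Expected loss on this instrument uses its own balance and term, but the
# pool's credit assumptions (a deliberately simplified formula for illustration).
years = loan.remaining_term_months / 12
expected_loss = loan.balance * loan.pool.annual_default_rate * loan.pool.loss_given_default * years
print(f"{expected_loss:,.0f}")
```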
In this situation, you need to master the trick of using peer experience to supplant your internal experience. Your institution could make the argument that you have no losses and don't expect any because of valid reasons X, Y, and Z. But you would then have to act on it by carrying an allowance that is unpalatably low for a community financial institution. Another option would be to make a qualitative argument, but those arguments can become strained and difficult to maintain, as we saw with Incurred Loss practices. The argument we prefer to make is one about the state of knowledge and experience itself. For example, “we bear credit risk on these instruments, but we don't have the tens of thousands of loans required to calculate that risk based just on our own lending.” That’s where looking at peer institutions’ risk comes in.
It would require some comparable mapping, but that's usually intuitive, and the peer option will still work. The system and the theory we are discussing let you map any call code or call code rollup to any segmentation you have in place.
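A sketch of what such a mapping might look like, using illustrative call report line references and internal segment names; the actual codes, rollups, and segments would be the institution's own.

```python
# Illustrative mapping from call report loan categories to internal segments.
call_code_to_segment = {
    "RC-C 1.a":     "Construction & Land Development",
    "RC-C 1.c":     "1-4 Family Residential",
    "RC-C 1.e":     "Commercial Real Estate",
    "RC-C 6.c":     "Consumer - Auto",
}

def segment_for(call_code: str) -> str:
    # Fall back to a catch-all segment when no mapping exists.
    return call_code_to_segment.get(call_code, "Other")

print(segment_for("RC-C 1.c"))   # 1-4 Family Residential
print(segment_for("RC-C 9.x"))   # Other
```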
- Have you seen the Consumer Price Index (CPI) correlate better than unemployment for consumer loans like indirect autos? Any reason why unemployment would show little correlation?
Occasionally. One issue with CPI is forecastability. CPI is the kind of thing that might end up in a qualitative framework because you think it matters, which is reasonable, but its impact can be difficult to quantify. Unemployment is more likely to fit in the long run; it usually correlates strongly once some lead/lag timing is considered.
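A quick sketch of that lead/lag check follows, with hypothetical quarterly series: shift unemployment forward by zero to three quarters and see which lag correlates most strongly with consumer loss rates.

```python
# Lead/lag correlation sketch with hypothetical quarterly data.
import numpy as np

unemployment = np.array([3.8, 3.9, 4.4, 8.1, 6.7, 5.9, 4.8, 4.2, 3.9, 3.6])          # %
loss_rate    = np.array([0.30, 0.31, 0.33, 0.40, 0.62, 0.70, 0.55, 0.45, 0.38, 0.34]) # %

n = len(unemployment)
for lag in range(4):
    # Correlate unemployment at quarter t with loss rates at quarter t + lag.
    x = unemployment[: n - lag] if lag else unemployment
    y = loss_rate[lag:]
    corr = np.corrcoef(x, y)[0, 1]
    print(f"lag {lag} quarters: correlation {corr:.2f}")
```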