
6 Common CECL backtesting mistakes & how to avoid them

Regan Camp
April 14, 2025

Sidestep some of the issues that most commonly hurt backtesting efforts

Backtesting the estimate for credit losses can build confidence in the CECL model and ensure it reflects an institution's credit risk. However, be careful to avoid common backtesting mistakes.

Evaluating CECL estimates against actual credit losses

Backtesting is one of the most practical tools community financial institutions (CFIs) can use to make sure their Current Expected Credit Loss (CECL) models are holding up. CECL backtesting, a core part of any CECL model validation process, compares what was projected for credit losses against what actually happened.

On the surface, it seems simple. But in practice, it often trips up institutions for all kinds of reasons – limited staff, unclear expectations, or uncertainty about how to use the results.

If backtesting your CECL calculation is on your radar, or you're looking to tighten up your approach, here are some common mistakes CFIs make, and a few ways to sidestep them.

You might also like this resource: “A banker’s guide for CECL compliance and backtesting.”

Common backtesting mistakes in CECL

Not backtesting frequently enough

Many institutions treat backtesting as a once-a-year compliance checkbox. But CECL models don’t operate in a vacuum. They’re impacted by changing portfolios, shifting economic conditions, and internal decision-making. Waiting too long between backtests increases the risk of undetected issues, including model drift and outdated assumptions.

Tip: Build a regular backtesting cadence. Quarterly is a great goal. Frequent reviews help spot early warning signs and ensure that assumptions stay aligned with actual performance.

Using inconsistent data sets

This one comes up often. A backtest might look off, but when you dig in, the issue is simply that the model and the actual loss comparison used different data sources or definitions. Inconsistencies in timeframes, segmentation, or inputs can make results unreliable or—even worse—misleading. CECL backtesting provides the ongoing monitoring to catch those issues.

Tip: Align data definitions, timeframes, and segmentation across modeling and backtesting. Small inconsistencies can skew results, so consistency provides a clearer picture.
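As a rough illustration, a consistency check along these lines can be automated before each backtest. The sketch below uses made-up data and hypothetical column names ("segment", "as_of_date"); in practice the two data sets would come from the CECL model extract and the core system's charge-off history.

```python
# A minimal sketch of a pre-backtest consistency check, using made-up data and
# hypothetical column names. Adapt the fields to your own model and core extracts.
import pandas as pd

model_df = pd.DataFrame({
    "segment": ["CRE", "C&I", "Indirect Auto"],
    "as_of_date": pd.to_datetime(["2024-03-31", "2024-03-31", "2024-03-31"]),
})
actual_df = pd.DataFrame({
    "segment": ["CRE", "C&I", "Consumer Auto"],  # note the mismatched segment label
    "as_of_date": pd.to_datetime(["2024-03-31", "2024-03-31", "2024-06-30"]),
})

# Flag segment labels that exist in one data set but not the other.
model_segments = set(model_df["segment"])
actual_segments = set(actual_df["segment"])
print("Segments only in model data:", model_segments - actual_segments)
print("Segments only in actuals:", actual_segments - model_segments)

# Confirm both data sets cover the same reporting window before comparing losses.
print("Model window:", model_df["as_of_date"].min(), "to", model_df["as_of_date"].max())
print("Actuals window:", actual_df["as_of_date"].min(), "to", actual_df["as_of_date"].max())
```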

Ignoring loan segmentation differences

At the portfolio level, backtesting results might look fine until they’re broken down into individual loan segments. That’s where trouble spots tend to appear. It might be commercial real estate, indirect auto, or another niche that behaves differently than expected.

Tip: Always review model output against actual loss rates by loan segment. Even without the resources to dig into more granular breakdowns like geography or risk grade, segment-level analysis often reveals areas where model assumptions need attention.
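To make the idea concrete, here is a minimal sketch of a segment-level comparison. The figures and column names ("amortized_cost", "expected_loss", "actual_charge_offs") are illustrative placeholders; the real inputs would come from your CECL model output and charge-off history.

```python
# A minimal sketch of a segment-level backtest: expected vs. actual loss rates.
import pandas as pd

loans = pd.DataFrame({
    "segment": ["CRE", "CRE", "Indirect Auto", "Indirect Auto", "C&I"],
    "amortized_cost": [5_000_000, 3_000_000, 1_200_000, 800_000, 2_500_000],
    "expected_loss": [75_000, 45_000, 30_000, 20_000, 25_000],       # CECL estimate at start of window
    "actual_charge_offs": [40_000, 35_000, 45_000, 15_000, 10_000],  # net charge-offs realized over window
})

by_segment = loans.groupby("segment")[["amortized_cost", "expected_loss", "actual_charge_offs"]].sum()
by_segment["expected_loss_rate"] = by_segment["expected_loss"] / by_segment["amortized_cost"]
by_segment["actual_loss_rate"] = by_segment["actual_charge_offs"] / by_segment["amortized_cost"]
by_segment["variance"] = by_segment["expected_loss_rate"] - by_segment["actual_loss_rate"]

# Portfolio-level results can look fine while one segment (here, Indirect Auto)
# runs well above its estimate.
print(by_segment.round(4))
```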

Overlooking the impact of macroeconomic assumptions

When models include economic forecasts or qualitative overlays, those assumptions should be part of the backtesting analysis. Skipping a review of macroeconomic assumptions in your model is a missed opportunity to understand what’s really driving results.

Tip: During backtesting, step back and look at how macro assumptions held up. If the model expected unemployment to rise and it didn’t, or vice versa, what was the impact? These insights often lead to meaningful refinements.
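As a simple illustration, comparing a forecast path against realized values might look like the sketch below. The unemployment figures are placeholders, not real data.

```python
# A minimal sketch comparing a forecast macro path against observed values
# over the backtest window, using illustrative placeholder numbers.
import pandas as pd

macro = pd.DataFrame({
    "quarter": ["2024Q1", "2024Q2", "2024Q3", "2024Q4"],
    "forecast_unemployment": [4.2, 4.5, 4.8, 5.0],  # assumed in the CECL model
    "actual_unemployment": [3.9, 3.8, 4.0, 4.1],    # realized values
})
macro["forecast_miss"] = macro["forecast_unemployment"] - macro["actual_unemployment"]

# A persistently positive miss suggests the model assumed more economic stress
# than materialized, which may explain allowances running above actual losses.
print(macro)
print(f"Average forecast miss: {macro['forecast_miss'].mean():.2f} percentage points")
```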

Failing to document and act on findings

One of the biggest gaps isn’t in the analysis itself – it’s in what happens after. Some institutions run the numbers, find discrepancies, and then... do nothing. Either the findings aren’t documented properly, or the follow-up just doesn’t happen. Failing to act on insights can undermine a model’s credibility and regulatory standing.

Tip: Create a process for documenting everything, including what was tested, what was found, and any model changes made (or why no changes were made). As CECL governance matures, it is increasingly important to set clear thresholds for when a model change is required and when holding steady is reasonable. Examiners want to see thoughtful, consistent decision-making in model documentation.
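One way to make those thresholds operational is to encode them alongside the backtest results. The sketch below assumes a hypothetical 25-basis-point tolerance purely for illustration; the actual threshold and escalation steps should come from your institution's own model governance policy.

```python
# A minimal sketch of a documented escalation rule: segments whose estimate
# misses actual losses by more than a set tolerance are flagged for review.
import pandas as pd

results = pd.DataFrame({
    "segment": ["CRE", "Indirect Auto", "C&I"],
    "expected_loss_rate": [0.0150, 0.0250, 0.0100],
    "actual_loss_rate": [0.0094, 0.0300, 0.0040],
})

TOLERANCE = 0.0025  # 25 bps; hypothetical value, set and documented by the institution

results["variance"] = results["expected_loss_rate"] - results["actual_loss_rate"]
results["action"] = results["variance"].abs().apply(
    lambda v: "Review model assumptions" if v > TOLERANCE else "Document: within tolerance"
)
print(results)
```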

Relying on small sample sizes

Many of the community financial institutions Abrigo’s Advisory team works with have very few charge-offs in their portfolios. That’s great from a credit standpoint, but it makes backtesting more difficult. Drawing conclusions from limited data can lead to misleading results.

Tip: When CECL data is sparse, try expanding the historical window or using peer data for additional context. If qualitative factors (qualitative adjustments to the CECL calculation) are needed, make sure the rationale is well-documented and tied to what the data is showing.
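For example, a quick sensitivity check on the lookback window can show how much the historical loss rate depends on the period chosen. The sketch below uses made-up quarterly figures and a simple annualization; it is meant only to illustrate the idea of widening the window when losses are sparse.

```python
# A minimal sketch of widening the lookback window when charge-offs are sparse,
# using made-up quarterly history with hypothetical columns.
import pandas as pd

history = pd.DataFrame({
    "quarter": pd.period_range("2015Q1", "2024Q4", freq="Q").astype(str),
    "net_charge_offs": [0] * 12 + [18_000] + [0] * 12 + [9_000] + [0] * 14,
    "average_balance": [40_000_000] * 40,
})

def annualized_loss_rate(df: pd.DataFrame, lookback_quarters: int) -> float:
    """Annualized net charge-off rate over the most recent N quarters."""
    window = df.tail(lookback_quarters)
    return window["net_charge_offs"].sum() / window["average_balance"].mean() * (4 / lookback_quarters)

# A short window may show zero losses; a longer window captures more loss events.
for quarters in (8, 20, 40):
    print(f"{quarters}-quarter window: {annualized_loss_rate(history, quarters):.4%}")
```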

Improve CECL model accuracy

Backtesting isn’t about achieving perfection. Results often don’t match forecasts exactly, and that’s perfectly fine. In many cases, institutions land on the conservative side, with allowances exceeding actual losses. That can be entirely appropriate when supported by good documentation and sound reasoning, such as economic uncertainty or limited data.

Ultimately, the value of backtesting lies in the insights it provides for your CECL allowance. It reveals how the model is performing, supports stronger governance, and improves conversations with both internal stakeholders and regulators. Done thoughtfully, it becomes more than just a compliance step. It becomes a tool for building confidence in the CECL model and ensuring the model reflects the true nature of a financial institution’s credit risk.

This blog was written with the assistance of ChatGPT, an AI large language model. It was reviewed and revised by Abrigo's subject-matter expert for accuracy and additional insight.

Learn how Abrigo Advisors can boost your confidence in your CECL allowance calculation

CECL & stress testing consulting
About the Author

Regan Camp

Vice President, Portfolio Risk Sales and Services
Regan Camp is Abrigo’s Vice President of Portfolio Risk Sales and Services, leading a team of subject matter experts who assist financial institutions in accurately interpreting and applying federal accounting guidance. He began his career in financial services as a commercial loan officer at a $2.1 billion institution.

Full Bio

About Abrigo

Abrigo enables U.S. financial institutions to support their communities through technology that fights financial crime, grows loans and deposits, and optimizes risk. Abrigo's platform centralizes the institution's data, creates a digital user experience, ensures compliance, and delivers efficiency for scale and profitable growth.

Make Big Things Happen.