What to Do When Your Credit Risk Model Works Today, but Breaks Six Months Later
towardsdatascience.com

Credit risk modeling has a tricky secret. Organizations deploy models that achieve 98% accuracy in validation, then watch them quietly degrade in production. The team calls it “concept drift” and moves on. But what if this isn’t a mysterious phenomenon? What if it’s a predictable consequence of how we optimize?

I started asking this question after watching another production model fail. The answer led somewhere unexpected: the geometry we use for optimization determines whether models stay stable as distributions shift. Not the data. Not the hyperparameters. The space itself.

I realized that credit risk is fundamentally a ranking problem, not a classification problem. You don’t need to predict “default” or “no default” with 98% accuracy. You need to order borrowers by risk: is Borrower A riskier than Borrower B?
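The teaser cuts off here, but the ranking framing is easy to demonstrate. Below is a minimal sketch of my own, not from the article, using synthetic data and scikit-learn, with a uniform monotone shift in scores standing in for drift. The point: a shift like this wrecks accuracy at a fixed cutoff while leaving ROC-AUC, a pure ranking metric, unchanged, because AUC depends only on the ordering of scores.

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)

# Synthetic borrowers: 1 = default, 0 = repaid.
y = rng.binomial(1, 0.2, size=5_000)

# Model scores: defaulters score higher on average.
scores = rng.normal(loc=np.where(y == 1, 0.7, 0.3), scale=0.15)

# Simulate drift as a monotone shift in the score distribution
# (e.g., the whole portfolio looks riskier after a downturn).
drifted = scores + 0.2

threshold = 0.5  # fixed cutoff chosen at validation time
for name, s in [("validation", scores), ("production (drifted)", drifted)]:
    acc = accuracy_score(y, s >= threshold)  # classification view
    auc = roc_auc_score(y, s)                # ranking view
    print(f"{name:>22}: accuracy={acc:.3f}  AUC={auc:.3f}")
```

Running this, accuracy drops from roughly 0.91 to roughly 0.60 after the shift, while AUC is identical in both rows: the model still orders borrowers correctly even though the fixed threshold no longer means what it did at validation time.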
