Credit risk: best practices for predicting future risks

In today’s uncertain times, credit risk managers are under increasing pressure to provide robust, forward-looking insights on counterparties. Fitch Solutions explores the key pain points in the process and crucial steps to improving data quality.

As financial institutions grapple with heightened uncertainty, credit risk managers are under more pressure than ever to provide credible assessments of what might happen next. Amid a volatile market outlook, questions around the direction of monetary policy and ongoing regulatory change mean fulfilling this aim has rarely been more difficult. Robust, high-quality forward-looking data is critical to modelling future scenarios. However, a recent Fitch Solutions survey of risk managers worldwide found that accessing and acting on this data remains a formidable challenge – particularly with conditions and demands constantly shifting.

“When it comes to forward-looking data, the responsibility of risk managers extends far beyond the credit portfolio,” says Paul Whitmore, global head of counterparty risk solutions at Fitch Solutions. “They’re looking at market risk, operational risk, reputational risk and other assets. Institutions such as the IFRS [International Financial Reporting Standards] Foundation are putting more emphasis on timely financial reporting and meaningful assessments of forward-looking credit data. So people recognise the need to be better at this, but there are limits to what they can do. It’s a slow-turning ship.”

For all the complexity, Whitmore believes there are clear steps risk managers can take to resolve some of the struggles around forward-looking information. Often a more sustainable approach is the result of better prioritisation and learning to live with limitations – as well as recognising the powerful role historical data can play in carving out insights on future trends.

Getting comfortable with complexity
The latest edition of Fitch Solutions’ Credit risk survey highlighted a number of barriers that credit risk professionals face when sourcing and using forward-looking data. Reliance on assumptions and uncertainty about the future topped the list, with nearly three-quarters of respondents identifying it as a frequently encountered barrier.

This was followed by the reluctance of counterparties to share competitively advantageous or sensitive information – an issue Whitmore says is unlikely to be resolved any time soon.

“People know how precious data is now, and institutions aren’t necessarily going to put it in the public domain for others to pick up and commercialise,” he explains. “A securities firm or a broker-dealer may ask their counterpart to sign a non-disclosure agreement. They don’t want information to leave the risk management office or to be used in advanced models because they’re worried about their competitive advantage if a competitor gets a better understanding of sensitive parts of their business. From a bank’s point of view, it’s a double-edged sword: if they can’t assess an organisation’s credit because they’re reluctant to share the information, they’ll look to other counterparties.”

As regulators demand more granular projections, the complexity of calculations has emerged as another pain point. As with other barriers, this is likely to be felt most keenly by small and medium-sized institutions, Whitmore says, despite efforts by the Basel Committee on Banking Supervision to “level the playing field” by discouraging large banks from using more advanced – and often less transparent – rating models.

“Big banks have the resources, experience and depth of data,” he says. “They have more direct experience of corporates defaulting than the ratings agencies, and are always trying to see what’s around the corner, whether they could implement solutions such as artificial intelligence and machine learning to get the early warning signals that can show credit declining quicker than financial statements reflect.”
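To make the idea concrete, the sketch below shows one simple form such an early-warning signal could take – even before any machine learning, a plain statistical rule such as a rolling z-score on daily CDS spreads can flag deterioration between reporting dates. The data, window and threshold are all illustrative assumptions, not any institution’s actual model.

```python
# Hedged sketch: a crude market-based early-warning flag.
# The spread series, 60-day window and 3-sigma threshold are
# invented for illustration, not Fitch Solutions methodology.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Illustrative daily CDS spreads (basis points) for one counterparty.
dates = pd.bdate_range("2024-01-01", periods=250)
spreads = pd.Series(100 + np.cumsum(rng.normal(0, 2, len(dates))),
                    index=dates, name="cds_bps")

# Flag days where the spread sits more than three trailing standard
# deviations above its 60-day mean -- a signal that can react faster
# than quarterly financial statements.
rolling = spreads.rolling(60)
zscore = (spreads - rolling.mean()) / rolling.std()
alerts = spreads[zscore > 3]

print(f"{len(alerts)} early-warning day(s) out of {len(spreads)}")
```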

Smaller institutions, by contrast, are “still trying to get to grips with the traditional ways of doing things – getting data, and making comparisons quickly and efficiently to get the best outcomes,” Whitmore adds. “In some cases they’re still inputting data manually. It’s a really difficult cycle to break because they’re never going to have the resources or economies of scale of the larger banks that can employ the latest technology, or present counterparties with more enticing offers.”

Choosing your battles
Nonetheless, even smaller banks can adopt a few principles that head off many of the problems with forward-looking data. The first is to reduce complexity and manual data-gathering wherever possible – which often comes down to ensuring calculations are fit for purpose.

“Calculations can be as complex or as simple as they need to be,” says Whitmore. “When dealing with a large institution there may not be a need to spend eight hours analysing all the risks because a lot of those are likely to be small. Simply ensuring it has adequate capital for an unexpected turn of events can be enough.”
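As an illustration of a calculation kept only as complex as it needs to be, the sketch below applies the textbook expected-loss identity, EL = PD × LGD × EAD, to a single exposure and runs a simple buffer check. Every figure, including the buffer multiple, is invented for illustration and is not a regulatory test.

```python
# Hedged sketch: the standard expected-loss identity with an
# illustrative adequacy check. All numbers here are made up.

def expected_loss(pd_1y: float, lgd: float, ead: float) -> float:
    """One-year expected loss: EL = PD x LGD x EAD."""
    return pd_1y * lgd * ead

ead = 10_000_000      # exposure at default -- illustrative
pd_1y = 0.02          # one-year probability of default -- illustrative
lgd = 0.45            # loss given default -- illustrative

el = expected_loss(pd_1y, lgd, ead)
buffer = 1_500_000    # capital held against this exposure -- illustrative

print(f"Expected loss: {el:,.0f}")
# The 10x multiple is an arbitrary illustration, not a regulatory rule.
print("Buffer looks adequate" if buffer > 10 * el else "Review exposure")
```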

Another best practice is to concentrate on data that is significant, rather than gathering as much as possible or combing through every detail – especially given the rate at which data is multiplying. Financial statements are a prime example. One survey respondent told Fitch: “We get a lot of qualitative information from earnings releases, [but] it’s not standardised, and there’s often supplemental data. It’s painful to get. It’s manual and time-consuming.”

“There’s a real need for people to contribute data in a way it can be easily grasped and acted on,” says Whitmore. “We try to look at different financial statements and bring them together in a meaningful way for comparison purposes and make that available to the market.”
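A minimal sketch of that kind of standardisation is shown below: differently labelled statement fields are mapped onto one common schema so counterparties can be compared side by side. The field names and synonym map are assumptions for illustration, not Fitch’s actual taxonomy.

```python
# Hedged sketch: normalising financial-statement labels for comparison.
import pandas as pd

# Map each label variant onto one common schema -- illustrative only.
SYNONYMS = {
    "total revenue": "revenue",
    "net sales": "revenue",
    "turnover": "revenue",
    "net income": "net_income",
    "profit for the year": "net_income",
}

def standardise(statement):
    """Keep known fields, renamed onto the common schema; drop the rest."""
    return {SYNONYMS[k.lower()]: v
            for k, v in statement.items()
            if k.lower() in SYNONYMS}

filings = [
    {"Total Revenue": 120.0, "Net Income": 9.5},       # US-style labels
    {"Turnover": 95.0, "Profit for the year": 7.1},    # UK-style labels
]
print(pd.DataFrame([standardise(f) for f in filings]))
```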

In the scramble to find forward-looking information, Whitmore notes that risk managers shouldn’t lose sight of historical data. The right historical data resources can shed more light on future risks than is commonly appreciated, even – perhaps especially – in times of distress.

“Regulators aren’t that interested in a single point in time – a share price going up or down, or a corporate that could be at risk of default one day and not the next,” he says. “Debt instruments are bought for the long term, so the deeper the history, the more economic cycles you’re drawing on, the more robust your credit models become. History teaches us that there are peaks and troughs, so historical information can help predict future moves.”
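The sketch below illustrates why depth of history matters: a long-run, “through-the-cycle” average default rate is far less sensitive to where you sit in the cycle than an estimate drawn from the last few years alone. The annual default rates are invented for illustration.

```python
# Hedged sketch: short-window versus full-cycle default-rate estimates.
import numpy as np

# Fifteen years of invented annual default rates spanning two downturns.
annual_default_rates = np.array([
    0.012, 0.015, 0.031, 0.048, 0.022,
    0.011, 0.009, 0.010, 0.014, 0.038,
    0.025, 0.012, 0.010, 0.011, 0.013,
])

recent = annual_default_rates[-3:].mean()        # last three years only
through_the_cycle = annual_default_rates.mean()  # full history

print(f"Three-year PD estimate: {recent:.2%}")
print(f"Full-cycle PD estimate: {through_the_cycle:.2%}")
```

On this invented data the three-year estimate understates the full-cycle average by more than a third, which is the kind of gap Whitmore’s point about peaks and troughs describes.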

When assessing a historical data source, however, it is important to consider whether it’s truly comprehensive. Often the emphasis is on larger institutions or corporates, when information on trends in other tiers of the market can provide more insight and a fuller picture – and help satisfy regulatory demands.

“Regulators don’t want to see that you’ve assessed a large bank using one method, but another one based on completely different processes or outputs,” Whitmore says. “This is why we try to ensure a level of consistency, using the same measures and due diligence for each institution. Credit risk professionals value this, because they understand the more data points you’ve got, the deeper the history – even on more obscure institutions – the more accurate their models can be. We’ve assembled an extensive breadth and a deep history of data, and that combination is really the foundation to sourcing and delivering better forward-looking information.”
