How AI Could Spark the Next Financial Crisis

The prospect of artificial intelligence has long been visible at the edges of the cultural horizon. For decades, the promises and pitfalls of AI have been entertained in sci-fi films and literature, serving as captivating thought experiments about how a digital intelligence could help or hurt humanity.

But over the last two years, the utility of AI in the real world has grown exponentially. Generative AI, which creates text, sound, images or other media in response to text inputs, has surged in popularity.

Text-to-image AI has been used to win art competitions. The Beatles will soon release one final song, using artificial intelligence to stitch together old audio. “Deepfake” videos can now be forged to imitate the likeness and sound of prominent figures. And the disruptions that OpenAI’s ChatGPT and other large language models, or LLMs, will bring are too numerous to list; OpenAI claims GPT-4 can score in the 89th percentile or better on the bar exam and the reading and math portions of the SAT. Companies are already working LLMs into their day-to-day operations, and tasks in marketing, customer service, and website creation and design are being farmed out to these new systems.

Given the huge steps AI has taken in such a short period, it’s worth considering what kind of risks the technology poses to one of the most fundamental and important parts of modern society: the financial system.

Could AI Spark the Next Financial Crisis? The SEC Chair Thinks So

Securities and Exchange Commission Chair Gary Gensler is already thinking about the knock-on effects that AI could have on the financial system.

Gensler worried openly in May about AI’s potential to induce a crisis, saying that a future financial crisis could be sparked “because everything was relying on one base level, what’s called (the) generative AI level, and a bunch of fintech apps are built on top of it.” He went on to call the technology a potential systemic risk.

“Gensler’s concern about ‘one base level’ of AI is absolutely legitimate, not just in finance, but across the board throughout society,” says Richard Gardner, CEO of Modulus, a financial technology firm.

“In terms of finance, if there’s one AI operator that stands out above the rest – an industry standard that most firms use – then if that system is breached, or if there are innate flaws within its programming, there could be widespread implications that eventually create a domino-like catastrophe,” Gardner says.

Other AI Risks to Financial Stability

Of course, to limit the potential of an AI-sparked crisis to merely one root cause is a failure of imagination. Even if Gensler’s “base level” concern is proactively avoided, other credible threats loom, such as:

The end of encryption. Encryption is the cornerstone of modern digital security and commerce, and it allows users to safely and securely engage in activities like online banking, e-commerce, messaging and numerous other day-to-day activities most people take for granted. AI could conceivably play a role in eventually breaking encryption and permanently destroying trust on the internet, though such a feat may also require other technology, such as quantum computers.

Fake news and deepfake risk. AI’s ability to generate audio is already good enough to make convincing imitations of artists like Kanye West and The Weeknd. The technology’s use in politics is still in its infancy; in April, the GOP released an attack ad against President Joe Biden made entirely with AI-generated images. The ability to create and disseminate fake media about world leaders or market-moving events is available today.

Malicious actors have used this playbook before, without the help of AI: In 2013, the Associated Press Twitter account was hacked and used to tweet “news” that two explosions had rocked the White House, injuring then-President Barack Obama. Markets briefly crashed before quickly recovering when the tweet proved false.

It’s not just geopolitical fake news that could spark a sell-off: Nefarious actors could use AI-generated content and social media to attack individual companies with a deluge of phony, negative stories and rumors.

Mass phishing attempts. With technologies like ChatGPT already excelling at tasks like creating boilerplate marketing emails, bad actors could use AI to spam out ever-more convincing “phishing” emails, impersonating a financial institution and asking for victims’ sensitive personal information or passwords. This could result in catastrophic financial outcomes for a relatively small portion of society and sow distrust among a larger segment of the population.

Disastrous AI trading systems. “Similar to other computer trading systems, AI-driven trading algorithms can be susceptible to errors or bugs, which, if not properly identified and controlled, can lead to erroneous trading decisions that theoretically could snowball into a flash crash or contagion,” says Michael Ashley Schulman, partner and chief investment officer at Running Point Capital Advisors, a multifamily wealth management firm in El Segundo, California. A toy sketch of how such an error can feed on itself appears after this list.

Vulnerabilities in systemically important companies. Should major investment banks like JPMorgan Chase & Co. (ticker: JPM) or Goldman Sachs Group Inc. (GS) deploy AI trading techniques that go haywire and lose hundreds of billions of dollars, repercussions could quickly ripple across the rest of the economy.
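To make Schulman’s snowball scenario concrete, here is the toy sketch promised above, written in Python. Everything in it is hypothetical: the sell-on-a-downtick rule, the market-impact constant and the prices are invented purely for illustration, and no real trading system is this crude. The point is only to show how an automated rule that reacts to price moves caused by its own orders can turn a small dip into a collapse.

```python
# Hypothetical toy model of a flash-crash feedback loop. The trading rule,
# the impact constant and the prices are all invented for illustration.

price = 100.0   # current price of a fictional asset
prev = 100.5    # previous price; seeds a small initial downtick

for tick in range(20):
    move = price - prev
    # Naive rule: the faster the price fell last tick, the more we sell.
    sell_qty = max(0.0, -move) * 40.0
    # Market impact: our own selling pushes the price down further, which
    # this same rule then reads as a reason to sell even harder next tick.
    prev, price = price, price - sell_qty * 0.05
    print(f"tick {tick:2d}: price {price:9.2f}")
    if price <= 0:
        print("Price collapsed: the algorithm amplified its own signal.")
        break
```

Real markets have circuit breakers, position limits and human oversight precisely to damp this kind of loop; Schulman’s point is that such errors have to be identified and controlled before they compound.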

Existing Uses of AI in the Stock Market

Nightmare scenarios aside, it’s important to remember that “artificial intelligence” is an expansive term covering a lot of ground – and it’s already being broadly deployed on Wall Street.

“Much of the public stock market is already dictated by incredibly sophisticated hedge funds that use incredibly complex systems that rely on AI and machine learning models to inform their trading strategies,” says Vince Lynch, CEO and founder at IV.AI.

In other words, the trading capabilities unleashed by AI have been here for some time – and for some of the firms that embrace this technology, the results have been fantastic.

Famously, the Medallion Fund, operated by mathematician Jim Simons’ firm Renaissance Technologies, routinely wallops the market using quantitative trading techniques fed by massive data sets and rules-based algorithmic trading. Between 1988 and 2018, the Medallion Fund returned 63.3% annually.
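For a sense of what that figure implies, here’s a quick back-of-the-envelope compounding check, sketched in Python with a hypothetical $1 stake (real investors’ results would differ, not least because of fees and the fund’s famously capped capacity):

```python
# Back-of-the-envelope: what a 63.3% annual return compounds to over the
# 30 years from 1988 to 2018. Hypothetical $1 stake; ignores fees and the
# fund's capacity limits, so this is arithmetic, not investment history.
annual_return = 0.633
years = 30
growth = (1 + annual_return) ** years
print(f"$1 grows to roughly ${growth:,.0f}")  # on the order of $2.5 million
```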

Bridgewater Associates, the world’s largest hedge fund, also uses complex algorithmic rules as a basis for many of its investment decisions.

Outside of hedge funds, some robo advisors use AI in making asset allocation decisions, and a handful of funds including ETFMG’s AI Powered Equity ETF (AIEQ) and the Qraft AI-Pilot US Large Cap Dynamic Beta and Income ETF (AIDB) currently use AI to make all fund investment decisions.

What Can Be Done to Address the Risks?

AI in various iterations has been a part of market dynamics for a while, and for whatever it’s worth, it hasn’t destroyed the financial system yet. That said, automated trading programs aren’t immune from introducing major risk into the market: Both the 1987 crash and the 2010 flash crash were caused in part by such programs. But it’s not trading algorithms themselves that keep Gensler up at night. It’s the other ways AI could take part in a crisis, some of which may be fundamentally unpredictable.

The best way to proactively address this, says Gardner, “is through regulatory guidelines. And, if Congress and other regulators fail to act, it could very well be the SEC that steps in and develops strict guidelines for the use of AI in finance.”

Of course, limiting the scope of oversight to the field of finance may miss the forest for the trees. “Robust risk management frameworks and stress-testing procedures should be in place to assess the potential systemic impact of AI technologies,” says Schulman, who worries about errors in popular AI models propagating.

While it’s possible that these approaches can ultimately reduce risk, developing and implementing a regulatory framework in such a fast-moving industry may prove to be a Sisyphean task. But at least one regulator is thinking about it.