VoxEU Column | Financial Regulation and Banking | Productivity and Innovation

How AI can undermine financial stability

As artificial intelligence makes inroads into the financial system, it exacerbates existing channels of instability and creates new ones. This column identifies several such channels: malicious and misinformed use, misalignment and the evasion of control, and risk monoculture and oligopolies. All arise when AI vulnerabilities interact with economic fragilities such as strategic complementarities, incentive problems, and incomplete contracts.

The rapidly growing use of artificial intelligence (AI) promises much-improved efficiency in the delivery of financial services, but at the cost of new threats to financial stability.

While there is no single notion of what AI is, it is helpful to see it as a computer algorithm performing tasks usually done by humans. AI differs from machine learning and traditional statistics in that it not only provides quantitative analysis but also gives recommendations and makes decisions. Russell and Norvig (2021) list a number of possible definitions of AI. Among these, AI as a rational maximising agent resonates with the economic notion of utility-maximising agents and hence is particularly helpful in analysing AI use in the financial system.

AI is seeing widespread use in the financial system. The private sector applies AI to tasks such as risk management, asset allocation, credit decisions, fraud detection, and regulatory compliance. The financial authorities are already employing AI for low-level analysis and forecasting, and we expect them to expand its use further to the design of financial regulations, the monitoring and enforcement of regulations, the identification and mitigation of financial instability, and advice on resolving failing institutions and crises.

While the increased use of AI will be broadly beneficial, improving the delivery of financial services and the efficiency of financial regulations, AI also creates new avenues of instability. Identifying those channels motivates our work in Danielsson and Uthemann (2024), which builds on the existing work on AI safety (Weidinger et al. 2022, Bengio et al. 2023, Shevlane et al. 2023) identifying societal risks arising from AI use, including malicious use, misinformation, and loss of human control. We augment those with sources of fragility recognised in the economic literature, such as incentive problems, incomplete contracting, and strategic complementarities.

It is the vicious interaction of the AI and economic instability channels that is the biggest concern about AI use in the financial system.

Malicious use of AI

The first channel is the malicious use of AI by its human operators, a particular concern in the financial system because it is replete with highly resourced profit-maximising economic agents not too concerned about the social consequences of their activities. Such agents can bypass controls and change the system in a way that benefits them and is difficult for competitors and regulators to detect. They may even deliberately create market stress, which is highly profitable for those forewarned.

These agents either directly manipulate AI engines or use them to find loopholes to evade control. Both are easy in a financial system that is effectively infinitely complex.

Such activities can be socially undesirable and may even run against the interests of the institution employing the operator of the AI engine.

We expect the most common malicious use of AI to be by employees of financial institutions who are careful to stay on the right side of the law. AI will also likely facilitate illegal activity by rogue traders and criminals, as well as by terrorists and nation-states aiming to create social disorder.

Misinformed use of and overreliance on AI

The second channel emerges when the users of AI are both misinformed about its abilities and strongly dependent on it. This is most likely when data-driven algorithms, such as those used by AI, are asked to extrapolate to areas where data are scarce and objectives unclear, which is very common in the financial system.

AI engines are designed to provide advice even when they have very low confidence about the accuracy of their answer. They can even make up facts or present arguments that sound plausible but would be considered flawed or incorrect by an expert, both instances of the broader phenomenon of ‘AI hallucination’.

The risk is that AI engines will present confident recommendations about outcomes they know little about. To overcome that, the engines will have to provide an assessment of the statistical accuracy of their recommendations. Here, it will help if the authorities overcome their frequent reluctance to adopt consistent quantitative frameworks for measuring and reporting the statistical accuracy of their data-based inputs and outputs.
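As a stylised illustration of what such reporting could look like, the sketch below pairs a point recommendation with a bootstrap measure of its accuracy and withholds the recommendation when the uncertainty is too wide to act on. The forecasting rule, data, and tolerance are hypothetical placeholders, not a description of any existing engine.

```python
# Minimal sketch: report a recommendation together with a bootstrap interval,
# and abstain when the interval is too wide. All values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def forecast(history):
    """Stand-in for an AI engine's point recommendation: here, simply the sample mean."""
    return history.mean()

def forecast_with_accuracy(history, n_boot=2_000, level=0.9):
    """Return the point forecast plus a bootstrap interval quantifying its accuracy."""
    boots = np.array([forecast(rng.choice(history, size=len(history), replace=True))
                      for _ in range(n_boot)])
    lo, hi = np.quantile(boots, [(1 - level) / 2, 1 - (1 - level) / 2])
    return forecast(history), (lo, hi)

history = rng.normal(0.02, 0.1, size=30)       # hypothetical, deliberately scarce data
point, (lo, hi) = forecast_with_accuracy(history)
if hi - lo > 0.05:                             # hypothetical tolerance for acting
    print(f"recommendation withheld: 90% interval [{lo:.3f}, {hi:.3f}] is too wide")
else:
    print(f"recommendation: {point:.3f}, 90% interval [{lo:.3f}, {hi:.3f}]")
```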

AI misalignment and evasion of control

The third channel emerges from the difficulties in aligning the objectives of AI with those of its human operators. While we can instruct AI to behave as we would, there is no guarantee it will actually do so.

It is impossible to pre-specify all the objectives AI has to meet. That is a problem because AI is very good at manipulating markets and, when incentivised by high-level objectives such as profit maximisation, is not concerned with the ethical or legal consequences of its actions unless explicitly instructed otherwise.

An example is AI collusion, as noted by Calvano et al. (2019), who find that independent reinforcement learning algorithms instructed to maximise profits quickly converge on collusive pricing strategies that sustain anti-competitive outcomes. It is much easier for AI than for humans to behave in this collusive way because such behaviour is both very complex and illegal: AI is much better at handling complexity and is unaware of legal nuances unless explicitly taught or instructed.
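To see how such collusion can emerge without any explicit instruction to collude, consider a minimal sketch in the spirit of that literature (not a reproduction of the cited results): two independent Q-learning agents repeatedly set prices in a logit-demand duopoly, each observing only last period's prices. All parameter values are hypothetical, and how far the learned prices end up above the one-shot Nash benchmark depends on the learning parameters and the length of training.

```python
# Toy sketch of algorithmic pricing with two independent Q-learners; illustrative only.
import numpy as np

rng = np.random.default_rng(0)

prices = np.linspace(1.0, 2.0, 5)     # discrete price grid (hypothetical)
n = len(prices)
c, a, mu = 1.0, 2.0, 0.25             # marginal cost, product quality, logit noise

def profits(i, j):
    """One-period profits when firm 1 charges prices[i] and firm 2 charges prices[j]."""
    p = np.array([prices[i], prices[j]])
    share = np.exp((a - p) / mu)
    share = share / (share.sum() + 1.0)    # outside option normalised to 1
    return (p - c) * share

# One Q-table per firm; the state is last period's price pair (firm 1, firm 2)
Q = [np.zeros((n, n, n)) for _ in range(2)]
alpha, gamma, eps = 0.1, 0.95, 0.1
state = (int(rng.integers(n)), int(rng.integers(n)))

for t in range(200_000):
    acts = [int(rng.integers(n)) if rng.random() < eps else int(np.argmax(Q[f][state]))
            for f in range(2)]
    reward = profits(acts[0], acts[1])
    next_state = (acts[0], acts[1])
    for f in range(2):
        target = reward[f] + gamma * Q[f][next_state].max()
        Q[f][state][acts[f]] += alpha * (target - Q[f][state][acts[f]])
    state = next_state

# Stage-game Nash prices on the grid, via best-response iteration, for comparison
bi, bj = 0, 0
for _ in range(50):
    bi = int(np.argmax([profits(i, bj)[0] for i in range(n)]))
    bj = int(np.argmax([profits(bi, j)[1] for j in range(n)]))

learned = [prices[int(np.argmax(Q[f][state]))] for f in range(2)]  # greedy play in last state
print("one-shot Nash prices on the grid:", prices[bi], prices[bj])
print("prices the learners settle on:   ", learned)
```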

Scheurer et al. (2023) provide an example of how an AI can spontaneously choose to violate the law in pursuit of profit. Using GPT-4 to analyse stock trading, they told their AI engine that insider trading was unacceptable. When they then gave the engine an illegal stock tip, it proceeded to trade on it and lied to its human overseers. Here, the AI is simply engaging in the same type of illegal behaviour that many humans have engaged in before.

The superior performance of AI can destabilise the system even when it is only doing what it is supposed to do. This is particularly problematic in times of extreme stress when the objective of financial institutions, and hence the AI working for them, is survival, amplifying existing destabilising behaviour such as flights to safety, fire sales, and investor runs.

More generally, AI will find it easy to evade oversight because it is very difficult to patrol a nearly infinitely complex financial system. The authorities have to contend with two opposing forces. AI will be very helpful in keeping the system stable but at the same time aids the forces of instability. We suspect the second factor dominates. The reason is that AI attempting to evade control only has to find one loophole to misbehave, while the supervisors not only need to find all the weak points but also monitor how AI interacts with each of them and then effectively implement corrective measures. That is a very difficult computational task, made worse by the private sector having access to better computational resources than the authorities.

The more we use AI, the more difficult the computational problem for the authorities becomes.

Risk monoculture and oligopolies

The final channel emerges because the business model of those companies designing and running AI engines exhibits increasing returns to scale, similar to what we see in cloud computing.

AI analytics businesses depend on three scarce resources: computers with the requisite GPUs, human capital, and data. Not only are all of these in short supply, but they also positively reinforce each other. An enterprise that controls the biggest share of each is likely to occupy a dominant position in the financial AI analytics business.

All of these push the AI industry towards an oligopolistic market structure dominated by a few large vendors. The end result is amplified procyclicality and more booms and busts, as reliance on the same AI engine drives multiple financial institutions to similar beliefs and actions, harmonising their trading activity.
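A stylised way to see the mechanism is to compare aggregate order flow when institutions run their own models with the case in which they all rely on a single shared engine. In the sketch below (all parameters hypothetical), each institution is equally accurate on average in both cases, but the shared-engine case produces far more extreme swings in aggregate demand because everyone's errors are perfectly correlated.

```python
# Toy sketch of risk monoculture: diverse models versus one shared engine.
import numpy as np

rng = np.random.default_rng(1)
n_inst, n_periods, noise = 50, 10_000, 1.0

fundamental = rng.normal(0.0, 1.0, n_periods)          # common shock each period

# Diverse models: each institution estimates the shock with its own error
own_signals = fundamental[:, None] + rng.normal(0, noise, (n_periods, n_inst))
diverse_demand = np.sign(own_signals).sum(axis=1)       # +1 buy / -1 sell per institution

# Monoculture: every institution uses the same engine, hence the same error
shared_signal = fundamental + rng.normal(0, noise, n_periods)
mono_demand = n_inst * np.sign(shared_signal)

# Same average accuracy per institution, but aggregate flows swing far more widely
print("std of aggregate demand, diverse models:", round(float(diverse_demand.std()), 1))
print("std of aggregate demand, shared engine: ", round(float(mono_demand.std()), 1))
```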

If the authorities also depend on the same AI engine for their analytics, which seems likely, they may not be able to identify the resulting fragilities until it is too late, because they are informed by an engine with the same view of the stochastic process of the financial system as the private firms that inadvertently caused the fragility.

In other words, the oligopolistic nature of the AI analytic business increases systemic financial risk.

It is a concern that, in the recent wave of data vendor mergers, neither the competition authorities nor the financial authorities appear to have fully appreciated the potential for oligopolistic AI technology to increase systemic risk.

Conclusion

Both the private and public sectors are rapidly expanding their use of AI due to the compelling efficiency and cost advantages it offers. Unfortunately, this increased use of AI also exacerbates existing channels of financial instability.

By mapping the societal threats identified by AI researchers onto fragilities documented in the economic literature, we identify four channels of instability: malicious use and misinformed use of AI, coupled with misalignment and the evasion of control, and amplified by risk monoculture and oligopolies.

While concerns about how AI can destabilise the financial system might make us careful in adopting it, we suspect they will not. Technology is often initially met with scepticism, but as it comes to be seen as performing better than what came before, it is increasingly trusted. AI earns trust by successfully executing the tasks best suited to it, those with ample data and immutable rules. The resulting cost savings then lead to it being used for increasingly critical and poorly suited tasks, those based on limited or even irrelevant historical data.

We do not want to overemphasise these issues. We suspect that the impact of AI on the financial system will be overwhelmingly positive.

However, the authorities must be alive to these threats and adapt regulations to meet them. The ultimate risk is that AI becomes both irreplaceable and a source of systemic risk before the authorities have formulated the appropriate response.

Authors’ note: Any opinions and conclusions expressed here are those of the authors and do not necessarily represent the views of the Bank of Canada.

References

Bengio, Y, G Hinton, A Yao et al. (2023), “Managing AI risks in an era of rapid progress”, arXiv preprint arXiv:2310.17688.

Danielsson, J (2023), “When artificial intelligence becomes a central banker”, VoxEU.org, 11 July.

Danielsson, J and A Uthemann (2024), “On the use of artificial intelligence in financial regulations and the impact on financial stability”.

Russell, S and P Norvig (2021), Artificial Intelligence: A Modern Approach, Pearson.

Calvano, E, G Calzolari, V Denicolò and S Pastorello (2019), “Artificial Intelligence, algorithmic pricing, and collusion”, VoxEU.org, 3 February.

Scheurer, J, M Balesni and M Hobbhahn (2023), “Technical report: Large language models can strategically deceive their users when put under pressure”.

Shevlane, T, S Farquhar, B Garfinkel et al. (2023), “Model evaluation for extreme risks”, arXiv preprint arXiv:2305.15324.

Weidinger, L, J Uesato, M Rauh et al. (2022), “Taxonomy of risks posed by language models”, in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pp. 214–229.