AI Could Become the ‘New Steel’ as Overcapacity Risk Goes Unnoticed
In the 19th century, government officials came to understand that steel would be essential to both economic growth and national security. Thus, they devised policies to sustain local production and to keep foreign producers from competing in domestic markets.
By the 1950s, the world had too much steel. Manufacturers increasingly replaced steel with materials such as plastics and aluminum. Nonetheless, policymakers in Japan, India, the U.S., and the EU maintained domestic capacity because steel remained essential for construction and military hardware.
Even today, governments around the world keep investing in steel even as demand continues to shrink. In 2023, the OECD predicted that overcapacity would worsen, creating difficult market conditions and exacerbating climate change.
While steel and AI could not be more different, many economists view AI as a general-purpose technology that, like steel, can stimulate both economic growth and innovation. Hence, policymakers feel they must ensure domestic capacity.
Moreover, many government officials already see AI as a critical technology for both national security and economic progress. A 2023 review of policies and programs reported to the OECD found that more than 60 countries are using taxpayer dollars to create, disseminate, or fund research on AI. That’s a lot of AI.
Policymakers in the U.S., Saudi Arabia, Japan, Germany, the U.K., and the EU recently announced huge public investments in AI, on top of large private-sector investments. The EU has provided $1 billion in funding each year for AI capacity-building since 2018. In March 2024, the Saudi government announced it would use some $40 billion of its $900 billion sovereign wealth fund, the Public Investment Fund, to invest in AI at home and abroad.
At the national level, these investments are understandable. But collectively, they could lead to overcapacity, a situation where the supply of AI exceeds demand.
Such overcapacity creates pitfalls in addition to the already well-known risks of AI, such as bias or inaccuracy. As nations seek to sustain domestic AI competitiveness and market share, some might dump excess capacity. That could make it easier for criminal elements or rogue agents to acquire the technology. Here, overcapacity could lead to political instability.
Moreover, AI producers need huge sums of capital to design, develop, and deploy these systems. To attract and sustain such investment, some firms or governments may choose to disregard guardrails—strategies designed to limit potential negative effects. Here, overcapacity may be correlated with untrustworthy or irresponsible AI.
Furthermore, as they compete with other nations, policymakers may hoard data or limit access to technologies, making it harder to cooperatively use AI to advance knowledge or collectively address wicked problems such as climate change. Here, overcapacity may be correlated with a lack of cooperation on the uses of AI.
Finally, there is an opportunity cost to over-investment in AI. Even without deliberate intent, such investment may come at the expense of other technologies and approaches to analyzing large pools of data.
Overcapacity is a normal problem in national and international economies: at times, supply outstrips demand. But when multiple governments intervene to create and sustain capacity, as they have done with steel and may now be doing with AI, the global spillover effects become hard to address. Policymakers should begin addressing this potential risk at existing international venues such as the G-7, the G-20, and the UN.
This commentary is by Susan Ariel Aaronson. She is a CIGI senior fellow and a professor of international affairs at George Washington University. She is also co-principal investigator with the National Science Foundation/National Institute of Standards and Technology Trustworthy AI Institute for Law and Society, where she leads research on data and AI governance.
This article was published by Fortune magazine.