Why the EU’s AI Act Works Against Itself in the Context of the Global Economy
On March 13, 2024, the European Parliament passed a comprehensive framework for regulating artificial intelligence (AI). The AI Act, as it is named, entered into force in August 2024 and will apply in stages over the following years. It will greatly impact nations trading with the European Union (EU), as it may prohibit the importation of certain AI products. Specifically, the policy defines four levels of risk for AI systems. AI used in medical devices or in critical infrastructure, such as transportation and energy networks, is considered “high risk” and is therefore subject to strict market regulation, while systems deemed to pose an “unacceptable risk” are banned outright. Given the role AI plays in the health, safety, and privacy of individuals, the EU has sought to restrict these products to avert potential threats. Though concerns about the long-term impacts of AI in daily use are longstanding, the AI Act locks into force a lasting piece of legislation that expects member states to comply with strict regulation, even as the EU relies heavily on AI products for economic growth.
In spite of the AI Act’s genuine intent to address serious and concrete threats to consumer safety, the legislation fails to account for the substantial role AI plays in the global economy. According to the International Monetary Fund, AI will affect roughly 60% of jobs in advanced economies. Automation, which reduces the need for human labor by streamlining business processes, already plays an integral role in the global workforce. Limiting AI’s role in trade will only hinder advancement in the fields the EU designates as “high risk,” placing the Union at a disadvantage relative to other economic giants. In China, for instance, a McKinsey report projects that using AI to predict “diagnostic outcomes and support clinical decisions could create about $5 billion in economic value in China.” The economic stimulus AI brings to “high risk” industries elsewhere suggests that banning its growth is not in the EU’s economic interest.
Not only will the AI Act’s risk-ranking regime stifle innovation in high-risk fields, but its penalties also threaten any company that ventures into cutting-edge AI technology: fines range from 15 million to 35 million euros, or up to 7% of global annual turnover, depending on the violation. Though some technology giants can afford these penalties, the technology practice of the law firm Linklaters cautioned that the act “is likely to impact many businesses, especially those developing AI systems but also those deploying or merely using them in certain circumstances.” Furthermore, the EU’s expectation that other nations will comply with its regulatory framework does not give EU-based AI companies an advantage, because the EU accounts for only a small share of the AI market’s emerging businesses. As of 2018, “the top three players (measured in terms of number of AI startups) are the United States with 1,393 startups (40%), China with 383 startups (11%), and Israel with 362 startups (11%).” This model of AI policy, however beneficial it may seem to the EU, would not serve the United States and China, which are locked in a geopolitical race to dominate the AI industry, and it ultimately leaves the EU behind in AI’s economic expansion.
Within the EU, the AI Act has also stirred political friction: global technology firms now spend an estimated 97 million euros annually on lobbying in Brussels. The Act’s impacts extend beyond trade, fueling unprecedented competition among lobbying groups in size, financial resources, and advocacy techniques. Looking inward, scholars suggest that the EU will lose its “competitive edge” in the broader automation of global trade, as it fails to recognize that, given advances in automation and growing global dependence on AI, countries like the U.S. and China are not only dominating this movement but advancing rapidly. In response to the AI Act, U.S. executives have warned that the legislation threatens the growth of the modern economy and have actively rejected the use of trade barriers.
Limiting the use of AI in its infancy does not give governments enough time to explore its potential or its capabilities across human industries. EU businesses claim that the AI Act “would jeopardize Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing.” The business community’s response is another reminder that the EU must adopt a human-centric approach to AI legislation and reject blunt regulations that disadvantage entire industries under risk regimes not carefully tailored to AI’s varied implementations. The AI Act, for example, bans biometric categorization systems and the untargeted scraping of facial images to create facial recognition databases. Yet biometric identification can be used effectively: the United States government used such technology in wartime to identify members of the Taliban and distinguish them from allies and civilians. By working with the businesses and governments that use AI in their daily and professional lives, the EU would be more likely to foster AI tools that are both efficient and safe.
The EU’s fear-mongering approach to regulating AI has ripple effects beyond its member states. The global economy, and the Union’s own trade relations, depend on the growth and development of AI products. A safer, more ethical, and more advanced AI ecosystem must originate within technology companies themselves, not through sweeping reforms that oblige other nations to comply with ineffective policies unsustainable for their economies. The EU has effectively forced other nations to follow its lead, even though a global consensus on AI policy has yet to be established. Looking ahead, building a global standard for which AI systems count as “high risk,” and for how humans should work alongside AI in automated processes, would create a genuinely human-centered approach. Coordinating with the United States’ existing, functioning AI regulation, while maintaining a competitive and open AI market, would better ensure a global standard for which AI products are beneficial.
Greta Herman is a sophomore at Barnard College studying political science and history. Greta can be reached at gmh2149@barnard.edu.