European Union AI regulation is both model and warning for U.S. lawmakers, experts say

Paige Gross
(Colorado Newsline)

Members of the group Initiative Urheberrecht (authors’ rights initiative) demonstrate to demand regulation of artificial intelligence on June 16, 2023 in Berlin, Germany. The AI regulation later adopted by the European Union is a model for many U.S. lawmakers interested in consumer protection but a cautionary tale for others who say they’re interested in robust innovation, experts say. (Photo by Sean Gallup/Getty Images)

The European Union’s landmark AI Act, which went into effect last year, stands as inspiration for some U.S. legislators looking to enact widespread consumer protections. Others cite it as a cautionary tale about overregulation leading to a less competitive digital economy.

The European Union enacted its law to prevent what is currently happening in the U.S., a patchwork of AI legislation across the states, said Sean Heather, senior vice president for international regulatory affairs and antitrust at the Chamber of Commerce, during an exploratory congressional subcommittee hearing on May 21.

“America’s AI innovators risk getting squeezed between the so-called Brussels Effect of overzealous European regulation and the so-called Sacramento Effect of excessive state and local mandates,” said Adam Thierer, a senior fellow at the think tank R Street Institute, at Wednesday’s hearing.

The EU’s AI Act is comprehensive, and puts regulatory responsibility on developers of AI to mitigate the risk of harm by their systems. It also requires developers to provide technical documentation and training summaries of their models for review by EU officials. Adopting similar policies would knock the U.S. out of its first-place position in the global AI race, Thierer testified.

The “Brussels Effect” Thierer mentioned is the idea that the EU’s regulations will influence the global market. But not much of the world has followed suit: so far, Canada, Brazil and Peru are working on similar laws, while the UK and countries like Australia, New Zealand, Switzerland, Singapore and Japan have taken a less restrictive approach.

When Jeff Le, founder of the tech policy consultancy 100 Mile Strategies LLC, talks to lawmakers on both sides of the aisle, he said, he hears that they don’t want another country’s laws deciding American rules.

“Maybe there’s a place for it in our regulatory debate,” Le said. “But I think the point here is American constituents should be overseen by American rules, and absent those rules, it’s very complicated.”

Does the EU AI Act keep Europe from competing?

Critics of the AI Act say its language is overly broad, which slows the development of AI systems as developers work to meet regulatory requirements. France and Germany rank in the top 10 global AI leaders and China ranks second, according to Stanford’s AI Index, but the U.S. currently leads by a wide margin in the number of leading AI models and in AI research, experts testified before the congressional committee.

University of Houston Law Center professor Peter Salib said he believes the EU’s AI Act is a factor, but not the only one, in keeping European countries out of the top spots. First, the law has only been in effect for about nine months, which isn’t long enough to have had much of an impact on Europe’s ability to participate in the global AI economy, he said.

Second, the EU AI Act is one piece of Europe’s overall attitude toward digital protection, Salib said. The General Data Protection Regulation, a law that went into effect in 2018 and gives individuals control over their personal information, reflects the same strict regulatory mindset.

“It’s part of a much longer-term trend in Europe that prioritizes things like privacy and transparency really, really highly,” Salib said. “Which is, for Europeans, good — if that’s what they want, but it does seem to have serious costs in terms of where innovation happens.”

Stavros Gadinis, a professor at the Berkeley Center for Law and Business who has worked in the U.S. and Europe, said he thinks most of the concerns about innovation in the EU stem from factors outside the AI Act. Europe’s tech labor market isn’t as robust as that of the U.S., and it can’t compete with the major financing accessible to Silicon Valley and Chinese companies, he said.

“That is what’s keeping them, more than this regulation,” Gadinis said. “That, and the law hasn’t really had the chance to have teeth yet.”

During the May 21 hearing, Representative Lori Trahan, a Democrat from Massachusetts, called the Republicans’ stance, that any AI regulation would kill tech startups and growing companies, “a false choice.”

The U.S. invests heavily in science and innovation, has founder-friendly immigration policies, lenient bankruptcy laws and a “cultural tolerance for risk taking,” all policies the EU does not offer, Trahan said.

“It is therefore false and disingenuous to blame EU’s tech regulation for its low number of major tech firms,” Trahan said. “The story is much more complicated, but just as the EU may have something to learn from United States innovation policy, we’d be wise to study their approach to protecting consumers online.”

Self-governance

The EU’s law puts a lot of responsibility on developers of AI, requiring transparency, reporting, third-party testing and copyright tracking. These are things AI companies in the U.S. say they already do, Gadinis said.

“They all say that they do this to a certain extent,” he said. “But the question is, how expansive these efforts need to be, especially if you need to convince a regulator about it.”

AI companies in the U.S. currently self-govern, meaning they test their models for some of the societal and cybersecurity risks outlined by many lawmakers. But there’s no universal standard: what one company deems safe may be seen as risky by another, Gadinis said. Universal regulations would create a baseline for introducing new models and features, he said.

Even one company’s safety testing may look different from one year to the next. Until 2024, OpenAI CEO Sam Altman supported federal AI regulation and sat on the company’s Safety and Security Committee, which regularly evaluates OpenAI’s processes and safeguards over a 90-day review period.

In September, he left the committee and has since spoken out against federal AI legislation. OpenAI’s safety committee has been operating as an independent entity, Time reported. The committee recently published recommendations to enhance security measures, be more transparent about OpenAI’s work and “unify the company’s safety frameworks.”

Even though Altman has changed his tune on federal regulation, OpenAI’s mission remains focused on the benefits society gains from AI. “They wanted to create [artificial general intelligence] that would benefit humanity instead of destroying it,” Salib said.

AI company Anthropic, maker of the chatbot Claude, was formed by former OpenAI staff members in 2021 and focuses on responsible AI development. Google, Microsoft and Meta are other top American AI companies with some form of internal safety testing, and all were recently assessed by the AI Safety Project.

The project asked experts to weigh in on each company’s strategies for risk assessment, current harms, safety frameworks, existential safety strategy, governance and accountability, and transparency and communication. Anthropic scored the highest, but all of the companies fell short on “existential safety,” the harm their AI models could cause to society if left unchecked.

Just by developing these internal policies, most AI leaders are acknowledging the need for some form of safeguards, Salib said.

“I don’t want to say there’s wide industry agreement, because some seem to have changed their tunes last summer,” Salib said. “But there’s at least a lot of evidence that this is serious and worthwhile thinking about.”

What could the U.S. gain from EU’s practices?

Salib said he believes a law like the EU AI Act would be “overly comprehensive” for the U.S.

Many of the AI concerns drawing legislation now, like algorithmic discrimination or self-driving cars, could be governed by existing laws, he said: “It’s not clear to me that we need special AI laws for these things.”

But he said the specific, case-by-case legislation that states have been passing has been effective in targeting harmful AI actions and ensuring compliance from AI companies.

Gadinis said he’s not sure why Congress is opposed to the state-by-state legislative model, as most of the state laws are consumer-oriented and very specific, like deciding how a state may use AI in education, preventing discrimination in healthcare data or keeping children away from sexually explicit AI content.

“I wouldn’t consider these particularly controversial, right?” Gadinis said. “I don’t think the big AI companies would actually want to be associated with problems in that area.”

Gadinis said the EU’s AI Act originally mirrored this specific, case-by-case approach, addressing AI concerns around sexual images, minors, consumer fraud and the use of consumer data. But when ChatGPT was released in 2022, EU lawmakers went back to the drawing board and added provisions on large language models, systemic risk, high-risk strategies and training, which greatly widened the pool of companies that needed to comply.

After 10 months of living with the law, the European Commission said this month that it is open to steps that would “simplify the implementation” and make it easier for companies to comply.

It’s unlikely the U.S. will end up with AI regulations as comprehensive as the EU’s, Gadinis and Salib said. President Trump’s administration has so far taken a deregulatory approach to tech, and Republicans passed a 10-year moratorium on state-level AI laws in the “big, beautiful bill” now heading to the Senate for consideration.

Gadinis predicts that the federal government won’t take much action at all to regulate AI, but that mounting public pressure may result in an industry self-regulatory body. This is where he believes the EU will be most influential: the bloc has leaned on public-private partnerships to develop its strategy.

“Most of the action is going to come either from the private sector itself — they will band together — or from what the EU is doing in getting experts together, trying to kind of come up with a sort of half industry, half government approach,” Gadinis said.