States strike out on their own on AI, privacy regulation
(Colorado Newsline) As congressional sessions have passed without any new federal artificial intelligence laws, state legislators are striking out on their own to regulate the technologies in the meantime.
Colorado just enacted one of the most sweeping AI regulatory laws in the country, which sets guardrails for companies that develop and use AI. Its focus is mitigating consumer harm and discrimination by AI systems, and Gov. Jared Polis, a Democrat, said he hopes the conversations will continue at the state and federal levels.
Other states, like New Mexico, have focused on regulating how computer-generated images can appear in media and political campaigns. Some, like Iowa, have criminalized sexually explicit computer-generated images, especially when they portray children.
“We can’t just sit and wait,” Delaware state Rep. Krista Griffith, D-Wilmington, who has sponsored AI regulation, told States Newsroom. “These are issues that our constituents are demanding protections on, rightfully so.”
Griffith is the sponsor of the Delaware Personal Data Privacy Act, which was signed last year and will take effect on Jan. 1, 2025. The law will give residents the right to know what information companies are collecting about them, correct any inaccuracies in that data or request to have it deleted. The bill is similar to other state laws around the country that address how personal data can be used.
There’s been no shortage of tech regulation bills in Congress, but none have passed. The 118th Congress saw bills relating to imposing restrictions on artificial intelligence models that are deemed high risk, creating regulatory authorities to oversee AI development, imposing transparency requirements on evolving technologies and protecting consumers through liability measures.
In April, a new draft of the American Privacy Rights Act of 2024 was introduced, and in May, the Bipartisan Senate Artificial Intelligence Working Group released a roadmap for AI policy that aims to support federal investment in AI while safeguarding against the technology's risks.
Griffith also introduced a bill this year to create the Delaware Artificial Intelligence Commission, and said that if the state stands idly by, it will fall behind on these already quickly evolving technologies.
“The longer we wait, the more behind we are in understanding how it’s being utilized, stopping or preventing potential damage from happening, or even not being able to harness some of the efficiency that comes with it that might help government services and might help individuals live better lives,” Griffith said.
States have been legislating about AI since at least 2019, but bills relating to AI have increased significantly in the last two years. From January through June of this year, there have been more than 300 introduced, said Heather Morton, who tracks state legislation as an analyst for the nonpartisan National Conference of State Legislatures.
Also so far this year, 11 more states have enacted laws to use, regulate or place checks on AI, bringing the total to 28 states with AI legislation.
How are everyday people interacting with AI?
Technologists have been experimenting with decision-making algorithms for decades; early frameworks date back to the 1950s. But generative AI, which can produce images, language and responses to prompts in seconds, is what has driven the industry in the last few years.
Many Americans have been interacting with artificial intelligence their whole lives, and industries like banking, marketing and entertainment have built much of their modern business practices upon AI systems. These technologies have become the backbone of huge developments like power grids and space exploration.
Most people are more aware of AI's smaller uses, like a company's online customer service chatbot, or asking their Alexa or Google Assistant devices for information about the weather.
Rachel Wright, a policy analyst for the Council of State Governments, pinpointed a potential turning point in the public consciousness of AI, which may have added urgency for legislators to act.
“I think 2022 is a big year because of ChatGPT,” Wright said. “It was kind of the first point in which members of the public were really interacting with an AI system or a generative AI system, like ChatGPT, for the first time.”
Competing interests: Industry vs. privacy
Andrew Gamino-Cheong co-founded the AI governance management platform Trustible early last year, just as states began to pump out legislation. The platform helps organizations identify risky uses of AI and comply with regulations that have already been put in place.
Both state and federal legislators understand the risk in passing new AI laws: too many regulations on AI can be seen as stifling innovation, while unchecked AI could raise privacy problems or perpetuate discrimination.
Colorado’s law is an example of this balance — it applies to developers of “high-risk” systems that make consequential decisions relating to hiring, banking and housing. It says these developers have a responsibility to avoid creating algorithms that could have biases against certain groups or traits. The law dictates that instances of this “algorithmic discrimination” need to be reported to the attorney general’s office.
At the time, Logan Cerkovnik, the founder and CEO of Denver-based Thumper.ai, called the bill “wide-reaching” but well-intentioned, saying his developers will have to think about how the major social changes in the bill are supposed to work.
“Are we shifting from actual discrimination to the risk of discrimination before it happens?” he added.
But Delaware’s Rep. Griffith said that these life-changing decisions, like getting approved for a mortgage, should be transparent and traceable. If she’s denied a mortgage due to a mistake in an algorithm, how could she appeal?
“I think that also helps us understand where the technology is going wrong,” she said. “We need to know where it’s going right, but we also have to understand where it’s going wrong.”
Some who work in the development of big tech see federal or state regulations of AI as potentially stifling to innovation. But Gamino-Cheong said he actually thinks some of this “patchwork” legislation by states could create pressure for some clear federal action from lawmakers who see AI as a huge growth area for the U.S.
“I think that’s one area where the privacy and AI discussions could diverge a little bit, that there’s a competitive, even national security angle, to investing in AI,” he said.
How are states regulating AI?
Wright published research late last year on AI’s role in the states, categorizing the approaches states were using to create protections around the technology. Many of the 29 laws enacted at that point focused on creating avenues for stakeholder groups to meet and collaborate on how to use and regulate AI. Others recognize possible innovations enabled by AI, but regulate data privacy.
Transparency, protection from discrimination and accountability are other major themes in the states’ legislation. Since the start of 2024, laws that touch on the use of AI in political campaigns, schooling, crime data, sexual offenses and deepfakes (convincing computer-generated likenesses) have been passed, broadening the scope of how a law can regulate AI. Now, 28 states have passed nearly 60 laws.
Here’s a look at where legislation stands in July 2024, in broad categorization:
Interdisciplinary collaboration and oversight
Many states have enacted laws that bring together lawmakers, tech industry professionals, academics and business owners to oversee and consult on the design, development and use of AI. Often taking the form of councils or working groups, these bodies watch for unintended, yet foreseeable, impacts of unsafe or ineffective AI systems. This includes Alabama (SB 78), Illinois (HB 3563), Indiana (S 150), New York (AB A4969, SB S3971B and A 8808), Texas (HB 2060), Vermont (HB 378 and HB 410), California (AB 302), Louisiana (SCR 49), Oregon (H 4153), Colorado (SB 24-205), Maryland (S 818), Tennessee (H 2325), Virginia (S 487), Wisconsin (S 5838) and West Virginia (H 5690).
Data Privacy
Second most common are laws that look at data privacy and protect individuals from misuse of consumer data. Commonly, these laws create regulations about how AI systems can collect data and what they can do with it. These states include California (AB 375), Colorado (SB 21-190), Connecticut (SB 6 and SB 1103), Delaware (HB 154), Indiana (SB 5), Iowa (SF 262), Montana (SB 384), Oregon (SB 619), Tennessee (HB 1181), Texas (HB 4), Utah (S 149) and Virginia (SB 1392).
Transparency
Some states have enacted laws that inform people that AI is being used. This is most commonly done by requiring businesses to disclose when and how it’s in use. For example, an employer may have to get permission from employees to use an AI system that collects data about them. These states have transparency laws: California (SB 1001), Florida (S 1680), Illinois (HB 2557), and Maryland (HB 1202).
Protection from discrimination
These laws often require that AI systems are designed with equity in mind, and avoid “algorithmic discrimination,” where an AI system can contribute to different treatment of people based on race, ethnicity, sex, religion or disability, among other things. Often these laws play out in the criminal justice system, in hiring, in banking or in other settings where a computer algorithm is making life-changing decisions. This includes California (SB 36), Colorado (SB 21-169), Illinois (HB 0053) and Utah (H 366).
Elections
Laws focusing on AI in elections have been passed in the last two years; they primarily either ban AI-generated messaging and images or require specific disclaimers about the use of AI in campaign materials. This includes Alabama (HB 172), Arizona (HB 2394), Colorado (SB 24-1147), Idaho (HB 664), Florida (HB 919), New Mexico (HB 182), Oregon (SB 1571), Utah (SB 131) and Wisconsin (SB 664).
Schools
States that have passed laws relating to AI in education mainly provide requirements for the use of AI tools. Florida (HB 1361) outlines how tools may be used to customize and accelerate learning, and Tennessee (S 1711) instructs schools to create an AI policy for the 2024-25 school year and describe how the board will enforce it.
Computer-generated sexual images
The states which have passed laws about computer-generated explicit images criminalize the creation of sexually explicit images of children with the use of AI. These include Iowa (HF 2240) and South Dakota (S 79).
Looking forward
While most of the AI laws enacted have focused on protecting users from the harms of AI, many legislators are also excited by its potential.
A recent study by the World Economic Forum found that artificial intelligence technologies could lead to the creation of about 97 million new jobs worldwide by 2025, outpacing the approximately 85 million jobs expected to be displaced by technology or machines.
Griffith is looking forward to digging more into the technologies’ capabilities in a working group, saying it’s challenging to legislate about technology that changes so rapidly, but it’s also fun.
“Sometimes the tendency when something’s complicated or challenging or difficult to understand is like, you just want to run and stick your head under the blanket,” she said. “But it’s like, everybody stop. Let’s look at it, let’s understand it, let’s read about it. Let’s have an honest discussion about how it’s being utilized and how it’s helping.”
Colorado Newsline is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501c(3) public charity. Colorado Newsline maintains editorial independence. Contact Editor Quentin Young for questions: info@coloradonewsline.com. Follow Colorado Newsline on Facebook and X.