[Image: An open hand facing up, glowing slightly from the palm, with the letters 'AI' floating above. © Shutthiphong Chandaeng - iStock-1452604857]

Colorado law on disclosing AI-generated political ads raises free speech concern

Joe Mueller

(The Center Square) – Democratic Attorney General Phil Weiser issued an advisory Monday on Colorado’s new “deepfake” law governing political messages, a statute that may raise First Amendment concerns.

Weiser’s two-page public advisory refers to House Bill 24-1147, which took effect July 1. It created new regulations and penalties for using artificial intelligence and deepfake-generated content in communications about candidates for elected office. The law requires anyone using AI to create election communications featuring images, videos or audio of candidates to include a disclaimer explaining the content isn’t real.

[Image: Phil Weiser - public domain]

Candidates who have their appearance, actions or speech depicted in a deepfake can pursue legal prohibition of the distribution, dissemination, publication, broadcast, transmission or other display of the communication. The bill provides for compensatory and punitive damages and the possibility of criminal charges.

"Much false speech is constitutionally protected," David Greene, senior staff attorney with the Electronic Frontier Foundation, said in an interview. "I don’t read this law as creating a category of speech that’s unprotected. But it’s a content-based law and will have to pass strict scrutiny because it is a restriction on otherwise protected speech.”

Jeffrey Schwab, senior counsel for the Liberty Justice Center, said the law is complicated: it doesn’t prohibit deepfakes outright and provides exceptions for news organizations and for content containing satire or parody.

“I think in general it might be OK except for the disclosure that not only requires that it’s generated by AI, but also must say it is false,” Schwab said. “I think that’s where the statute could be in some First Amendment trouble. Whether or not it was generated by AI or not, it may or may not be false. It might be false in that the person didn’t say the exact words, but those words could be true.”

[Image: Hands on a computer keyboard with simulated holographic images floating above, representing aspects of artificial intelligence. © Khanchit Khirisutchalual - iStock-1515913422]

Schwab gave a hypothetical example: distributing an AI-generated depiction of President Joe Biden stating that Democratic nominee Kamala Harris was the “border czar.”

“Even if that’s not something Joe Biden explicitly referred to Kamala Harris as, I think a pretty good case can be made the statement is true,” Schwab said. “But under the Colorado law, if someone were to generate AI of Biden saying Harris is the border czar, then the statute would apply. They would have to have a disclosure that says it’s not only generated by AI, but that it’s false.”

The law applies to communications to voters within 60 days of a primary election and within 90 days of a general election. Weiser recommended Coloradans check political communications for a deepfake disclosure and verify “through trusted sources” whether a questionable communication includes a deepfake.

“… while the law only applies to communications related to candidates for office, deepfakes can be used in many other ways to influence the opinions of voters, and in general voters should be mindful that bad actors will find ways not protected by this law to influence public opinion using deepfakes, especially on the internet,” according to the advisory.