Senators call for AI regulation as concerns for national security grow

J.J. Brannock

(The Center Square) – The Senate Committee on the Judiciary’s Subcommittee on Privacy, Technology, and the Law held a hearing Tuesday on the principles for artificial intelligence regulation.

The hearing comes after President Joe Biden met with seven major AI companies last week to secure commitments on regulating the widespread use of AI, including investing in responsible AI research and development, establishing policies to safeguard people’s rights and safety, and building trustworthy AI systems.

While the Biden administration received some praise for recognizing the need to act quickly, Senator Richard Blumenthal, D-Conn., said that “these commitments are unspecific and unenforceable.”

"A number of the most serious issues say that they will ‘give attention to the problem,'" he said. "All good, but it’s only a start.”

Witnesses at the hearing gave a timeline of two years or less before Americans see the most “severe dangers” of AI, especially since the technology is advancing so quickly.

“AI is already having a significant impact on our economy, safety, and democracy,” Blumenthal said. “The dangers are not just extinction, but loss of jobs, one of the worst nightmares that we have. Each day these issues are more common, more serious, and more difficult to solve, and we can’t repeat the mistakes we made on social media, which was to delay and disregard the dangers.”

Blumenthal and several others at the hearing expressed concern that the human population could be wiped out within a few years due to the steady increase in AI’s autonomy.

Dario Amodei, chief executive officer of ethical AI company Anthropic, said that AI can already, though unreliably, walk users through some of the steps to make biological weapons, steps that cannot be found on Google or other search engines.

Amodei warned that in two years AI would be advanced enough to fully list the instructions to make bioweapons, “enabling many other actors to carry out large-scale biological attacks.”

He suggested that the U.S. secure the AI supply chain from semiconductor manufacturing equipment to chips, create a safety testing and auditing program for new AI models, and give significant funding to those safety programs.

“The balance between mitigating AI’s risk and maximizing its benefits will be a difficult one, but I’m confident that our country can rise to the challenge,” Amodei said.

Stuart Russell, a professor of computer science at UC Berkeley, said that large language models like ChatGPT did not make up the entirety of AI, but were simply one puzzle piece hinting at an incredibly lucrative overall product.

“I have estimated a cash value of at least $14 quadrillion for this technology,” Russell said. “A huge magnet in the future pulling us forward.”

Russell also warned that AI will pose a serious threat to humanity once it “outstrips our feeble powers,” and that we have done very little to safeguard against that so far.

“Social media algorithms were trained to maximize clicks and learned to do so by manipulating human users and polarizing societies,” he said. “But with LLMs, we don’t even know what their objectives are. They learn to imitate humans and probably absorb all too human goals in the process.”

Russell’s suggestions included a kill switch that would be triggered if systems break into other computers or try to replicate themselves.

Senator Blumenthal compared the advancement of AI to that of America’s Manhattan Project and Apollo program.

“AI is here and beware of what it will do if we don’t do something to control it,” Blumenthal said.

The Senate plans to take up the suggestions discussed at the hearing and use them as groundwork for more comprehensive legislation regulating AI. A specific timetable has not yet been announced.