The need for AGI regulation – 2nd US Senate Judiciary Sub-Committee on AI
- Posted by Mara Di Berardo
- On 31 July 2023
- AI regulation, artificial intelligence, senate hearing
by Jerome Glenn
The US Senate Judiciary Subcommittee held a second hearing on oversight and regulation of AI, during which the need for AGI regulation was stressed. The first hearing took place in May 2023. In this session, held on July 25, 2023, the witnesses testifying were Stuart Russell, professor of computer science at the University of California, Berkeley; Yoshua Bengio, founder and scientific director of Mila – Quebec AI Institute; and Dario Amodei, CEO of Anthropic.
The following day, Anthropic, Google, Microsoft and OpenAI announced the Frontier Model Forum to promote safety and best practices, collaborate with policymakers, academics, and civil society, and develop applications for global challenges such as climate change, cancer, and cyber threats.
This Senate Judiciary Subcommittee hearing on AI was much better than the last one: this one DID talk about AGI, and DID get into some details on a national AI regulatory agency, G-7 oversight, and a UN agency to set basic international rules for all. The discussion of political AI disinformation in the 2024 election was also more precise.
Chairman Blumenthal acknowledged that super AI could be just a few years away, hence the urgency of getting regulations in place. We had delivered two rounds of AGI background material to the Senate subcommittee prior to this hearing.
Stuart Russell said we have to “move fast and fix things”: with $10 billion/month now going into AGI start-ups, we should require proof of safety before public release, and a US regulatory agency should have violators of regulations removed from the market.
Yoshua Bengio said we have to create AI systems to counter bad actors using AI and AGI going rogue, and that there should be university ethics review boards for AI as there are for biology, medicine, etc. (full text at https://yoshuabengio.org/2023/07/25/my-testimony-in-front-of-the-us-senate/).
Dario Amodei said Anthropic wants to inspire a “race to the top on safety”: secure the entire AI supply chain; establish a testing and auditing regime (third party and national security) for new, more powerful models before they are released to the public; and fund research on measurement and testing to know whether testing and auditing are actually effective (perhaps by funding NIST to do this). As with the last hearing, there was little talk about government AI research, whether in the US or China.