Renowned Researchers Advocate for Stricter AI Regulations
- Professors Push for AI Safety Legislation: A group of leading experts, including Yoshua Bengio and Geoffrey Hinton, has penned a letter urging California lawmakers to support the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.
- Balancing Innovation and Safety: The bill mandates rigorous safety testing for large-scale AI models to prevent potential dangers, drawing both support and criticism from various stakeholders.
- California’s Role in AI Regulation: With significant AI development happening in California, the state is seen as pivotal in setting the precedent for AI safety regulations in the US.
A group of esteemed professors, including Yoshua Bengio, Geoffrey Hinton, Lawrence Lessig, and Stuart Russell, has co-authored a letter urging key lawmakers to support California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. As the bill reaches the final stages of the legislative process, these experts argue that the next generation of AI systems poses severe risks if not developed with sufficient oversight and care. The letter, shared exclusively with TIME, describes the bill as the “bare minimum for effective regulation of this technology.”
Introduced by Senator Scott Wiener in February, the bill requires AI companies training large-scale models to conduct rigorous safety testing for potentially dangerous capabilities and to implement comprehensive safety measures. The bill has already passed the California Senate and now awaits a vote in the State Assembly before it can go to Governor Gavin Newsom to potentially be signed into law.
“There are fewer regulations on AI systems that could pose catastrophic risks than on sandwich shops or hairdressers,” the four experts write, emphasizing the need for the bill. Addressed to Senate President pro Tempore Mike McGuire, Assembly Speaker Robert Rivas, and Governor Newsom, the letter highlights the indispensable role California plays in regulating AI, given its status as a hub for leading AI developers.
While the bill enjoys support from a majority of Californians, it faces strong opposition from industry groups and tech investors who claim it could stifle innovation and harm the open-source community. Critics like venture capital firm Andreessen Horowitz argue that the bill’s provisions could allow other countries to take the lead in AI development. Despite the pushback, the bill’s proponents maintain that it applies only to the largest AI models, requiring assurances that these models do not pose unreasonable risks, such as aiding in the creation of weapons of mass destruction or causing severe damage to critical infrastructure.
The letter, signed by experts recognized for their contributions to AI and technology, stresses the significant risks AI systems can pose. Bengio and Hinton, both Turing Award winners, have previously voiced their support for the bill. The letter also points out that similar regulations in Europe and China are more restrictive and praises the bill for its robust whistleblower protections for AI lab employees who report safety concerns.
In response to criticisms from the open-source community, the bill has been amended to exempt original developers from shutdown requirements once a model is no longer in their control and to limit their liability when others make significant modifications to their models. Despite these amendments, some critics believe the bill would still require open-source models to have a “kill switch.”
“Relative to the scale of risks we are facing, this is a remarkably light-touch piece of legislation,” the letter states. It notes that the bill does not impose a licensing regime, does not require government agency permission before training a model, and relies on self-assessments of risk.
Governor Newsom has the opportunity to cement California as a leader in AI regulation, the letter concludes, emphasizing the urgent need for such legislation. With many top AI firms based in California, the state is well-positioned to take an early lead in regulating this emerging technology.