Privacy Please

S5, E221 - How Senate Bill 1047 Could Change AI

Cameron Ivey


California's Senate Bill 1047 is on the brink of becoming law, and we're here to break down what that means for the tech industry and society at large. Tune in as I dissect how this controversial bill mandates rigorous testing of AI systems to identify potential harms such as cybersecurity risks and threats to critical infrastructure. I've got insights from policymakers, including Senator Scott Wiener, who argues that the bill formalizes safety measures already accepted by top AI firms.

Amidst passionate debates, hear how tech giants like Google and Meta push back against the regulations, fearing they could cripple innovation, especially for startups. Meanwhile, proponents, including whistleblowers from OpenAI and notable figures like Elon Musk and Yoshua Bengio, champion the necessity of such rules to mitigate substantial AI risks. We'll also explore the broader legislative landscape that aims to combat deepfakes and automated discrimination and to safeguard the likeness of deceased individuals in AI-generated content.


Speaker 1:

Check, check, one, two, check, check, check, check, please. Checkerooney, checkmate. Okay, alrighty then. Ladies and gentlemen, welcome back to another episode of Privacy Please. I am your host, Cameron Ivey, and I've got another blogcast for you. It's been a while since I've done one of these, and I think we bring it back with some really interesting news in the privacy industry. I mean, it's pretty big news for privacy in general, but we'll go ahead and dive right into this thing.

Speaker 1:

So California's AI regulation bill heads to Governor Newsom. Let's dig in. All right, well, California lawmakers have approved Senate Bill 1047, a controversial bill requiring companies that create or modify powerful AI systems to test them for potential harm. The bill has now landed on Governor Gavin Newsom's desk, and whether it becomes law is going to be up to him. Let's talk about the numbers a little bit. The Senate initially passed the bill with a 32 to 1 vote in May. Fast forward to late Wednesday afternoon: the Assembly voted 48 to 15 to pass it, followed by the Senate concurring with the amendments a couple of days ago. So what does the bill entail, you ask? Well, SB 1047 mandates that companies investing $100 million or more to train an AI model, or $10 million to modify one, must test these models for their potential to cause significant harm. This includes testing for possible cybersecurity risks or attacks, infrastructure threats and the development of chemical, biological, radiological or nuclear weapons. The stakes and stakeholders. And I'm not talking about a juicy steak, I'm talking about high stakes here. High stakes, low stakes. You know what I'm saying.

Speaker 1:

The bill has garnered support and opposition from some powerful entities. On one side, we have companies like Google, Meta and OpenAI, as well as startup incubator Y Combinator, all voicing strong opposition. They argue that compliance costs could cripple the industry, especially startups, and stifle open source AI innovation due to potential legal liabilities. Yet whistleblowers from OpenAI, executives at Anthropic, Twitter CEO Elon Musk and AI researcher Yoshua Bengio are all in favor. They contend that AI tools pose a significant risk and that federal regulation has been insufficient. What's the political angle here? Eight members of Congress from California have urged Governor Newsom to veto the bill. Newsom himself has expressed a desire to avoid over-regulation, despite recognizing the need for some level of oversight. The next few days will be crucial as we watch to see if he will sign or veto the bill.

Speaker 1:

Here are some insights from Senator Scott Wiener. The Democrat from San Francisco and author of the bill argues that SB 1047 merely codifies safety measures that leading AI companies have already agreed upon with international leaders and President Biden. Industry and public reactions: Meta continues to oppose the bill, claiming it would stifle AI development and hurt California's reputation for fostering innovation. However, proponents like Daniel Kokotajlo (sorry for the botch there), a former OpenAI employee and whistleblower (he be blowing whistles), believe the bill will demonstrate that innovation and regulation can coexist. Let's talk about the broader legislative context here. In addition to SB 1047, which honestly sounds like some kind of robot name, like Johnny Five (anyways, showing my age), California's Legislature is also passing other AI-related laws. These include requirements for large online platforms like Facebook to remove election-related deepfakes and the creation of a working group to guide schools on safe AI usage. Additionally, other bills aim to combat automated discrimination and protect the likeness of deceased individuals in AI-generated content. In conclusion, California's SB 1047 is a significant step towards regulating AI to prevent societal harm while balancing innovation and safety.

Speaker 1:

As we await Governor Newsom's decision, it's clear that the stakes are high for both AI developers and the public. This is pretty big. It's going to be interesting to see what happens in the next few days, if anything happens in the next few days, but we'll stay tuned to this and keep you guys posted. You might hear things before I can get that stuff out to you, but if you haven't heard much about this, hopefully this was some good information, because this just passed. I think this just happened on Saturday. I hope everybody had a wonderful Labor Day weekend, and thanks again for supporting Privacy Please. We will see you guys next week. Stay safe, stay private, stay classy, San Diego. We'll see you next time.
