What California lawmakers did to regulate artificial intelligence

Attendees watch a demonstration of Unity's enemy artificial intelligence system at the Unity booth at the Game Developers Conference 2023 in San Francisco, on March 22, 2023. (Jeff Chiu / AP Photo)

The California Legislature passed more than a dozen bills to regulate artificial intelligence in recent days, though some ambitions fell short.

California legislators just sent Gov. Gavin Newsom more than a dozen bills regulating artificial intelligence, testing for threats to critical infrastructure, curbing the use of algorithms on children, limiting the use of deepfakes, and more.

But people in and around the AI industry say the proposed laws fail to stop some of the most worrisome harms of the technology, like discrimination by businesses and government entities. At the same time, the observers say, whether the bills that passed get vetoed or signed into law may depend heavily on industry pressure.

Debates over the bills, and the governor's decisions on whether to sign each of them, are particularly important because California is at the epicenter of AI development: many legislators moved this year to regulate the technology, and the state's choices could shape how people are protected from AI around the world.

Without question, Senate Bill 1047 got more attention than any other AI regulation bill this year, and after it cleared both houses of the legislature by wide margins, industry and consumer advocates are closely watching to see whether Newsom signs it into law.

Introduced by San Francisco Democratic Sen. Scott Wiener, the bill addresses huge potential threats posed by AI, requiring developers of advanced AI models to test them for their ability to enable attacks on digital and physical infrastructure and help non-experts make chemical, biological, radioactive, and nuclear weapons. It also protects whistleblowers who want to report such threats from inside tech companies.

But what if the most concerning harms from AI are commonplace rather than apocalyptic? That's the view of people like Alex Hanna, head of research at Distributed AI Research, a California-based nonprofit created by former Google ethical AI researchers. Hanna said 1047 shows how California lawmakers focused too much on existential risk and not enough on preventing specific forms of discrimination. She would much rather lawmakers consider banning AI applications that have already been shown to lead to racial discrimination, and she would also like to see contractors adopt government standards around potentially discriminatory technology.

"I think 1047 got the most noise for God knows what reason, but they're certainly not leading the world or trying to match what Europe has in this legislation," she said of California's legislators.

Bill against AI discrimination is stripped

One bill that did address discriminatory AI was gutted and then shelved this year. Assembly Bill 2930 would have required AI developers to perform impact assessments and submit them to the Civil Rights Department, and it would have made the use of discriminatory AI illegal, subject to a $25,000 fine for each violation.

The original bill sought to make the use of discriminatory AI illegal in key sectors of the economy, including housing, finance, insurance, and health care. But author Rebecca Bauer-Kahan, a San Ramon Democrat, yanked it after the Senate Appropriations Committee limited the bill to assessing AI in employment. That sort of discrimination is already expected to be curbed by rules in the works elsewhere. Bauer-Kahan told CalMatters she plans to put forward a stronger bill next year, adding, "We have strong anti-discrimination protections but under these systems we need more information."

Like Wiener's bill, Bauer-Kahan's was subject to lobbying by opponents in the tech industry, including Google, Meta, Microsoft and OpenAI, which hired its first lobbyist ever in Sacramento this spring. Unlike Wiener's bill, it also attracted opposition from nearly 100 companies from a wide range of industries, including Blue Shield of California, dating app company Bumble, biotech company Genentech, and pharmaceutical company Pfizer.

The failure of the AI discrimination bill is one reason there are still "gaping holes" in California's AI regulation, according to Samantha Gordon, chief program officer at TechEquity, which lobbied in favor of the bill. Gordon, who co-organized a working group on AI with privacy, labor, and human rights groups, believes the state still needs legislation to address "discrimination, disclosure, transparency, and which use cases deserve a ban because they have demonstrated an ability to harm people."

Still, Gordon said, the passage of Wiener's bill marked important progress, as did the passage of a bill that sets the standards for contracts that government agencies sign for AI services. Doing so, she said, leverages the government's buying power to encourage safer and more ethical AI services.

"We have strong anti-discrimination protections but under these systems we need more information."
Assemblymember Rebecca Bauer-Kahan, Democrat from San Ramon

While some experts criticized Wiener's bill for what it failed to do, the tech industry has gone after it for what it does. The measure's testing requirements and associated enforcement mechanisms will kneecap fast-moving tech companies and create a chilling effect on code sharing that inhibits innovation, big tech companies like Google and Meta have said.

Given the industry's power in California, this criticism is the proverbial elephant in the room, said Joep Meindertsma, CEO of Pause.ai. Pause.ai is a proponent of regulating AI, endorsing Wiener's bill and even organizing protests at the offices of California-based companies including Meta and OpenAI. So Meindertsma was happy to see so many regulatory bills clear the legislature this year. But he worries they will be undermined by the tension between a desire to regulate AI and a desire to win the race, among not just companies but entire countries, to have the best AI. Regulators in California and elsewhere, he said, want to have it both ways.

"The market dynamic between countries that are trying to stay ahead of the competition, trying to avoid regulating their companies too much over fear of slowing down while the others keep racing, that dynamic is the issue that I feel is the most toxic in the entire situation," he said.

There are already signs that industry pressure could prevail, at least against Wiener's bill.

Several Democratic members of California's Congressional delegation have called on Newsom to veto the bill. Former House Speaker Nancy Pelosi, who represents San Francisco, has also come out against it.

In recent weeks, Newsom seems to have leaned into AI, raising questions over how much appetite he has to regulate it. The governor has shown great interest in using AI to solve problems in the state of California, signing an agreement with AI powerhouse Nvidia last month and on Thursday introducing an AI solution aimed at connecting homeless people with services. When asked directly about Wiener's bill in May, Newsom equivocated, saying that lawmakers must strike a balance between responding to calls for regulation and overdoing it.

The sleeper hits of this year's AI legislation

Some bills that were more targeted, and significantly less publicized, than Wiener's 1047 did find success in the legislature.

One, introduced by Democratic Sen. Josh Becker of Menlo Park, would require companies to supply AI detection tools to the public at no charge so people can tell the difference between AI-generated content and reality.

Another, by Democratic Sen. Bill Dodd of Napa, would force government agencies to assess the risk of using generative AI and disclose when the technology is used.

Other AI bills passed this legislative session are designed to protect children, including one that makes clear that using generative AI to create child sexual abuse material is illegal and another that requires the makers of social media apps to stop showing algorithmically curated feeds to users under age 18 unless they get permission from a parent or guardian. Children would instead see, by default, a chronological stream of recent posts from accounts they follow. The bill also limits notifications from social media apps during school hours and between midnight and 6 a.m.

A trio of bills passed last week aims to protect voters from deceptive audio, imagery, and video known as deepfakes. One bill targets people who create or publish deceptive content made with AI and allows a judge to order an injunction requiring them to either take down the content or pay damages. Another requires large online platforms such as Facebook to remove such content within a set period of a user reporting it.

Also on Newsom's desk are bills that would require consent for the use of performers' digital replicas in some instances. Both of those bills were supported by the actors union SAG-AFTRA.

Which bills didn't pass

In lawmaking, what fails to pass, like Bauer-Kahan's AI anti-discrimination bill, is often just as important as what advances.

Case in point: Assembly Bill 3211, which would have required AI makers to label AI-generated content. It sputtered out despite support from companies including Adobe, Microsoft, and OpenAI. In a statement, bill author Democratic Assemblymember Buffy Wicks of Oakland said it's unfortunate that the California Senate did not take up her bill, which "was model policy for the rest of the nation." She said she plans to reintroduce it next year.

The labeling bill and Bauer-Kahan's bill are two of three measures flagged as key by European Union officials, who advised California lawmakers behind the scenes to adopt AI regulation in line with the EU's AI Act, which took five years to create and went into effect this spring. Gerard de Graaf, director of the San Francisco EU office, traveled to the California Legislature to meet with the authors of AB 3211, AB 2930, and SB 1047 in pursuit of aligning regulation between Sacramento and Brussels.

De Graaf has said those three laws would accomplish the majority of what the AI Act seeks to do. This week, he had high praise for his California counterparts, saying state lawmakers did serious work to pass so many different AI regulation bills, are at the top of their game, and succeeded in making the state a world leader in AI regulation this year.

"This requires a thorough understanding and that's not present in many legislatures around the world, and in that sense California is a leader," he said. "The fact that California achieved as much as it did in a year is not an insignificant feat and this will presumably continue."

Despite having advised lawmakers on two bills that failed to pass, and despite the possibility of Senate Bill 1047 facing a veto, de Graaf said he sees convergence with EU AI policy in the passage of a bill that places new requirements on AI developers.

The fact that the bill meant to protect citizens from discriminatory AI didn't pass is a really disappointing reflection of the power of tech capital in California politics, said UC Irvine School of Law professor Veena Dubal, whose research has dealt with technology and marginalized workers.

"It really feels like our legislature has been captured by tech companies who by their very structure don't have the interest of the public at the forefront of their own advocacy or decision making, because they're profit-making machines," she said.

She thinks the events of the past legislative session show that California will not be a leader in regulating generative AI because the power of tech companies is too unwieldy, but she does see signs of promise in the bills passed to protect kids from AI. She's encouraged that digital replica bills supported by SAG-AFTRA passed, a reflection of the worker strikes of 2023, and that lawmakers made clear that using generative AI to make child pornography, or to curate content for kids without parental consent, should be illegal. What seems more challenging is passing laws that require any degree of accountability. It shouldn't be debatable whether people deserve protections from civil rights violations, she said, and she wants lawmakers to label other uses of AI unacceptable, like using AI to evaluate people in the workplace.

"The fact that those laws (protecting kids) passed isn't surprising, and my hope is that their passage paves a way for stopping or banning use of AI or automated decision-making in other areas of our lives in which it is clearly already wreaking harm," she said.

CalMatters is a nonprofit, nonpartisan media venture explaining California policies and politics.