Why Silicon Valley wants to kill this AI bill

Alphabet CEO Sundar Pichai speaks at a Google I/O event in Mountain View on May 14, 2024. California lawmakers are weighing a bill that would regulate powerful artificial intelligence systems, but big tech companies, including Google, say the legislation would hamper innovation. (Jeff Chiu / AP Photo)

The sprawling California legislation offers protection to whistleblowers and citizens. The coming weeks could decide its fate.

Though lawmakers and advocates proposed dozens of bills to regulate artificial intelligence in California this year, none have attracted more disdain from big tech companies, startup founders, and investors than the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.

In letters to lawmakers, Meta said the legislation will “deter AI innovation in California at a time where we should be promoting it,” while Google claimed the bill will make “California one of the world’s least favorable jurisdictions for AI development and deployment.” A letter signed by more than 130 startup founders and incubator Y Combinator goes even further, claiming that “vague language” could “kill California tech.”

Prominent AI researchers are also taking sides. Last week, Yoshua Bengio and former Google AI researcher Geoffrey Hinton, who are sometimes called the “godfathers of AI,” voiced support for the bill. Stanford professor and former Google Cloud chief AI scientist Fei-Fei Li, who is often called the “godmother of AI,” has come out against it.

The bill, approved 32-1 by the state Senate in May, must survive the Assembly Appropriations suspense file on Thursday and win final approval by Aug. 31 to reach Gov. Gavin Newsom this year.

The bill, introduced by San Francisco Democrat Scott Wiener in February, is sprawling. It would:

  • Require developers of the most costly and powerful AI tools to test whether they can enable attacks on public infrastructure, highly damaging cyber attacks, or mass casualty events, or can help create chemical, biological, radiological, or nuclear weapons.
  • Establish CalCompute, a public “cloud” of shared computers that could be used to help build and host AI tools, to offer an alternative to the small handful of big tech companies offering cloud computing services, to conduct research into what the bill calls “the safe and secure deployment of large-scale artificial intelligence models,” and to foster the equitable development of technology.
  • Protect whistleblowers at companies that are building advanced forms of AI and contractors to those companies.

The latter protections are among the reasons whistleblower and former OpenAI employee Daniel Kokotajlo supports SB 1047, he told CalMatters. He also likes that it takes steps toward more transparency and democratic governance around artificial intelligence, a technology he describes as “completely unregulated.”

Kokotajlo quit his job earlier this year as a governance researcher at OpenAI, the San Francisco-based company behind the popular ChatGPT tool. Shortly thereafter he went public with allegations that he witnessed a violation of internal safety protocols at the company. OpenAI was “recklessly racing” toward its stated goal of building artificial intelligence that surpasses human intelligence, Kokotajlo said. Kokotajlo also believes that advanced AI could contribute to the extinction of humanity, and that employees developing that technology are in the best position to guard against this.

In June, Kokotajlo joined more than a dozen current and former employees of OpenAI and Google in signing an open letter warning about the risks of advanced AI and calling for stronger whistleblower protections. Those workers were not the first to do so; Google employees spoke out after co-leads of the company’s Ethical AI team were fired. That same year, Ifeoma Ozoma, a former Instagram employee and the author of a guide for tech industry whistleblowers, cosponsored California’s Silenced No More Act, a state law passed in 2022 to give workers the right to talk about discrimination and harassment even if they signed a non-disclosure agreement.

Kokotajlo said he believes that, had SB 1047 been in effect, it would have either prevented, or led an employee to promptly report, the safety violation he said he witnessed in 2022, involving an early deployment of an OpenAI model by Microsoft to a few thousand users in India without approval.

“I think that when push comes to shove, and a lot of money and power and reputation is on the line, things are moving very quickly with powerful new models,” he told CalMatters. “I don’t think the company should be trusted to follow their own procedures appropriately.”

When asked about Kokotajlo鈥檚 comments and OpenAI鈥檚 treatment of whistleblowers, OpenAI spokesperson Liz Bourgeois said company policy protects employees鈥 rights to raise issues.

Existing law primarily protects whistleblowers from retaliation in cases involving violation of state law, but SB 1047 would protect employees like Kokotajlo by giving them the right to report to the attorney general or labor commissioner any AI model that is capable of causing critical harm. The bill also prevents employers from blocking the disclosure of related information.

Whistleblower protections in SB 1047 were expanded by the Assembly Privacy and Consumer Protection committee in June. That recommendation came shortly after the letter from workers at Google and OpenAI, and after reports that OpenAI forced people leaving the company to sign nondisparagement agreements or forfeit stock options worth up to millions of dollars. The protections address a concern from the letter that existing whistleblower protections are insufficient “because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.”

“Employees must be able to report dangerous practices without fear of retaliation.”
ASSEMBLYMEMBER REBECCA BAUER-KAHAN, DEMOCRAT FROM SAN RAMON

OpenAI spokesperson Hannah Wong said the company removed nondisparagement terms affecting departing employees. Despite these changes, last month a group of former OpenAI employees flagged nondisclosure agreements at the company as possible violations of an executive order signed by President Joe Biden last year to reduce risks posed by artificial intelligence.

Bay Area Democrat Rebecca Bauer-Kahan, who leads the Assembly Privacy and Consumer Protection Committee, said she helped add the whistleblower protections to SB 1047 because industry insiders have reported feeling muzzled by punitive non-disclosure agreements, even as more of them speak out about problems with AI.

“If Californians are going to feel comfortable engaging with these novel technologies, employees must be able to report dangerous practices without fear of retaliation,” she said in a written statement. “The protections the government provides should not be limited to the known risks of advanced AI, as these systems may be capable of causing harms that we cannot yet predict.”

Industry says bill imperils open source, startups

As vocal as they’ve been in opposing SB 1047, tech giants have said little about the bill’s whistleblower protections, including in lengthy letters that Meta, Microsoft, and Google sent to lawmakers. Google declined to comment about those provisions, while Meta declined to make California public policy lead Kevin McKinley available for comment. OpenAI pointed to a previous comment by Bourgeois that stated, “We believe rigorous debate about this technology is essential. OpenAI’s whistleblower policy protects employees’ rights to raise issues, including to any national, federal, state, or local government agency.”

Instead, opponents have highlighted the bill’s AI testing requirements and other safety provisions, saying compliance costs could kneecap startups and other small businesses. This would hurt the state economy, they add, since California is a global hub for the AI industry. The bill, however, limits its AI restrictions to systems that cost more than $100 million or require more than a certain quantity of computing power to train. Supporters say the vast majority of startups won’t be covered by the bill.

Opponents counter that small businesses would still suffer because SB 1047 would have a chilling effect on individuals and groups that release AI models and tools free to the public as open source software. Such software is widely used by startups, holding down costs and providing them a basis on which to build new tools. Meta has argued that developers of AI software will be less likely to release it as open source out of fear they will be held responsible for all the ways their code might be used by others.

“If we over regulate, if we over indulge and chase a shiny object, we can put ourselves in a perilous position.”
GOV. GAVIN NEWSOM

Open source software has a long history in California and has played a central role in the development of AI. In 2018, Google released its influential “BERT,” an AI model that laid the groundwork for large language models such as the one behind ChatGPT and that sparked an AI arms race among companies including Google, Microsoft, and Nvidia. Other open source software tools have also played important roles in the spread of AI, including Apache Spark, which helps distribute computing tasks across multiple machines, and Google’s TensorFlow and Meta’s PyTorch, both of which allow developers to incorporate machine learning techniques into their software.

Meta has gone further than its competitors in releasing the source code to its own large language model, Llama, which has been widely adopted. In a letter to lawmakers in June, Meta deputy chief privacy officer Rob Sherman argued that the bill would “deter AI innovation in California at a time when we should be promoting it” and discourage release of open source models like Llama.

Ion Stoica is a professor at the University of California, Berkeley and cofounder of Databricks, an AI company built on Apache Spark. He predicts that if SB 1047 passes, within a year open source models from overseas, likely China, will overtake those made in the United States. Three of the top six open source models available today come from China, according to a ranking Stoica helped devise.

Open source defenders also voiced opposition to SB 1047 at a town hall hosted with Wiener at GitHub, an open source repository owned by Microsoft, and at a generative AI symposium held in May.

Newsom, who has not taken a position on the legislation, told the audience it’s important to respond to AI inventors like Geoffrey Hinton who insist on the need for regulation, but also said he wants California to remain an AI leader and advised lawmakers against overreach. “If we over regulate, if we over indulge and chase a shiny object, we can put ourselves in a perilous position,” the governor said. “At the same time we have an obligation to lead.”

Aiming to protect tech workers and society

Sunny Gandhi, vice president of government affairs at Encode Justice, a nonprofit focused on bringing young people into the fight against AI harms and a cosponsor of the bill, said it has sparked a backlash because tech firms are not used to being held responsible for the effects of their products.

“It’s very different and terrifying for them that they are now being held to the same standards that pretty much all other products are in America,” Gandhi said. “There are liability provisions in there, and liability is alien to tech. That’s what they’re worried about.”

Wiener has disputed some criticisms of his bill, including a claim, in a letter circulated by startup incubator Y Combinator and signed by more than 130 startup founders, that the legislation could end up sending software developers “to jail simply for failing to anticipate misuse of their software.” That assertion arose from the fact that the bill requires builders of sufficiently large language models to submit their test results to the state and makes them guilty of perjury if they lie about the design or testing of an AI model.

“It’s very different and terrifying for them that they are now being held to the same standards that pretty much all other products are in America.”
SUNNY GANDHI, VICE PRESIDENT OF GOVERNMENT AFFAIRS AT ENCODE JUSTICE

Wiener said his office started listening to members of the tech community last fall before the bill was introduced and made a number of amendments to ensure the law only applies to major AI labs. Now is the time to act, he told startup founders, “because I don’t have any confidence the federal government is going to act” to regulate AI.

Within the past year, major AI labs signed on to testing and safety commitments with the White House and at several international gatherings, but those agreements are voluntary. President Biden has called on Congress to regulate artificial intelligence, but it has yet to do so.

Wiener also said the bill is important because the Republican Party vowed, in the platform it adopted last month, to repeal Biden’s executive order on artificial intelligence, arguing that the order stifles innovation.

In legislative hearings, Wiener has said it’s important to require compliance because “we don’t know who will run these companies in a year or five years and what kind of profit pressures those companies will face at that time.”

AI company Anthropic, which is based in San Francisco, came out in support of the bill if a number of amendments are made, including doing away with a government entity called the Frontier Models Division. That division would review certifications from developers, establish an accreditation process for those who audit AI, and issue guidance on how to limit harms from advanced AI. Wiener told the Y Combinator audience he’d be open to doing away with the division.

Kokotajlo, the whistleblower, calls SB 1047 both a step in the right direction and not enough to prevent the potential harms of AI. He and the other signatories of the June letter have called on companies that are developing AI to create their own processes whereby current and former employees could anonymously report concerns to independent organizations with the expertise to verify whether concern is called for or not.

“Sometimes the people who are worried will turn out to be wrong, and sometimes, I think the people who are worried will turn out to be right,” he said.

In remarks at Y Combinator last month, Wiener thanked members of open source and AI communities for sharing critiques of the bill that led to amendments, but he also urged people to remember what happened when California passed its landmark data privacy law following years of inaction by the federal government.

“A lot of folks in the tech world were opposed to that bill and told us that everyone was going to leave California if we passed it. We passed it. That did not happen, and we set a standard that I think was a really powerful one.”

CalMatters is a nonprofit, nonpartisan media venture explaining California policies and politics.