California regulators are moving to restrict how employers can use artificial intelligence to screen workers and job applicants, warning that using AI to measure tone of voice, facial expressions and reaction times may run afoul of the law.
The draft regulations say that if companies use automated systems to limit or prioritize applicants based on pregnancy, national origin, religion or criminal history, that's discrimination.
Members of the public have until July 18 to comment on the proposed rules. After that, regulators in the California Civil Rights Department may amend them and will eventually approve them, subject to final review by an administrative law judge, capping off a process that began three years ago.
The rules govern so-called "automated decision systems": artificial intelligence and other computerized processes, including quizzes, games, resume screening, and even advertising placement. The regulations say using such systems to analyze physical characteristics or reaction times may constitute illegal discrimination. The systems may not be used at all, the new rules say, if they have an "adverse impact" on candidates based on certain protected characteristics.
The draft rules also require companies that sell predictive services to employers to keep records for four years in order to respond to discrimination claims.
A crackdown is necessary in part because while businesses want to automate parts of the hiring process, "this new technology can obscure responsibility and make it harder to discern who's responsible when a person is subjected to discriminatory decision-making," said Ken Wang, a policy associate with the California Employment Lawyers Association.
The draft regulations make clear that third-party service providers are agents of the employer, and they hold employers responsible for those providers' systems.
The California Civil Rights Department started exploring how algorithms, a type of automated decision system, can impact job opportunities and automate discrimination in the workplace. Back then, disability rights advocate Lydia X. Z. Brown warned the agency about the harm that hiring algorithms can inflict on people with disabilities. Brown told CalMatters that whether the new draft rules will offer meaningful protection depends on how they're put in place and enforced.
Researchers, advocates and journalists have amassed a body of evidence that AI models can automate discrimination, including in the workplace. Last month, the American Civil Liberties Union filed a complaint with the Federal Trade Commission alleging discrimination by one company's resume screening software, despite the company's claim that its AI is "bias free." An evaluation of leading artificial intelligence firm OpenAI's GPT-3.5 technology found evidence of bias in the large language model. Though the company uses filters to prevent the model from producing toxic language, evaluators also surfaced race, gender, and religious bias.
Protecting people from automated bias understandably attracts a lot of attention, but sometimes hiring software that's marketed as smart makes dumb decisions. Wearing glasses or a headscarf, or having a bookshelf in the background of a video job interview, can change a candidate's scores, according to an investigative report by German public broadcast station Bayerischer Rundfunk. So can seemingly irrelevant choices made when submitting a resume, according to researchers at New York University.
California鈥檚 proposed regulations are the latest in a series of initiatives aimed at protecting workers against businesses using harmful forms of AI.
In 2021, New York City lawmakers passed a law to protect job applicants from algorithmic discrimination in hiring, although researchers from Cornell University and Consumer Reports recently concluded that the law has fallen short in practice. And in 2022, the Equal Employment Opportunity Commission and the U.S. Justice Department clarified that employers can be held liable when the AI tools they use discriminate against people with disabilities.
The California Privacy Protection Agency, meanwhile, is drafting rules that, among other things, define what information employers can collect on contractors, job applicants, and workers, allowing those people to see what data employers collect and to opt out of such collection or request human review.
Pending legislation would further empower the source of the draft rules, the California Civil Rights Department: one bill would allow the department to demand impact assessments from businesses and state agencies that use AI, in order to protect against automated discrimination.
Outside of government, union leaders increasingly argue that rank-and-file workers should be able to weigh in on the effectiveness and harms of AI in order to protect the public. Labor representatives have had conversations with California officials about specific projects.
CalMatters is a nonprofit, nonpartisan media venture explaining California policies and politics.