Artificial Intelligence (AI) has rapidly emerged as a transformative force across industries, revolutionizing the way we work and live. As we have previously written about here, AI has huge potential benefits for employers. However, as the technology continues to advance, its impact on the workforce raises important questions about discrimination, privacy, and accountability. Acknowledging the significance of these concerns, the state of California, as well as the federal government, has started to consider what regulations would be needed to effectively govern employers’ use of AI. Here are five issues California employers need to understand about the proposed regulations, as well as the existing laws that already govern an employer’s use of AI:

1. California proposed legislation to regulate employers’ use of AI.

AB 331 is a proposed California bill to regulate employers’ use of AI in making employment decisions (among other non-employment determinations). The bill would require the developer of the AI to perform an impact assessment for any automated decision tool, setting forth the purpose of the tool, the potential adverse impacts from the deployer’s use of the tool, and the safeguards implemented to address “reasonably foreseeable risks of algorithmic discrimination” from the use of the tool. The proposed law would also require notifying the employee that AI is being used to make the decision and providing the employee an option to opt out of the automated decision. In addition to imposing other reporting requirements on the developer of the software, the bill would prohibit “algorithmic discrimination” by the user of the software. The bill further provides that anyone discriminated against because of the use of the AI would be entitled to compensatory damages, declaratory relief, and attorneys’ fees and costs. This proposed bill raises many concerns, as addressed below.

2. Other state and federal regulations regarding employers’ use of AI.

On May 18, 2023, the EEOC released a “technical assistance document” entitled “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.”  The guidance explains that employers’ use of automated systems, such as AI, “may run the risk of violating existing civil rights laws.”  The document discusses whether an employer’s “’selection procedures’—the procedures it uses to make employment decisions such as hiring, promotion, and firing—have a disproportionately large negative effect on a basis that is prohibited by Title VII.”  A “disparate impact” or “adverse impact” under Title VII is illegal, as is intentional discrimination, which is referred to as “disparate treatment.”  The EEOC notes that it adopted the Uniform Guidelines on Employee Selection Procedures under Title VII in 1978.  These Guidelines instruct employers on how to determine whether their selection procedures are “lawful for purposes of Title VII disparate impact analysis.”  The EEOC’s technical assistance is well thought out and relies upon guidance that is nearly half a century old, well developed in the courts, and easily understood by employers.  As discussed below, the EEOC’s guidance is much more rational than California’s proposed AB 331.
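For context on how “adverse impact” is actually measured, the Uniform Guidelines’ best-known benchmark is the “four-fifths rule”: if the selection rate for one group is less than four-fifths (80%) of the rate for the group with the highest selection rate, the disparity is generally regarded as evidence of adverse impact, and the EEOC’s technical assistance document applies that same rule of thumb to AI-driven selection tools. The sketch below illustrates the arithmetic in Python; the function name and the sample numbers are my own hypothetical illustration, not taken from the EEOC document:

```python
def impact_ratio(selected_a, applicants_a, selected_b, applicants_b):
    """Return the four-fifths-rule impact ratio between two groups.

    The ratio is the lower selection rate divided by the higher one;
    a result below 0.80 is generally treated as evidence of adverse
    impact under the Uniform Guidelines.
    """
    rate_a = selected_a / applicants_a
    rate_b = selected_b / applicants_b
    lower, higher = sorted([rate_a, rate_b])
    return lower / higher

# Hypothetical example: an automated tool selects 12 of 40 applicants
# in one group (30%) and 48 of 80 in another (60%).
ratio = impact_ratio(12, 40, 48, 80)
print(f"Impact ratio: {ratio:.2f}")  # 0.50, well below the 0.80 threshold
```

Note that the EEOC cautions that the four-fifths rule is only a general rule of thumb, not a safe harbor; smaller disparities may still be unlawful depending on the circumstances.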

In October 2022, the White House published the “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People.”  The document is only a white paper and has no legally binding effect on employers or their obligations.  It sets forth five principles that should be protected: 1) safe and effective systems, 2) algorithmic discrimination protections, 3) data privacy, 4) notice and explanation, and 5) human alternatives, consideration, and fallback.  The draft of California’s AB 331 is based largely on the concepts put forth in the White House’s white paper.

On February 10, 2023, the California Civil Rights Council (CRC) published revisions to its draft regulations concerning automated decision-making systems used by employers and agents of employers, such as recruiters, payroll companies, and staffing agencies, and it has continued to hold hearings on the draft.  The draft regulations state that “[i]t is unlawful for an employer or other covered entity to use selection criteria (including a qualification standard, employment test, automated-decision system, or proxy) if such use has an adverse impact on or constitutes disparate treatment of an applicant or employee or a class of applicants or employees on a basis protected by the Act,” unless the employer can show the practice is job-related and consistent with business necessity, and that there are no less discriminatory policies or practices that could serve the same purpose.

3. Are new laws prohibiting that which is already prohibited necessary?

I’m confused, and concerned, by the draft of California’s proposed AB 331 stating that an employer’s use of artificial intelligence is subject to anti-discrimination laws that are already on the books.  Is there any question that the anti-discrimination laws already governing employers apply when an employer uses software or artificial intelligence to make employment-related decisions?  As the EEOC notes in its technical assistance document discussed above, an employer cannot abdicate its responsibility to follow existing law simply by stating that it made a decision based on what some software told it to do.  That would be tantamount to arguing that an employer is not responsible for its own decisions and could escape liability by outsourcing all decisions to a third party, such as a recruiter, or worse, an 8-year-old child.  This would be a very easy way to escape liability for discrimination claims.

Proposed legislation such as AB 331 might do more harm than good in the workplace.  The Digest of AB 331 explains the purpose of the proposed bill:

This bill would prohibit a deployer from using an automated decision tool in a manner that results in algorithmic discrimination, which the bill would define to mean the condition in which an automated decision tool contributes to unjustified differential treatment or impacts disfavoring people based on their actual or perceived race, color, ethnicity, sex, religion, age, national origin, limited English proficiency, disability, veteran status, genetic information, reproductive health, or any other classification protected by state law.

This raises a problematic question: if this law is merely being proposed, is it currently legal for employers to use “an automated decision tool” in a manner that results in “algorithmic discrimination”?  It seems clear that employers are held accountable for their decisions, regardless of whether they relied upon a third party, software, or any other tool to make them.  Indeed, the EEOC made this point in its technical guidance issued on May 18, 2023: employers can be held responsible for a selection process that has a disparate impact, even if the process was developed by an outside vendor or a software vendor acting as an agent of the employer.  But it does prompt the question of why the California Legislature is proposing a new law to address conduct that is already prohibited by current law.  One may counter that, by passing this legislation, California is simply making it abundantly clear that this type of activity is illegal.  But I would argue that it only adds complexity and confusion by layering new regulation onto an already illegal practice.

4. Concerns about what public data is being relied upon in the employment context. 

While I believe the proposed AB 331 is problematic on several levels, there are legitimate concerns about which data AI uses to make its decisions, who controls that data, and who decides which data the AI may consider.  There are already laws that require employers to give notice to employees about background checks, to obtain authorization to conduct a background check in certain circumstances, and to disclose to the employee what information was obtained in the background check: the federal Fair Credit Reporting Act (FCRA), the California Investigative Consumer Reporting Agencies Act (ICRAA), and the California Consumer Credit Reporting Agencies Act (CCRAA) (see our prior article here for more information about the ICRAA and CCRAA).  In addition, local governments, such as Los Angeles and San Francisco, have implemented their own prohibitions on criminal history checks, and employers must also comply with these local requirements.

However, background checks and the use of AI in making employment decisions are vastly different.  A background check discloses the data being used in the decision; AI, by contrast, may draw on multiple data sets in determining an ultimate outcome.  It may not be clear which data the AI relies upon when making decisions, which is problematic if that data contains an error.  An individual could be denied employment because of an error in the data used by the AI, and neither the individual nor the employer would ever know of the error.

5. Data leaks and privacy concerns when using AI software.

A major concern regarding the use of AI, for employers and employees alike, is data privacy.  As has been widely reported, ChatGPT and other AI software do not treat queries as private, and any information put into the software is available for the provider to review.  This raises concerns for both employers and employees.  If employers are using AI to make hiring decisions, can competitors see whom a company is interviewing?  If so, that would be a strategic advantage.  And if an employer puts confidential employee information into AI software that is not private, does that constitute a privacy breach?  Likely.

Unless an employer can secure both the data being entered into the AI system and the AI system itself, it is unlikely that AI software will be widely used, given the risk that competitors could be viewing strategic decisions under consideration.  This is exactly Apple’s concern, and why it reportedly moved to restrict its employees’ use of ChatGPT.