As of October 1, 2025, California’s Civil Rights Council regulations under FEHA hold employers liable for discrimination arising from the use of automated decision systems (ADS) in hiring. The use of AI or other automated tools does not shield employers from liability; decisions made through such systems are treated as the employer’s own actions. Employers must now treat hiring software like any other component of the hiring process: subject to bias scrutiny, oversight, and documentation. We broke down what these regulations mean for California employers in an earlier post, which you can read here.

Here are five key issues employers using AI or other software for the recruiting and hiring process need to understand:

1. Inventory & classify all ADS tools in your hiring stack

What to do now:

  • Map every AI, algorithmic, or rule-based tool used in recruitment (resume filters, profile matching, assessment tests, video interview scoring, targeted job ad delivery, etc.).
  • For each tool, document the vendor, version, data sources, update frequency, decision logic (if the vendor discloses it), and how it integrates with your human decision steps (a sketch of one such inventory record follows this list).
  • Ask vendors for their anti-bias testing protocols and any audit or validation data. Confirm whether they (or you) will bear the burden of proof under FEHA if a disparate impact claim arises.
  • Classify tools by risk: e.g. tools that reject candidates vs. tools that rank, suggest, or surface candidates.
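
By way of illustration only, here is a minimal sketch of what one inventory record might look like if you track your ADS tools in structured form. The field names, risk tiers, and example vendor are assumptions for illustration, not terms defined by the regulations.

    from dataclasses import dataclass, field
    from datetime import date

    # Hypothetical risk tiers -- not categories defined by the FEHA regulations.
    RISK_TIERS = ("rejects_candidates", "ranks_or_scores", "surfaces_or_suggests")

    @dataclass
    class ADSInventoryRecord:
        """One entry in an employer's inventory of automated decision systems."""
        tool_name: str                 # e.g., resume filter, video interview scorer
        vendor: str
        version: str
        data_sources: list[str]        # where the tool's inputs come from
        update_frequency: str          # how often the vendor retrains or updates the model
        decision_logic: str            # description of the logic, if the vendor discloses it
        human_decision_step: str       # where a trained human reviews or overrides the output
        risk_tier: str                 # one of RISK_TIERS
        vendor_bias_testing_docs: list[str] = field(default_factory=list)
        last_reviewed: date | None = None

    # Hypothetical example entry
    resume_filter = ADSInventoryRecord(
        tool_name="Resume keyword filter",
        vendor="ExampleVendor Inc.",
        version="2.4",
        data_sources=["applicant resumes", "job description keywords"],
        update_frequency="quarterly",
        decision_logic="keyword and experience matching; scoring weights not disclosed",
        human_decision_step="recruiter reviews every candidate scored below the cutoff",
        risk_tier="ranks_or_scores",
        vendor_bias_testing_docs=["2024 adverse-impact summary (on file)"],
        last_reviewed=date(2025, 10, 1),
    )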

Why this matters:
Under the new rules, “ADS” is broadly defined: it includes any computational process that “makes a decision or facilitates human decision making regarding an employment benefit.” Even tools that may seem benign (e.g. targeted job ad delivery) can be covered by the new regulations. An employer that lacks a clear inventory of its automated systems cannot credibly demonstrate oversight or accountability in an audit.

2. Conduct (or plan) bias testing with a human review overlay

What to do now:

  • For each ADS, run bias and disparate-impact analyses to examine whether outcomes differ systematically by protected class (race, gender, disability, etc.); a simple selection-rate comparison is sketched after this list.
  • Document the testing methodology (quality, efficacy, recency, scope) and your corrective actions; the regulations treat this as relevant evidence in discrimination claims and defenses.
  • Ensure every ADS-enabled “decision” has a human in the loop — that is, a trained reviewer who can override, audit, or interpret algorithmic recommendations.
  • Maintain a process for reasonable accommodations when an ADS evaluation could disproportionately affect a protected group (e.g. an assessment that measures reaction time may disadvantage applicants with certain disabilities).
  • If an ADS uses puzzles, games, or challenges likely to elicit disability-related information, evaluate whether it constitutes a “medical or psychological inquiry” (now prohibited in that context).
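
To make the first bullet concrete, here is a minimal sketch of one common screen (not one mandated by the regulations): comparing selection rates across groups and flagging any group whose rate falls below four-fifths of the highest group’s rate, the EEOC’s traditional rule of thumb for adverse impact. The group labels and counts below are hypothetical, and small samples or other complications call for more rigorous statistical analysis with counsel involved.

    def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
        """outcomes maps group -> (selected, total_applicants)."""
        return {group: selected / total for group, (selected, total) in outcomes.items()}

    def flag_adverse_impact(outcomes: dict[str, tuple[int, int]], threshold: float = 0.8):
        """Flag groups whose selection rate falls below threshold (the four-fifths
        rule of thumb) times the highest group's rate. Illustrative only."""
        rates = selection_rates(outcomes)
        top = max(rates.values())
        return {group: rate / top for group, rate in rates.items() if rate / top < threshold}

    # Hypothetical pass-through counts from a resume-screening ADS.
    outcomes = {
        "group_a": (120, 300),   # 40% advanced
        "group_b": (45, 150),    # 30% advanced
        "group_c": (20, 80),     # 25% advanced
    }
    print(flag_adverse_impact(outcomes))
    # {'group_b': 0.75, 'group_c': 0.625} -- both below 0.8 of the top rate; investigate further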

Why this matters:
Employers will not get a carve-out by saying the AI made the decision. If an algorithm produces an adverse impact and you have no documented bias testing or human intervention, courts or regulators may view you as negligent. The regulations do not mandate testing, but the absence of testing leaves a gap in your defense.

3. Retain audit-ready records for at least four years

What to do now:

  • Update your document retention policies: You must preserve “ADS-related records” including inputs, outputs (scores/ranks), selection criteria, audit results, vendor documentation, override logs, etc.
  • Ensure records are retained for at least four years from the date of creation or the date of the personnel action, whichever is later (a date calculation is sketched after this list).
  • Consider contractually obligating vendors to supply audit logs, decision rationale, and transparency into their data pipeline.
  • Secure the stored data (both to protect privacy and guard against tampering) — chain of custody matters.
  • If a complaint or investigation emerges, make sure your preservation kicks in immediately (i.e. do not auto-purge relevant records).
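
As a small illustration of the “whichever is later” rule in the second bullet, the sketch below computes the earliest date a record could become eligible for disposal. The function name and example dates are hypothetical; the four-year period is the minimum described above, and a complaint, investigation, or litigation hold suspends disposal regardless.

    from datetime import date

    RETENTION_YEARS = 4  # minimum retention period under the new rules

    def retain_until(record_created: date, personnel_action: date | None = None,
                     years: int = RETENTION_YEARS) -> date:
        """Earliest disposal date: 'years' after the later of the record's creation
        or the related personnel action. Do not auto-purge if a hold applies."""
        anchor = max(record_created, personnel_action or record_created)
        try:
            return anchor.replace(year=anchor.year + years)
        except ValueError:               # anchor falls on Feb 29 of a leap year
            return anchor.replace(year=anchor.year + years, day=28)

    # Example: screening score generated in March, hiring decision made in June.
    print(retain_until(date(2025, 3, 15), date(2025, 6, 2)))   # 2029-06-02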

Why this matters:
Without comprehensive, tamper-evident records, you cannot show how a hiring decision was made or rebut a disparate-impact claim, and gaps in documentation can themselves become evidence of noncompliance.

4. Revisit vendor contracts & liability allocation

What to do now:

  • Consider adding contractual clauses requiring transparency, audit rights, notification of model updates, liability allocation, and indemnification for bias or discriminatory outcomes.
  • Review whether vendors represent that their models have undergone anti-bias testing.
  • Understand that under FEHA’s new rules, your vendor may be treated as an “agent” whose discriminatory outputs can be attributed to you.
  • Secure contractual access to raw data, pipeline architecture, and change logs so your team (or auditors) can evaluate future risks.

Why this matters:
Employers will not be able to rely on the defense that “the AI vendor mishandled it.” The law contemplates third-party accountability.

5. Train and communicate internally

What to do now:

  • Train your HR, recruiting, and decision-makers on the new definitions (ADS, proxy, agent) and implications under FEHA.
  • Update your hiring policies to include steps involving ADS review, override authority, recordkeeping, and accommodation paths.
  • Consider a transparency notice for applicants (though not yet mandated under these rules) explaining that algorithmic tools may assist in screening or evaluation.
  • Monitor developments in complementary state/federal AI/algorithm law (e.g. disclosure statutes, “right to explanation” bills).

Why this matters:
Compliance depends on execution. If your people don’t understand what to watch for or when to override an automated recommendation, the best policies on paper may fail in practice.