AI is no longer just a buzzword—it’s actively transforming the workplace. Whether employers are aware of it or not, AI tools are being embedded into daily operations across industries. With California pushing forward with proposed regulations that could take effect as early as July 1, 2025, employers must begin understanding the implications now. Here are five essential points to keep in mind:
1. AI Is Already in the Workplace—and Use Is Expanding
Many California employers are already using AI—even if only in limited ways. One of the most common use cases is resume screening. Large employers facing thousands of applications use AI to quickly sort resumes and identify the most qualified candidates. AI is also being used to:
- Draft employee communications: HR teams are using tools like ChatGPT to write performance reviews, disciplinary notices, or coaching emails.
- Assist in onboarding: Chatbots can guide new hires through paperwork, benefits enrollment, and training schedules.
- Enhance recruiting: Some screening tools use video analysis and natural language processing to assess applicant responses.
Even managers are using AI to brainstorm better ways to give feedback or handle difficult conversations.
Example: An HR manager struggling to coach an underperforming employee may use ChatGPT to draft a constructive and empathetic email, saving time while keeping the message clear and consistent. Legal review is still advisable before any disciplinary communication goes out.
2. California Is Leading with New AI Employment Regulations
California’s proposed Automated Decision Systems (ADS) regulations aim to ensure that AI tools used in employment decisions are fair, transparent, and compliant with anti-discrimination laws (see our prior article on the proposed regulations). These rules would:
- Require notice to applicants when AI tools are used
- Mandate anti-bias testing and corrections
- Impose a four-year recordkeeping requirement
- Apply to vendors and hold employers legally responsible for third-party tools
The regulations are designed to prevent unintentional discrimination, such as an AI tool screening out candidates with employment gaps, a practice that may disproportionately affect women and caregivers.
Example: If your company uses AI to rank candidates and the system deprioritizes people who’ve taken career breaks, that could create a disparate impact on protected groups. Under the new rules, you’d need to detect and fix that.
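The proposed regulations do not prescribe a specific statistical test, but one long-standing benchmark regulators and employers use to spot this kind of problem is the EEOC’s “four-fifths rule”: if one group’s selection rate falls below 80% of the highest group’s rate, the tool deserves a closer look. A minimal sketch, using hypothetical screening numbers for illustration:

```python
def selection_rate(selected, applicants):
    """Fraction of applicants in a group who pass the screen."""
    return selected / applicants

def impact_ratio(group_rate, highest_rate):
    """Ratio of a group's selection rate to the best-performing group's rate.
    Under the EEOC four-fifths rule of thumb, a ratio below 0.8 flags
    potential adverse impact worth investigating."""
    return group_rate / highest_rate

# Hypothetical results from an AI resume-ranking tool
results = {
    "no_career_break": {"applicants": 200, "selected": 60},
    "career_break": {"applicants": 100, "selected": 18},
}

rates = {g: selection_rate(v["selected"], v["applicants"])
         for g, v in results.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = impact_ratio(rate, highest)
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} [{flag}]")
```

Here the career-break group is selected at 18% versus 30% for everyone else, an impact ratio of 0.60, well under the 0.8 threshold. A failing ratio is not itself proof of illegal discrimination, but it is exactly the kind of signal the proposed testing and correction requirements are meant to surface.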
3. Employers Are Still Liable—Even When AI Makes the Decision
You can’t delegate legal responsibility to an algorithm. If an AI screening tool inadvertently excludes protected classes (e.g., based on age, race, or disability), your business is still liable under existing discrimination laws.
A pending California case, Mobley v. Workday, involves an African American job applicant over the age of 40 who alleges that Workday's AI-powered screening tools, used by the employers he applied to, repeatedly rejected him. The case is moving forward under existing federal and state anti-discrimination laws.
Example: Even if a third-party software provider built the AI tool, your company can still be sued under FEHA or Title VII if the tool unfairly screens out applicants while your business is using it.
4. AI May Actually Help Reduce Bias—When Used Correctly
There’s hope for AI to help level the playing field. A study by researchers at Stanford and USC found that AI-led hiring processes were more successful at identifying qualified candidates—particularly younger, less experienced, and female applicants—than traditional resume reviews and human interviews.
AI can promote a more “blind” evaluation process, reducing the potential for human bias related to names, photos, age, or education pedigree.
Example: A company replaces its initial resume screening with a structured AI interview system. The result? A more diverse candidate pool, selected based on skills and responses, not superficial traits.
5. Wearables and AI Tools Raise Serious Privacy Concerns
With tools like AI-powered glasses (such as Google’s new Gemini-enabled glasses) and Zoom note-takers, privacy risks are growing. These tools can record conversations, analyze speech, and even recognize faces. California is a two-party (all-party) consent state: recording a confidential conversation without every participant’s consent is illegal and can lead to criminal and civil penalties.
Example: An employee shows up wearing smart glasses that record everything they see and hear. If they record private conversations without the other party’s consent, it could violate California Penal Code section 632 and create liability for the employer.
Employers should start preparing policies or training on the use of wearables and AI tools in the workplace—even if a formal “AI governance policy” feels premature.
Final Thoughts
AI can help your business stay compliant, attract better candidates, and operate more efficiently—but only if you understand the risks and responsibilities. Think of it like a powerful new employee: it needs supervision, training, and accountability. Don’t wait until your competitors—and the regulators—are ahead of you.
Start small. Use AI to help HR draft communications or streamline workflows. Just be sure you pair every tool with legal oversight and human judgment.