Across the country, state legislatures are moving quickly to regulate artificial intelligence in the workplace. California’s proposed SB 947 – the Automated Decision Systems in the Workplace bill, introduced in the Legislature on February 2, 2026 – is one prominent example, but it is part of a broader trend: laws that seek to govern how employers may adopt, deploy, and rely on AI-driven tools when making employment-related decisions.

Other proposed AI-related legislation underscores how rapidly this movement is accelerating. For example, SB 951—the California Worker Technological Displacement Act—would require employers to provide at least 90 days’ advance notice before layoffs caused by “technological displacement.” In addition, the California Labor Federation has publicly stated that it will sponsor or support more than two dozen bills this year focused on the impact of artificial intelligence on workers in California.

While these bills are typically framed as worker-protection measures, they reflect a deeper and unresolved policy tension—whether AI in the workplace should be regulated piecemeal at the state level, or whether regulation must occur at the federal level to avoid a patchwork of rules that materially hinder innovation, adoption, and economic growth.

SB 947 makes a useful case study: it illustrates how many of these proposals proceed from the same assumptions and raise the same structural problems.

At a high level, bills like SB 947 seek to regulate employers’ use of “automated decision systems” (ADS)—a term defined so broadly that it can encompass AI-driven tools, analytics software, scoring systems, and other technology used to assist with employment decisions. The scope of regulated activity typically extends well beyond hiring and firing to include scheduling, compensation, performance evaluation, work assignments, and discipline.

Under SB 947, for example, employers would be prohibited from relying solely on an automated system for disciplinary or termination decisions and would be required to conduct an “independent investigation” by a human reviewer to corroborate any AI-generated output. Similar proposals impose restrictions on the types of data that may be used, prohibit “predictive behavior analysis,” and bar the use of systems that could infer protected characteristics.

These bills also commonly create new notice and disclosure obligations. If an AI-assisted tool is used in connection with discipline or termination, employers may be required to provide written post-use notices, identify vendors, explain human review processes, and produce data inputs, outputs, corroborating materials, and impact assessments upon request.

Enforcement mechanisms tend to be expansive. Using SB 947 again as an example, compliance would be enforced not only by labor agencies and public prosecutors, but also through private civil actions with attorneys’ fees and punitive damages available. The result is not simply technology regulation, but a new, litigation-driven compliance regime layered on top of already complex employment laws.

Layered onto this regulatory push is a more fundamental uncertainty: we still do not know what AI will do to jobs. Yet many of these bills proceed as if the answers are already settled.

Below are five reasons why state-level efforts to regulate employer adoption of AI—illustrated by SB 947—are misaligned with where the AI policy conversation is actually heading.

1. State-Level AI Regulation Ignores the Growing Federal Consensus on the Need for Uniform Standards

At the federal level, there is increasing bipartisan agreement on one point: a state-by-state approach to AI regulation is incompatible with innovation, compliance, and economic growth. Although Congress has not yet enacted comprehensive AI legislation, federal policymakers have repeatedly emphasized the need for a national framework, particularly for technologies deployed at scale.

AI systems do not respect state borders. Employers operating across multiple jurisdictions cannot realistically deploy one version of a scheduling, hiring, or performance tool for California, another for Colorado, another for Illinois, and another for New York. The compliance burden discourages adoption, especially for mid-sized employers without dedicated AI governance teams.

Bills like SB 947 move states in the opposite direction by layering unique definitions, procedural requirements, and disclosure obligations on top of existing employment law—contributing directly to the fragmentation federal policymakers are attempting to avoid.

2. A Patchwork of State Laws Does Not Protect Workers—It Discourages Responsible AI Adoption

One of the ironies of these proposals is that they may reduce fairness rather than enhance it. When employers are discouraged from using standardized, data-driven tools due to legal risk, decision-making does not disappear—it becomes more subjective.

AI tools, when designed and implemented responsibly, can help standardize employment decisions, improve documentation, flag compliance risks, and reduce arbitrary outcomes in a regulatory environment as complex as California’s. A framework that treats AI as presumptively suspect, while leaving human discretion largely unregulated, misunderstands where workplace risk actually arises.

Advocates for federal preemption are not arguing for deregulation. Nor are they suggesting that existing discrimination, wage and hour, or harassment laws should cease to apply. Rather, they are calling for uniform standards that encourage transparency and responsible adoption instead of regulatory avoidance.

3. These Bills Assume AI’s Impact on Jobs Is Known—It Is Not

State-level AI regulation efforts frequently assume that AI is primarily a job-elimination tool that must be constrained to protect workers. That assumption is premature.

While some routine and repetitive tasks will undoubtedly be automated, history shows that productivity-enhancing technologies often create new categories of work, increase demand in unexpected areas, and expand employment over time.

This dynamic is captured by Jevons’ Paradox: as efficiency improves and costs decrease, demand often increases rather than contracts. Applied to AI, tools that make management, scheduling, analysis, or compliance more efficient may expand operations and create new roles that did not previously exist.

We do not yet know which jobs will shrink, which will evolve, and which will expand. Laws that lock in assumptions too early risk distorting outcomes rather than protecting the workers who are actually impacted.

4. Overregulation Risks Driving AI Use Underground Rather Than Making It Transparent

Another unintended consequence of these proposals is that they incentivize informal or opaque AI use. If deploying AI tools triggers extensive notice obligations, disclosure rights, and litigation exposure, employers may still rely on AI—but in less visible and less documented ways.

That outcome is worse for workers. Transparency and accountability arise from clear, workable rules that encourage open use, not from regimes that make employers defensive. This is particularly problematic given that AI is already embedded in most modern software platforms—from email systems and document tools to scheduling, communications, and analytics.

A federal framework could establish baseline protections while allowing best practices to evolve. State-level mandates risk freezing rules before those practices are even developed.

5. States Risk Becoming Outliers as Federal AI Standards Are Likely to Emerge

Even if bills like SB 947 are enacted, they are unlikely to be the final word. Federal AI legislation—particularly legislation that expressly preempts conflicting state laws—remains a realistic possibility.

If and when federal standards emerge, employers may find themselves having invested heavily in state-specific compliance regimes that are later overridden or rendered obsolete. From a policy perspective, this is inefficient. From a business perspective, it is destabilizing and may influence decisions about where to invest and expand.

The Bottom Line

SB 947 is best understood as an example of a broader legislative trend: state efforts to regulate AI in the workplace before its impacts are fully understood. These proposals often assume harm before evidence, substitute procedural mandates for substantive outcomes, and overlook the growing federal consensus in favor of uniform standards.

AI will change work—there is no question about that. But how, how fast, and for whom remain open questions. A national framework focused on outcomes rather than fear is far more likely to protect workers and encourage responsible innovation than a growing patchwork of state experiments.

What Employers Should Be Doing Now

Regardless of how AI regulation ultimately develops, AI is already in the workplace—often before employers realize it. The real risk for California employers is not AI itself, but using it without clear policies, training, and legal guardrails.

To help employers navigate this evolving landscape, we are hosting a one-hour masterclass focused on the practical, real-world use of AI in the California workplace.

Masterclass: AI in the California Workplace — Practical Tools, Real Use Cases, and Legal Guardrails

We will cover how employers are actually using AI today—from hiring and scheduling to performance management and documentation—along with the key legal and compliance issues to understand, including wage-and-hour exposure, discrimination risk, privacy concerns, and PAGA implications. Attendees will leave with practical guidance on how to use AI responsibly and reduce risk.

Wednesday, February 25, 2026 | 10:00 a.m. PT – Register here.