Workday has evolved from an HRIS platform into a comprehensive AI-powered talent ecosystem. Its machine learning capabilities—candidate matching, skills intelligence, predictive analytics, and automated screening—are integrated so seamlessly that many HR teams don't realize they're deploying AI tools subject to the same regulations as standalone platforms like HireVue.
That integration is both Workday's strength and its compliance challenge. If you're using Workday Recruiting, Talent Marketplace, or Skills Cloud, you're almost certainly using AI in ways that trigger legal obligations. This guide explains what those features do, which laws apply, and how to stay compliant.
What You'll Learn:
- ✓ Which Workday features use AI/ML and how they work
- ✓ Applicable federal and state AI hiring regulations
- ✓ Workday's ongoing discrimination lawsuit and implications
- ✓ Required disclosures and bias audit obligations
- ✓ Step-by-step compliance implementation
- ✓ Risk mitigation strategies
Understanding Workday's AI-Powered Features
Workday's AI capabilities span recruiting, talent management, and workforce planning. Here's what's actually powered by machine learning:
1. Candidate Skills Match
What it does: Automatically extracts skills from job postings and candidate resumes/profiles, then calculates a match score indicating how well the candidate's skills align with the role.
How it works:
- Natural language processing (NLP) analyzes job descriptions to identify required skills
- ML algorithms parse candidate resumes and Workday profiles to extract skills and experience
- The system generates a percentage match score (e.g., "85% match")
- Candidates are ranked by match score for recruiter review
Compliance consideration: This is an Automated Employment Decision Tool (AEDT) under NYC Local Law 144 and similar statutes. If you use match scores to screen candidates or prioritize who to interview, bias audit requirements apply.
2. Job Recommendations (Spotlight)
What it does: Workday's "Spotlight" feature uses AI to match job seekers with relevant openings and surface passive internal candidates for open roles.
How it works:
- ML models analyze candidate profiles, work history, skills, and preferences
- Algorithms compare candidate attributes to job requirements
- The system proactively recommends jobs to candidates and candidates to hiring managers
- Recommendation strength is based on predicted fit and performance likelihood
Compliance consideration: If hiring managers rely on Spotlight recommendations to decide who to interview, this constitutes automated decision-making requiring disclosure and potential bias auditing.
3. Skills Intelligence and Ontology
What it does: Workday Skills Cloud uses AI to map skills across the organization, identify skill gaps, and recommend learning pathways and internal mobility opportunities.
How it works:
- ML algorithms build a skills taxonomy from job data, resumes, and employee profiles
- The system identifies adjacent skills and transferable capabilities
- AI suggests internal candidates for roles based on skills proximity
- Predictive models estimate skill development timelines
Compliance consideration: When used for internal mobility and promotions, skills-based matching is subject to the same bias audit and disclosure requirements as external hiring.
4. Predictive Analytics and Talent Insights
What it does: Workday uses historical data to predict candidate success, flight risk, time-to-fill, and other talent metrics.
How it works:
- ML models train on past hiring outcomes to predict future performance
- Algorithms identify patterns in successful hires' backgrounds and attributes
- The system flags high-potential candidates as well as candidates likely to decline offers or leave early
Compliance consideration: Predictive scoring that influences hiring or promotion decisions requires validation to ensure job-relatedness and avoid disparate impact.
5. Automated Screening and Pre-Qualification
What it does: Workday can automatically filter candidates based on minimum qualifications, knockout questions, or eligibility criteria.
How it works:
- Rules-based AI screens candidates against must-have requirements
- Candidates who don't meet criteria are automatically rejected or deprioritized
- ML may enhance screening by identifying patterns in successful candidate profiles
Compliance consideration: Automated rejection is explicitly covered by AI hiring laws. Employers must ensure screening criteria are job-related and don't produce disparate impact.
State and Federal Laws Governing Workday AI
Because Workday's AI features are embedded in core hiring workflows, nearly all AI hiring regulations apply:
Federal: EEOC Guidance on AI Hiring
The EEOC's May 2024 Technical Guidance makes clear: employer liability for algorithmic discrimination is not eliminated by using a vendor's tools. Key points:
- Title VII, ADA, and ADEA apply to AI hiring tools regardless of vendor
- Employers must validate that AI tools are job-related and consistent with business necessity
- Disparate impact analysis is required—if Workday's AI disproportionately screens out protected groups, employers can be held liable
- "We trusted Workday" is not a defense
New York City: Local Law 144
NYC's bias audit requirement explicitly covers Workday's candidate matching and recommendation features:
- Annual independent bias audit analyzing selection rates by race, ethnicity, and sex
- Public posting of audit results
- Candidate notification at least 10 business days before AI use
- Alternative process for candidates who opt out
- Data retention transparency
Penalty: $500-$1,500 per violation; each day of non-compliance is a separate violation
California: AB 2930
California's AI hiring law (effective January 1, 2026) requires:
- Disclosure to candidates before deployment
- Annual bias testing and reporting
- Data minimization (collect only necessary data)
- Right to human review of automated decisions
Enforcement is via the California Attorney General; penalties follow CCPA-style structure.
Colorado: AI Act (SB 24-205)
Colorado classifies AI hiring tools as "high-risk systems" requiring:
- Algorithmic impact assessment before deployment
- Disclosure to candidates and employees
- Opt-out rights with alternative evaluation
- Human review of AI-generated decisions
- Annual algorithmic accountability reporting
Penalty: Up to $20,000 per violation
Illinois: Limited Applicability
Illinois' Artificial Intelligence Video Interview Act (AIVIA) specifically covers video interview AI, so it generally doesn't apply to Workday's text- and data-based matching features—unless you integrate Workday with a video interview AI platform.
The Workday Discrimination Lawsuit: What Happened
In 2023, a significant class action lawsuit was filed against Workday, Inc. alleging that its AI-based screening tools unlawfully discriminate against job applicants.
Key Allegations
The lawsuit (Mobley v. Workday, Inc., filed in California federal court) alleges:
- Algorithmic bias: Workday's "Candidate Skills Match" and automated screening tools disproportionately reject older applicants, Black applicants, and applicants with disabilities
- Opaque decision-making: Candidates are rejected without explanation or visibility into how the AI evaluated them
- Employer reliance: Companies using Workday delegate hiring decisions to the AI without human review or validation
- Failure to validate: Workday allegedly did not conduct sufficient adverse impact testing or job-relatedness validation
Workday's Response
Workday has publicly stated that its AI tools are designed with bias mitigation in mind and that the company conducts ongoing monitoring and testing. In a public statement on responsible AI and bias mitigation, Workday emphasizes:
- Use of debiasing techniques and fairness constraints
- Regular audits of algorithms by third-party experts
- Employer control over AI configuration and thresholds
- Transparency tools for understanding AI recommendations
However, Workday also acknowledges that "employers are responsible for their use of Workday features and must ensure compliance with employment laws."
Implications for Employers
This lawsuit underscores critical compliance realities:
- Vendor tools don't eliminate liability. Even if Workday's AI passes bias audits, employers can still be sued if their specific use produces discriminatory outcomes.
- Transparency matters. Candidates increasingly demand to know how AI evaluated them, and they're filing lawsuits when rejected without explanation.
- Validation is required. Relying on Workday's AI without employer-specific adverse impact analysis creates legal exposure.
Required Disclosures: What to Tell Candidates
Compliant Workday AI disclosure must explain which Workday features you're using and how they affect decisions.
Minimum Disclosure Elements
- ✓ That Workday's AI/ML features are used in hiring
- ✓ Specific features deployed (e.g., "Skills Match," "Spotlight recommendations")
- ✓ What the AI evaluates (skills, experience, profile data)
- ✓ How AI output influences decisions (e.g., "used to rank candidates," "determines interview invitations")
- ✓ Data collected and retention period
- ✓ Option to request human-only review
- ✓ Contact information for questions or accommodations
Sample Workday AI Disclosure Language
AI Use in Hiring Notice
[Company] uses Workday's artificial intelligence and machine learning features to support our hiring process. Specifically, we use:
- Candidate Skills Match: AI analyzes your resume and profile to identify your skills and calculate how well they match our job requirements
- Job Recommendations: AI suggests relevant job openings based on your profile and experience
- Candidate Ranking: AI ranks candidates based on predicted fit and performance likelihood
These AI tools evaluate your skills, work history, education, and other information you provide. AI-generated match scores and rankings are used by our hiring team to determine who to interview and advance through our process.
You have the right to:
- Request that your application be reviewed by a human without AI scoring
- Ask questions about how the AI evaluated your candidacy
- Request accommodations if you have a disability that may be affected by AI evaluation
To exercise these rights or ask questions, contact [email] or [phone number].
Disclosure Timing and Placement
Where and when to disclose:
- Job postings: Include AI use notice in job descriptions
- Application page: Display notice before candidate submits application
- Confirmation email: Send dedicated notice after application submission (NYC: at least 10 business days before AI use)
- Workday career site: Add persistent AI notice to careers page footer
Step-by-Step Compliance Implementation
Phase 1: Inventory and Assessment (Weeks 1-2)
1. Identify which Workday AI features you're using
- Audit your Workday configuration (Recruiting, Talent Marketplace, Skills Cloud)
- Determine which AI/ML features are enabled
- Document how each feature influences hiring decisions
2. Map jurisdictional requirements
- Identify states/cities where you hire
- List applicable AI hiring laws
- Determine which Workday features trigger which requirements
Phase 2: Vendor Due Diligence (Weeks 3-4)
3. Request Workday compliance documentation
- Bias audit results for relevant AI features
- Technical documentation on how algorithms work
- Validation studies demonstrating job-relatedness
- Data privacy and security practices
- Contractual representations about compliance support
4. Conduct employer-specific impact analysis
- Pull hiring data from Workday by demographic category
- Calculate selection rates for candidates evaluated by AI vs. those who weren't
- Identify any statistically significant disparities
- If disparate impact exists, document job-relatedness justification
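The arithmetic behind step 4 is the standard four-fifths (80%) rule: compute each group's selection rate, divide by the highest group's rate, and flag any impact ratio below 0.8. Here is a minimal Python sketch; the group labels and counts are hypothetical placeholders, not real data, and a real analysis should use your actual Workday exports and proper demographic categories:

```python
# Illustrative four-fifths (80%) rule check on hypothetical selection data.
# Counts below are made up; substitute your own Workday hiring exports.

applicants = {  # group -> (candidates evaluated by AI, selected for interview)
    "Group A": (400, 120),
    "Group B": (300, 60),
    "Group C": (250, 70),
}

# Selection rate = selected / evaluated, per group
rates = {g: sel / total for g, (total, sel) in applicants.items()}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest  # ratio vs. the most-selected group
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate={rate:.1%}, impact ratio={impact_ratio:.2f} [{flag}]")
```

With these invented numbers, Group B's impact ratio falls below 0.8 and would warrant further statistical review and a documented job-relatedness justification. Note that the four-fifths rule is a screening heuristic, not a legal conclusion; auditors typically pair it with significance testing.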
Phase 3: Policy and Process Updates (Weeks 5-6)
5. Create disclosure materials
- Draft job posting AI notice language
- Update Workday application page with disclosure
- Create post-application confirmation email with detailed AI notice
- Add AI use policy to careers site
6. Define alternative evaluation process
- Document how candidates who opt out of AI will be evaluated
- Train recruiters and hiring managers on executing alternative process
- Ensure opt-outs receive equivalent consideration (no penalty for opting out)
Phase 4: Bias Audit (Weeks 7-12, if required)
7. Commission independent bias audit (NYC, CA, CO)
- Hire qualified industrial-organizational psychologist or employment testing expert
- Provide auditor with candidate data (anonymized where possible)
- Review audit findings and address any identified disparate impact
- Publish audit results per local law requirements (NYC: public website)
Phase 5: Deployment and Training (Weeks 13-14)
8. Update Workday configuration
- Configure data retention settings per jurisdiction requirements
- Enable candidate notification workflows
- Set up opt-out request handling process
9. Train your team
- HR and recruiting: New disclosure and consent requirements
- Hiring managers: How to interpret AI scores without over-relying on them
- Legal/compliance: Ongoing monitoring and incident response
Phase 6: Ongoing Compliance (Continuous)
10. Monitor and iterate
- Quarterly review of hiring outcomes by demographic category
- Annual bias audits (where required or as best practice)
- Track Workday feature updates that may introduce new AI capabilities
- Update disclosures as regulations evolve
Common Compliance Pitfalls
❌ Pitfall 1: Not Realizing You're Using AI
The problem: Workday's AI is so integrated that HR teams often don't know which features involve machine learning. "Skills Match" sounds like a keyword search—but it's actually ML-powered scoring.
The fix: Audit your Workday configuration with Workday support or a consultant. Document exactly which AI/ML features are active.
❌ Pitfall 2: Over-Reliance on Match Scores
The problem: Recruiters see "62% match" and assume the candidate isn't qualified, without reading the actual resume. This creates disparate impact risk if the AI is biased.
The fix: Train hiring teams to treat AI scores as advisory, not determinative. Require human review of all candidates before rejection.
❌ Pitfall 3: No Employer-Specific Validation
The problem: Workday may publish bias audit results, but those are generic. Your specific job categories, candidate pool, and configuration may produce different (worse) outcomes.
The fix: Conduct your own adverse impact analysis using your actual Workday hiring data.
❌ Pitfall 4: Ignoring Internal Mobility AI
The problem: Many employers focus on external hiring compliance but forget that Workday's AI also powers internal job recommendations and promotions—which are equally regulated.
The fix: Apply the same disclosure, audit, and validation requirements to internal talent mobility features.
❌ Pitfall 5: Inadequate Opt-Out Process
The problem: Employer says "contact HR to opt out" but doesn't define what happens next. Candidate emails, gets no response, and assumes they're rejected.
The fix: Build a documented workflow: opt-out request → acknowledgment within 24 hours → human-only review → decision communication. Train HR on execution.
Risk Mitigation Strategies
To reduce legal exposure while using Workday AI:
1. Use AI as Advisory, Not Determinative
Configure Workday so AI scores inform human decision-makers but don't automatically reject or advance candidates. Require recruiter review before any AI-driven action.
2. Implement Human Override Process
Allow recruiters to override AI rankings when there's contextual justification (e.g., transferable skills the AI didn't recognize, unique experience, diversity goals).
3. Conduct Periodic Validation Studies
Annually review whether AI-scored candidates actually perform better than those the AI rejected. If not, the AI isn't job-related—creating legal risk.
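One way to sketch such a validation check, assuming you can label each hire with an outcome (e.g., one-year retention) and whether the AI scored them highly: compare the two groups' success rates with a two-proportion z-test. All counts below are invented for illustration, and a real study should be designed with an I-O psychologist:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference in success rates between two groups."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: "successful" outcomes among hires the AI scored highly
# vs. hires advanced through human override of the AI ranking.
z, p = two_proportion_z(success_a=85, n_a=100, success_b=78, n_b=100)
print(f"z={z:.2f}, p={p:.3f}")  # a large p suggests the scores aren't predictive
```

If the difference is not statistically significant (large p-value), the AI score is not demonstrably predicting job success, which undercuts a job-relatedness defense.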
4. Enhance Transparency
Consider providing rejected candidates with a brief explanation of how the AI evaluated them and what factors led to the decision. This reduces complaints and demonstrates good faith.
5. Disability Accommodations
Proactively identify how Workday's AI might disadvantage candidates with disabilities (e.g., resume formatting issues for screen reader users). Offer human review for accommodation requests.
How EmployArmor Simplifies Workday Compliance
Managing Workday AI compliance across multiple jurisdictions and features is complex. EmployArmor helps by:
- Workday AI inventory: Automated detection of which AI/ML features you're using
- Jurisdiction-specific disclosures: Generate compliant notices for every state/city where you hire
- Bias monitoring: Integrate with Workday data to track hiring outcomes by demographic category with automated disparate impact alerts
- Audit coordination: Connect with qualified auditors and manage the bias audit process
- Opt-out workflow: Automated handling of alternative evaluation requests
- Policy templates: Pre-built Workday AI hiring policies meeting all regulatory requirements
Using Workday AI? Assess Your Compliance Risk.
Get Your Free Workday Compliance Assessment →

Frequently Asked Questions
How do I know if I'm using Workday AI features?
Check your Workday Recruiting configuration. If you have "Skills Match," "Spotlight," "Job Recommendations," or "Talent Marketplace" enabled, you're using AI. Contact your Workday account team for a full AI feature audit.
Do I need a bias audit for Workday?
NYC: Yes, if you use Workday AI for hiring or promotion decisions. California: Annual bias testing is required. Other states: not always legally required, but strongly recommended to reduce litigation risk.
Can I turn off Workday's AI features?
Yes, but you'll lose significant functionality. A better approach: use AI compliantly by implementing proper disclosures, audits, and human oversight.
Are we liable for Workday's algorithm if it's biased?
Yes. The EEOC and courts have made clear that vendor AI doesn't eliminate employer liability. You must validate Workday's tools for your specific use case.
What should we do about the Workday lawsuit?
Monitor the case for developments. Conduct your own adverse impact analysis to determine if your Workday deployment produces discriminatory outcomes. Document your validation efforts and bias mitigation measures.
How do Workday's AI features interact with internal promotions and career development?
Workday Talent Marketplace uses AI to match employees with internal opportunities based on skills, experience, and career goals. This falls under AI hiring regulations in many jurisdictions (NYC Local Law 144 explicitly covers promotions). You must provide the same disclosures to internal candidates as external ones. Document how AI recommendations are used—are they purely informational, or do they substantially influence promotion decisions? If the latter, full compliance requirements apply including bias audits and alternative processes.
Can we use Workday AI for recruiting but not for final hiring decisions?
Yes, but compliance still applies. Even if AI only creates a shortlist that humans review, it "substantially assists" decisions under most regulations. The AI determines who humans see and who gets filtered out—that's a consequential decision requiring disclosure. You can reduce risk by emphasizing human oversight: ensure recruiters can override AI rankings, review a sample of AI-rejected candidates periodically, and document independent human judgment at each stage. See our Compliance Program Guide for human oversight best practices.
2026 Compliance Updates for Workday
Recent Regulatory Developments
Several 2026 changes directly impact Workday deployments:
- Colorado AI Act (effective Feb 1, 2026): Workday AI qualifies as a "high-risk AI system" requiring algorithmic impact assessments. Employers using Workday in Colorado must document purpose, data sources, potential harms, mitigation measures, and human oversight procedures. Deadline: before deploying, or by Feb 1, 2026 for existing deployments. Non-compliance: up to $20,000 per violation.
- California AB 2930 (effective Jan 1, 2026): Expands on CCPA ADMT rules with annual bias testing requirements and enhanced candidate rights. Workday users in California must conduct or commission annual audits and publish summary results. No private right of action yet, but AG enforcement is active.
- Illinois HB 3773 expansion (effective Jan 1, 2026): Adds requirement for employers to explain data inputs and outputs in AI disclosures. Generic "we use AI matching" is no longer sufficient—you must explain that Workday analyzes skills data, experience, and historical hiring patterns to generate match scores.
- EEOC Strategic Enforcement Plan: The EEOC announced in December 2025 that AI hiring tools (specifically mentioning enterprise HCM systems like Workday) are a priority for 2026-2028 enforcement. Expect increased audits and investigations of employers using Workday AI features.
Workday Product Updates Affecting Compliance
Workday released several AI enhancements in 2025-2026 that may change your compliance posture:
- Workday Illuminate (2025): Enhanced ML models for skills inference and career pathing. If you upgraded to Illuminate, review your disclosure language—the AI now makes broader inferences about candidate capabilities than earlier versions.
- Recruiter Skills Cloud (2026): New feature using external labor market data to augment candidate profiles. This data enrichment counts as "automated decision-making" under GDPR and California law. Ensure candidates consent to external data usage.
- SmartMatch 2.0 (2026): Improved matching algorithm with "explainability" features. Good news for compliance: you can now show candidates and regulators why the AI made certain matches. Bad news: this is a new algorithm version requiring fresh bias audits under NYC law.
Action Items for Existing Workday Customers
If you're already using Workday AI features, take these steps before Q2 2026:
- Conduct compliance gap analysis: Compare your current practices against Colorado, California, Illinois, and NYC requirements. Identify where you fall short. Use our free compliance scorecard for a structured assessment.
- Update disclosure language: Refresh your AI notices to meet 2026 specificity requirements. Mention "Workday" by name, explain Skills Match and other features you use, and describe how AI outputs influence decisions.
- Commission or conduct bias audit: If you haven't audited Workday AI in the past 12 months, you're out of compliance in NYC and California. Budget $15,000-25,000 for independent auditing services. See our Bias Audit Guide for vendor recommendations.
- Document human oversight: Create written procedures showing how recruiters review and override Workday AI recommendations. Train staff on these procedures. Maintain audit trail of instances where humans overrode AI suggestions.
- Review vendor contract: Ensure your Workday contract includes compliance support provisions: access to data for auditing, notification of algorithm changes, technical documentation for regulators, and indemnification for vendor-caused compliance failures.
Conclusion: Workday AI is Powerful—But Not Autopilot
Workday's AI features deliver real value: faster candidate matching, better skills visibility, more efficient recruiting. But that value comes with responsibility. In 2026, employers can't treat Workday AI as a "set it and forget it" solution.
The companies succeeding with Workday are those investing in compliance: understanding what the AI does, validating it for their specific use, disclosing it transparently, and maintaining human oversight. That's not just legal protection—it's how you build a hiring process that's both efficient and fair.
Related Resources
- AI Hiring Compliance Guide 2026
- Do I Need an AI Bias Audit?
- HireVue Compliance Guide
- Greenhouse ATS AI Compliance Guide
Disclaimer: This content is for informational purposes only and does not constitute legal advice. Employment laws vary by jurisdiction and change frequently. Consult a qualified employment attorney for guidance specific to your situation. EmployArmor provides compliance tools and resources but is not a law firm.