The European Union's AI Act entered into force in August 2024, with obligations phasing in between 2025 and 2027, and is the world's most comprehensive AI regulation. While it's an EU law, its extraterritorial provisions mean US employers hiring workers located in the EU must comply, even if your company has no physical presence in Europe.
If you use AI in hiring and have candidates or employees in EU member states, this law applies to you.
Key Takeaway:
The EU AI Act classifies AI systems used in employment as "high-risk", triggering strict compliance requirements including conformity assessments, human oversight, transparency obligations, and record-keeping. Penalties reach €35 million or 7% of global annual turnover, whichever is higher.
What is the EU AI Act?
The AI Act (Regulation (EU) 2024/1689) is a comprehensive framework regulating artificial intelligence across the European Union. It takes a risk-based approach, categorizing AI systems into four levels:
- Unacceptable risk: Prohibited AI (e.g., social scoring, subliminal manipulation)
- High risk: Heavily regulated AI in critical domains (includes employment)
- Limited risk: Transparency requirements only
- Minimal risk: No specific obligations
Employment AI systems are classified as high-risk, meaning they face the Act's strictest requirements.
Extraterritorial Reach: When Does the EU AI Act Apply to US Employers?
The AI Act applies if:
- The AI provider is established in the EU (e.g., your vendor is EU-based), OR
- The output of the AI system is used in the EU (e.g., you hire someone located in Germany), OR
- The AI system is deployed within the EU (e.g., you have EU subsidiaries using the tool)
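The three triggers above are disjunctive: any one of them is enough. A minimal sketch of that screening logic, with an illustrative function name and inputs (not terminology from the Act itself):

```python
def eu_ai_act_applies(provider_in_eu: bool,
                      output_used_in_eu: bool,
                      deployed_in_eu: bool) -> bool:
    """Rough first-pass applicability screen: the Act's extraterritorial
    triggers are alternatives, so any single True is sufficient."""
    return provider_in_eu or output_used_in_eu or deployed_in_eu

# US-based company, no EU entity, but one candidate located in France:
print(eu_ai_act_applies(False, True, False))  # True: the Act applies
```

This is a simplification for intuition only; real applicability analysis turns on the facts of each hiring workflow and belongs with counsel.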
Practical Examples
Scenario 1: US Company, EU Candidate
A California-based tech startup uses AI resume screening to evaluate candidates for a remote position. One applicant is located in France. Result: EU AI Act applies to that evaluation, even though the company is US-based.
Scenario 2: US Company, EU Subsidiary
A New York financial services firm has offices in London and Frankfurt. They use AI video interview software globally. Result: EU AI Act applies to all use in EU offices, plus any EU-located candidates interviewing for US positions.
Scenario 3: US Company, EU Vendor
A Texas employer uses an AI hiring tool developed by a Paris-based vendor. Result: The vendor must comply with EU AI Act as the provider. The US employer, as deployer, has separate obligations if using the tool for EU hires.
Key Insight
You don't need EU operations to be subject to the EU AI Act. Hiring or managing workers located in the EU triggers compliance obligations, even if your company is entirely US-based.
Why Employment AI is "High-Risk"
Annex III of the AI Act explicitly lists employment AI systems as high-risk, including systems used for:
- Recruitment and selection of persons (job advertisements, CV analysis, interviews)
- Making decisions affecting employment, promotion, or termination
- Task allocation and monitoring of work performance
Rationale: Employment decisions profoundly impact individuals' livelihoods and fundamental rights. AI bias in hiring can perpetuate discrimination and violate the EU Charter of Fundamental Rights.
Core Obligations for High-Risk Employment AI
1. Conformity Assessment
Before deploying a high-risk AI system, providers must conduct a conformity assessment to demonstrate compliance with EU AI Act requirements. This involves:
- Risk management system: Identify, analyze, and mitigate risks throughout AI lifecycle
- Data governance: Ensure training/testing data is relevant, representative, and free from bias
- Technical documentation: Detailed description of system design, development, and performance
- Logging capabilities: Automatic recording of system events for oversight and accountability
- Transparency: Clear information for deployers and affected persons
- Human oversight: Measures enabling human intervention and control
- Accuracy, robustness, cybersecurity: Technical performance standards
Conformity assessment can be conducted:
- Internally: Provider self-assesses and issues Declaration of Conformity
- Via Notified Body: For certain high-risk systems, independent third-party assessment required
For employment AI: Internal assessment is typically allowed, but must be thorough and documented.
2. Human Oversight
High-risk AI systems must include measures enabling effective human oversight, including:
- Understanding system capabilities and limitations: Humans must know what the AI can/cannot do
- Ability to monitor: Real-time awareness of AI operation
- Ability to intervene: Humans can stop or alter AI decisions
- Ability to override: Final decisions rest with humans, not algorithms
Practical example: A hiring manager reviews AI-generated candidate rankings but makes final hiring decisions. The manager can override AI recommendations and understands the AI's limitations.
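That workflow, where the AI output is advisory and no decision is final without a recorded human determination, can be sketched as follows. The class and field names are hypothetical, not from any vendor API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CandidateDecision:
    candidate_id: str
    ai_recommendation: str            # e.g. "advance" or "reject"
    human_decision: Optional[str] = None

    def finalize(self) -> str:
        # The AI recommendation is advisory only: finalizing without a
        # recorded human determination is treated as an error, which
        # operationalizes the "final decisions rest with humans" rule.
        if self.human_decision is None:
            raise ValueError("human review required before finalizing")
        return self.human_decision

decision = CandidateDecision("c-123", ai_recommendation="reject")
decision.human_decision = "advance"   # reviewer overrides the AI
print(decision.finalize())            # advance
```

The design choice to raise on a missing human decision, rather than default to the AI output, is one way to make oversight structurally unavoidable rather than a policy that can be skipped.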
3. Transparency and Information Obligations
For Deployers (Employers):
- Providers must give deployers sufficient information to understand the AI system
- Instructions for use, technical specifications, performance metrics
- Information about training data, known biases, limitations
For Affected Persons (Candidates/Employees):
- Inform individuals that AI is being used to make employment decisions
- Explain the purpose and logic of the AI system
- Provide information about their rights (right to explanation, right to contest decisions)
4. Record-Keeping and Logging
High-risk AI systems must automatically log:
- Events and decisions made by the AI system
- Data inputs and outputs
- Timestamp and context of AI operations
- Human oversight actions (interventions, overrides)
Retention period: Logs must be kept for at least 6 months, or longer if required by other EU laws (e.g., GDPR, employment law).
5. Accuracy, Robustness, and Cybersecurity
AI systems must achieve appropriate levels of:
- Accuracy: AI predictions must be correct and reliable
- Robustness: System performs consistently across scenarios
- Cybersecurity: Protected against attacks, manipulation, and unauthorized access
Providers must validate these characteristics and make performance metrics available to deployers.
6. Post-Market Monitoring
After deployment, providers must:
- Continuously monitor AI system performance
- Establish mechanisms for reporting serious incidents or malfunctions
- Update risk assessments as new information emerges
- Implement corrective actions when problems arise
Deployer Obligations: What US Employers Must Do
As the "deployer" (entity using the AI system), US employers have separate obligations:
1. Use AI Systems Appropriately
- Follow provider's instructions for use
- Use AI only for its intended purpose
- Ensure input data is relevant and of sufficient quality
2. Ensure Human Oversight
- Assign individuals to oversee AI system operation
- Train oversight personnel on AI capabilities and limitations
- Enable human intervention in AI decision-making
3. Monitor and Report
- Monitor AI system for malfunctions or unexpected behavior
- Report serious incidents to provider and authorities
- Suspend AI use if serious risks identified
4. Conduct Data Protection Impact Assessments (If Required)
Under GDPR, automated decision-making may require Data Protection Impact Assessments (DPIAs). EU AI Act coordinates with GDPR—often a single integrated assessment suffices.
5. Inform Affected Persons
- Tell candidates/employees that AI is being used
- Explain the AI's role in employment decisions
- Provide information about rights (explanation, contest decisions)
Prohibited AI Practices in Employment
The EU AI Act bans certain AI uses outright. In the employment context, prohibited practices include:
- Subliminal manipulation: AI that manipulates behavior without awareness
- Exploiting vulnerabilities: AI targeting disabilities or socioeconomic status to manipulate behavior
- Social scoring: General-purpose evaluation of individuals' trustworthiness or social behavior
- Emotion recognition in the workplace: AI systems that infer employees' emotions, subject to narrow medical and safety exceptions
- Real-time remote biometric identification in public spaces: Limited exceptions; not generally applicable to employment but could affect workplace surveillance
Note: Personality assessments and behavioral analysis in hiring remain permitted (regulated as high-risk, not prohibited) if they comply with high-risk requirements and do not cross into emotion inference.
GDPR and EU AI Act: How They Interact
AI hiring tools in the EU must comply with both the AI Act and GDPR:
GDPR Requirements for AI Hiring
- Legal basis for processing: Need lawful basis (typically legitimate interest or consent)
- Data minimization: Collect only necessary personal data
- Transparency: Explain AI logic in privacy notices
- Right to explanation: Individuals can request explanation of automated decisions
- Right to human intervention: Individuals can contest solely automated decisions and request human review
- Data Protection Impact Assessment: Required for high-risk automated processing
Coordinated Compliance
The AI Act explicitly states it's without prejudice to GDPR. In practice:
- AI Act conformity assessment can incorporate GDPR DPIA
- AI Act transparency obligations align with GDPR disclosure requirements
- AI Act human oversight dovetails with GDPR right to human intervention
- Both require logging and record-keeping (can use unified system)
Tip: Build integrated compliance framework addressing both regulations simultaneously.
Penalties for Non-Compliance
EU AI Act penalties are substantial:
Penalty Tiers:
- Prohibited AI use: Up to €35 million or 7% of global annual turnover
- Non-compliance with high-risk requirements: Up to €15 million or 3% of global turnover
- Incorrect/incomplete information to authorities: Up to €7.5 million or 1% of global turnover
Whichever amount is higher applies. For global companies, this can be hundreds of millions of dollars.
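The "whichever is higher" rule means the fixed cap acts as a floor, not a ceiling, for large companies. A quick arithmetic sketch (a hypothetical helper, with the high-risk tier figures from above):

```python
def max_penalty_eur(fixed_cap_eur: float, pct_of_turnover: float,
                    global_turnover_eur: float) -> float:
    """EU AI Act fines apply the higher of a fixed amount and a
    percentage of global annual turnover."""
    return max(fixed_cap_eur, pct_of_turnover * global_turnover_eur)

# High-risk non-compliance tier (EUR 15M or 3%) for a company with
# EUR 10 billion in global annual turnover:
print(max_penalty_eur(15_000_000, 0.03, 10_000_000_000))  # 300000000.0
```

For a company of that size, the percentage prong dominates: the exposure is €300 million, twenty times the fixed cap.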
Beyond financial penalties:
- Injunctions: Orders to stop using non-compliant AI systems
- Product recalls: Removal of AI systems from market
- Reputational damage: Public enforcement actions harm brand
- Operational disruption: Forced to abandon AI tools mid-hiring cycle
Compliance Roadmap for US Employers
Step 1: Determine Applicability
- Do you hire candidates located in EU member states?
- Do you have employees or contractors working in the EU?
- Do you use AI tools from EU-based vendors?
If yes to any, EU AI Act likely applies.
Step 2: Inventory AI Systems
- List all AI tools used in recruitment, hiring, performance management, promotions
- Classify each as high-risk, limited-risk, or minimal-risk
- Identify providers (who developed the AI) and deployers (who uses it—likely you)
Step 3: Engage Providers
For AI tools you buy from vendors:
- Request conformity assessment documentation
- Obtain technical documentation and instructions for use
- Verify CE marking and the EU Declaration of Conformity
- Require contractual representations of EU AI Act compliance
- Establish incident reporting procedures
Step 4: Implement Deployer Requirements
- Assign human oversight responsibility
- Train personnel on AI system operation and limitations
- Establish monitoring and incident response processes
- Create candidate/employee notification mechanisms
- Set up logging and record-keeping infrastructure
Step 5: Coordinate with GDPR Compliance
- Update privacy notices to explain AI use
- Conduct or update DPIAs for automated decision-making
- Establish procedures for right to explanation and human review requests
- Align data retention policies with GDPR and AI Act requirements
Step 6: Monitor Regulatory Developments
The AI Act has phased implementation. Key dates:
- August 2024: Act entered into force
- February 2025: Prohibited AI rules apply
- August 2026: High-risk AI obligations apply (employment AI)
- August 2027: Remaining provisions apply
As of February 2026: You're in the compliance window for high-risk employment AI. Full enforcement begins August 2026.
Practical Challenges for US Employers
Challenge 1: Vendor Readiness
Many US AI vendors are not yet EU AI Act compliant. If your vendor can't provide conformity documentation by August 2026, you may need to:
- Switch to compliant vendor
- Pause AI use for EU hires
- Pressure vendor to achieve compliance
- Accept enforcement risk (not recommended)
Challenge 2: Cost of Compliance
Conformity assessments, technical documentation, logging infrastructure, and human oversight add cost. Estimate €50,000-€250,000 per AI system for full EU AI Act compliance.
Challenge 3: Conflicting Requirements
US state laws (e.g., NYC bias audits) may have different standards than EU AI Act conformity assessments. You may need parallel compliance processes for different jurisdictions.
Challenge 4: Extraterritorial Enforcement
EU member states can enforce against non-EU companies. Enforcement mechanisms include:
- Penalties enforceable against EU assets or revenues
- Market access restrictions (banned from EU hiring)
- Cooperation with US authorities (via mutual legal assistance treaties)
How EmployArmor Helps with EU AI Act Compliance
- Applicability analysis: Determine if EU AI Act applies to your hiring practices
- Vendor assessment: Evaluate AI provider compliance with EU AI Act
- Deployer obligation tracking: Manage human oversight, monitoring, and notification requirements
- GDPR coordination: Integrated compliance framework for AI Act + GDPR
- Candidate notification: EU-compliant disclosure and explanation processes
- Documentation repository: Centralized logging and record-keeping
Hiring in the EU?
Get Your EU AI Act Readiness Assessment →
Frequently Asked Questions
Does the EU AI Act apply if we only hire EU citizens working in the US?
No. The key factor is location of the individual when AI is used, not their citizenship. If the candidate is physically in the US during evaluation, EU AI Act likely doesn't apply (though other factors could trigger it).
What if we hire remote EU workers through a PEO or contractor arrangement?
Likely still applies. If you're making hiring decisions about individuals located in the EU using AI, you're subject to the Act regardless of employment structure.
Can we just exclude EU candidates to avoid EU AI Act compliance?
Technically possible but may violate EU anti-discrimination law or limit your talent pool. Many US companies with global ambitions choose compliance over exclusion.
How does this interact with the UK's AI regulation?
The UK (post-Brexit) is developing separate AI regulation. Currently, the UK follows a sector-specific approach rather than comprehensive AI Act. If you hire in both EU and UK, monitor UK developments separately.
What if our AI vendor is US-based but we use the tool for EU hires?
The vendor (provider) is subject to EU AI Act if their product is used in the EU. You (deployer) have separate obligations. Work with vendor to ensure they can meet provider requirements; you handle deployer requirements.
Do US-based companies with no EU operations but occasional EU remote hires really need to comply?
Yes, if you're using AI to evaluate candidates located in the EU. The EU AI Act has extraterritorial reach: it applies based on where the AI's outputs are used or affect individuals, not where your company is headquartered. Even one EU-based remote hire evaluated with AI triggers compliance obligations. Penalties can reach €35 million or 7% of global turnover, whichever is higher, and the EU takes extraterritorial enforcement seriously (see GDPR enforcement against US companies). You cannot ignore the EU AI Act just because you're US-based. Alternative: explicitly exclude EU candidates from roles if you're unwilling to invest in compliance, but this limits your talent pool significantly. See our Compliance Program Guide for multi-jurisdictional implementation strategies.
How does the EU AI Act's conformity assessment process work for high-risk hiring AI?
Conformity assessment is required before deploying high-risk AI (employment/HR AI is explicitly high-risk under Annex III). Process involves: (1) Provider (AI vendor) conducts internal testing against EU requirements, (2) Provider prepares technical documentation and Declaration of Conformity, (3) For some high-risk systems, third-party Notified Body conducts independent assessment, (4) CE marking applied to compliant systems. As deployer (employer), you must verify your vendor completed conformity assessment and obtain Declaration of Conformity before use. US vendors may struggle with this—many aren't set up for EU conformity assessment. Budget expectation: Conformity assessment adds $50,000-150,000 to vendor's product development costs, which they may pass through to customers. Timeline: 6-12 months for initial assessment. Deployment without conformity assessment is a serious violation—don't deploy AI in EU without verification.
Practical EU AI Act Implementation for US Employers
Phase 1: Scope Assessment (Months 1-2)
- ☐ Identify all positions that may hire EU-located candidates
- ☐ Document current AI tools used in hiring process
- ☐ Determine which AI systems qualify as "high-risk" under EU AI Act
- ☐ Map which EU member states you hire in (different data protection authorities)
- ☐ Estimate annual EU hiring volume to assess compliance cost-benefit
Phase 2: Vendor Due Diligence (Months 3-4)
- ☐ Request EU AI Act compliance documentation from all AI vendors
- ☐ Verify conformity assessments and CE marking where required
- ☐ Review vendor technical documentation for transparency obligations
- ☐ Confirm vendor has EU representative (required for non-EU providers)
- ☐ Negotiate contract addendums for EU AI Act compliance support
Phase 3: Deployer Obligations (Months 5-7)
- ☐ Conduct a Fundamental Rights Impact Assessment (FRIA) where required (mandatory for public bodies and certain private deployers; good practice for others)
- ☐ Implement human oversight procedures for EU AI decisions
- ☐ Create transparency notices for EU candidates (what AI evaluates, their rights)
- ☐ Establish logging and monitoring systems (EU requires extensive recordkeeping)
- ☐ Designate responsible person for EU AI Act compliance
- ☐ Register high-risk AI systems in EU database (when operational, expected 2027)
Phase 4: Ongoing Compliance (Continuous)
- ☐ Monitor AI system performance and outcomes (quarterly minimum)
- ☐ Update FRIAs annually or when material changes occur
- ☐ Maintain logs of AI decisions for required retention period
- ☐ Report serious incidents to relevant authorities within 15 days
- ☐ Cooperate with market surveillance authorities during inspections
Enforcement Timeline and Expectations
Phased Implementation (2024-2027)
- August 2024: EU AI Act entered into force
- February 2025: Prohibited AI practices banned (including emotion recognition in the workplace)
- August 2025: Governance and notified body framework operational
- August 2026: Obligations for high-risk AI systems (including hiring AI) begin
- August 2027: Full enforcement including high-risk AI system registration database
Current Enforcement Posture
As of February 2026, EU member states are still building enforcement capacity. However, early signals suggest aggressive enforcement:
- Germany: BfDI (data protection authority) issued guidance that hiring AI enforcement is a priority, coordinating with labor authorities.
- France: CNIL announced hiring AI as focus area for 2026 inspections, particularly in tech and financial services sectors.
- Netherlands: Dutch DPA investigating several unnamed employers for non-compliant hiring AI, outcomes expected Q2 2026.
- Ireland: DPC (relevant for many US tech companies with EU HQs in Ireland) issued preliminary warning letters to multinationals about AI hiring compliance.
Related Resources
- AI Hiring Compliance Guide 2026
- Federal AI Hiring Laws (US)
- State-by-State AI Hiring Laws
- How to Conduct an AI Impact Assessment
Disclaimer: This content is for informational purposes only and does not constitute legal advice. Employment laws vary by jurisdiction and change frequently. Consult a qualified employment attorney for guidance specific to your situation. EmployArmor provides compliance tools and resources but is not a law firm.