AI in Hiring 2026: How CVs Are Filtered and What the EU AI Act Changes
Published April 21, 2026 · 9 min read
Quick answer: In 2026, AI screens most CVs at mid-to-large employers before a recruiter opens them. Under the EU AI Act, AI used for employment decisions is a “high-risk” system with full compliance obligations from 2 August 2026 — transparency, human oversight, bias testing, incident logging. Candidates gain a right to know AI was used and to contest the outcome.
How AI actually filters CVs today
The typical 2026 hiring stack has four AI layers:
- CV parsing: A parser (HireEZ, Sovren, Rchilli, or bespoke) converts a PDF into structured fields. Modern parsers handle most layouts but still trip on icons, tables, and non-standard headings. See our ATS guide.
- Match scoring: A second model (often an LLM with embeddings) compares your CV to the job description and produces a score from 0 to 100. Anything below a threshold (commonly 60–70) is auto-rejected or routed to a low-priority queue.
- Skill extraction & enrichment: The system infers skills from context (“led migration to Kubernetes” implies Docker, YAML, CI/CD). It also cross-references public profiles (LinkedIn, GitHub) where the employer has paid for enrichment.
- Automated outreach: Top-scoring candidates get automated emails, interview scheduler links, and in some pipelines a first-round async video interview scored by an LLM.
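The match-scoring step above can be sketched in a few lines. This is a toy illustration, not any vendor's actual model: `embed()` here is a bag-of-words stand-in for a real embedding model, and the threshold of 65 is an assumed example value inside the common 60–70 range.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding" standing in for a real embedding model
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def match_score(cv: str, job: str) -> int:
    # Scale similarity to the 0-100 range typical ATS scoring uses
    return round(100 * cosine(embed(cv), embed(job)))

def route(score: int, threshold: int = 65) -> str:
    # Below-threshold candidates never reach a recruiter's queue
    return "recruiter_queue" if score >= threshold else "low_priority"

cv = "python engineer led migration to kubernetes ci cd pipelines"
job = "engineer with kubernetes and ci cd experience python preferred"
score = match_score(cv, job)
```

Real pipelines swap the toy `embed()` for a neural embedding model, but the routing logic — score, compare to threshold, queue or reject — works exactly like this sketch.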
Where the EU AI Act draws the line
Employment is on Annex III of the EU AI Act. AI systems used for “recruitment or selection, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates” are high-risk. High-risk obligations apply from 2 August 2026.
Key obligations for employers using these systems:
- Risk management system across the AI's lifecycle
- Data governance — bias testing, representative training data, documented datasets
- Technical documentation — kept for 10 years after the AI is placed on the market
- Automatic logging — records of the AI's operation
- Human oversight — a human can stop, overrule, or reverse an AI decision
- Transparency to candidates — people must be told when they are subject to an AI-assisted employment decision
- Incident reporting to national authorities if the system fails or causes harm
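The automatic-logging and human-oversight obligations above translate, in practice, into an append-only record for every AI-influenced decision. A minimal sketch with illustrative field names (the Act mandates logging, not this particular schema):

```python
import json
import datetime

def log_decision(candidate_id, score, outcome, model_version, human_reviewed):
    # One append-only record per AI-influenced decision. Field names are
    # illustrative, not mandated by the AI Act; the point is that the
    # model version, the outcome, and whether a human reviewed it are
    # all captured at decision time.
    return json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "score": score,
        "outcome": outcome,
        "model_version": model_version,
        "human_reviewed": human_reviewed,
    })

entry = json.loads(log_decision("c-123", 72, "advance", "match-v4", False))
```

Records like this are what let a human overseer reconstruct and, if needed, reverse a decision — and what an employer hands to a market surveillance authority after an incident.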
Penalties for non-compliance with high-risk obligations: up to €15 million or 3% of global annual turnover, whichever is higher (the top tier of €35 million or 7% is reserved for prohibited AI practices). Enforcement starts with EU market surveillance authorities.
What this means for candidates in practice
- You have a right (under both Article 22 of GDPR and the AI Act) to meaningful human review of significant automated decisions. Ask for it when rejected.
- Employers must disclose that AI was used. Many add “AI-assisted screening” to application pages. If it's absent, you can ask.
- Bias audits are coming. Gender, age, race, and disability discrimination claims arising from AI-assisted decisions are now a live regulatory area in every EU country.
What this means for employers
- Identify every AI tool touching your pipeline. CV parsing, video interview scoring, scheduling bots, outbound enrichment — all in scope.
- Ask vendors for their AI Act conformity assessment. Most serious vendors will have one by Q3 2026.
- Document human-in-the-loop processes. An AI “recommendation” is fine; an AI “automatic rejection” needs strong procedural controls.
- Run bias testing quarterly. Tools: IBM AI Fairness 360, Fairlearn, or your ATS's native fairness dashboards.
- Consider compliance tooling: GeraCompliance, OneTrust AI Governance, or Credo AI.
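Quarterly bias testing ultimately comes down to comparing outcomes across groups. A minimal sketch of the demographic-parity check that toolkits like Fairlearn and AI Fairness 360 automate, using made-up example data:

```python
def selection_rates(decisions):
    # decisions: list of (group, selected) pairs from one screening run
    totals, selected = {}, {}
    for group, picked in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(picked)
    return {g: selected[g] / totals[g] for g in totals}

def demographic_parity_difference(decisions):
    # Gap between the highest and lowest group selection rates;
    # values near 0 suggest parity, large gaps warrant investigation
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical screening outcomes: (self-reported group, passed_screen)
run = [("a", True), ("a", True), ("a", False), ("a", False),
       ("b", True), ("b", False), ("b", False), ("b", False)]
gap = demographic_parity_difference(run)  # 0.50 - 0.25 = 0.25
```

Demographic parity is only one fairness metric; real audits also look at error-rate balance and intersectional groups, which is where the dedicated toolkits earn their keep.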
The UK (post-Brexit)
The UK is not bound by the EU AI Act, but most UK multinationals comply anyway because their EU entities must. The UK's Algorithmic Transparency Recording Standard (ATRS) and the Data (Use and Access) Act 2025 impose softer but similar transparency obligations (the earlier Data Protection and Digital Information Bill fell in 2024 without passing). ICO enforcement under UK GDPR Article 22 is active: 2024 saw several UK rulings on automated hiring decisions.
US, Canada, Australia
- US: NYC Local Law 144 requires bias audits of automated employment-decision tools. Colorado AI Act (effective 2026) mirrors the EU framework. California follows in 2027.
- Canada: The federal AIDA (Artificial Intelligence and Data Act, part of Bill C-27) died on the order paper when Parliament was prorogued in 2025; federal AI legislation remains in limbo.
- Australia: Voluntary AI Ethics Framework; binding legislation under consultation (2025–2026).
Related reading
CV that passes ATS · EU AI Act 2026 deadline · Red flags in job postings
GeraJobs runs AI matching with full transparency: candidates see why a role matched, and employers see bias reports.
See how our AI match works →