Customer Guidance: EU AI Act High-Risk Categories

Effective date: March 22, 2026 · Last updated: March 22, 2026

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) classifies certain AI systems as high-risk based on their intended purpose. When you deploy Intended to govern AI agent actions in regulated domains, your overall AI system — not Intended alone — may fall under the high-risk classification in Annex III. In all cases, you, as the deployer, are responsible for ensuring that the overall system complies with the requirements of Chapter III, Section 2 of the Act. Intended provides the transparency, audit trail, and human oversight tools that the AI Act requires, but compliance is a shared responsibility.

Employment and Worker Management

Annex III, Section 4

AI systems used in employment, worker management, and access to self-employment are classified as high-risk when they make or substantially influence decisions about recruitment, hiring, task allocation, performance monitoring, or termination.

Example use cases

  • AI agents that screen resumes or rank candidates for hiring decisions
  • AI agents that initiate or recommend employee terminations or disciplinary actions
  • AI agents that allocate shifts, tasks, or workloads based on performance data
  • AI agents that monitor employee productivity and flag underperformance

Your obligations as deployer

  • Ensure the system has undergone the required conformity assessment before deployment
  • Implement a risk management system that identifies and mitigates risks to workers' rights
  • Ensure training data is representative and free from bias that could lead to discrimination
  • Provide clear information to affected workers about the AI system's role in decisions
  • Designate human oversight roles that can review and override automated recommendations
  • Maintain technical documentation and records of all decisions for regulatory inspection

How Intended helps

  • Escalation workflows ensure high-risk HR decisions are routed to human reviewers before execution
  • Complete audit trails record every decision with full context, satisfying documentation requirements
  • Evidence bundles provide exportable, cryptographically signed records for regulatory inspection
  • Policy engine enables customers to configure risk thresholds that flag sensitive employment actions (see the sketch after this list)
  • Role-based access control ensures only authorized personnel can approve consequential HR actions
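
To make the policy-engine point concrete, the sketch below shows what threshold-based routing of employment actions could look like. This is a minimal Python illustration: the action names, the route function, and the 0.3 default threshold are assumptions for the example, not Intended's actual API.

    from dataclasses import dataclass

    # Action types that always require human review, regardless of score.
    SENSITIVE_HR_ACTIONS = {"terminate_employee", "disciplinary_action", "reject_candidate"}

    @dataclass
    class AgentAction:
        kind: str          # what the agent wants to do
        risk_score: float  # 0.0 (routine) to 1.0 (highest consequence)

    def route(action: AgentAction, threshold: float = 0.3) -> str:
        """Return 'escalate' for human review, or 'allow' to proceed."""
        if action.kind in SENSITIVE_HR_ACTIONS:
            return "escalate"  # consequential HR actions always get a human reviewer
        if action.risk_score >= threshold:
            return "escalate"  # over the customer-configured risk threshold
        return "allow"

    print(route(AgentAction("terminate_employee", 0.1)))  # escalate
    print(route(AgentAction("allocate_shift", 0.2)))      # allow

A deployer would tune the sensitive-action list and the threshold to its own risk appetite and escalation policy.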

Financial Services

Annex III, Section 5

AI systems used to evaluate creditworthiness, risk-assess individuals for insurance, or make decisions about access to essential financial services are classified as high-risk.

Example use cases

  • AI agents that approve or deny payment transactions above configured thresholds
  • AI agents that assess credit risk or make lending recommendations
  • AI agents that process insurance claims or adjust coverage based on risk profiles
  • AI agents that flag transactions for fraud review and determine disposition

Your obligations as deployer

  • Implement robust data governance to ensure accuracy and prevent discrimination in financial decisions
  • Provide affected individuals with clear explanations of how the AI system contributed to decisions
  • Ensure human oversight with the ability to review and reverse automated financial decisions
  • Maintain detailed logs of all decisions for audit by financial regulators
  • Conduct regular bias testing and fairness assessments on the AI system's outputs
  • Ensure the system is registered in the EU database for high-risk AI systems before deployment

How Intended helps

  • Fail-closed architecture ensures that system errors result in denial, preventing unauthorized financial actions
  • Time-limited authority tokens (300-second TTL) with single-use nonces prevent replay of financial authorizations (see the sketch after this list)
  • Tamper-evident audit ledger provides regulators with verifiable records of every decision
  • Configurable risk scoring allows financial institutions to set domain-specific thresholds and escalation triggers
  • Cryptographic receipts enable independent verification of authorization decisions by auditors
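
The token check described above can be sketched in a few lines. The Python below is an illustrative outline of fail-closed, single-use verification using the 300-second TTL cited above; the token field names and the in-memory nonce store are assumptions, and a production implementation would also verify cryptographic signatures and persist nonce state.

    import time

    TOKEN_TTL_SECONDS = 300  # the 300-second TTL cited above
    _used_nonces = set()     # a real system would persist this store

    def is_authorized(token: dict) -> bool:
        """Fail closed: malformed, expired, or replayed tokens are all denied."""
        try:
            issued_at = float(token["issued_at"])
            nonce = str(token["nonce"])
        except (KeyError, TypeError, ValueError):
            return False                                 # malformed -> deny
        if time.time() - issued_at > TOKEN_TTL_SECONDS:
            return False                                 # expired -> deny
        if nonce in _used_nonces:
            return False                                 # replayed -> deny
        _used_nonces.add(nonce)                          # consume the single-use nonce
        return True

    token = {"issued_at": time.time(), "nonce": "a1b2c3"}
    print(is_authorized(token))  # True on first use
    print(is_authorized(token))  # False: the nonce is already spent

Note that every failure path returns a denial before the nonce is consumed, so an error can never grant authority.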

Critical Infrastructure

Annex III, Section 2

AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating, and electricity are classified as high-risk.

Example use cases

  • AI agents managing power grid operations or load balancing
  • AI agents controlling water treatment or distribution systems
  • AI agents operating network infrastructure for telecommunications
  • AI agents managing industrial control systems (SCADA/ICS)

Your obligations as deployer

  • Implement comprehensive risk management covering safety, cybersecurity, and reliability
  • Ensure the AI system meets accuracy, robustness, and cybersecurity requirements under Article 15
  • Maintain human oversight capabilities that allow operators to intervene in real time
  • Implement redundancy and fail-safe mechanisms to prevent cascading failures
  • Conduct thorough testing in conditions reflecting real-world operating environments
  • Provide technical documentation sufficient for regulatory authorities to assess compliance

How Intended helps

  • Fail-closed design ensures that any system failure defaults to denial, preventing unauthorized critical infrastructure changes
  • Per-tenant encryption and key isolation protect critical infrastructure credentials from cross-tenant exposure
  • Real-time escalation workflows route high-consequence infrastructure decisions to on-call human operators
  • Hash-chained audit ledger provides tamper-evident records for post-incident analysis and regulatory review (see the sketch after this list)
  • Configurable policy engine allows operators to define strict action boundaries and approval chains for infrastructure operations
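
The hash-chained ledger lends itself to a short illustration. The Python below shows the core idea: each entry commits to the hash of the previous one, so any retroactive edit breaks every later link. The field names and sample events are hypothetical; a production ledger would add signatures and durable storage.

    import hashlib
    import json

    def entry_hash(entry: dict) -> str:
        """Canonical SHA-256 hash of a ledger entry."""
        return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

    def append(ledger: list, event: dict) -> None:
        """Each new entry commits to the hash of the previous entry."""
        prev = entry_hash(ledger[-1]) if ledger else "0" * 64
        ledger.append({"prev_hash": prev, "event": event})

    def verify(ledger: list) -> bool:
        """Recompute every link; editing any past entry breaks all later links."""
        prev = "0" * 64
        for entry in ledger:
            if entry["prev_hash"] != prev:
                return False
            prev = entry_hash(entry)
        return True

    ledger = []
    append(ledger, {"action": "approve_load_shed", "operator": "oncall-7"})
    append(ledger, {"action": "restore_feeder", "operator": "oncall-7"})
    print(verify(ledger))                            # True
    ledger[0]["event"]["operator"] = "someone-else"  # tamper with history
    print(verify(ledger))                            # False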

Education and Vocational Training

Annex III, Section 3

AI systems used to determine access to or admission to educational and vocational training institutions, to evaluate learning outcomes, or to assess the appropriate level of education for an individual are classified as high-risk.

Example use cases

  • AI agents that evaluate student applications and make admissions recommendations
  • AI agents that grade assignments, exams, or assessments
  • AI agents that determine student placement or track assignment
  • AI agents that flag academic integrity violations and recommend disciplinary action

Your obligations as deployer

  • Ensure training data and evaluation criteria are free from bias related to protected characteristics
  • Provide students and guardians with transparent information about the AI system's role in educational decisions
  • Implement meaningful human oversight where educators can review and override AI-generated assessments
  • Maintain detailed records of how the AI system contributed to each educational decision
  • Conduct regular audits of the system's outputs for fairness and accuracy
  • Ensure the system meets robustness requirements to prevent manipulation of outcomes

How Intended helps

  • Escalation workflows require human educator review for consequential decisions like grading and admissions
  • Complete audit trails document every AI-assisted educational decision with full context and reasoning
  • Evidence bundles allow institutions to provide students with verifiable records of how decisions were made
  • Policy engine enables institutions to define guardrails that prevent AI agents from making final educational decisions without human approval
  • Self-approval prevention ensures that the AI agent requesting an action cannot also approve it (see the sketch after this list)
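
As a final illustration, the sketch below shows the shape of a self-approval check: the principal that requested an action can never be the one that approves it, and final educational decisions require a human role. The identifier format and role set are hypothetical, not Intended's actual schema.

    HUMAN_APPROVER_ROLES = {"educator", "registrar"}

    def can_approve(requested_by: str, approver_id: str, approver_role: str) -> bool:
        """Deny self-approval and require a human role for final decisions."""
        if approver_id == requested_by:
            return False  # the requesting principal can never approve its own action
        return approver_role in HUMAN_APPROVER_ROLES

    print(can_approve("agent:grader-1", "user:ms-lopez", "educator"))   # True
    print(can_approve("agent:grader-1", "agent:grader-1", "educator"))  # False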

Questions?

For questions about Intended and the EU AI Act, or for assistance with your compliance assessment, contact compliance@intended.so.