Artificial intelligence is now embedded in many core workplace functions, from recruitment and workforce planning to performance evaluation and task allocation. While AI tools can improve efficiency, they also raise significant concerns around transparency, discrimination and workers’ rights.
The Artificial Intelligence Act, Regulation (EU) 2024/1689 (the “AI Act”), establishes a comprehensive, risk-based framework governing the development and deployment of AI systems within the EU. In the employment context, the AI Act is particularly significant, as it explicitly recognises that AI systems used to manage workers may pose heightened risks to fundamental rights, including equality, privacy and access to employment.
Employers’ Role under the AI Act
In most workplace settings, employers will not be the developers of AI systems but rather users of tools supplied by third parties, such as AI-assisted recruitment platforms or workforce analytics tools. Under the AI Act, these employers are generally classified as “deployers”, meaning natural or legal persons who use an AI system in the course of their professional activities. Employers may also be treated as “providers” of a high-risk AI system, and thereby assume significantly heavier obligations, where they place an AI system on the market under their own name or trademark, substantially modify an existing system, or change its intended purpose in a way that renders it high-risk.
The AI Act applies to providers and deployers located in the EU, and to those located outside the EU where the output produced by the AI system is used within the EU. Importantly, employers remain responsible where AI systems are operated on their behalf by third parties, such as contractors or service providers, or under other outsourced arrangements.
High-Risk AI Systems in the Workplace
The AI Act classifies as high-risk those AI systems that can materially affect individuals’ access to employment, working conditions or livelihoods. This includes AI systems intended to be used for:
- recruitment or selection (such as CV screening or candidate ranking),
- decisions affecting promotion, remuneration or termination of the work-related contractual relationship,
- task allocation based on behavioural or performance data, and
- monitoring or evaluating individuals’ performance or conduct at work.
The concept of a “work-related contractual relationship” is deliberately broad and is not limited to traditional employment contracts. This means that AI systems affecting workers engaged under non-standard arrangements, including platform workers and self-employed consultants, may also fall within scope.
Key Obligations for Deployers of High-Risk AI Systems
Employers using high-risk AI systems are subject to substantial obligations, including:
- Human Oversight: assigning appropriately trained individuals to supervise AI systems and intervene where necessary.
- Data Governance: ensuring that input data under the employer’s control is relevant, representative, and suitable for the AI system’s intended purpose.
- Monitoring and Incident Response: continuously monitoring system performance and suspending use where risks or serious incidents are identified.
- Log Retention: retaining logs generated by the AI system for an appropriate period (at least six months, unless otherwise provided by applicable law) to facilitate regulatory review.
- Regulatory Cooperation: cooperating with competent authorities in any action they take in relation to the use of the AI system.
- Transparency towards Individuals: informing individuals when decisions affecting them are supported by a high-risk AI system.
Prohibited AI Practices in Employment
The AI Act establishes a set of prohibited AI practices, some of which are particularly relevant to employment. Employers must not use AI systems that:
- Manipulate or deceive individuals through subliminal techniques that influence their behaviour without their awareness. Example: using manipulative interfaces to pressure employees into sharing sensitive data.
- Exploit vulnerabilities related to age, disability or socio-economic status. Example: pressuring financially stressed employees into accepting unfavourable contract terms.
- Apply social scoring based on personal characteristics or social behaviour unrelated to the context in which the data was collected, leading to detrimental or discriminatory treatment. Example: limiting promotion prospects based on social media activity.
- Scrape facial images from the internet or CCTV footage, in an untargeted manner, to create or expand facial-recognition databases. Example: building an internal facial-recognition tool for employee monitoring.
In addition, the AI Act mandates an appropriate level of AI literacy among staff involved in the use or supervision of AI systems. Employers must ensure that relevant personnel understand how such systems function, what their limitations are, and how their outputs should be interpreted.
AI Act and the Interplay with the Platform Work Directive
While the AI Act provides the overarching AI governance framework, it operates in conjunction with the Platform Work Directive (EU) 2024/2831 (the “Directive”), which specifically addresses algorithmic management in digital labour platforms. Together, these instruments form a complementary regulatory regime governing the use of AI in the management of work.
The Directive regulates automated monitoring and decision-making systems used to organise platform work. It introduces requirements around transparency, human oversight, limits on data processing and the right to explanation and human review where automated systems significantly affect working conditions or contractual status. It also recognises that the use of such systems may be relevant when assessing whether a person has been misclassified as self-employed.
The Directive reinforces a broader EU regulatory expectation that AI-driven management of work must be subject to heightened safeguards, meaningful human oversight and accountability. These principles align closely with, and complement, the obligations imposed by the AI Act, creating a coherent framework for the regulation of algorithmic management across both traditional and platform-based work arrangements.
Preparing for Compliance
The AI Act’s obligations take effect in stages: the prohibitions and AI literacy requirements are already applicable, and the high-risk regime applies from 2 August 2026. In parallel, the Directive must be transposed by Member States by 2 December 2026, after which it will apply alongside the AI Act in relevant platform work contexts, creating an integrated compliance landscape for employers using AI-driven workforce management tools.
Employers should review how AI systems are currently being used within their organisations, identify any tools that may fall within the prohibitions or may be classified as high-risk, and confirm their role and obligations under the AI Act and, where applicable, the Directive. Early engagement with these requirements will be essential to ensure a smooth transition to the new regulatory environment.

