AI in HR: Compliance duties
by Jeremiasz Kuśmierz
AI-driven solutions are increasingly embedded across business functions, including human resources, where AI is used in recruitment, performance evaluation, and employee monitoring.
While these tools can enhance efficiency, they raise serious compliance concerns, particularly around bias, discrimination, and data privacy. One widely cited example is Amazon’s recruitment tool, which was found in 2015 to be biased against women and was ultimately abandoned. Similarly, HireVue faced backlash for using facial and voice analysis to assess candidates’ expressions and tone; amid transparency and fairness concerns, the company discontinued the feature in 2021. LinkedIn has also been criticised for reinforcing gender stereotypes through its algorithmic recommendations.
Regulatory framework
These examples illustrate how early AI deployments suffered from a lack of oversight, but the regulatory landscape in the European Union has since evolved. The General Data Protection Regulation (GDPR) and the newly adopted EU AI Act now impose strict requirements to ensure fairness and accountability in AI systems used in employment contexts.
Under the AI Act, systems used for recruitment, candidate evaluation, or job application filtering are classified as high-risk (Annex III, point 4). Employers using such systems – referred to as deployers – must comply with several obligations, including:
- Using AI systems according to their intended purpose and instructions;
- Assigning trained personnel to oversee their use;
- Monitoring data quality and retaining automatically generated logs;
- Informing employees and their representatives before deployment; and
- Conducting a data protection impact assessment where required under GDPR.
Employers must also ensure that automated decision-making (such as hiring or dismissal) does not occur without meaningful human oversight. Even where solely automated decision-making is permitted, Article 22 of the GDPR guarantees individuals the right to obtain human intervention, to express their views, and to contest the decision.
Improved practices and ongoing risks
Thanks to regulatory pressure and industry compliance efforts, AI tools today are often trained on more diverse data sets, with an increased focus on transparency and human-in-the-loop design.
Yet, as AI applications expand, new risks continue to emerge, particularly in the area of employee monitoring. AI tools now track productivity, analyse communications, and monitor behaviour in real time. These monitoring systems, like those used for hiring, are also classified as high-risk under the AI Act (Annex III, point 4(b)) and are therefore subject to the same legal obligations relating to transparency, data protection, and human oversight.
A recent example illustrates how regulators are responding to these risks: in early 2024, Amazon was fined EUR 32 million by France’s data protection authority, the CNIL, for what it deemed “excessive surveillance” of warehouse workers. The AI-driven system reportedly tracked employee activity down to the second, prompting concerns over proportionality, lack of transparency, and the retention of sensitive performance data.
Whether enforcement actions like this will temper the rapid adoption of AI in the workplace remains to be seen, but they clearly signal a growing regulatory readiness to act. They also underscore the importance of compliance procedures and practices that ensure the responsible, lawful, and transparent deployment of AI systems by employers.
Jeremiasz heads the Penteris compliance department. He spreads his work across compliance, corporate, M&A, employment, and risk management.