
Italy’s human-centric AI law: Worker dignity over algorithmic efficiency

by Sara Mandelli

As artificial intelligence raises growing concerns over job security, digital surveillance, and human dignity at work, Italy's response is structured and deliberate: a human-centred framework built on algorithmic transparency, strict limits on digital monitoring, and clear classification of high-risk AI systems – all anchored in a tradition of labour protections now reinforced to meet the challenges of the AI era.

Italian Legislative Decree No. 152/1997, as amended in 2022, requires companies using automated systems for recruitment, task allocation, or performance evaluation to inform workers in advance, disclosing not only the tool's existence, but also its logic, parameters, training data, human oversight mechanisms, and potential discriminatory impacts. 

Law No. 132/2025 reinforces this principle: AI deployment must protect workers' physical and psychological integrity, expressly prohibiting any use capable of undermining human dignity or generating discrimination on grounds of gender, age, ethnic origin, religious belief, or social condition. To monitor this transition, a National Observatory has been set up under the Ministry of Labour, with the mandate to identify exposed sectors and support ongoing vocational training.

Within this framework, the Statuto dei Lavoratori (Law No. 300/1970) retains its role as the fundamental legislative pillar underpinning workers' protection. Remote monitoring without justified grounds is expressly prohibited, and algorithmic surveillance systems tracking productivity or movements require either a trade union agreement or prior authorisation from the labour inspectorate. Occupational health and safety regulations further impose on employers the obligation to assess psychosocial risks, including “automation stress” – anxiety arising from pervasive digital monitoring, performance pressure, and fear of displacement by automated systems.

In alignment with the EU Artificial Intelligence Act, Italy has classified AI systems used in recruitment, promotions, and performance evaluations as high-risk, requiring automated decisions affecting workers' professional lives to be transparent, verifiable, and correctable by human operators. Italy also treats the digital divide as a serious risk to be actively countered through structured upskilling and reskilling programmes. 

In this context, a growing number of Italian companies have begun to adopt internal policies aimed at ensuring the responsible deployment of AI, safeguarding workers’ rights, preventing algorithmic discrimination, and promoting procedural fairness. At the same time, trade unions are exploring targeted amendments to collective agreements to address AI's growing workplace impact – most notably in the metalworking sector, reflecting a shared commitment that innovation must be ethical, accountable, and genuinely inclusive.

The underlying principle is unequivocal: in Italy, no algorithmic efficiency gain can ever come at the cost of human dignity.


Sara Mandelli is an Italian attorney specialising in labour law, advising companies on employment contracts, dismissals, trade union relations, and compliance, with a particular focus on corporate liability.


Baldi & Partners