Artificial Intelligence in Recruitment – Risks For Employers



Artificial intelligence (AI) tools increasingly support the employee recruitment process by classifying candidates or targeting job offers at selected individuals through popular online platforms. Put simply, an AI model analyses data sets (input data) entered by humans. That analysis is carried out using complex algorithms, i.e. sets of rules and instructions that allow machines to reason, learn and make decisions. The goal of the analysis is to produce specific results (outputs) in line with guidelines set out in advance by humans. The entire process is therefore ‘machine-based’, with no human intervention. Algorithms operate on limited input data; artificial intelligence, however, treats those data as complete and representative, so they form the exclusive basis for its judgements. For that reason, the use of AI in recruitment may involve risks in the business, organisational and legal areas.
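As a purely illustrative sketch (the scoring rule, field names and data are invented, not taken from any real recruitment tool), the ‘machine-based’ pipeline described above can be reduced to: input data in, fixed rules applied, output out, with no human judgement in between:

```python
# Illustrative sketch of a fully automated screening step:
# input data -> fixed rules (the "algorithm") -> output, no human review.

def score_candidate(candidate: dict, required_skills: set) -> int:
    """Score a candidate purely from the fields present in the input data.

    Whatever is in `candidate` is treated as complete and representative;
    the model has no way of knowing what the data leave out.
    """
    skills = set(candidate.get("skills", []))
    return len(skills & required_skills)

def screen(candidates: list, required_skills: set, top_n: int) -> list:
    # The ranking alone determines the outcome of this stage.
    ranked = sorted(
        candidates,
        key=lambda c: score_candidate(c, required_skills),
        reverse=True,
    )
    return ranked[:top_n]

applicants = [
    {"name": "A", "skills": ["python", "sql"]},
    {"name": "B", "skills": ["excel"]},
    {"name": "C", "skills": ["python", "sql", "docker"]},
]
shortlist = screen(applicants, {"python", "sql", "docker"}, top_n=2)
print([c["name"] for c in shortlist])  # prints ['C', 'A']
```

Everything outside the `candidate` dictionaries is simply invisible to this process, which is why the limits of the input data become the limits of the decision.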


RISKS AND DANGERS OF USING ARTIFICIAL INTELLIGENCE IN RECRUITMENT 


AI tools may be tailor-made to suit the needs of a particular organisation or be standard tools used by all or certain categories of entities. In each case, however, the use of artificial intelligence by employers in the recruitment process carries a major risk: because AI builds its models on historical data, it may create patterns that are discriminatory in nature. It is therefore recommended that AI decisions be ultimately verified by humans. Using the support of AI while avoiding those risks can, in turn, pose a considerable organisational challenge for employers.

A case revealed by Reuters in 2018 is an instructive example. Machine learning specialists at Amazon found that a new algorithm they had created and used in the recruitment process discriminated against women. The system was designed to screen hundreds of applications for a position and select the best five. In 2015, however, Amazon discovered that the system did not assess candidates applying for technical and development positions in a gender-neutral way. The reason was that Amazon's computer models had been ‘trained’ to assess candidates on the basis of patterns derived from CVs sent to the company over the previous 10 years, and the majority of CVs from that period had come from men.

Particular attention should therefore be paid to the input data fed into AI. If those data reflect ingrained, systemic biases, the algorithms will not only fail to eliminate discrimination, but will perpetuate it. Importantly, the original data collected need not be intentionally discriminatory: it is enough that people with similar characteristics work in the organisation in question. In consequence, artificial intelligence can exclude from the recruitment process people who do not share the characteristics of those already employed. Moreover, AI is generally unable to justify its choices using objective criteria, and if a job applicant alleges a breach of the principle of equal treatment, the burden is on the organisation using AI to prove that there has been no breach. It is an accepted view among legal scholars and commentators that algorithmic discrimination constitutes discrimination within the meaning of the Labour Code.
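The mechanism can be illustrated with a deliberately simplified sketch (all data and attribute names are invented): a ‘model’ that scores candidates by resemblance to past hires will penalise an identically qualified candidate for an attribute that is irrelevant to the job, even though no one wrote a discriminatory rule.

```python
# Illustrative sketch: a "model" trained on historical hires simply prefers
# candidates who resemble past hires. All data here are fictional.

from collections import Counter

# Historical data: in this invented company, past hires happen to share
# one job-irrelevant attribute. Nothing here is intentionally discriminatory.
past_hires = [
    {"degree": "CS", "gender": "M"},
    {"degree": "CS", "gender": "M"},
    {"degree": "EE", "gender": "M"},
]

def similarity_score(candidate: dict) -> int:
    """Count how often each of the candidate's attribute values
    appeared among past hires."""
    counts = Counter(v for hire in past_hires for v in hire.values())
    return sum(counts[v] for v in candidate.values())

equally_qualified = [
    {"degree": "CS", "gender": "M"},
    {"degree": "CS", "gender": "F"},  # identical qualifications
]
scores = [similarity_score(c) for c in equally_qualified]
print(scores)  # prints [5, 2]: the second candidate scores lower
               # solely because of a job-irrelevant attribute
```

The skewed score is a faithful reflection of the skewed history, which is exactly why such a system cannot justify its ranking with objective, job-related criteria.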

Another risk associated with the use of artificial intelligence in recruitment is the improper processing of personal data. Publicly available AI tools, such as ChatGPT, often collect the data entered and send them to unauthorised parties. Once personal data are made available to an AI tool, there is a risk of uncontrolled processing of those data, and that process cannot be reversed.

Profiling should also be mentioned here. Under the GDPR, profiling is any form of automated processing of personal data that involves the use of such data to evaluate an individual. Special consideration should be given to a separate category of profiling: situations where automated data processing is used to make decisions that produce legal effects concerning a person or similarly affect a person. This type of profiling is subject to stricter regulations due to its potential impact on the rights and freedoms of the individual. Profiling can occur not only on the basis of keyword searches in application documents; there have also been cases where, during online interviews, recruiters used tools to analyse candidates’ facial expressions as they answered questions.

It should be noted that, pursuant to Article 22 of the GDPR, the data subject has the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them. This means that the data subject should be able to make sure that decisions that concern them and significantly affect their situation are not based solely on automated processing of their personal data. In consequence, failing to inform a person of the automated processing of their personal data, or using AI against their objection, constitutes a violation of the right to data protection. What is more, profiling candidates without their knowledge or against their objection may constitute a violation of their personal rights.

A further business risk for an employer is the manipulation of CV data by a candidate. Where the recruitment process is carried out with the use of AI, a candidate may be able to position their CV using hidden information, for example keywords inserted in invisible text that a human reader would never see. As a result, the algorithm may, at the preliminary stage, accept a candidate who would be immediately rejected by a human. On the other hand, a person with the desired competences who has not written their CV in a manner suited to the way the algorithm works may be excluded from the further recruitment process. Ultimately, then, the risk that the output data prepared by AI have been distorted must be taken into account, which means they need to be verified manually.
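Both failure modes can be shown in a minimal sketch (the filter, keyword list and CV texts are invented): a naive keyword screener rejects a qualified candidate whose CV is not written ‘for the algorithm’, while a manipulated CV stuffed with the required keywords passes.

```python
# Illustrative sketch: a naive keyword filter that a candidate can game by
# stuffing keywords into the CV (e.g. as invisible white text that an
# automated parser still reads). All data here are fictional.

REQUIRED = {"python", "sql", "kubernetes"}

def passes_filter(cv_text: str) -> bool:
    """Pass a CV only if every required keyword appears verbatim."""
    words = set(cv_text.lower().split())
    return REQUIRED.issubset(words)

honest_cv = "Experienced engineer, strong Python and SQL background."
# A keyword block a human reviewer would never notice, but the parser sees:
gamed_cv = "Junior clerk. " + " ".join(sorted(REQUIRED))

print(passes_filter(honest_cv))  # prints False: qualified candidate rejected
print(passes_filter(gamed_cv))   # prints True: manipulated CV accepted
```

Because both errors are invisible in the output ranking itself, the shortlist the algorithm produces has to be checked manually, as noted above.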


THE ARTIFICIAL INTELLIGENCE ACT AND THE POLISH DRAFT LAW 


In 2024, Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (the “AI Act”) was promulgated. Its aim is to standardise the rules on the use of artificial intelligence systems across the European Union. The EU legislator is aware of the risks associated with the use of AI by employers in the recruitment process. The AI Act draws attention to the phenomenon of algorithmic discrimination and the danger of violating fundamental rights in terms of data protection and privacy. AI systems intended to be used for recruitment purposes, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates, will be considered high-risk systems.

Under the provisions of the AI Act, entities using high-risk AI systems in their operations (deployers) will be required to establish, implement, document and maintain a risk management system for the associated risks. In addition, the design of high-risk AI systems will need to ensure transparency of their operation, enabling the entities using them to interpret the system's outputs and use them appropriately. Most importantly, such an AI system will need to be overseen by a human.

Legislation on high-risk AI systems will apply in part from August 2026.

In order to ensure effective application of and compliance with the AI Act, Poland started working on a draft law on artificial intelligence systems in October 2024. The new law will give powers to national market surveillance authorities and establish enforcement mechanisms. It is intended to enable the elimination from the market of AI systems that do not comply with the requirements, and to establish procedures for imposing administrative fines for violations. In addition, it is to set out procedures for enforcing bans on certain AI practices, including a mechanism for filing complaints in relation to the use of such practices. It should be noted that the current draft law does not include all obligations arising from the AI Act. Further legislative steps will be taken as the European Commission publishes the necessary implementing acts scheduled for 2025-2026.


WHO IS RESPONSIBLE FOR ARTIFICIAL INTELLIGENCE?


An organisation that offers employment is the controller of candidates’ personal data in the recruitment process, and the controller alone is liable for any irregularities in the processing of such data that result from the operation of artificial intelligence. Pursuant to the Labour Code, in the event of discrimination, a person directly affected by an employer’s violation of the principle of equal treatment in employment, including a job applicant, is entitled to compensation. Whether or not the discrimination is intentional, the employer is fully responsible for the treatment of employees and job applicants. No laws currently in effect provide for sharing that responsibility with the AI system provider, and any recourse claims against the provider may be very difficult for legal and practical reasons.



Aleksandra Homołko – trainee attorney at law

Daniel Tyzenhauz – associate

Wiewiórski Legal, November 2024