With the ever-increasing use of AI in the workplace, employees across a variety of sectors have been asking themselves whether AI will ultimately make their jobs redundant as they are replaced by technology-driven solutions. But to what extent might AI also be responsible for individuals being unsuccessful in job applications in the first place, and how should employers be using AI to assist with their recruitment decisions?
According to research conducted by the Institute of Student Employers (ISE), the last year has seen an increase in the use of AI in recruitment processes, with nearly a third of employers using AI as part of their hiring practices, up from fewer than 1 in 10 in the previous year. In utilising this technology, employers are experiencing the benefits of increased speed and efficiency in analysing large volumes of data, as well as reduced error through the automation of repetitive tasks. These efficiencies can in turn shorten the hiring process and cut its associated costs, as well as providing a clear and streamlined experience for both the candidate and the employer. In theory, data-driven processes can also help eliminate unconscious bias in recruitment by producing objective decisions that focus on skills and experience rather than subjective perceptions, creating a level playing field for all candidates.
How is AI being used in recruitment?
AI offers a variety of tools that can be effectively deployed in recruitment processes. Screening CVs can be time-consuming, but AI tools can identify candidates with the required qualifications or experience more efficiently, reducing the risk of strong candidates being inadvertently overlooked, whether through human error or because an employer simply lacks the resources to adequately screen every application received for a particular role. As a by-product of this process, systems can match unsuccessful candidates against other vacancies within the organisation, helping to identify talented individuals who might otherwise be lost to the business. Once initial screening has taken place, chatbots can be used to answer candidates' questions about the business more generally or about the specific vacancy. Chatbots can even be used to conduct first-stage interviews and information-gathering exercises.
Using data secured from previous recruitment exercises and retention data for previously successful candidates, employers can also potentially use AI systems to identify the skills, patterns, or attributes that have marked successful hires in the past and so predict a candidate's likely success in a given role. This can help identify not only the best candidate for a role in terms of skills and experience at that time, but also those who are more likely to stay with the business, thereby improving staff retention rates. Even at interview stage, employers can potentially use AI-powered video interview platforms. Such platforms can analyse a variety of traits and characteristics, such as facial expressions and tone of voice, to assess a candidate's communication skills and emotional intelligence in a way that is more data-driven and objective than the subjective perceptions of an interviewing manager.
Finally, once a candidate is due to join, AI-driven systems can implement personalised training plans and onboarding processes tailored to the new joiner, helping to fill any skills and knowledge gaps and enabling them to become productive in the new role more quickly.
The drawbacks of AI in recruitment
With all these potential benefits, it is understandable that an increasing number of employers are looking to incorporate AI into their recruitment processes. However, although 83% of respondents to the ISE survey found that AI was increasing speed and efficiency, the survey still identified a degree of resistance towards the wholesale embrace of AI-driven solutions and a continuing desire to retain a personal element in recruitment. 70% of employers surveyed expressed a preference for a more human-centric approach, and 63% of respondents expressed concerns over the reliability of AI systems.
33% of respondents also identified data security as a concern in utilising AI systems, and in addition to holding data securely, employers must consider their data protection obligations when using AI tools. The information gathered and processed is likely to include personal data and special category data, requiring an employer to identify a specific condition permitting the processing, and its recruitment privacy notice should explain transparently how candidate data will be processed. In addition, where an employer uses automated decision-making, the Information Commissioner’s Code requires the employer to inform all applicants that automated decision-making is being used; advise them how to make representations against any adverse decision; consider any such representations and take them into account before making a final decision; and test and keep the system’s results under review to ensure the employer’s shortlisting criteria are applied properly and fairly to all applicants. There is also an obligation to ensure that tests based on the interpretation of scientific evidence, such as psychological tests, are used and interpreted only by those qualified to do so.
Aside from data issues, there are other risks inherent in using AI-driven systems, not least because, as with any software, their output depends on the data and design choices supplied by humans. AI systems rely upon algorithms, and where error or bias is programmed into a system by its designers, the output will be similarly corrupted. Although there is no concept of ‘unfair’ recruitment in the UK and an employer is broadly free to recruit whomever it chooses, recruitment decisions must not be discriminatory. In several high-profile cases, AI recruitment tools have been found to have in-built bias in their algorithms, resulting in certain groups being selected in preference to others. Not only can this leave an employer open to discrimination claims from unsuccessful applicants, it can also damage the business by overlooking potentially excellent candidates and undermining workplace diversity, which in turn can discourage other talented individuals from applying for roles in the future.
In addition, the algorithms used can only analyse data that is either volunteered by the candidate or otherwise publicly available. There is therefore a risk that an algorithm will fail to build a full and accurate picture of a candidate, instead favouring stand-out performers in the settings it deems most important, to the detriment of those whose strengths the algorithm does not identify.
Another risk is a lack of transparency. Over and above the data protection obligations in respect of transparency, AI systems can be difficult to understand, and both applicants and employers may not fully appreciate the basis for particular decisions. This can create a perception of unfairness among candidates and make it challenging for employers to identify and address any bias in the system. Our privacy experts can advise further on data protection, helping you manage the legal and reputational risks of security breaches or failure to comply with data protection laws. Finally, where an AI system makes a biased decision there is a lack of accountability, and accountability is crucial in ensuring fair and non-discriminatory recruitment decisions.
What should employers do?
Given the significant benefits that can be derived from AI-driven recruitment tools, employers should be considering how best to incorporate these systems into their recruitment journey. There is a plethora of providers, with solutions to meet employers’ needs whether they are looking to introduce first-stage screening or sifting, chatbots, or video analysis. As with all AI solutions, the focus should be on establishing the business need and implementing something that meets it, rather than introducing a system simply to avoid appearing behind the times.
To mitigate the risks around potential bias, employers should ensure that their systems are regularly audited for built-in biases and that any employees interacting with them are fully trained. Employers should also focus on transparency, both to ensure that candidates are clearly told how the systems are being used and to ensure compliance with data protection obligations.
Can we help you?
Our experts understand the opportunities and challenges presented by technology advancements such as AI. We also see that HR teams are increasingly in demand, facing a widening range of issues and industry developments, which is why our team works with each client in the way that suits them best. Get in touch today for employment law advice and HR training for your business and your people at [email protected].
Consistent with our policy when giving comment and advice on a non-specific basis, we cannot assume legal responsibility for the accuracy of any particular statement. In the case of specific problems we recommend that professional advice be sought.