
With British unemployment set to reach a high not seen since 2015, hiring processes are turning into even more of a pressure cooker for job seekers and businesses alike. Faced with a flood of applications for every position, employers are increasingly turning to AI to help speed up the hiring process and find the best candidates for roles.
But not all AI hiring tools are created equal. While ethical, explainable AI promises to enhance recruitment, some garden-variety models are making it worse. These biased black boxes can rule talented candidates out of the running without employers even realising it’s happening. Not only does this mean businesses deploying these AI tools risk slamming the door on top talent, but they could also be leaving themselves vulnerable to discrimination cases. You need only look at the Workday collective action lawsuit, which asserts the company employed discriminatory job applicant screening technology, to see the risks.
This doesn’t mean companies must throw in the towel on AI in recruitment altogether. The right tech, deployed in the right way, will allow us to hire faster and fairer, benefiting both employers and workers. To unlock AI’s gains without risk, we need to make sure we’re carefully considering how, and when, we’re using the tech. If you’re using AI as part of your recruitment strategy, this is where you should start those checks.
Debias your dataset
You’ll have heard that an AI model is only as good as the data it’s trained on, likely in reference to popular generative AI tools like ChatGPT, Gemini, and Claude. But this is just as true for specialised AI hiring tools. For an AI tool to give fair, unbiased outputs, it needs to be trained on a dataset that’s been scrubbed of historical bias. Otherwise, the model learns from and perpetuates the historical inequalities and biases present in our society and, therefore, our data.
Case in point is the infamous Amazon example. Trained on 10 years of CVs submitted by applicants, most of whom were men, the tool learned to penalise the word ‘women’. Even though it was later edited to treat the word neutrally, its judgment ultimately couldn’t be trusted, and it had to go. Employers must make sure that any AI tools used to help with hiring are trained on clean, debiased datasets if they want to give all applicants an equal opportunity to showcase their skills and find the best person for the role.
Centre explainability
Just like a student taking a maths exam, any good algorithm needs to be able to show its working. This ‘explainability’ is what allows humans to check the soundness of a model’s logic, and it’s essential given that some AI tools can make mistakes or reach flawed decisions, especially when fed by the aforementioned dodgy data.
Employers cannot afford to take a model’s decisions on trust. If we want to ensure that tech isn’t blocking well-suited applicants, we need to give teams the ability to interrogate how a model is reaching its decisions and cross-check its accuracy. And that comes from steering clear of unknowable, black box tech and only deploying AI in hiring that has explainability built into its core.
Make the process transparent
With fears circling about the use of AI in hiring, transparency is critical. Recent Gartner research found that just 26% of job candidates trust that AI will evaluate them fairly, and a third (32%) were concerned about AI rejecting their applications. To avoid turning off talent at the very beginning – and risk losing those who would be an asset to your team – employers must be upfront on job adverts and careers pages about the way they’re using AI.
Companies need to make it clear that they’re not outsourcing the entire hiring process to it, but instead strategically deploying robust tools for certain tasks: using the tech to tailor more personalised screening questions, for example, or deploying it to anonymise details like names and pronouns on CVs (which can trigger unconscious bias) so that candidates are assessed solely on their role-relevant skills. Being upfront about how you’re using AI, and showing that you’ve taken steps to ensure it is fair, will help to alleviate job seekers’ concerns.
It helps to also explain why you’re choosing to use robust, debiased AI tooling: to build an effective, efficient assessment process that benefits everyone.
As employers turn to AI to help with their hiring, they need to be careful about the tech they choose. AI can ease the load on stretched hiring teams, but careful implementation is essential. Employers using AI within parts of their recruitment process need to be transparent about it, and to opt only for explainable models trained on clean, debiased data, if they want to find – and keep – top talent.
Khyati Sundaram is the chief executive of Applied, a hiring platform designed to reduce bias.