Could AI Eliminate Bias In Recruitment – Or Make It Worse?

What happens when human resources is no longer run by humans? We may soon find out. Artificial intelligence (AI) tools have been creeping into our working lives for the past few decades via process automation, data analysis and virtual chatbots. But it wasn’t until ChatGPT entered the market in 2022 that questions really started being asked about the future of mankind’s involvement in long-standing business activities — recruitment included.

AI models are now widely leveraged to carry out duties like copywriting and coding. But could they really find their way into hiring practices? AI has been touted as a way to streamline candidate sifting and increase interviewing efficiency, but the jury’s still out on whether these intelligent tools could eliminate bias in recruitment or exacerbate it.

Here, we’ll explore some of the challenges surrounding AI and its implications for recruitment.

What is bias in recruitment?

Bias in recruitment refers to unfair and prejudiced attitudes or preferences that influence hiring decisions. These biases can be conscious or unconscious, and are generally based on a candidate’s personal characteristics, such as race, gender, age, ethnicity, disability, or sexuality.

They can also manifest as ‘affinity bias’, the tendency to favour people with similar interests, backgrounds and experiences to our own.

It’s widely understood that everyone is affected by some form of unconscious bias. For example, we might draw on associations and stereotypes to ‘fill in the gaps’ when getting to know somebody new, or treat them more kindly if we recognise something of ourselves in them.

However, this can have detrimental effects on hiring practices.

Why is bias in recruitment an issue?

Bias in recruitment has a number of consequences. It can lead to the unfair exclusion of qualified candidates, reinforcement of harmful stereotypes, and limitations on workplace diversity.

And this is a problem. Diversity consultants EW Group explain that “workforce diversity has a measurable effect on business performance”, on the grounds that “diverse teams are more innovative, more productive and perform better in problem-solving”. As a result, many organisations are striving to promote fair and inclusive hiring practices for commercial as well as ethical reasons.

Some companies aim to identify and mitigate recruitment bias through standardised processes, diverse interview panels, and dedicated recruitment and selection training.

How could AI help reduce bias in recruitment?

According to a 2022 study, 65% of recruiters already use AI in the hiring process. But how is it being implemented?

Advocates claim that AI can reduce bias in recruitment by introducing objectivity and consistency into candidate screening. Talent acquisition platform Jobylon posits that “AI is able to screen candidates objectively based on factors such as qualifications and experience without relying on subjective factors such as age, gender, and race”, whereas human recruiters are inherently influenced by those factors.
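To make the claim concrete, here’s a minimal Python sketch of what qualifications-only screening might look like. Every field name, scoring rule and threshold below is an assumption for the example, not any vendor’s actual method:

```python
# A minimal sketch of qualifications-only screening (illustrative only).
# The candidate fields, scoring rule and thresholds are assumptions, not a
# real vendor's method.

def screen(candidates, required_skills, min_years):
    """Rank candidates on skills overlap and experience alone."""
    scored = []
    for c in candidates:
        if c["years_experience"] >= min_years:
            skill_match = len(set(c["skills"]) & set(required_skills))
            scored.append((skill_match, c["name"]))
    # Demographic fields such as age, gender or name origin are never read.
    return [name for _, name in sorted(scored, reverse=True)]

pool = [
    {"name": "A", "skills": ["python", "sql"], "years_experience": 4},
    {"name": "B", "skills": ["python"], "years_experience": 2},
    {"name": "C", "skills": ["sql", "excel"], "years_experience": 5},
]
print(screen(pool, required_skills=["python", "sql"], min_years=3))  # ['A', 'C']
```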

It’s also suggested that AI could help craft unbiased job descriptions by flagging potentially loaded wording and suggesting neutral alternatives.
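Research into gender-coded job adverts has inspired simple word-flagging tools along these lines. The sketch below is a toy version of the idea; the word list is a tiny illustrative sample, not a vetted lexicon:

```python
# A toy version of a job-advert language checker. The word list is a tiny
# illustrative sample, not a vetted lexicon of gender-coded terms.
CODED_TERMS = {
    "ninja": "specialist",
    "rockstar": "expert",
    "dominant": "leading",
    "aggressive": "ambitious",
    "manpower": "workforce",
}

def flag_biased_language(advert):
    """Return (flagged word, suggested neutral alternative) pairs."""
    words = advert.lower().replace(",", " ").replace(".", " ").split()
    return [(w, CODED_TERMS[w]) for w in words if w in CODED_TERMS]

advert = "We need an aggressive sales ninja to grow our manpower."
for word, suggestion in flag_biased_language(advert):
    print(f"'{word}' may be coded language -- consider '{suggestion}'")
```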

By minimising human intervention and relying on data-driven decision-making, AI may have the potential to reduce bias and deliver more equitable hiring outcomes for organisations.

What are the risks of leveraging AI in recruitment?

But in spite of the prospective benefits, AI also carries inherent risks, including the potential to exacerbate biases.

One concern is that AI algorithms may ‘inherit’ biases when they are trained with historical data. If previous hiring decisions were skewed against specific characteristics, AI systems trained on this data may inadvertently perpetuate those same biases.

For example, if women have been historically underrepresented in a company’s workforce, an AI system trained on its previous hiring data may lean in favour of male candidates who resemble the existing workforce, further marginalising women.
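Here’s a minimal sketch of that mechanism using entirely synthetic data and a simple logistic regression. Because the historical labels reward being male independently of skill, the trained model scores two otherwise-identical candidates differently:

```python
# Synthetic demonstration of inherited bias (all data invented for
# illustration). Historical hires favour men independently of skill, and a
# simple model trained on those labels reproduces the skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)            # genuinely job-related signal
gender = rng.integers(0, 2, size=n)   # 0 = female, 1 = male (illustrative)

# Past decisions rewarded being male as well as being skilled.
hired = (skill + 1.5 * gender + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)

# Two candidates with identical skill, differing only in gender.
for g, label in [(0, "female"), (1, "male")]:
    p = model.predict_proba([[0.5, g]])[0, 1]
    print(f"{label} candidate, same skill: hire probability = {p:.2f}")
```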

In one study, researchers from Cambridge University argued that leveraging AI to reduce recruitment bias is counter-productive. In a statement to the BBC, Dr Kerry Mackereth explained that “tools can’t be trained to only identify job-related characteristics and strip out gender and race from the hiring process, because the kinds of attributes we think are essential for being a good employee are inherently bound up with gender and race”.

Additionally, AI algorithms can inadvertently latch onto proxy variables that correlate with protected characteristics, for example treating a postcode or the school a candidate attended as a stand-in for socioeconomic class. This means they could indirectly introduce biases into decision-making, sometimes in ways that a human hiring manager wouldn’t.
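The sketch below illustrates the proxy problem on synthetic data: the protected attribute is withheld from the model entirely, yet a correlated postcode variable lets it reconstruct, and act on, the historical bias:

```python
# Synthetic demonstration of proxy bias (all data invented for illustration).
# The protected attribute is withheld from the model, but a correlated
# postcode variable carries the same information.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, size=n)   # protected attribute, never shown to the model
skill = rng.normal(size=n)
# Postcode area agrees with group membership 90% of the time.
postcode_area = np.where(rng.random(n) < 0.9, group, 1 - group)

# Historical decisions were biased against group 0.
hired = (skill + 1.2 * group > 1.0).astype(int)

# Train on skill and the proxy only; the bias still gets in.
model = LogisticRegression().fit(np.column_stack([skill, postcode_area]), hired)
for area in (0, 1):
    p = model.predict_proba([[0.5, area]])[0, 1]
    print(f"postcode area {area}, same skill: hire probability = {p:.2f}")
```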

The takeaway

There’s no doubt about it: artificial intelligence is here to stay, and it certainly has useful applications in the recruitment process. However, it’s crucial that it doesn’t go unchecked by humans. Ultimately, AI’s effectiveness in eliminating bias depends on the quality and diversity of the data it is trained on, as well as ongoing monitoring by the organisations that deploy it to ensure fairness.

When leveraging AI, HR teams should be prepared to regularly update training data and employ techniques to de-bias algorithms. Only then can they ensure that AI truly contributes to fair and inclusive recruitment processes, rather than reinforcing existing biases.
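One widely cited monitoring check, borrowed from US employment guidance, is the ‘four-fifths rule’: flag any group whose selection rate falls below 80% of the highest group’s. Here’s a minimal sketch with made-up decision data:

```python
# A minimal sketch of the 'four-fifths rule' check: flag any group whose
# selection rate falls below 80% of the highest group's rate. The decision
# data here is made up for illustration.

def four_fifths_check(outcomes, threshold=0.8):
    """outcomes maps group name -> list of 0/1 screening decisions."""
    rates = {g: sum(d) / len(d) for g, d in outcomes.items()}
    best = max(rates.values())
    return {g: r / best >= threshold for g, r in rates.items()}, rates

decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 75% selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],   # 37.5% selected
}
passed, rates = four_fifths_check(decisions)
for g, rate in rates.items():
    status = "OK" if passed[g] else "FLAG: below the four-fifths threshold"
    print(f"{g}: selection rate {rate:.0%} -> {status}")
```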


Written by Adam Eaton