Love it or hate it, AI is here to stay. Though there are some very real concerns around AI “taking jobs,” AI tools are also being used in the hiring process to help people get jobs.

These tools can be quite sophisticated, far beyond merely screening a database of CVs. AI-powered video interviews, aptitude testing, and personality testing (the irony!) are all being used out in the wild – all of which make hiring managers’ lives easier.

However, just because AI tools can take some of the work away from hiring managers and internal recruitment teams, that doesn’t mean they necessarily should.

Let’s not forget that candidates are also using publicly available generative AI tools like ChatGPT to help them with applications. With both parties increasingly using AI, it’s going to be interesting to see what the future holds for the average recruiting process.

So let’s put each stage of the hiring and selection process under the microscope and explore the ethical implications of using AI in each. We’ll start with what I believe to be the least ethically questionable use, before gradually moving into deeper, more debate-heavy waters. Let’s dive in.

The Initial Application: “Is This Your CV? Or Is It ChatGPT’s?”

As I’m sure many of my colleagues in the talent sphere are aware, applicants are increasingly using AI in their job hunt – whether that’s using ChatGPT to fill in a few blank sentences on their CV, using application automation to maximise the spread of their applications, or simply jumping through employer-mandated, AI-powered testing hoops in order to apply.

Hunting for a new role can become a full-time job in itself. With generative AI – the ultimate, publicly available productivity tool – at everyone’s fingertips, it’s only natural that people are going to use it to make their job hunt easier. In fact, I recently shared a few tips to help applicants effectively and ethically harness AI tools in their CVs, applications, and interview preparations.

Hiring managers using CV management software to handle, filter, and rank applicants is nothing new. There’s even advice out there to help candidates optimise their CVs for the best possible visibility within these tools. I for one am curious to see how this advice will shift as AI screening tools advance in depth and sophistication.

Though hiring managers may baulk at the idea of applicants using AI tools, personally I feel that it is a bit of a cat-and-mouse game. If hiring managers use AI, then I see nothing wrong with applicants using AI, within reason, to help elaborate on some of the finer points they are trying to get across. Fair’s fair.

Some of the newest entrants to the job market agree. Arctic Shores recently found that just under half (47%) of students and recent graduates think that employers should allow them to use GenAI in the application process, with only 13% of them considering it cheating.

AI vs. AI – Fight!

But this raises another quandary: should employers use AI to filter out CVs and applications that were created with AI – effectively using AI to fight AI? To me, this “detect and deter” approach feels a little unfair given the number of young applicants who are clearly using GenAI in applications – not to mention the ever-present possibility of false positives and false negatives. Tellingly, AI detection tools are often poor at spotting AI use, with some proving less than 50% accurate.
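
To make the false-positive risk concrete, here’s some quick back-of-the-envelope arithmetic. Every rate below is a hypothetical assumption for illustration – not a measured figure from any real detector:

```python
# Hypothetical base-rate arithmetic for an AI "detect and deter" policy.
# Every rate here is an assumption for illustration, not a measured figure.
p_ai = 0.30         # assumed share of applicants who actually used GenAI
sensitivity = 0.60  # assumed chance the detector flags genuine AI use
fp_rate = 0.10      # assumed chance it wrongly flags a human-written CV

flagged_ai = sensitivity * p_ai       # correctly flagged applicants
flagged_human = fp_rate * (1 - p_ai)  # falsely flagged applicants

# Of everyone the detector flags, what share are falsely accused humans?
share_false = flagged_human / (flagged_ai + flagged_human)
print(f"{share_false:.0%} of flagged applicants wrote their CV themselves")
# -> 28%: even a reasonable-looking detector rejects plenty of honest people.
```

Even under these fairly generous assumptions, more than a quarter of the applicants the detector flags would be entirely innocent – and they’d likely never know why they were screened out.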

The Class Divide

There’s also a bit of a class issue at play here. People who are more practised with AI may end up creating better CVs and better responses to assessments. But in order to become more practised with AI, people need frequent access to moderately sophisticated IT, the disposable income to purchase or access it, and a reliable internet connection.

They also need spare time and undivided attention – outright luxuries for some – in order to master the skill of AI prompting. These factors might prevent those from lower socio-economic backgrounds from working on their AI skills – a massive shame, considering AI skills might give them the employability boost they’re crying out for, in more ways than one.

Murkier Ethical Waters

A further question concerns individuals who use AI to try and blag their way into a position they are woefully underqualified for. AI tools may help make their blagging seem more credible, meaning that hiring managers may need to spend more time reading between the lines to weed out the chancers.

AI application screening tools are often lauded as something that will help hiring managers spend less time on the screening and application process. But this is only true if all applicants are 100% honest in their applications!

Now Scanning Applicant #1,867: The Use of AI in Testing & Shortlisting

AI is already a fixture in CV management tools, helping talent teams collate and rank applications like never before. Broader applicant screening platforms promise an easy life for those in the talent industry: you can screen applicants, set up AI-powered video interviews, initiate psychometric profiling, and administer aptitude testing – all completely automated and online.

Though many stages of the hiring process can easily be automated, that doesn’t necessarily mean they should be. AI is great at helping humans filter through large quantities of data like CVs, but it should be approached as a boon to hiring productivity, not left to run the entire hiring shebang on its own.

“Pencils Down, Puny Humans” – The Problems with AI Testing

The use of artificial intelligence tools in recruitment goes far beyond scanning CVs and ranking applicants. AI tools are being used to administer and mark aptitude tests and personality profiles. Video CVs (where applicants are asked to speak candidly about their achievements in a self-submitted, pre-recorded video) are the latest hot asset – and one that can be fully reviewed and summarised by AI. Perhaps most alarmingly, AI is even being left to conduct real-time video interviews with applicants, analysing things like verbal responses and body language on the fly (which may penalise neurodivergent candidates).

Personally, I find in-depth AI testing incredibly problematic. Human applicants pour all of their humanity into personality tests and earnestly lay out their lived experience, only for that data to be run through an algorithm. In no way is their individual humanity being respected or heard.

We also have to remember that timed aptitude testing might disadvantage neurodiverse candidates or those with manual dexterity challenges. And when testing is fully online, you may be excluding applicants who don’t have access to fast internet connections or relatively sophisticated tech setups.

We get the quandary that most hiring managers face: hiring the right people can be incredibly time-consuming. Delegating all testing, interviewing, and ranking to a robot is a tempting way to eradicate much of this time-suck, but it can feel incredibly impersonal and disheartening for applicants when they don’t actually speak to a real person about a role until the later rounds.

A faceless, impersonal application process like this can easily feel outright hostile from the candidate’s perspective. There are only so many AI video interviews, algorithmically led personality tests, and online aptitude exams that truly worthy applicants can stand before they abandon the process – likely in favour of selection processes where they feel the employer gives more of a damn about them. Creating a valuable employer brand starts with how the hiring process comes across, after all.

You Can’t Eliminate Bias from an Already Possibly Biased System

AI hiring tools are often credited with “eliminating bias” from the talent selection process, yet nothing could be further from the truth.

Though it pertains to an older case, Amazon’s now-scrapped AI hiring tool is a great example of how AI can actually introduce bias into the hiring process. In short, the AI was trained by observing patterns in software developer resumes submitted over a 10-year span. However, due to the vast male dominance in the tech industry, the majority of those resumes came from male applicants. The result? An AI that taught itself to prefer male candidates and penalise female candidates.
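
To illustrate the mechanism – and to be clear, this is a toy sketch on synthetic data, not Amazon’s actual system – here’s how a simple model trained on historically skewed hiring decisions can absorb that bias as a rule:

```python
# A toy sketch on synthetic data (NOT Amazon's actual system) showing how a
# model trained on historically skewed hiring outcomes learns proxy bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(size=n)                       # genuine, gender-neutral ability
is_female = rng.random(n) < 0.2                  # historically male-dominated pool
womens_club = is_female & (rng.random(n) < 0.5)  # CV keyword correlated with gender

# Historical "hired" labels reflect past human bias, not just skill:
# equally skilled female applicants were less likely to be hired.
hired = (skill + rng.normal(scale=0.5, size=n) - 0.8 * is_female) > 0.5

# Train only on what a CV screener would see: skill signals and keywords.
X = np.column_stack([skill, womens_club.astype(float)])
model = LogisticRegression().fit(X, hired)

print(model.coef_)  # the "women's club" keyword gets a clearly negative weight:
                    # the model has absorbed the historical bias as a rule
```

Notice that the model never sees gender directly; it simply learns that a gender-correlated CV signal predicted past rejection – the same proxy-bias pattern reported in the Amazon case, which reportedly penalised CVs containing the word “women’s”.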

I am sceptical when companies claim that AI tools eradicate all bias from the recruitment process. Even with the most even-handed, neutral AI tool out there, employers still have to meet shortlisted candidates at some point before they hand out an employment contract. It’s here that bias can easily creep in – where a potentially prejudiced hiring team member sees a name, a gender, a skin colour, a disability. In this way, bias isn’t really removed at all – merely delayed.

Many commercially marketed AI tools are also black boxes – they use proprietary code that can’t be independently examined for fairness, bias, or validity. An open source solution could be a great response to this, but I’m unsure who will take up that fight when there’s so much money to be made by going down the mainstream commercial route.

Concerns of Detection & Diversity

We’ve already touched upon how AI-powered testing may disadvantage those without access to a certain level of technology or those with certain disabilities. But it’s also interesting to examine the other ways the more in-depth AI-powered talent selection processes intersect with class, culture, and race.

Going back to the Arctic Shores report, they found that Black and Minority Ethnic students and graduates were more likely to use generative AI platforms to help them with job applications. If employers introduced an AI “detect and deter” approach to screen for AI use and discredit the applicants who use it, this could introduce a silent, under-the-table bias into the selection process – one that applicants may never even know was affecting their chances.

Another part of the report explores how candidates would feel if an employer banned the use of GenAI in the application process, asking: “How comfortable or uncomfortable would you feel about completing an online assessment if it was being monitored live or recorded to detect usage of a tool like ChatGPT?”

Overall, less than 50% of respondents would feel comfortable in that situation, and the idea made the majority of neurodiverse and female respondents uncomfortable. Arctic Shores come to the same conclusion as we do: an AI “detect and deter” approach in the recruitment process can actively harm workplace diversity.

ChatGPT Plus offers access to new features, faster responses, and guaranteed availability at peak times. However, 38% of Arctic Shores respondents said they found ChatGPT’s premium offering too expensive. Might this mean that applicants who can afford it gain an unfair advantage in terms of features and speed? Could this put yet more power in the hands of those with disposable income to spare?

Previously on the blog, we’ve explored how samey, strict application processes can result in samey, less diverse hires. But without properly bias-free checks and balances, AI tools could make this problem even worse – potentially resulting in an “insert parameters, get carbon-copy candidates” situation.

In Conclusion: Balancing Efficiency with Ethics

It would be very easy for us to fall into a binary argument here: either go all out on AI, or block its use altogether. However, as we can see above, the best option is probably to be found in the grey area between the two. AI isn’t going away any time soon, so a middle ground needs to be forged.

Hiring managers, we know it’s tempting to delegate most of the process to a robot. But know that going too far in this direction might ultimately lead to failure. AI can make your to-do list shorter, but without introducing a bit of human common sense, it could also:

● Exclude candidates with more informally recognised skill sets because they don’t have specific qualifications or keywords in their application.
● Introduce biases into the selection process (or at least fail to eradicate them) that could play out across sensitive characteristics like gender, race, LGBTQIA+ identity, and disability.
● Attract new hires who tick all of your qualification, experience, and psychometric boxes, but are a terrible cultural fit.

AI tools can be great at tasks like filtering out poorly qualified individuals, but they probably shouldn’t be let loose on the entire hiring process. You’re deciding upon the fates of humans, after all, so I argue it should remain a highly interpersonal, human-led process.

However, interestingly, that Arctic Shores data shows that having the first interview/assessment in person would put off nearly 40% of the survey’s respondents from applying altogether. Nearly 50% of neurodivergent applicants would be deterred, as would 40% of minority ethnic candidates.

So if you’re going to use AI, maybe reserve it for those early stages when you have huge swathes of data to painstakingly wade through, implementing increasingly human strategies beyond that.

The same Arctic Shores report found that both the free and paid versions of ChatGPT excelled at aptitude, judgement, and personality assessments. Though this kind of testing is easily delivered online, it may be best to keep it in person – or at least delivered in a way that can be invigilated by a human – to avoid any AI-powered cheating.

So breathe easy, hiring managers – I believe your job can’t, and shouldn’t, be delegated to a robot. Intuition and genuine interpersonal nous are more important than ever in recruitment. AI can’t screen for cultural fit; it can’t screen for potential; it can’t screen for that unspoken spark that makes someone more than just a name on a CV. The tech isn’t there. Not yet, anyway…
