Can AI Make Hiring Fairer?
The Promise and Pitfalls of Technology in the Recruitment World
Ed Godwin – Co-Founder, Talent Unlimited
Diversity and inclusion (D&I) have long been essential to building a thriving workplace, and businesses of all sizes are increasingly turning to artificial intelligence to improve their hiring practices. With AI-powered tools, companies can potentially tackle some of the biases that often sneak into hiring decisions. But can technology really make a difference—or might it introduce new challenges?
Hiring, as any recruiter will tell you, is seldom straightforward. A survey by the Chartered Institute of Personnel and Development found that over half of hiring managers struggle with unconscious bias. Studies show that diversity—whether in gender, ethnicity, or socioeconomic background—drives higher innovation and profitability. Yet, traditional recruitment practices don’t always support these goals. Here’s how AI promises to help, and where caution is still needed.

Screening for Skills, Not Stereotypes
AI’s first advantage is in early screening, a stage where even the most conscientious hiring managers can fall prey to unconscious preferences. Research from Oxford University revealed that job applicants with ethnic-sounding names were 40% less likely to receive a callback than those with Anglo-Saxon names, even with similar qualifications. AI-driven blind screening removes such identifiers, reducing the risk of unconscious bias in the selection process.
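To make the idea concrete, here is a minimal sketch of how blind screening might work in practice, assuming a hypothetical applicant record stored as a dictionary; the field names and the `blind_screen` function are illustrative, not from any specific product.

```python
# Hypothetical sketch of blind screening: strip direct identifiers
# (name, contact details, photo) from an application before it
# reaches a reviewer, leaving only skill-relevant fields.
REDACTED = "[REDACTED]"

IDENTIFYING_FIELDS = {"name", "email", "phone", "photo_url", "address"}

def blind_screen(application: dict) -> dict:
    """Return a copy of the application with identifying fields masked."""
    return {
        field: (REDACTED if field in IDENTIFYING_FIELDS else value)
        for field, value in application.items()
    }

app = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "skills": ["Python", "SQL"],
    "years_experience": 5,
}
print(blind_screen(app))
```

The reviewer still sees skills and experience, but nothing that reveals who the candidate is.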
Standardising Interviews for Fairer Comparisons
Interviews are often influenced by subtle, subjective factors, such as the “halo effect” or first impressions. AI-based tools can help to standardise interview questions and scoring criteria, creating a more level playing field. According to Harvard Business Review, structured interviews significantly reduce bias by focusing on skills and abilities rather than personal impressions.
From my years in HR, I know how easily interviews can turn subjective. People often talk about having a “gut feeling” or knowing a candidate was right for the role straight away. But these instinctive responses can create biases and can be influenced by seemingly small factors like shared interests or appearance. AI’s structured approach doesn’t just promise objectivity; it’s a step toward levelling the playing field, ensuring all candidates are measured by the same criteria.
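A structured approach can be sketched in a few lines: every candidate is rated on the same criteria with fixed weights, so comparisons rest on the same evidence rather than first impressions. The rubric below is a hypothetical example, not a recommended weighting.

```python
# Hypothetical structured-interview rubric: the same criteria and
# weights are applied to every candidate, and a missing rating is an
# error rather than something left to the interviewer's impression.
RUBRIC = {
    "problem_solving": 0.4,
    "communication": 0.3,
    "role_knowledge": 0.3,
}

def weighted_score(ratings: dict) -> float:
    """Combine per-criterion ratings (1-5) into one weighted score."""
    missing = set(RUBRIC) - set(ratings)
    if missing:
        raise ValueError(f"missing ratings for: {sorted(missing)}")
    return round(sum(RUBRIC[c] * ratings[c] for c in RUBRIC), 2)

score = weighted_score(
    {"problem_solving": 4, "communication": 5, "role_knowledge": 3}
)
print(score)  # 0.4*4 + 0.3*5 + 0.3*3 = 4.0
```

Forcing a rating for every criterion is the point: an interviewer cannot quietly substitute a “gut feeling” for a gap in the scorecard.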
Dr Alex Hanna, a sociologist specialising in AI ethics, adds, “When questions and scoring are standardised, AI offers a truly level playing field.” But here, too, human oversight is essential. While standardisation is a step forward, language patterns and cultural expressions could still be misinterpreted by AI if models aren’t rigorously tested across diverse populations.

The Essential Role of Human Oversight
Even the most advanced AI tools require regular human oversight to ensure fairness and relevance. AI may flag certain applicants for specific roles, but a human touch is crucial to interpret data in context. A recent study by MIT Sloan found that 78% of companies using AI in hiring believe that human oversight remains necessary to avoid potential misjudgements or over-reliance on algorithmic recommendations.
This need for balance is echoed by Professor Chris Bail of Duke University, who notes, “AI can highlight trends we might not see ourselves, but without conscious human oversight, these tools may simply reinforce existing biases in unexpected ways.” Dr Kate Crawford, author of Atlas of AI, concurs, arguing that “unchecked AI can reinforce the very biases it’s intended to solve, particularly if there’s an over-reliance on automated decisions without human validation.”

Broadening the Talent Pool with AI-Driven Sourcing
Traditional recruitment networks can be limited, shaped by longstanding personal connections or a preference for certain schools. LinkedIn’s Global Talent Trends report highlights that data-driven recruitment increases diversity by around 30%, as AI-driven sourcing tools can scan broader datasets to find qualified candidates from varied backgrounds.
AI sourcing widens the net, helping companies reach talent they might otherwise miss. But it’s not without potential drawbacks. “If an AI only pulls from homogenous datasets, it could limit diversity instead of enhancing it,” warns Dr Joy Buolamwini, author of Unmasking AI. Ensuring diverse data sources is crucial for expanding the talent pool effectively.
Leveraging Analytics for Continuous Improvement
AI provides valuable insights through data, allowing companies to assess D&I outcomes over time. McKinsey found that companies which regularly review their diversity data are more likely to meet their D&I goals. AI-powered analytics can reveal trends in recruitment, helping to pinpoint where diversity initiatives are working—or falling short.
Yet, Professor Chris Bail notes that metrics alone may miss nuanced dynamics, such as workplace culture or a sense of belonging. “Metrics are useful, but they shouldn’t be the only measure of success,” he cautions.
The Case for a Balanced Approach
In my view, AI isn’t a replacement but a partner. It’s a smart tool in our toolkit—one that still depends on our commitment to fairness, representation, and staying true to our values. With the right approach, AI can make hiring not only faster but, crucially, fairer. The human touch remains essential, yet by handling the heavy lifting, AI allows recruiters to reinvest their time in building strong relationships with those most likely to join their organisation.