Reverse Engineering the Reject Pile to Fix AI’s Biggest Hiring Flaw

Credit: Outlever

Key Points

  • AI’s failure in recruiting is not just a technological error, but the result of a flawed human strategy that over-relies on rigid keywords and biased data.

  • Kshitij Gupta, a seasoned global talent leader at LTV.ai, says recruiters must evolve from operators into strategic advisors who design the search, not just execute it.

  • He advocates for using AI to find candidates who match a narrative, not just keywords.

  • Nuanced judgment remains irreplaceable for assessing a candidate’s potential and culture fit, making human oversight more vital than ever.

Ironically, AI was used to help us manage scale, but what it ended up doing was filtering out the very candidates we want most: the creative problem-solvers, the people who think outside the box, and the career switchers with unconventional backgrounds.

Kshitij Gupta

Founding People and Culture Lead
LTV.ai

The promise of AI in recruiting was simple: manage the overwhelming scale of applications and surface the best candidates with unmatched efficiency. Yet in practice, these systems often worsen the very talent shortages they were meant to solve, screening out qualified, creative, and unconventional candidates before a human ever sees their resume. That pattern suggests the entire approach is backward. What if leaders instead used AI not to filter candidates out, but to filter them in?

Kshitij Gupta, Founding People and Culture Lead at LTV.ai, aims to flip recruiting AI on its head. With 14 years of experience, Gupta is a seasoned HR and talent acquisition leader, notably growing Haptik from 200 to over 400 employees in just 18 months. His work in global recruitment gives him a clear vision for how to fix AI’s failures in talent acquisition.

  • Flipping the filter: He sees a fundamental misapplication of AI’s capabilities in talent acquisition. “The majority of talent and HR leaders are currently using AI to filter out candidates. Where we can make it better is by using it to filter in candidates with a narrative that fits the opening, not keywords.”

  • Hiring the rejects: AI’s weaknesses in recruiting aren’t just technological flaws. They are the direct results of a flawed human strategy. Gupta explains that when leaders deploy AI with an over-reliance on rigid keyword matching and train it on biased data, the result is a system that systematically excludes the very people it should be finding. “Ironically, AI was used to help us manage scale, but what it ended up doing was filtering out the very candidates we want most: the creative problem-solvers, the people who think outside the box, and the career switchers with unconventional backgrounds.”

The machines aren’t failing on their own, he says. They are simply executing a flawed command, unable to grasp the human context they were never asked to look for. Current systems, for example, emphasize keywords over transferable skills: a teacher possesses highly transferable skills for a customer success role, but an AI trained on biased historical data often fails to recognize them, and so misses the candidate’s true potential.

  • The keyword trap: Over-reliance on exact keyword matching creates unnecessary barriers. “If I’m looking for someone with experience in React.js, it will only include candidates who put that exact term. But if someone writes, ‘I have experience in React,’ they might get filtered out. So, overemphasizing and relying on that exact keyword matching becomes a problem.”

  • Bias amplification: Beyond perpetuating past human biases, AI’s learning process can create entirely new ones. If a system rejects thousands of applicants from one location for a high-volume role, it might incorrectly ‘learn’ to disqualify candidates from that area in the future, even when a specific role requires hiring from that location. This negative feedback loop creates new, flawed rules. “We have to acknowledge our role in this failure. The problem stems directly from how we set up the system, the details we feed it, and the ways we ultimately choose to use it.”
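The keyword trap above can be sketched in a few lines of code. This is a hypothetical illustration, not any real screening system: the alias table, function names, and candidate data are invented to show how normalizing skill terms lets "React" satisfy a "React.js" requirement instead of being filtered out.

```python
# Hypothetical sketch: normalize skill terms before matching, so that
# near-identical spellings ("React" vs. "React.js") count as the same
# skill. The alias table and candidate data are invented for illustration.

SKILL_ALIASES = {
    "react.js": "react",
    "reactjs": "react",
    "react": "react",
    "python3": "python",
    "python": "python",
}

def normalize(skill: str) -> str:
    """Map a raw skill string to a canonical form (fall back to lowercase)."""
    key = skill.strip().lower()
    return SKILL_ALIASES.get(key, key)

def matches(candidate_skills, required_skills) -> bool:
    """True if every required skill appears among the candidate's normalized skills."""
    have = {normalize(s) for s in candidate_skills}
    need = {normalize(s) for s in required_skills}
    return need <= have  # set containment: all requirements covered

# A resume that says "React" is no longer rejected by a "React.js" requirement.
print(matches(["React", "TypeScript"], ["React.js"]))  # True
print(matches(["Angular"], ["React.js"]))              # False
```

Even this toy version shows the design choice Gupta criticizes: exact string comparison bakes a rejection rule into the system, while a small normalization layer moves the filter closer to what the recruiter actually meant.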

The majority of talent and HR leaders are currently using AI to filter out candidates. We can make it better by using AI to filter in candidates with a narrative that fits the opening, not strict keywords.

Kshitij Gupta

Founding People and Culture Lead
LTV.ai

The fix isn’t just better technology. It’s redefining the human role in recruiting. Gupta notes a dangerous over-reliance on basic, automated searches, where many recruiters simply ask a tool like ChatGPT for a Boolean string, drastically reducing the talent pool. True expertise involves casting a wide net initially to discover the full range of available talent. This evolution in the recruiter’s role recasts them as strategic advisors.

  • Advisor, not operator: “The traditional recruiter role has been purely operational, focused on ad hoc hiring, screening profiles, and sourcing people. We must evolve into a strategic talent advisory position, where we can focus on finding the right potential, understanding market trends, and guiding the business by working closely with its leaders.” In this model, recruiters are freed from manual, ad-hoc tasks, allowing them to focus on spotting potential and guiding strategy. The AI, in turn, can be redeployed for tasks where it truly excels: objective evaluation and efficient, personalized outreach.

Gupta suggests using AI for objective technical screening, such as having candidates write actual code for AI review, which provides a more scientific result than subjective resume screening. Its capabilities are best utilized when directed towards intelligent inclusion and scalable, personalized communication. “We should use it to find people who match a narrative. I should be able to ask the system to find a candidate who, for example, started as an entrepreneur, moved to a large company, and built something from scratch.” Gupta believes AI’s true strength lies in facilitating personalized outreach at scale. He points to features like LinkedIn’s ability to customize emails for hundreds of candidates as an ideal use case. In his view, AI should be a tool for broadening engagement and connecting with potential talent, not a mechanism for indiscriminately rejecting applicants based on rigid filters.

  • Hire for the curve: A talent leader needs to understand the learning curve for each skill. Rejecting a candidate who knows a complex language like Java just because they do not list Python is a mistake. “They can definitely pick it up. We have to start making these kinds of decisions in our searches.”

  • The culture question: The lack of nuanced recognition by AI makes human judgment even more vital. It is incapable of assessing a candidate’s potential to grow or their alignment with company culture. In these domains, a human advisor remains irreplaceable. “If an employee reports being uncomfortable with a colleague of a certain gender, an AI cannot grasp the repercussions for team dynamics. A human can. From a cultural perspective, the human role is irreplaceable, and we must build systems that support, not replace, robust human decision-making.”

To safeguard the system, Gupta proposes a final layer of oversight. Given the risk of AI creating new, flawed biases, continuous human review of its decisions is critical, ensuring that AI’s efficiency doesn’t come at the cost of valuable talent. “I propose what I call a ‘rescue round.’ It’s a mandatory second check where humans review profiles the AI has rejected. This becomes a dedicated process to ensure we are not accidentally filtering out the very people we need to find.”
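The rescue round described above can be sketched as a simple workflow. This is a minimal, hypothetical model of the process, not Gupta’s actual tooling: the class and field names are invented to show the one rule that matters, that an AI rejection is never final until a human has looked at it.

```python
# Hypothetical sketch of a "rescue round": every profile the AI screen
# rejects is queued for a mandatory human second look before the
# rejection becomes final. Data shapes are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Candidate:
    name: str
    ai_decision: str          # "advance" or "reject" from the AI screen
    final_decision: str = ""  # set only after the process completes

@dataclass
class RescueRound:
    queue: list = field(default_factory=list)

    def intake(self, candidate: Candidate) -> None:
        """AI rejections are never final; they join the human review queue."""
        if candidate.ai_decision == "reject":
            self.queue.append(candidate)
        else:
            candidate.final_decision = "advance"

    def human_review(self, candidate: Candidate, rescue: bool) -> None:
        """A recruiter either rescues the candidate or confirms the rejection."""
        candidate.final_decision = "advance" if rescue else "reject"
        self.queue.remove(candidate)

round_ = RescueRound()
a = Candidate("career switcher", ai_decision="reject")
b = Candidate("keyword match", ai_decision="advance")
round_.intake(a)
round_.intake(b)
round_.human_review(a, rescue=True)        # human overrides the AI rejection
print(a.final_decision, b.final_decision)  # advance advance
```

The point of the sketch is structural: rejection becomes a two-step decision, so the feedback data the AI later learns from includes the cases humans rescued, not just the machine’s first pass.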
