
    When Your Job Interviewer Isn’t Human

    Kendall McGill had only been on the interview call for a few moments when she realized something wasn’t right.

    • Rising AI Adoption in Hiring: Companies are increasingly using AI for interviews to cut costs and speed up recruitment, with 96% of hiring professionals relying on it for tasks like screening, though this often leaves candidates feeling alienated and surprised.
    • Candidate Experiences and Concerns: Job seekers like Kendall McGill and Wafa Shafiq report discomfort with impersonal AI interactions, highlighting the need for transparency, human elements, and warnings about potential biases in AI systems.
    • Ethical and Future Implications: While AI streamlines processes, it raises issues like embedded biases, candidates cheating with AI, and calls for regulation to ensure fair, ethical use on both sides of the hiring equation.

    In an era where technology permeates every aspect of our lives, the job market is no exception. Artificial intelligence is transforming how companies scout and select talent, promising efficiency and cost savings. But for many job seekers, this shift feels like a step into a dystopian future, where the warmth of human connection is replaced by cold algorithms. From surprise AI interviews to concerns over bias and ethical dilemmas, the rise of non-human interviewers is reshaping the hiring landscape—and not always for the better. Drawing from real experiences and expert insights, this article explores the broader implications of AI in recruitment, weighing its benefits against the human costs.

    Kendall McGill’s encounter encapsulates the unease many feel. Applying for a project management role in Baltimore, she joined what she thought was a standard interview call, only to be greeted by an unmistakably robotic voice. “It was a standard AI voice,” she recalls. “When you get on an interview, you can tell if it’s a real person.” Uncomfortable with the impersonal setup, McGill chose to hang up, preferring the nuances of human interaction. “I would much rather talk to human beings and get those experiences that you get when talking to a human,” she says. Her story isn’t isolated; a growing chorus of candidates echoes this sentiment, feeling alienated by processes that prioritize speed over empathy.

    This trend is driven by compelling data. A recent Resume Now report surveyed over 900 U.S. hiring professionals, revealing that 96% incorporate AI into recruiting tasks like screening and resume analysis. Impressively, 94% believe these tools effectively pinpoint strong candidates, and 73% report a faster time-to-hire. Career experts like Keith Spencer from Resume Now aren’t surprised—AI is often cheaper and quicker than traditional recruiters. Yet, he cautions against over-reliance: “It’s important to make sure that AI or automation doesn’t completely take the human element out of the hiring process.” Spencer emphasizes that the candidate experience offers a first peek into company culture, potentially impacting long-term retention if it feels robotic and detached.

    Wafa Shafiq’s experience in Mississauga, Canada, adds another layer to the narrative. Applying for a marketing specialist position through a recruiting agency, she scheduled a video interview with “Alex from Apriora,” only to discover via a quick search that Alex was an AI recruiter. Caught off guard, Shafiq proceeded but found the interaction lacking. “There was no small talk. There was no chance to connect in the way that we do with a recruiter, which was a little bit odd,” she recounts. The AI handled basic questions efficiently—asking six or seven and allowing her to pose her own—but faltered on nuanced queries. “It felt a little dystopian, but felt very efficient because it was very to the point,” Shafiq says. She didn’t advance, and while open to future AI interviews, she stresses the value of forewarning: “Even if it were just to provide a heads up, this is going to be an AI interview, here are ways to prep—that just makes me feel the company is setting me up for success.”

    Transparency emerges as a critical theme. Spencer agrees that AI interviews “should not be a surprise,” warning that springing them on candidates undermines trust. “You shouldn’t enter into an interview fully thinking you’re going to be speaking to a real life human being, and then all of a sudden, you’re interacting with an AI avatar,” he says. This lack of disclosure can erode the candidate’s first impression, turning what should be an exciting opportunity into a bewildering ordeal.

    Beyond the interview itself, questions linger about what happens next. Matthew Bidwell, a management professor at the Wharton School of the University of Pennsylvania, likens AI interviews to older one-way video tools, where recordings were human-reviewed. Now, he fears AI might handle both screening and evaluation, raising alarms. “The early work on these large language models does suggest that they do have some kind of race and gender bias baked in,” Bidwell notes. Studies, including one by Algorithmic Justice League founder Joy Buolamwini, have exposed racial and gender biases in AI systems, sparking legal and ethical concerns. If AI fully automates decisions, it could perpetuate inequalities, disadvantaging underrepresented groups and inviting lawsuits.

    The irony? Candidates are flipping the script by using AI themselves. Eric Lu, co-founder of video editing startup Kapwing, shared a bizarre interview where a seemingly qualified engineering candidate faltered under scrutiny. The applicant claimed experience with a daycare app but couldn’t explain basic features, like sending texts to parents or implementing “lazy loading.” When pressed, they went silent and eventually admitted to using AI for preparation—likely generating and memorizing answers. “This was a candidate that had a real LinkedIn profile, seemed to be a real student at a reputable college, had real job experiences under their belt,” Lu says. “But [they] still chose to prepare for this interview in this way.” Bidwell calls this “hugely problematic,” suggesting in-person interviews as a countermeasure to detect such tactics.

Lu’s team at Kapwing has implemented safeguards, like verifying online presences and requiring live video calls, to filter out AI-assisted fraud. Yet the experience prompted deeper reflection: “What does our interview process look like in the age of AI? And how does that have to change?” Resume Now’s report echoes this tension, with 79% of hiring professionals advocating for regulations on AI-generated content in applications. Spencer acknowledges the contradiction—employers embrace AI while remaining wary of candidates doing the same—but stresses that, ideally, “Both employers and candidates use AI ethically—in other words, to support or enhance their work, not replace it.”

From a broader perspective, AI’s role in hiring reflects a larger societal shift toward automation, in which efficiency competes with humanity. Companies like Kapwing have resisted AI interviews; Lu understands the business need for screening amid floods of applicants, but he empathizes with candidates: “If I was a candidate, I would be pretty sad about it.” For individuals like McGill, the human touch remains irreplaceable. “I think there are certain auditory or visual cues that you get when you’re talking to a real person that is just hard to communicate strictly with a computer,” she says. As AI evolves, the challenge lies in balancing innovation with empathy—ensuring technology serves people rather than supplants them. Without thoughtful integration, the job hunt could become even more isolating, strengthening calls for regulations that prioritize fairness and transparency in this new world of work.
