The rise of generative AI tools has stirred up much fear lately, with many asking if machines will soon replace humans. But investigative reporter and NYU journalism professor Hilke Schellmann asks a different question in The Algorithm (Hachette, Jan. 2024), which focuses on the role of AI in hiring practices: What if the problem is that AI is way dumber than we think? And what if we’re relying on this bad technology to make bad hiring decisions?

When you saw your first example of an AI hiring tool, you were impressed. How has your thinking evolved since then?

The first product demo I saw showed a tool that seemed to magically analyze an interviewee’s facial expressions and predict how good this person would be at a job. I thought: wow, I’ve hired people before, and it’s hard. This machine has figured out a problem that humans can’t figure out! That sparked my curiosity as a reporter. How does the tool make these calculations? How can we predict whether someone will be good at a given job? The more I talked to vendors and applicants, the more the magic got chipped away. It turns out there’s no scientific basis for it. We might smile, but does that mean we’re happy, or nervous? If the tool has a worse success rate than a random number generator, I suggest we don’t use it.

What do companies believe AI can do better than humans?

Third-party vendors have built tools that they say will save a lot of labor and money: HR managers won’t have to interview en masse anymore. The candidate gets pre-recorded questions and records themselves answering, and the computer then predicts how good they’d be for the job. If you have 1,000 applicants and the tool pulls out the 50 most qualified, the cost savings are huge. The tools deliver on efficiency, but I haven’t seen any evidence that they actually pick the most qualified candidates.

What are some of this technology’s shortcomings?

Companies think this will democratize hiring, but the training data introduces discrimination. Amazon trained its tool on its existing, predominantly male workforce, so the tool down-weights a résumé when the word “women” appears—as in “women’s soccer club” in the hobbies section—because it doesn’t match the company’s history of successful candidates. I’ve seen résumé screeners that identified people named Thomas as better candidates because the company already has many employees named Thomas. Any human screener would catch that, but the tool doesn’t.

How do you hope to contribute to the debate around AI?

I’m trying to have a broader conversation about how we can and should hire. We should get away from hiring without any human oversight, but without going back to the old system and its human biases. If we want to measure people, how do we do it? How do we make it fair, and how do we accurately predict success?
