By Atta Tarki and Chris Schipper
Job seekers at FedEx’s Panama division likely had an eye-opening experience when they showed up for interviews last year. As originally reported by The Guardian, the company was using a newfangled lie-detector, EyeDetect, which claims to be able to analyze eye movements and changes in pupil size to determine if job applicants are being truthful. Other companies using EyeDetect include McDonald’s, Uber in Mexico, and Experian in Colombia.
Lately, it feels like every month brings a new entrant with a game-changing idea. These innovators promise to help hiring teams solve their biggest challenges—whether through a new lie detector, virtual-reality interview product or algorithmic assessment tool. Some of these ideas seem straight out of Star Trek!
For example, a research group is using brain scans to predict the eventual skill level of surgeons. Another offering focuses on lie detection through artificial intelligence, examining a candidate’s body language, facial movements and even thinking patterns. Many of these ideas are only in the testing phase, but some are already finding their way to market.
I often get asked whether I believe these technologies will be game-changers—things that truly transform the way we think about talent. And while I am always wary of underestimating the next big thing (and waking up a year later to find myself obsolete), I have come to believe that any truly disruptive technology will need to overcome two major pitfalls: overhype and abuse.
That’s harder than it sounds.
Consider lie detection. I have written before about the challenges of stripping lying and dishonesty from the recruiting process, so of course any shortcuts will be tempting. After all, it takes a lot of hard work to train interviewers to minimize falsehoods by building trust and asking the right questions (and even then, no interviewer is perfect, and these issues will persist). But I am skeptical that these new tools can really help. I cannot speak to the veracity of EyeDetect’s marketing claims—perhaps it really is capable of catching lies (while protecting truths)—but the concept of using technology to root out dishonesty is nothing new. In fact, it has a long, sad history.
Take the polygraph—the lie detector we all know and love from old spy movies and police procedurals. As the economy began to heat up following the Second World War, this tool started to gain traction in hiring. By the early eighties, more than a million Americans a year were taking polygraph tests for employment purposes.
But then two things happened. First, in a classic case of overhype, the polygraph proved not to be the ironclad lie detector its proponents claimed, both failing to detect actual lies and generating false positives, flagging truthful statements as deceptive. Even worse, some employers abused the technology, asking embarrassing questions about private matters such as sexual preferences.
Regulation followed, and in 1988 the federal government banned most private employers from using polygraphs in the hiring process (or as grounds for keeping a job).
Algorithms Are Limited
Even when they hold great promise, I find myself slow to go all-in on new tech. Algorithmic hiring tools, for example, hold the potential to strip bias from hiring, which would bring enormous benefits to both companies and society. But these tools will only be successful if we remain cognizant of their limitations. Facial recognition software, for example, has drawn heavy criticism for reaching conclusions based on small, unrepresentative samples. Can its insights really be trusted? And can it perform consistently across different nationalities and ethnic groups?
Data science has yet to become foolproof, and hiring by algorithm continues to suffer spectacular failures. In one high-profile example, Amazon suddenly halted use of its vaunted hiring algorithm in 2017. The reason? Recruiters discovered it was assigning lower scores to women because, historically, the longest-retained and highest-performing employees had been men. If Amazon failed to make the algorithmic approach work, what does that suggest for other, less data-savvy companies?
Along with overhype, algorithmic hiring is also open to abuse (as was seen with the polygraph). Already, companies are scraping data from social media sites to feed their tools, and while not technically illegal, this practice raises some immediate moral questions. Is it okay to incorporate this data without direct authorization from the candidates? Going a step further, is it okay to even ask for such data in the first place, or should that information be considered private? Public backlash—in the form of restrictive regulation—is a real risk.
Finding Low-Risk Tools
Now, I do not mean to argue that HR managers should be Luddites—after all, improvement in any field can only occur when people try something new. I only hope that they walk away from this article aware of the risks, and with an eye toward the bigger picture as they drive change in our modern age of information.
How can executives decide when to adopt a new tool? As a first step, savvy recruiters will recognize that some tools carry greater risk than others—and be sure to start small when rolling those out (controlled trials, testing, etc.). But there is also plenty of low-hanging fruit that promises to improve hiring at minimal risk.
For instance, Goldman Sachs now uses asynchronous video interviews, in which candidates record their answers to interview questions, to enable a greater number of interviews and to include candidates from more diverse backgrounds. All recorded responses are assessed by a Goldman recruiter or business professional.
In another example, the U.S. Department of Agriculture uses VR to answer candidate questions about the day-to-day responsibilities of different roles. VR has started to see use across a range of industries, from railroad operators to banks, as a way to help candidates experience the firm, its culture and the realities of the role.
But once the low-risk, low-hanging fruit has been plucked, what then? Decisions around whether to employ intrusive or riskier technology will become more complicated, and with few firm rules in place, we are entering a realm where professional judgment is required.
Warren Buffett’s publicity test can be a helpful decision aid: Would you be comfortable having a given hiring practice, and your reasoning behind it, published on the front page of The New York Times (or going viral on Twitter)?
The question isn’t purely hypothetical. For decades, interviewers saw it as their right to put candidates through “stress interviews” in which they verbally abused candidates—until a viral tweet from a millennial exposed the practice and caused enormous damage to the offending company’s brand. When assessing your own hiring practices, never forget that an even greater risk than overlooking top people is that such people never even apply because of how they perceive your company or process—a fact especially true in a tight labor market.
When making business decisions, I am a big proponent of augmenting intuition and gut feeling with objective data and analysis. But in this instance, savvy executives should also remember to consult their moral compass—if a new technology feels risky, do you really want to use it?
This article was originally published on HR People + Strategy.
Atta Tarki is the founder and CEO of ECA and the author of Evidence-Based Recruiting (McGraw Hill, February 2020). Chris Schipper is the Vice President and Managing Director of ECA.