Lessons from Google's Worthless Interview Questions

Written by Nick Lyon | 5/21/18 8:18 PM

The technology giant Google is known for hiring the best talent in the market and equally known for their wacky interview questions. For years, Google interviewers used these seemingly random questions to "help" them hire great talent. It turns out, though, that these questions didn't help at all.

“We found that brainteasers are a complete waste of time,” Laszlo Bock, senior vice president of people operations at Google, told the New York Times. “They don’t predict anything. They serve primarily to make the interviewer feel smart.”

Some of our favorite questions

Google's interview questions have been all over the internet. We found a list compiled by Lewis Lin, a Seattle-based job coach. It was fun to read through the questions; we've pulled out some of our favorites here:

  • If you look at a clock and the time is 3:15, what is the angle between the hour and the minute hands? (This one is worked through in the short sketch after this list.)
  • Design an evacuation plan for San Francisco.
  • How many vacuums are made per year in the USA?
  • How many piano tuners are there in the entire world?
  • How many haircuts do you think happen in America every year?
  • How much should you charge to wash all the windows in Seattle?
  • Explain the significance of "dead beef."
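
For what it's worth, the clock question is the rare one here with a clean, checkable answer. Here is a minimal Python sketch of the arithmetic (our own illustration, not anything from Google's process):

```python
def hand_angle(hour: int, minute: int) -> float:
    """Smaller angle, in degrees, between the hour and minute hands."""
    # The minute hand moves 360/60 = 6 degrees per minute.
    minute_angle = 6.0 * minute
    # The hour hand moves 360/12 = 30 degrees per hour,
    # plus 0.5 degrees for every elapsed minute.
    hour_angle = 30.0 * (hour % 12) + 0.5 * minute
    diff = abs(hour_angle - minute_angle)
    return min(diff, 360.0 - diff)

print(hand_angle(3, 15))  # 7.5 -- the hour hand has crept past the 3
```

The common wrong answer is 0 degrees; the catch is that at 3:15 the hour hand is no longer pointing exactly at the 3.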

Google's analysis – How they did it

Google took data from tens of thousands of interviews and compared how candidates scored in those interviews with how the candidates who were hired actually performed in their jobs. They looked for correlations between job performance and both scores on specific questions and the overall scores given by specific interviewers, to see whether there were people at Google who were particularly good at hiring.
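
To make the mechanics concrete, here is a minimal sketch of that kind of analysis in Python. The data and question names are hypothetical; Google hasn't published its actual code, so this is one plausible way to run the numbers, not their method:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical data: one row per hired candidate.
rng = np.random.default_rng(0)
performance = rng.normal(3.0, 0.5, size=200)  # on-the-job ratings
interview_scores = {
    "brainteaser_clock": rng.integers(1, 6, size=200).astype(float),
    "work_sample": performance + rng.normal(0.0, 0.4, size=200),
}

# For each question, ask: do its scores track later performance?
for question, scores in interview_scores.items():
    r, p = pearsonr(scores, performance)
    print(f"{question}: r = {r:+.2f} (p = {p:.3f})")

# A question whose scores show no correlation with performance (r near 0)
# tells you nothing about whom to hire -- exactly what Google reported
# finding for its brainteasers.
```

The same idea extends to interviewer-level overall scores: group by interviewer, correlate with later performance, and see whether anyone beats a coin flip.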

Google's analysis – What they found

As mentioned above, Google found zero correlation between how well candidates scored on brainteaser questions and how well they performed in their jobs.

They also found that only one interviewer was consistently good at predicting whether a candidate would perform well. Think about that for a moment: across thousands of interviews, Google learned that all but one of their interviewers were not predictive; you could essentially have flipped a coin.

What we can learn

One of the biggest takeaways from Google's story is that you can actually know, with some degree of certainty, which interview questions are working and which are not. Many hiring managers continue to use the same interview questions over and over, or pick new ones based on recommendations, an article they recently read, or how they're feeling on a given day. Google has shown that we can analyze the questions we ask and measure how predictive they are.

There are lots of other specific lessons that can be gleaned from Google's experience, but we'll call out three:

  1. A bad process is costly: Think about all the costs associated with a process that is not predictive: time wasted by candidates and interviewers, a poor candidate experience (presumably few people enjoyed these fear-inducing questions), and likely many false positives from basing decisions on a non-predictive indicator.
  2. Causal hypotheses must be validated: Google is a tech giant with a complex product solving big problems. I'm sure Google wants the people it hires to be "smart." What better way to test whether someone is smart than to give candidates tricky, complicated questions and see how they think? Makes perfect sense. Except the data showed that it didn't. As humans, we're really good at developing plausible-sounding stories. We do this all the time, and we especially do it in hiring. For this reason, it's critical that we run the analysis needed to test our hypotheses.
  3. Collecting data on your hiring process is critical: This point is somewhat obvious, but it is so critical, and so often overlooked, that I'm going to say the obvious anyway: Google never would have learned they were wrong if they hadn't collected and stored their interview score data. They would still be wasting time, inducing fear, and hiring false positives. (A minimal sketch of what such a record might look like follows below.)
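
As a sketch of that last point: even a minimal record like the one below (the field names are our invention) is enough to support the correlation analysis sketched earlier, once performance ratings are filled in after a review cycle.

```python
import csv
import os
from dataclasses import dataclass, asdict, fields
from typing import Optional

@dataclass
class InterviewRecord:
    candidate_id: str
    interviewer_id: str
    question_id: str
    score: int                                   # e.g., a 1-5 rating from the interview
    hired: bool
    performance_rating: Optional[float] = None   # filled in after a review cycle

def log_record(record: InterviewRecord, path: str = "interview_scores.csv") -> None:
    """Append one interview score to a CSV so it can be analyzed later."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fl.name for fl in fields(InterviewRecord)])
        if new_file:
            writer.writeheader()
        writer.writerow(asdict(record))

log_record(InterviewRecord("c-001", "i-042", "brainteaser_clock", 2, True))
```

The exact storage doesn't matter; what matters is capturing scores per question and per interviewer at hiring time, so the analysis is possible later.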

What assumptions about what predicts a good hire are built into your hiring process? How might you test them? What data do you need to start collecting to run the required analyses? Stay tuned for a post on Journeyfront's tips for running your own analysis of the effectiveness of your hiring process.