
In conversation with

Rich Paret

The founder of Tenarch on viewing recruiting as a long game and continually refining your processes

Keep interview sessions short—and use your time wisely.

I work with a lot of companies to refine their interview processes, and I would say the number one thing is that these interview processes are too long. They want everybody on the team to meet the people that they’re thinking about hiring—which is a good instinct, but you shouldn’t subject either the interviewers or the interviewees to that. 

There are a couple of ways that you can think about it. One is just overall interview length: the shorter you can do it, the better. It’s like exercise: What’s the minimal amount of exercise I need to stay in shape? If I’m doing more than that, that’s not good; I get tired and I’m wasting my time. What I tell people is: if your full interview loop—not just the technical piece, but basically any touch-time with the candidate—is four hours or under, that’s about average.

A lot of the companies I work with take six hours or longer—and I tell them to get that shorter, because hiring is competitive. There’s plenty of research-backed evidence that says longer interviews don’t make you better at making decisions. So stop stacking interviews up, and really focus on, hey, what do you need to know, and how are you going to go get that information? 

A great way to look at that is to think about all the things you want to know and then rank them. If we want to know if this person is a good team player, how are we going to figure that out, and then how many points is that worth? If the person needs 100 points, is it worth 30 points to be a team player? Is it worth 80 points? If you weight them, then you can make some decisions about OK, where should we be spending that interview budget?
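To make the arithmetic concrete, here is a minimal sketch of that kind of weighted scoring. The categories, weights, and ratings are all hypothetical examples, not a prescribed rubric:

```python
# Hypothetical interview "budget": each category gets a weight (points),
# and each candidate gets a 0.0-1.0 rating per category.
weights = {
    "technical_skills": 40,
    "team_player": 30,
    "communication": 20,
    "domain_experience": 10,
}

ratings = {
    "technical_skills": 0.9,
    "team_player": 0.6,
    "communication": 0.8,
    "domain_experience": 0.5,
}

# Weighted total: how many of the available points this candidate earned.
total = sum(weights[c] * ratings[c] for c in weights)
print(f"Candidate score: {total:.0f} / {sum(weights.values())}")
# Candidate score: 75 / 100
```

Seeing the weights written out this way also makes it obvious where the interview time should go: a category worth 30 points deserves more than a 30-minute afterthought.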

Hiring should be a qualitative and a quantitative process.

In general, I see folks over-focus on skills and under-focus on competencies and organizational fit. So they might say, we’re going to do four and a half hours of technical interviewing on whether this person is a great SRE or Python programmer or something, and then 30 minutes with some person on the adjunct team or the HR person on, “Is this person a good teammate?” That would be wrong. That’s not the correct budget. If you said, “Oh, it’s more like 50-50,” I’d say, “OK, well, we’re getting there.”

These are processes in systems. I’m a systems thinker. You have to start with what you’ve got, and then you look at the results and say, OK, after you complete a bunch of these loops, what’s going on? What’s not going well? Where do we need to make adjustments over time? But you have to look at these things from a systems and metrics perspective. How much time are we spending on this stuff? What do we value? It’s a qualitative process and a quantitative process: try to put some numbers behind this thing, so you can actually make better decisions.

View recruiting as a long game.

I treat hiring as strategically important, so when I do it myself, or when I was building teams at previous companies, I’d be very involved in the process. If I met somebody and we had a good connection—even though I knew I had a great recruiting team that would follow up with them—I would also send an email and say, “Hey, it was really great to meet you.” I try to establish that personal connection. 

I also view recruiting as a long game. Even if I meet somebody and they’re not the right person now, two things: one, my business is growing, so they might be the right person in the future, and two, that person knows people, and it’s valuable to me if they go, “Hey, I talked to Rich and it didn’t work out for whatever reason, but I had a great experience.” That kind of currency is amazing—and when you don’t have it, it’s not like somebody is going to tell you that. It’s like trying to sail your boat at low tide: everything is more difficult, but you don’t know why. 

Can they do a good job? Will they do a good job?

I’d call the front end of the hiring process “sourcing and screening”: finding people and seeing if they’re interested. Then I’d call the interview part “selecting”: How are we going to decide who we’re going to hire from this pool of candidates? 

There are lots of different ways that you can break this down. I advise people to think about it in two parts. The first is expertise: Does this person have the necessary expertise to fulfill the mission of the job at this time? If they’re a product manager, do they have product-management expertise? If they’re the CFO, do they have CFO-type expertise? 

That’s can they do a good job, right? The other question you’re trying to answer is, will they do a good job? And that breaks down into competencies. Sometimes people will talk about this as attitudes and aptitudes, or soft skills, or culture. But these are the things that really determine whether or not they will do a good job. I think everybody’s worked with people who were excellent, and experts in their field—but they were in the wrong job. They weren’t right for the role. They’re not bad people. But it wasn’t a good match on the competency side, the work preference side, et cetera. 

So when you’re designing the interview and you’re trying to stay within those four hours, try to think OK, I have a budget, and I want to learn: Can they do a good job? Will they do a good job? How am I going to figure that out?

Simulate the real work—and try to gather multiple data points.

There are two types of interviews that have strong evidentiary support: work sample tests and structured behavioral interviews. A work sample test is basically anything you can give a candidate to perform that closely mirrors the work that is done in the job. For software engineers, we often give them coding exercises, right? And for a long time, we gave people whiteboard algorithmic problems—some organizations still do. I advise people to not do that anymore, for a couple of reasons. One is they’re typically testing for the wrong thing: not for the work they’re going to do, but for fundamental computer science concepts. And it’s been shown that doesn’t necessarily translate to being able to do the work. 

There are a lot of different companies that make platforms to make these things easier to administer, but the idea is simple. If you’re an SRE who codes all day in Python and typically works with Terraform and related toolchains, the exercise is: here’s a toy problem. Please work on it for a structured period of time, and here’s a rubric that allows us to assess how you did. You can also do this for product managers and designers—really, any kind of work with some kind of output where you can design a simulation.

The other type is the structured behavioral interview. This one is basically the classic, “Tell me about a time when X,” where “X” is “you demonstrated leadership” or “you had to solve a particular problem.” When you’re designing questions, figure out what you want in advance—use your team and ask, “Hey, what are the actual soft skills that we care about for this role? What does a good answer look like?” If you get stuck, you can ask: “What does a not-good answer look like?” Everybody’s got a story in the back of their head about somebody who was really difficult from a communication perspective. We don’t want that. 

When you give these kinds of interviews, I think it’s very important to stay out of hypotheticals. Not that hypotheticals can’t be predictive, but research has shown that you have to construct hypotheticals very carefully—as dilemmas that have forced choice—and that’s really hard for people to do. What you really want to do is stay in that “tell me about a time”—you want to ask about past behavior. “In your past job, what things were particularly challenging?”

For the things you really care about, you don’t want to just ask about one time—you want to find a way to understand if they’re a good communicator from multiple data points, and I like to do that by asking these kinds of questions about many of their previous jobs. If you ask for one example, people can sort of reach back into their head and go, “Oh yeah, here’s this one time that I showed leadership.” But what you actually want to understand, with these kinds of behavioral questions, is whether this person has a track record.

Your hiring processes are an opportunity to learn and loop.

Many software product development teams use some flavor of Lean or Agile principles—even if they don’t do it by the book, those concepts exist in organizations—so the easiest thing to think about when assessing hiring processes is the concept of a retrospective. After every candidate, whether you hired them or not, on a cadence—let’s say monthly, maybe weekly, depending on how quickly your hiring machine is spinning—just get folks in a room, or an asynchronous Slack group, and ask: What went well? What didn’t go well? What did we learn? What are ideas for improvement for next time?

Think about the long game. You can see if there’s any low-hanging fruit: say the recruiter didn’t have the salary information published up front, and we got all the way to the end of the interview, and the person's salary expectations were out of band—you’ve wasted all this energy. So you say, “Oops. Let’s make sure we post all salary information up front.” That’s one that you could take and just do. But if we’re like, hey, we ran this interview process and we developed a rubric and we ran somebody through it and we thought they were really good—but then we hired them and they weren’t what we thought, then maybe we need to really reassess how we’re assessing technical skills or competencies. That’s bigger—that’s not just patching it up. But both of those things can be surfaced through these kinds of retrospectives. 

The other important thing here is that this is loops on loops. The interview loop, where we take somebody through an interview process then decide if we’re going to hire them or not—that loop is tight and should be iterated on. But think about once they accept the offer, and then they start and they get onboarded, and then they’re productive—and then they have a performance review. That’s another big ol’ loop, right? And it can be really powerful when we can say, what did we learn? At 90 days, this person is killing it, or they’re doing what we expect, or maybe they’re under. How do we feed that information back into the recruiting process?

Rich Paret is a tech leader who was formerly a VP of Engineering at Twitter, a Senior Director at Google, and an early leader at many startups. He has built and scaled teams both large and small, and most recently founded Tenarch to help leaders recruit and build great teams.


allma, inc © 2023