A data-driven approach to identifying future leaders

 

Is a leader born or made? Executives and HR have historically held divergent opinions. The answer in fact lies somewhere in between. Stacey Philpot and Kelly Monahan talk about how inherent biases can become barriers in choosing a leader—and how diversity and a data-driven approach can remove them.

So you can see how what happens over time is that these archetypes get created in organizations around who makes a leader and who has the potential to be a leader, and those archetypes tend to be someone who’s seen as attractive, close to headquarters, similar to those in power, and good at one thing. 

TRANSCRIPT

TANYA OTT: Are great leaders born or made? You’ve probably got an opinion on the question, but is it borne out in the data?

I’m Tanya Ott and this is the Press Room, Deloitte University Press’s podcast on the issues and ideas that matter to your business today. 

Chances are you’ve been passed over for a promotion in your career. Or maybe you’ve promoted someone who just wasn’t right for the job. How those decisions are made can often seem pretty mystifying to the people we work with. But it doesn’t have to be. Some companies are using objective assessment tools to identify high-potential talent. Stacey Philpot and Kelly Monahan write about it in their article “A data-driven approach to identifying future leaders.” It’s published in [the] MIT Sloan Management Review.   

Kelly works in Deloitte’s Center for Integrated Research. She focuses on behavioral economics and how we make decisions, especially on talent. Stacey is a principal with Deloitte Consulting LLP. She’s got a background in organizational psychology and decades of experience in leadership development.

STACEY PHILPOT: How do we take what we know about social science and use that to make businesses more productive and make leaders more effective?

TANYA OTT: Traditionally, many of us have looked at leadership and identifying future leaders as a gut call. Oftentimes, that results in picking people who look like us or sound like us or went to the school we went to or whatever. But you and Kelly are looking at a much more data-driven approach to identifying leaders. Tell us what you did.

STACEY PHILPOT: Part of it started with what is the age-old question in leadership. If you go back through all of the leadership, social science, [and] business leadership [literature], there’s always been this question: Are leaders born or made? We found that people have kind of a bias toward one or the other.

In my years of working in the field, what I’ve noticed over time is that many of us in the HR world, as leadership development practitioners, tend to be really drawn toward the “leaders are made” point of view. You know, creating processes or programs or things that will help them. But a lot of actual executives in business lean toward the “leaders are born” [paradigm], and you see it in their language.

HR professionals talk about, “We need to do things to prepare our leaders, to develop them. We need to train them. We need to make sure we’re creating the right opportunities for our people.” It’s really focused on the actions that will train or develop people. Whereas when you talk to executives, which I do a lot now, they’ll say things like, “Can you help me find a person? How do we know we have the right people? How do we find the person who’s going to be the next great innovator?” And it’s really more of an emphasis on finding and that represents their bias that leaders are really born and that you need to just identify them and if you just find the right person everything will work out.

The reality is, and what the research says and what anecdotally we’ve seen, is it is both. You need to be able to identify the right leaders and then you need to develop them. Some of the inefficiencies and some of the problems in the field come when people are using the wrong tool to do the wrong thing. Meaning, they’re trying to develop what is more innate or they’re trying to identify what can be learned, if you will. So we set out to look at, “How do we bring more clarity around that so that we’re identifying leaders better?”

TANYA OTT: You studied 245 organizations across North America.1 You took a look at the leadership, in particular, diversity within the leadership ranks. What did you find?

KELLY MONAHAN: There were really three main findings or focuses of this research. The first is that many organizational leaders, I think it’s about 75 percent, aspire to have a diverse organization.2 We intuitively know that having diversity of thought and just overall diversity within our top levels of leadership will lead to high performance. That case has been well made throughout the research and throughout the literature.

Organizations that have diversity tend to outperform those who don’t. However, of those 75 percent, only 11 percent actually reported having a diverse organization.3 You see this paradox and this phenomenon happening within our organizations today and we don’t know how to address this issue. But we aspire to.

So what Stacey and I really dove into is trying to understand, “Is there something deeper happening, just in terms of the way that we’re hardwired to make decisions, that we almost can’t get away from ourselves?” And so we really explored the different biases that impede leadership decision making and really focused on… You know, unconscious bias training may not be the answer. Instead, we might have to actually do something much deeper, which we call data-driven guardrails, to help protect us against these biases within our organizations today.

TANYA OTT: Let’s talk a little bit about the biases, because that’s one of your specialties. One of the biases that I’ve talked with a lot of folks about on this podcast is this—and I’m sure it’s got a way more scientific and official name than this—but it’s the bias toward people like me. Do you look like me? Did you go to the school that I went to? All those sorts of things.

KELLY MONAHAN: Yes. I think there’s probably a couple of different biases that encompass that. We tend to call it the affinity bias. We tend to be drawn toward people who represent our own image and make us comfortable, and that’s a really natural thing for us to do as humans.

During the research, we came across this one study that asked people to write down the names of five to seven people they consider part of their trusted inner circle: people within their own personal social network whom they seek out for advice if they’re having a problem. These are truly your supportive network. And 91 percent of those personal networks lacked any sort of diversity.4

This affinity bias that we have is just very real, and I think we see it very strongly in our personal lives. It’s not intentional. It’s just the way in which we’re hardwired. And I think that unfortunately it carries over into the workplace and we tend to surround ourselves and pick leaders who resemble our own selves.

TANYA OTT: I remember there was an exercise that went viral on a social media network in the last couple of years which was, “Look through your list of friends and see—if you’re white, how many of them are white and if you’re black, how many of them are black.” I know a lot of people personally that were very surprised because they thought their network was much more diverse than it really was.

KELLY MONAHAN: Absolutely.

TANYA OTT: One of the other biases is what you call the halo effect. It’s this idea that if somebody is good at one thing, they’re going to be good at lots of things or all things.

KELLY MONAHAN: I do a lot of management training and a lot of management development, and the halo effect is probably one of the biases that comes up the most in these trainings and really aggravates the other managers. You see it come through a lot when you’re doing performance evaluations and performance meetings.

I think it goes both ways. You have the halo effect where someone who is technically proficient may not have those social skills or emotional intelligence that’s really necessary to enter into a position of leadership. Yet, we so quickly promote them because they have these strong technical capabilities.

But the opposite is true, too. I call it the horn effect—if someone makes a mistake pretty early on in their career, what ends up happening is that mistake carries through for them throughout their entire career. We actually might be missing potential leaders due to the horn effect. They cannot escape the shadow of this mistake they’ve made and we unintentionally attribute that to future performance as well.

TANYA OTT: That is the dreaded middle school or high school—it’s going on your permanent record kind of thing.

KELLY MONAHAN: Yes, exactly! Except in our minds.

TANYA OTT: As you allude to, there are different kinds of diversity. There’s racial diversity, ethnic, gender. But there’s also that softer thing that can work for or against someone. And the example that you write about is, “You’re a real extrovert, you’re good at making presentations and talking to groups, so you must be a dynamic leader.” Which, of course, can really work against people who are introverted, but may have really strong leadership skills.

KELLY MONAHAN: Yes. You know, in a lot of ways, these [traits]—what we would typically ascribe to introverts—tend to be really good leadership characteristics.

TANYA OTT: Such as?

KELLY MONAHAN: For example, introverts sometimes can connect at a much deeper level with others and be more in tune [with] another’s emotions, so displaying empathy. They tend to be much more reflective in their decision making, and as a result they’re able to actually connect [on] deeper levels with others, which matters because, when you’re in a position of management, that relationship is key to what you’re able to make or do with your employees.

So, we hold individual stereotypes that what makes a good leader [is] somebody who’s extroverted or charismatic. I saw this a lot in my previous role as an HR business partner, where we almost exclusively were looking for leaders within our sales and marketing organization, because these are the people who were out there making the business work. But in reality, what we ended up finding is that our top-notch leaders were actually hidden within the operations part of our organization. The people who’d grown up in the operations group knew how the business actually worked, knew the nuts and bolts of it, and, we found, were actually much more successful in leadership positions than those we pulled from the sales or marketing organization.

TANYA OTT: Interesting. We’ve talked about the affinity bias, the halo effect—are there other biases or effects like that that really rose to the top when you were looking at why organizations are so homogenous?

KELLY MONAHAN: One of the other biases that I actually don’t think gets a lot of mention within the academic literature, but that I think we see playing out much more in the workplace, is the proximity bias. Proximity is just, truly, “I’m interacting with you much more on a day-to-day basis.”

We see this as the workforce is becoming more dispersed, more global, and more remote. This is a problem because we tend to have a higher level of trust in the people we have the most proximity to, and we actually believe in those relationships more than the ones with people we see less often.

Being able to use data protects you from this proximity bias, because even though I’m seeing person A every day, person B might actually be more talented or have more potential for leadership. But because I’m not seeing and interacting with person B to the same degree, I’m unable to actually go ahead and slot them as a leader.

TANYA OTT: The challenging thing is how do you judge whether somebody has those things, right? Because so many people just work from gut and gut can be really wrong. One of the more traditional tools is the Nine Box Grid. First, can you explain what that Nine Box Grid is, for people who aren’t familiar with it?

KELLY MONAHAN: It’s a tool that was intended to objectively plot people against other people on a grid. On one axis you have past performance, so you’re able to look backward and figure out how this employee has performed. On the other axis it looks at forward potential: how well will this person perform in the future?

You can see that there’s an inherent problem with that question in the Nine Box Grid if you’re not using data, because the answer simply comes from your gut and your intuition. We know from behavioral science research that, when it comes to uncertainty or future decision making, we humans are not really good at that. Those are our two Achilles’ heels: uncertainty and planning for the future. You take those two biases that we innately have, project them onto someone else, and try to debate with other business professionals how much potential this person actually has. That’s almost impossible for us to be really good at deciding.

That’s the Nine Box Grid. It has nine boxes in which you can plot someone, and it looks at their past performance as well as their future potential.
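[Editor’s note: The mechanics of the grid are simple to represent in code. The following is a minimal sketch in Python, assuming hypothetical employee names and 1-to-3 scores on each axis, of how people land in the nine cells. It illustrates the grid itself, not any particular assessment tool; the weak link, as the conversation turns to next, is where the “potential” score comes from.]

```python
# Minimal sketch of a nine-box grid: past performance on one axis,
# forward potential on the other. The 1-3 scale and the names are
# assumptions for illustration only.
from collections import defaultdict

def nine_box(employees):
    """Group employees into the nine cells of a performance x potential grid."""
    grid = defaultdict(list)
    for name, performance, potential in employees:
        # Clamp each score to the 1-3 range so every person lands in a valid cell.
        p = min(max(performance, 1), 3)
        q = min(max(potential, 1), 3)
        grid[(p, q)].append(name)
    return grid

# Hypothetical ratings: (name, past performance 1-3, future potential 1-3)
people = [("Ana", 3, 3), ("Ben", 3, 1), ("Cruz", 2, 3), ("Dee", 1, 2)]

for (perf, pot), names in sorted(nine_box(people).items(), reverse=True):
    print(f"performance={perf}, potential={pot}: {', '.join(names)}")
```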

TANYA OTT: That is not overly reliable. What do you suggest instead?

STACEY PHILPOT: Exactly! What we’ve found is that the first thing you can do as an organization to get better identification of who your true leaders are is to get a more measurable definition of what you mean by potential. What matters? What are the innate attributes and traits that we’re looking for in people? And we would say, and again the research says, it’s the four that we’ve identified.

KELLY MONAHAN: We think people who are intrinsically motivated, have the ability to work well with others, can adapt to change, and also possess what we call natural business acumen and IQ. Those things do matter in leadership positions and we really cannot guess by observing behavior whether people have those.

STACEY PHILPOT: And then you come up with specific ways to measure it. So, for instance, we have a screening tool that we use with some organizations where, [for] each of those four areas of potential, we’ve got specific questions where a manager can rate all of their people on a scale, so that it’s more of a data-based approach.

And this is what helps get rid of bias. Right? None of us want to be biased, but you know, they start looking at all the members on their team and say, “Well, this person on my team, they’re really great with people and they’re really adaptable. I never really thought about how important that might be.” It just is a way of bringing more concreteness to a discussion where normally people have been, as you said, kind of relying on their gut, which we really find tends to be influenced by biases.

TANYA OTT: So that’s a survey of some sort. What kinds of questions might those be?

KELLY MONAHAN: Yes. It’s a survey, and you can think about it almost as a sliding scale. If you have a team working with you, if I were to ask you, “Okay, you have these five people that directly report to you—on this scale, how willing are they to take risks?” Then you score each of your individuals depending on their risk aptitude. Or you might say, “This person actively seeks out working with others.” You start measuring everyone against the exact same dimensions. That’s one part of it.

Secondarily, you can also send out a survey question to individuals, very similar, “How apt are you at taking risk? How comfortable are you connecting with others?” What you start to do is have a picture painted of how well the individual believes they’re able to do these things—something that we call self-efficacy—and then, as well, the manager’s perspective of what the manager is seeing and observing.
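[Editor’s note: As a rough illustration of comparing those two views, the Python sketch below scores an employee’s self-ratings against a manager’s ratings on the four potential attributes named earlier and flags large gaps. The dimension names, the 1-to-5 scale, and the gap threshold are illustrative assumptions, not the actual screening instrument.]

```python
# Sketch: compare self-ratings with manager ratings on the four potential
# attributes discussed above. Dimension names, the 1-5 scale, and the gap
# threshold are assumptions for illustration only.
DIMENSIONS = ["intrinsic motivation", "works well with others",
              "adapts to change", "business acumen"]

def rating_gaps(self_ratings, manager_ratings, threshold=2):
    """Return dimensions where self-perception and the manager's view diverge."""
    gaps = {}
    for dim in DIMENSIONS:
        diff = self_ratings[dim] - manager_ratings[dim]
        if abs(diff) >= threshold:
            gaps[dim] = diff  # positive: employee rates themselves higher
    return gaps

self_view = {"intrinsic motivation": 5, "works well with others": 4,
             "adapts to change": 5, "business acumen": 3}
manager_view = {"intrinsic motivation": 4, "works well with others": 2,
                "adapts to change": 5, "business acumen": 3}

print(rating_gaps(self_view, manager_view))
# -> {'works well with others': 2}: a gap worth a conversation, as noted next.
```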

TANYA OTT: That’s really easy when the two line up. It’s a little more complicated when the employee thinks they’re really adept or interested or skilled at something and the management goes, “Yeah, well, about that....”

KELLY MONAHAN: Exactly! Yes. And that creates [a] disengagement issue.

TANYA OTT: What are the advantages of this and what are the potential pitfalls or disadvantages of this?

STACEY PHILPOT: It’s a great question, and nothing is perfect. What we’re saying is a step forward: Let’s start by defining what you mean by potential and let’s have a reliable way to measure what we mean by potential, because that’s better than what we’ve got. But of course if a manager is rating someone in terms of how smart they are or motivated they are, there are still some human biases that are going to inform that. So it’s not perfect, if you will.

The second thing is, it’s also about context. I’m a big believer that human behavior isn’t just driven by personality. We don’t just do things because of who we are. We’re influenced by our context, which is why we behave differently in different situations. One of the challenges for people is that sometimes they’re not seen as a future leader because they’re acting in accordance with the job description of the job they currently have. And so helping managers ask, “Does this person not seem ready because they’re doing really [well] in their current job, or because they don’t have the potential to evolve into what we need from them?” That can be the hardest thing for people to really think through. And it happens a lot, so I think that’s another thing to be mindful of.

TANYA OTT: I’m not going to try to impress you by using something like “inter-rater reliability,” blah, blah, blah.

STACEY PHILPOT: (laughs)

TANYA OTT: My husband is a research methods and assessment guy and so I get inundated with this stuff. So this is all really interesting. Now, you mentioned scaling these things out. If you’re in a large organization, how well do these kinds of screening tools scale out and what does that do in terms of saving time and money and those other things that we’re also looking at in this process?

STACEY PHILPOT: Okay, I’ll match your comment about inter-rater reliability. There’s no one single answer that’s going to solve every use case. The way we think about it is: the more critical the hire or the role, the more valid, reliable, and rigorous an assessment approach is needed.

For example, we have what we call a Full Executive Assessment. It’s something we actually have peer reviewed by a second psychologist, so that we look at inter-rater reliability. Meaning, if two people look at the same data, do they reach the same conclusions? So you’ve got multiple points of data and you’ve got multiple people, and that’s the best way to take bias out of the equation. That’s for things like CEO succession or executive committee members or really key strategic hires, but it does take a little bit of time.
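[Editor’s note: Inter-rater reliability can be checked with standard statistics. Below is a small Python sketch that computes percent agreement and Cohen’s kappa for two assessors rating the same candidates; the candidate ratings are made up, and this is a generic reliability check rather than the firm’s assessment method.]

```python
# Sketch: do two assessors reach the same conclusions from the same data?
# Percent agreement and Cohen's kappa on made-up categorical ratings.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical readiness calls by two psychologists on six candidates.
rater_1 = ["ready", "develop", "ready", "not ready", "develop", "ready"]
rater_2 = ["ready", "develop", "develop", "not ready", "develop", "ready"]

agreement = sum(a == b for a, b in zip(rater_1, rater_2)) / len(rater_1)
print(f"percent agreement: {agreement:.2f}, "
      f"Cohen's kappa: {cohens_kappa(rater_1, rater_2):.2f}")
```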

Then there are times where you really want to assess someone because you want to develop them. You want them to get insight or feedback. You’re not really deciding whether they’re going to get a new job, but you want them to know what kind of leader they are and where their strengths and weaknesses are. That’s a place where things like 360 feedback are fantastic, because people get a sense of how they’re perceived by other people and you can usually do that against a leadership framework or a leadership model.

360s aren’t great around deciding whether somebody gets a job, because there’s so much variability between raters. It’s not a very reliable way to do that. But it is really helpful for helping someone improve their awareness and perhaps become a better leader on their own initiative.

Then there are situations, at a broad level, where you’re a leader and you’ve acquired a company. We’ve worked with a lot of companies in M&A situations, and all of a sudden you have thousands more people that you have no data on and no relationships with. How do you find the hidden gems there? How do you find the leaders there? That’s where you need some broader-based tools, where managers can quickly assess the potential of each person in some data-based ways. We have some screening tools for those use cases as well.

TANYA OTT: We’ve covered a whole lot of territory here. Any other tips or advice that you have for organizations that haven’t been using a data-driven approach but want to start implementing it?

KELLY MONAHAN: The one thing I would say as a recommendation for organizations, above and beyond a data tool, is… You know, [in a previous podcast episode] we had talked about an orchestra back in the 1970s [that] was having a lot of issues with diversity, specifically gender diversity. Even though women made up about 48 percent of those graduating with a degree in music, they represented a fraction of the actual participants. The only way the orchestra industry figured out to fix this problem was to do blind tryouts. Much like how the popular TV show The Voice [does].

We know it’s an inherently biased process. If you watch someone singing as opposed to just listening, you’re going to have a completely different perspective. So the one thing we do recommend, above and beyond using data, is trying to figure out ways in which we can blind the leadership selection, hiring, and promotion processes. There are ways you can form committees and have a third-party, objective point of view that looks at past performance data, that looks at 360 feedback and reviews, but that doesn’t have the name, the function, or the gender. We think that’s also a promising step toward creating a more inclusive environment and starting to see our own diversity numbers in the workplace increase.
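[Editor’s note: One simple way to operationalize that kind of blinding is to strip identifying fields from candidate records before a committee reviews them. The Python sketch below assumes hypothetical field names; a real process would also need to watch for indirect identifiers in free-text feedback.]

```python
# Sketch of blinding candidate records before a selection or promotion review.
# Field names are hypothetical: keep performance data and 360 summaries,
# drop attributes that invite affinity bias.
IDENTIFYING_FIELDS = {"name", "gender", "function", "location", "alma_mater"}

def blind(record, candidate_id):
    """Return a copy of the record with identifying fields removed."""
    cleaned = {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}
    cleaned["candidate_id"] = candidate_id  # anonymous handle for discussion
    return cleaned

candidates = [
    {"name": "J. Smith", "gender": "F", "function": "operations",
     "performance_ratings": [4, 5, 4], "feedback_360": "strong peer trust"},
    {"name": "K. Lee", "gender": "M", "function": "sales",
     "performance_ratings": [5, 3, 4], "feedback_360": "mixed upward feedback"},
]

for i, person in enumerate(candidates, start=1):
    print(blind(person, candidate_id=f"C{i:03d}"))
```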

TANYA OTT: Stacey Philpot and Kelly Monahan delve deeper into the research in their article “A data-driven approach to identifying future leaders.” It’s published in [the] MIT Sloan Management Review. You can find a link at dupress.deloitte.com.

TANYA OTT: I’m Tanya Ott for the Press Room. Catch ya again in two weeks.

This podcast is provided by Deloitte and is intended to provide general information only. This podcast is not intended to constitute advice or services of any kind. For additional information about Deloitte, go to Deloitte.com/about.