In horticulture, there is an oft-repeated piece of advice for new gardeners: “Right plant, right place.” It’s pure wisdom. When we choose a location that naturally provides the right environment for a given plant, that plant is likely to thrive with minimal care. The same plant, placed in a less suitable location, is likely to underperform no matter how much care it receives.
Similarly, when we succeed in putting the right people in the right roles, we’re setting them up for success. They are far more likely to thrive than those whose skills are not a good match for their roles. This is good not only for employees but also for their teams and the organization, because employees whose skills match the demands and challenges of their roles are better able to contribute to team health and productivity. Everyone benefits. That’s an ethical outcome.
A few gold-standard applications of the “right person, right role” principle are out there. Most of these involve recruitment programs that begin with an educational program or internship. These programs, when well-designed, create an excellent opportunity for employers to determine how effectively prospective employees are likely to tackle the challenges and requirements of particular roles. When educational or internship programs provide benefits for both prospective employees and the employer, they can certainly be counted as ethical.
Unfortunately, few businesses have the resources required to support this kind of program, and even companies that run such programs need to figure out who should be accepted into them in the first place. For most recruitment, employers rely on tools like assessments, background checks, interviews, and resumé scans. These stand in as proxies—approximations—for the gold standard.
Most employers use these tools in recruitment processes that involve several steps, each of which successively winnows down the pool of candidates. Let’s take a look at the ethical implications of some typical winnowing practices in light of the right person, right role principle.
Step 1—the first cut
The first step in most recruitment processes is often the least expensive and easiest-to-execute method at an employer’s disposal. Tools like resumé scans and self-report surveys dominate this step. Unfortunately, although these tools are inexpensive, they have little predictive power.
The typical first cut poses ethical issues. As noted above, first-cut proxies generally have little predictive power. This lack of predictive power means that many, if not most, suitable applicants will be immediately deprived of the opportunity to compete for a role because they have been excluded for no good reason. This is unfair to applicants and expensive for employers.
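To get a feel for how costly a low-validity first cut can be, here is a minimal simulation sketch. It is illustrative, not a model of any real hiring process: it assumes normally distributed suitability, a weak screen whose validity is about 0.17 (roughly 3% of variance explained, like the figures cited below) and a strong screen with validity about 0.65 (roughly 42%), and a first cut that keeps the top 20% of candidates on the screening signal. All of these parameter choices are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical "true suitability" for the role (standardized).
ability = rng.standard_normal(n)

def proxy(validity):
    # A screening signal that correlates `validity` with true suitability.
    noise = rng.standard_normal(n)
    return validity * ability + np.sqrt(1 - validity**2) * noise

weak = proxy(0.17)    # e.g. a resumé scan (~3% of variance explained)
strong = proxy(0.65)  # e.g. a mental ability test (~42% of variance explained)

# Call the top 10% on true suitability the "genuinely suitable" candidates.
suitable = ability > np.quantile(ability, 0.9)

def survival(screen, keep=0.2):
    # Fraction of genuinely suitable candidates who survive a first cut
    # that keeps only the top 20% of candidates on the screening signal.
    passed = screen > np.quantile(screen, 1 - keep)
    return passed[suitable].mean()

print(f"weak screen keeps   {survival(weak):.0%} of suitable candidates")
print(f"strong screen keeps {survival(strong):.0%} of suitable candidates")
```

Under these assumptions, the weak screen eliminates most of the genuinely suitable candidates at the very first step, while the strong screen retains a clear majority of them—the pattern the argument above describes.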
Where do the metrics you use in your first cut land in the figure below? The percentages in the center column represent the proportion of variance in role success explained by the predictor in each row. For example, statisticians would say that 42% of the variance in role success is explained by general mental ability.
It isn’t possible to determine the total predictive power of your current approach by adding these percentages together. That’s because the predictors are likely to “share variance.” For example, job experience, training, and years of education are usually related to one another—consequently, they share variance. Put job experience, training, and years of education together, and the total variance explained probably wouldn’t exceed the 3% explained by job experience.
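The shared-variance point can be made concrete with the standard formula for the variance explained jointly by two standardized predictors. The formula itself is textbook multiple regression; the specific numbers below (two predictors that each explain ~3% of the variance and correlate 0.9 with each other) are hypothetical figures chosen to mirror the example above.

```python
import math

def combined_r2(r_y1, r_y2, r_12):
    """Variance in the outcome explained jointly by two standardized
    predictors, given their validities (r_y1, r_y2) and the correlation
    between the predictors themselves (r_12)."""
    return (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12**2)

# Each predictor explains ~3% of the variance on its own (validity ~0.17),
# and the two predictors correlate 0.9 with each other (assumed overlap).
r = math.sqrt(0.03)
print(f"{combined_r2(r, r, 0.9):.1%}")  # roughly 3%, not 3% + 3% = 6%
```

Because the two predictors overlap so heavily, adding the second one barely moves the total: the combined figure stays near 3% rather than doubling.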
Step 2—the second cut
This step usually involves reading resumés and checking references. Like the first-cut practices, neither of these has a great deal of predictive power. Reference checks, on average, explain only 7% of the variance in role success, and none of the predictors provided in resumés, such as education information, GPA, work experience, or interests, explains more than 12% of the variance. It’s difficult to say how much of the variance these predictors would explain if we considered them together, but given how closely related many of them are, it is unlikely that the total would be much greater than the 12% explained by GPA.
The typical second cut poses the same ethical issues explained in step 1. Additional suitable candidates are likely to be deprived, again on questionable evidence, of the opportunity to compete for a role, and more time and money have been wasted.
Step 3—the third cut
Around this point in the recruitment process, the candidate pool has generally become more manageable. In this step, the remaining candidates are often interviewed and/or subjected to various forms of testing. This is the point at which many employers finally ask candidates to take some kind of mental ability test—the best predictor of role success.
The typical third cut presents three ethical issues. First, mental ability is by far the best predictor of role success. Yet only a few of the original candidates will have had a fair opportunity to demonstrate their mental ability. Many, if not most, candidates with the requisite mental skills have already been eliminated on the basis of flimsy evidence—evidence with little predictive power.
Second, none of the candidates have had the opportunity to demonstrate their performance on the best predictor of role success before their ethnicity, race, and gender are known. Given the ubiquity of unconscious prejudice, this is a major ethical concern.
Finally, waiting until so late in the recruitment process to implement the best predictor of role success reduces the number of suitable candidates who are seriously considered for a role. This means that employers often end up choosing from a very small pool of “okay” candidates rather than a pool of candidates who are a great fit for the targeted role. Hiring someone who is an “okay” fit for a role is bad for the hire, the people who work with the hire, the employer, and the employer’s clients.
Right person, right role
We know from 100 years of research on employment outcomes that the best predictor of role success to date has been mental ability. On average, mental ability appears to explain around 40% of the variance in role success. The right person, right role principle, along with the ethical considerations discussed above, suggests that if we want to craft a fair recruitment process, mental ability should be measured early. It’s the right thing to do if we want to reduce the number of qualified candidates who slip through the cracks and maximize the number of employees who are a good fit for their roles.
That said, 40% is not 100%, and all predictors have their downsides. A few ethical issues remain:
- Some highly qualified people are likely to be left behind because they simply don’t do well on mental ability tests. This is a particularly important consideration when mental ability assessments measure skills that are not directly related to the kind of work involved in a role. Fairness (and prudence) demand that the skills we measure with mental ability tests are relevant.
- Sometimes highly qualified people are left behind because the way an assessment is taken is discriminatory. Mental ability assessments used in recruitment should not prevent potentially qualified candidates from demonstrating relevant skills.
- Many of the mental ability assessments used in recruitment are poorly researched and much less reliable than the assessments included in published research. Fairness (and prudence) demand that we question vendors’ claims—especially when a so-called mental ability assessment is actually a self-report survey or if a vendor claims that their mental ability assessment can be completed in a few minutes.
- There is a limit to how much mental ability, on its own, can tell us about a person’s fit with any given role. Seasoned employers can tell you from experience that hiring the candidate with the highest mental ability score isn’t necessarily a great idea. Simply choosing the candidate with the highest score is like choosing a fruit tree that has a reputation for producing the most fruit without considering the best place for that tree. Just as even the most productive fruit tree will not flourish if the soil it’s planted in is unsuitable, an applicant with high mental ability will not thrive in a role that does not require that level (or kind) of mental ability. For mental ability to perform optimally as a key predictor, it needs to be augmented with information about how an applicant’s skills dovetail with the challenges and requirements of a particular role.