An ideal recruitment process

Recruitment disrupted

Theo Dawson
6 min read · Oct 9, 2020


After decades of research and development, my colleagues and I have launched an entirely new approach to recruitment assessment. This approach will improve recruitment outcomes, reduce bias, and lower recruitment costs while giving early adopters a big advantage over competitors in the recruitment market.

The predictive validity of recruitment assessments

For nearly a century, the most predictive recruitment assessments have been multiple-choice tests of mental ability. Their popularity has waxed and waned, but the evidence has been remarkably stable. Despite accusations of bias and irrelevance, their predictive validity is strong and undeniable.

Predictive validity is important. As Hunter, Schmidt, & Judiesch (1990) point out, “the use of hiring methods with increased predictive validity leads to substantial increases in employee performance as measured in percentage increases in output, increased monetary value of output, and increased learning of job-related skills.”
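To make the stakes concrete, selection utility is often estimated with the Brogden-Cronbach-Gleser model: the expected performance gain per hire is the product of the assessment's validity, the dollar-value standard deviation of job performance, and the average standard score of those selected. Here is a minimal sketch; the validity, dollar figure, and selection ratio are illustrative assumptions, not numbers from this article:

```python
# Brogden-Cronbach-Gleser utility: expected gain per hire is
# validity * SDy * (mean z-score of selected applicants).
# All inputs below are illustrative assumptions.
from statistics import NormalDist

def mean_z_of_selected(selection_ratio: float) -> float:
    """Mean standard score of hires when the top fraction
    `selection_ratio` of applicants is selected."""
    cutoff = NormalDist().inv_cdf(1 - selection_ratio)
    return NormalDist().pdf(cutoff) / selection_ratio

validity = 0.65         # assumed predictive validity
sd_output = 20_000      # assumed SD of performance, dollars/year
selection_ratio = 0.10  # hire the top 10% of applicants

gain = validity * sd_output * mean_z_of_selected(selection_ratio)
print(f"Expected gain per hire: ${gain:,.0f}/year")  # about $22,800
```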

The figure below shows the average predictive validities for various forms of assessment used in recruitment contexts. The percentages indicate how much each form of assessment contributes to predicting performance: its predictive power. When deciding which assessments to use in recruitment, the goal is to achieve the greatest possible predictive power with the fewest possible assessments.

Assessments in this figure are color-coded to indicate which dimensions or traits are targeted by each form of assessment—(1) mental ability or skills, (2) knowledge, (3) behavior, (4) personality & values, (5) emotion, and (6) training & experience. Rainbows indicate multiple dimensions. It is clear that tests of mental ability stand out as the best predictor.

Schmidt, Oh, & Shaffer, 2016
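For a sense of how much a second assessment adds, the combined predictive power of two assessments follows the standard two-predictor multiple-correlation formula. A small sketch; the validities and the assumed near-zero intercorrelation are illustrative, not values read off the figure:

```python
from math import sqrt

def multiple_r(r1: float, r2: float, r12: float) -> float:
    """Multiple correlation of performance with two predictors,
    given each predictor's validity (r1, r2) and the predictors'
    intercorrelation (r12)."""
    return sqrt((r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2))

# Illustrative values: mental ability ~0.65, integrity ~0.46,
# assumed intercorrelation near zero.
print(round(multiple_r(0.65, 0.46, 0.0), 2))  # 0.8
```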

Although tests of mental ability outperform all other recruitment assessments, they carry some baggage. First, historically, they were shown to be biased against women and certain socio-economic or ethnic groups. A second problem is their apparent irrelevance. For example, it’s difficult to see how the ability to mentally rotate two-dimensional images (a common item type in aptitude tests) relates to change management or product design.

An ideal recruitment process

As in any process that involves selecting from a large pool of possibilities, the most efficient and effective place to begin is with the very best predictor available. The second-best predictor would come next, the third would follow, etc.

Based on the predictive validities shown in the figure above, I'd build a process with the following steps (see the screening-funnel sketch after the footnote below):

  1. mental ability & integrity
  2. background
  3. competency
  4. culture fit

Everything else in the figure either relates strongly to one of the steps in this list or has such low predictive validity that it would be a waste of money.

*I include integrity in step 1 because the combination of integrity and mental ability predicts more variance in recruitment outcomes than any other combination of predictors (Schmidt, Oh, & Shaffer, 2016).
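To show the sequencing logic in code, the sketch below screens a candidate pool stage by stage, so the most predictive filter is applied to everyone and later, costlier steps see only the survivors. The stage names mirror the list above; the cut scores and pass rule are hypothetical:

```python
# Hypothetical staged screening funnel: each stage filters the pool
# before the next, in descending order of predictive validity.
STAGES = [
    ("mental ability & integrity", 0.70),
    ("background", 0.60),
    ("competency", 0.60),
    ("culture fit", 0.50),
]

def screen(candidates: dict[str, dict[str, float]]) -> list[str]:
    """Return candidates who clear every stage's cut score, in order."""
    remaining = list(candidates)
    for stage, cut in STAGES:
        remaining = [c for c in remaining
                     if candidates[c].get(stage, 0.0) >= cut]
    return remaining

pool = {
    "A": {"mental ability & integrity": 0.82, "background": 0.75,
          "competency": 0.66, "culture fit": 0.58},
    "B": {"mental ability & integrity": 0.64, "background": 0.90,
          "competency": 0.80, "culture fit": 0.70},
}
print(screen(pool))  # ['A']; B is cut at stage 1, later strengths unseen
```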

What’s wrong with this ideal process?

There are a couple of things wrong with the process just described. First of all, in the past, the test of mental ability would have been a high-quality aptitude test. But as already mentioned, aptitude tests carry some baggage. Even worse, they are expensive—way too expensive to administer to every single applicant, which would be necessary if we want to put the best predictor first.

Second, there is a deeper issue that is rarely addressed in mental ability assessment for recruitment: the relation between mental ability and the role being filled. Conventional assessments of mental ability tell us which candidate received the highest score, but nothing about how that score relates to the demands of the role. Indeed, hiring someone simply because they have a high mental ability score can backfire. Recruiting employees whose mental abilities outstrip those of their teammates can lead to serious problems, as anyone who has been confronted with a superstar situation will attest. A robust recruitment process requires both a mental ability score and a measure of the fit between a candidate's mental ability and the mental demands of the role.
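To illustrate why fit matters independently of the raw score, consider a hypothetical fit measure that penalizes mismatch in either direction between a candidate's complexity score and the complexity demands of the role. The scale values, tolerance, and function shape below are assumptions for illustration, not Lectica's scoring method:

```python
def role_fit(candidate_score: float, role_demand: float,
             tolerance: float = 0.25) -> float:
    """Hypothetical fit measure: 1.0 at a perfect match, declining
    linearly as the candidate's complexity score departs from the
    role's complexity demand in either direction."""
    gap = abs(candidate_score - role_demand)
    return max(0.0, 1.0 - gap / tolerance)

# A higher scorer can be a worse fit (scale values are assumptions):
print(round(role_fit(11.25, 11.2), 2))  # close match -> 0.8
print(round(role_fit(11.60, 11.2), 2))  # higher score, poor fit -> 0.0
```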

Say hello to Lectica First™

Lectica First is designed specifically for making the first cut in recruitment processes. It measures the following dimensions of performance:

  1. Complexity level: the level of mental skill evident in a candidate's responses to a relevant real-world scenario;
  2. Role fit: the match between the complexity of the role and the complexity score awarded to the candidate's responses;
  3. Logical coherence: the mental skill evident in the arguments made in assessment responses; and
  4. Integrity: ethical awareness and salience (optional).

*Role fit, as measured here, is distinct from person-job fit, person-organization fit, and culture fit.
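For concreteness, here is one way the four dimensions might travel through a screening pipeline as a single record. The field names and the pass rule are hypothetical illustrations, not Lectica First's actual output format:

```python
from dataclasses import dataclass

@dataclass
class LecticaFirstResult:
    """Hypothetical container for the four reported dimensions."""
    complexity_level: float          # developmental level of the response
    role_fit: float                  # match to the role's complexity
    logical_coherence: float         # quality of argumentation
    integrity: float | None = None   # optional ethical awareness score

def passes_first_cut(result: LecticaFirstResult,
                     min_fit: float = 0.6) -> bool:
    # Hypothetical rule: advance candidates whose role fit clears a bar.
    return result.role_fit >= min_fit
```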

Lectica First makes it possible to pre-screen every candidate for the best predictors of recruitment success before you look at a single resume or speak to a single reference. To learn more about how it works and how Lectical Assessments address some of the concerns raised about aptitude tests, visit the Lectica First page on our website.

As part of our nonprofit mission, we provide every applicant who takes a Lectical Assessment as part of Lectica First with a complimentary report designed to support the optimal development of essential real-world skills.

Call to action

We’d like to invite you to experiment with Lectica First. Please feel free to contact us with your questions and ideas.

Update

Meta-analysis is an evolving discipline in which there is disagreement about methods. Since this article was first published, alternative meta-analytic methods have been proposed that call into question the size of the validity estimates for recruitment assessments (Sackett, Zhang, Berry, & Lievens, 2021). We'll be keeping an eye on this debate.

References

Arthur, W., Day, E. A., McNelly, T. A., & Edens, P. S. (2003). A meta‐analysis of the criterion‐related validity of assessment center dimensions. Personnel Psychology, 56(1), 125–153.

Becker, N., Höft, S., Holzenkamp, M., & Spinath, F. M. (2011). The Predictive Validity of Assessment Centers in German-Speaking Regions. Journal of Personnel Psychology, 10(2), 61–69.

Beehr, T. A., Ivanitskaya, L., Hansen, C. P., Erofeev, D., & Gudanowski, D. M. (2001). Evaluation of 360 degree feedback ratings: relationships with each other and with performance and selection predictors. Journal of Organizational Behavior, 22(7), 775–788.

Dawson, T. L., & Stein, Z. (2004). National Leadership Study results. Prepared for the U.S. Intelligence Community.

Gaugler, B. B., Rosenthal, D. B., Thornton, G. C., & Bentson, C. (1987). Meta-analysis of assessment center validity. Journal of Applied Psychology, 72(3), 493–511.

Hunter, J. E., & Hunter, R. F. (1984). The validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72–98.

Hunter, J. E., Schmidt, F. L., & Judiesch, M. K. (1990). Individual differences in output variability as a function of job complexity. Journal of Applied Psychology, 75, 28–42.

Johnson, J. (2001). Toward a better understanding of the relationship between personality and individual job performance. In M. R. Barrick (Ed.), Personality and work: Reconsidering the role of personality in organizations (pp. 83–120).

McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988a). A meta-analysis of the validity of training and experience ratings in personnel selection. Personnel Psychology, 41(2), 283–309.

McDaniel, M. A., Schmidt, F. L., & Hunter, J. E. (1988b). Job experience correlates of job performance. Journal of Applied Psychology, 73, 327–330.

McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). Validity of employment interviews. Journal of Applied Psychology, 79, 599–616.

Rothstein, H. R., Schmidt, F. L., Erwin, F. W., Owens, W. A., & Sparks, C. P. (1990). Biographical data in employment selection: Can validities be made generalizable? Journal of Applied Psychology, 75, 175–184.

Sackett, P. R., Zhang, C., Berry, C. M., & Lievens, F. (2021). Revisiting meta-analytic estimates of validity in personnel selection: Addressing systematic overcorrection for restriction of range. Journal of Applied Psychology. doi:10.1037/apl0000994

Schmidt, F. L., Oh, I.-S., & Shaffer, J. A. (2016). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 100 years of research findings. Working paper.

Stein, Z., Dawson, T., Van Rossum, Z., Hill, S., & Rothaizer, S. (2013, July). Virtuous cycles of learning: using formative, embedded, and diagnostic developmental assessments in a large-scale leadership program. Proceedings from ITC, Berkeley, CA.

Tett, R. P., Jackson, D. N., & Rothstein, M. (1991). Personality measures as predictors of job performance: A meta-analytic review. Personnel Psychology, 44, 703–742.

Zeidner, M., Matthews, G., & Roberts, R. D. (2004). Emotional intelligence in the workplace: A critical review. Applied Psychology: An International Review, 53(3), 371–399.
