TY - JOUR
T1 - An Updated Meta-Analysis of the Interrater Reliability of Supervisory Performance Ratings
AU - Zhou, You
AU - Sackett, Paul R.
AU - Shen, Winny
AU - Beatty, Adam S.
N1 - Publisher Copyright:
© 2024 American Psychological Association
PY - 2024
Y1 - 2024
N2 - Given the centrality of the job performance construct to organizational researchers, it is critical to understand the reliability of the most common way it is operationalized in the literature. To this end, we conducted an updated meta-analysis on the interrater reliability of supervisory ratings of job performance (k = 132 independent samples) using a new meta-analytic procedure (i.e., the Morris estimator), which includes both within- and between-study variance in the calculation of study weights. An important benefit of this approach is that it prevents large-sample studies from dominating the results. In this investigation, we also examined different factors that may affect interrater reliability, including job complexity, managerial level, rating purpose, performance measure, and rater perspective. We found a higher interrater reliability estimate (r = .65) compared to previous meta-analyses on the topic, and our results converged with an important, but often neglected, finding from a previous meta-analysis by Conway and Huffcutt (1997), such that interrater reliability varies meaningfully by job type (r = .57 for managerial positions vs. r = .68 for nonmanagerial positions). Given this finding, we advise against the use of an overall grand mean of interrater reliability. Instead, we recommend using job-specific or local reliabilities for making corrections for attenuation.
KW - interrater reliability
KW - job performance
KW - meta-analysis
KW - supervisory ratings
UR - http://www.scopus.com/inward/record.url?scp=85188502697&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85188502697&partnerID=8YFLogxK
U2 - 10.1037/apl0001174
DO - 10.1037/apl0001174
M3 - Article
C2 - 38270992
AN - SCOPUS:85188502697
SN - 0021-9010
JO - Journal of Applied Psychology
JF - Journal of Applied Psychology
ER -