Volume 17, Issue 1
Series on Upgrading Methodology in Clinical Psychology
Evaluation of Interrater Reliability in Clinical Research: A Brief Introduction to Interrater Reliability
Abstract
Nowadays, estimating the degree of interrater agreement is considered essential for demonstrating the quality of clinical research studies that require independent judges, or raters, to quantify some aspect of behavior. When judges are required to provide a rating, whether on a nominal, ordinal, or interval scale, a suitable level of interrater reliability is necessary to establish the validity of the study. If this stage is omitted or not reported, readers may have misgivings about the quality of the data collected and, consequently, about the validity of the interpretations and conclusions of the study. However, there is no single, unified approach to establishing interrater reliability. This brief article reviews some of the most widely used approaches and warns about ways in which they are sometimes applied inappropriately.
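To make one of the indices mentioned in the keywords concrete, Cohen's kappa for two raters assigning nominal categories corrects the observed proportion of agreement for the agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e). The following is a minimal illustrative sketch, not code from the article; the function name `cohens_kappa` is our own:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning nominal categories.

    Illustrative sketch: assumes equal-length rating lists and at
    least some chance disagreement (p_e < 1).
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: derived from each rater's marginal category frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)
```

With perfect agreement the function returns 1.0; when observed agreement equals chance agreement it returns 0.0, which is why kappa is preferred over raw percent agreement for nominal ratings.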
Keywords
Interrater reliability, reliability and validity, intraclass correlation, Cohen's kappa