Reliability of Psychiatric Diagnosis: II. The Test/Retest Reliability of Diagnostic Classification

John E. Helzer, Paula J. Clayton, Robert Pambakian, Theodore Reich, Robert Woodruff, Michael A. Reveley

Research output: Contribution to journal › Article › peer-review

111 Scopus citations

Abstract

In a study of interrater diagnostic reliability, 101 psychiatric inpatients were independently interviewed by physicians using a structured interview. Newly admitted patients were randomly selected and examined by one of three psychiatrists. A second psychiatrist reexamined the same patient about 24 hours later. Interviews from the two examinations were evaluated independently, and diagnoses were made on the basis of objective criteria. The degree of diagnostic agreement for the two examinations was calculated using the kappa statistic. Agreement was found to be high as compared to other studies in the psychiatric literature, despite the fact that in most previous investigations diagnoses were not made independently. The results were also compared to studies of reliability of medical judgments. Possible reasons for the high interrater reliability are discussed and include the use of a structured interview and objective diagnostic criteria.
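The abstract's agreement measure, Cohen's kappa, corrects raw percent agreement for agreement expected by chance from each rater's marginal diagnosis rates. A minimal sketch of the computation (the diagnosis labels and rating lists below are hypothetical, not data from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance from marginal rates.
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of cases on which the two diagnoses agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal rate, summed over labels.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical diagnoses from two independent examinations of five patients.
first  = ["depression", "mania", "depression", "schizophrenia", "depression"]
second = ["depression", "mania", "anxiety",    "schizophrenia", "depression"]
print(round(cohens_kappa(first, second), 3))  # → 0.706
```

Here observed agreement is 4/5 = 0.8 while chance agreement is 0.32, so kappa = 0.48 / 0.68 ≈ 0.706; a kappa of 0 would indicate agreement no better than chance, and 1 perfect agreement.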

Original language: English (US)
Pages (from-to): 136-141
Number of pages: 6
Journal: Archives of General Psychiatry
Volume: 34
Issue number: 2
DOIs
State: Published - Feb 1977
