Because of the wide use and potential value of the AIS and ISS in rating the severity of injuries in both vehicular and nonvehicular trauma patients, it is essential that three hitherto unanswered but crucial methodologic questions concerning the index be resolved: (1) whether AIS scores coded from the emergency department encounter sheet are comparable to scores coded from the inpatient record; (2) whether different coders assign the same severity scores to the same records (interrater reliability); and (3) whether a coder assigns consistent scores on repeated review of the same records (intrarater reliability).
In the present study, inpatient charts for 98 trauma admissions to Johns Hopkins Hospital during the period November 1976 to May 1977 were obtained from medical records. Fifty of these patients were involved in motor vehicle accidents. The remaining 48 patients were victims of nonvehicular trauma.
Three coders were chosen to participate in the present study. Coder 1 was a research worker with experience in medical abstracting and a sound knowledge of medical terminology; she had no clinical experience. The other two coders were nurses who worked in the Johns Hopkins Adult Emergency Department.
To examine the comparability of AIS coding from the emergency department encounter sheet with coding from the inpatient record, coder 1 was asked to record and rate the severity of all injuries noted on the emergency department records for the subsample of 50 motor vehicle accident patients. One month later she reviewed and rated the injuries noted on the corresponding inpatient charts. To measure interrater reliability, all three coders rated injuries from the same 98 inpatient records, and differences in coding between the research worker and the nurses with clinical backgrounds were examined. Intrarater reliability was assessed by having the coders review and score a subsample of the charts four months after the initial chart review.

The kappa statistic, first developed by Cohen (1960, 1968) and later generalized by Fleiss (1971) to measure agreement among more than two raters, was used to measure the agreement between severity scores obtained from different sources and from different coders.
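For orientation, the standard two-rater formulation of Cohen (1960) expresses kappa as the observed agreement corrected for the agreement expected by chance:

\[
\kappa = \frac{p_o - p_e}{1 - p_e},
\]

where \(p_o\) is the observed proportion of ratings on which the two raters agree and \(p_e\) is the proportion of agreement expected by chance from the raters' marginal rating frequencies. A value of 1 indicates perfect agreement and a value of 0 indicates agreement no better than chance; Fleiss's (1971) extension applies the same chance-corrected logic to agreement among more than two raters.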