Using US data for 1986–1998 fatal crashes, we employed matched-pair analysis methods to estimate that the relative risk of death among belted compared with unbelted occupants was 0.39 (95% confidence interval (CI) 0.37–0.41). This differs from relative risk estimates of about 0.55 in studies that used crash data collected prior to 1986. Using 1975–1998 data, we examined and rejected three theories that might explain the difference between our estimate and older estimates: (1) differences in the analysis methods; (2) changes related to car model year; (3) changes in crash characteristics over time. A fourth theory, that the introduction of seat belt laws would induce some survivors to claim belt use when they were not restrained, could explain part of the difference between our estimate and older estimates; but even in states without seat belt laws, from 1986 through 1998, the relative risk estimate was 0.45 (95% CI 0.39–0.52). All of the difference between our estimate and older estimates could be explained by some misclassification of seat belt use. Relative risk estimates would move away from 1, toward their true value, if misclassification of both the belted and unbelted decreased over time, or if the degree of misclassification remained constant while the prevalence of belt use increased. We conclude that estimates of seat belt effects based upon data prior to 1986 may be biased toward 1 by misclassification.
Keywords:
Traffic accidents; Automobiles; Seat belts; Matched-pair analysis
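
The direction of the bias described above can be checked with a short deterministic calculation. The sketch below is not from the paper: all parameter values (a true relative risk of 0.4, baseline death risk, and the sensitivity and specificity of recorded belt use) are hypothetical, and the calculation uses a simple unmatched cohort rather than the paper's matched-pair method. It illustrates the two claims in the abstract: nondifferential misclassification of belt use pulls the observed relative risk toward 1, and, with the degree of misclassification held constant, the observed estimate moves back toward its true value as belt-use prevalence rises.

```python
def observed_rr(prevalence, true_rr=0.4, baseline_risk=0.10,
                sensitivity=0.9, specificity=0.9):
    """Observed relative risk when recorded belt use is a
    misclassified version of true belt use (all values hypothetical)."""
    d_belt = baseline_risk * true_rr      # death risk if truly belted
    d_unbelt = baseline_risk              # death risk if truly unbelted
    p, q = prevalence, 1.0 - prevalence

    # Occupants *recorded* as belted: true positives + false positives.
    n_b = p * sensitivity + q * (1 - specificity)
    deaths_b = p * sensitivity * d_belt + q * (1 - specificity) * d_unbelt

    # Occupants *recorded* as unbelted: false negatives + true negatives.
    n_u = p * (1 - sensitivity) + q * specificity
    deaths_u = p * (1 - sensitivity) * d_belt + q * specificity * d_unbelt

    return (deaths_b / n_b) / (deaths_u / n_u)

for p in (0.1, 0.4, 0.7):
    print(f"belt-use prevalence {p:.0%}: observed RR = {observed_rr(p):.2f}")
```

With these assumed inputs, every observed estimate lies between the true value of 0.4 and 1, and the estimate at 70% prevalence is closer to 0.4 than the estimate at 10% prevalence, consistent with the abstract's argument that pre-1986 data (low belt use) would yield estimates more biased toward 1.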