There is a daily need to assess the quality of our work. On the crash track, the tests should be repeatable, the chosen test method should fulfil the test purpose, and every result should have an explanation. The tests performed may also be used to validate mathematical models, whose accuracy must then be assessed, or to show whether a new design or method influences performance. In either case, a quality assessment tool is needed. By applying the Objective Rating Method (ORM) to rear-end sled tests, Autoliv has previously shown that the BioRID II dummy allows for both repeatable and reproducible testing. Here, the ORM has been evaluated on frontal impact, side impact and component tests and the corresponding mathematical models.
For frontal impacts, test repeatability has been assessed, and the correlation between physical tests and mathematical models is shown. For side impacts, the test repeatability, test method predictability and mathematical model predictability have been assessed. The repeatability of frontal sled tests is comparable with that presented for rear-end sled tests, while the side impact sled test repeatability is generally somewhat lower.
Although the ORM has to be used with care and knowledge, it is a useful tool, especially for assessments of test repeatability and reproducibility. The ORM allows for agreement, in advance, on a quality level for tests and mathematical models. A benefit is that the ORM compares not only peak values but also curve shapes. Furthermore, the ORM requires only two tests to compare; many other methods require several tests, which are normally not available in daily work.
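To illustrate the kind of comparison described above (peak values and curve shapes between two test signals), the sketch below computes a simple peak-ratio score and a shape-correlation score for two time histories. The function names, formulas and synthetic signals are illustrative assumptions, not the published ORM definition.

```python
import numpy as np


def peak_score(reference: np.ndarray, comparison: np.ndarray) -> float:
    """Illustrative peak comparison: ratio of absolute peaks, capped at 1.0.

    A simplified, assumed stand-in for a peak-value comparison term.
    """
    ref_peak = np.max(np.abs(reference))
    cmp_peak = np.max(np.abs(comparison))
    if ref_peak == 0 and cmp_peak == 0:
        return 1.0
    return min(ref_peak, cmp_peak) / max(ref_peak, cmp_peak)


def shape_score(reference: np.ndarray, comparison: np.ndarray) -> float:
    """Illustrative curve-shape comparison: normalised correlation of two
    signals assumed to be sampled on the same time base.
    """
    ref = reference - reference.mean()
    cmp_ = comparison - comparison.mean()
    denom = np.linalg.norm(ref) * np.linalg.norm(cmp_)
    return float(np.dot(ref, cmp_) / denom) if denom > 0 else 1.0


# Example: two hypothetical acceleration traces from repeated sled tests.
t = np.linspace(0.0, 0.15, 301)                    # 150 ms, 0.5 ms sampling
test_a = 40 * np.exp(-((t - 0.060) / 0.020) ** 2)  # synthetic pulse A
test_b = 38 * np.exp(-((t - 0.062) / 0.021) ** 2)  # synthetic pulse B

print(f"peak score:  {peak_score(test_a, test_b):.3f}")
print(f"shape score: {shape_score(test_a, test_b):.3f}")
```

Scores close to 1.0 for both terms would indicate that the two tests agree in magnitude as well as in curve shape, which is the intuition behind rating a pair of tests rather than requiring a large test series.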