Objective evaluation (OE) methods provide quantitative insight into how well human body models (HBMs) predict a biomechanical response. Two techniques for this purpose are CORA and the ISO/TS 18571 standard. These ostensibly objective techniques differ in their underlying algorithms, which may lead to discrepancies when interpreting model performance. The objectives of this study were 1) to apply both techniques to a biomechanical dataset from an HBM and compare the resulting scores, and 2) to survey subject matter experts (SMEs) to determine which OE method agrees more consistently with SME interpretation. The GHBMC average male HBM was used in five simulations of biomechanics experiments, producing 58 time-history curves. Because both techniques produce phase, magnitude, and shape scores, 174 pairwise comparisons were made. ISO had lower average scores than CORA for each component rating metric, indicating a stricter evaluation. Correlations between CORA and ISO were strongest for phase (R²=0.66) and weakest for shape (R²=0.27). Statistical analysis revealed significant differences between the two OE methods for each component rating metric. SMEs (n=40) were then surveyed to provide intuitive scoring of how well the computational traces matched the experiments. SME interpretation was found to statistically agree with the ISO shape and phase metrics, but differed significantly from the ISO magnitude rating; it agreed with the CORA magnitude rating. The findings of the study suggest a mixed approach to reporting objective ratings, using the magnitude method from CORA and the shape and phase methods from ISO.
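
The correlation and paired-comparison analysis summarized above can be sketched as follows. This is a minimal illustration only: it assumes simple linear regression for the R² values and a Wilcoxon signed-rank test for the paired differences (the abstract does not state which statistical test was used), and the function name `compare_component_scores` and the placeholder scores for the 58 curves are hypothetical, not the study's data.

```python
import numpy as np
from scipy import stats

def compare_component_scores(cora_scores, iso_scores):
    """Compare paired component ratings (e.g., phase, magnitude, or shape)
    from CORA and ISO/TS 18571 across a set of time-history curves.

    Returns the coefficient of determination (R^2) of a linear fit and a
    paired-difference p-value, mirroring the kind of analysis described above.
    """
    cora = np.asarray(cora_scores, dtype=float)
    iso = np.asarray(iso_scores, dtype=float)

    # Linear correlation between the two objective-evaluation methods.
    slope, intercept, r_value, p_lin, stderr = stats.linregress(cora, iso)
    r_squared = r_value ** 2

    # Paired test for a systematic difference between the two methods.
    # (Wilcoxon signed-rank is one reasonable choice; the study's exact
    # test is not specified in the abstract.)
    w_stat, p_paired = stats.wilcoxon(cora, iso)

    return {"r_squared": r_squared, "paired_p_value": p_paired}

# Example with made-up placeholder scores standing in for the 58 curves:
rng = np.random.default_rng(0)
cora_phase = rng.uniform(0.4, 1.0, size=58)
iso_phase = np.clip(cora_phase - rng.uniform(0.0, 0.2, size=58), 0.0, 1.0)
print(compare_component_scores(cora_phase, iso_phase))
```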