Geometric quality assurance is critical to reducing manufacturing time and cost. The task becomes especially limiting when visual or haptic assessment by human operators is required. Modern machine learning (ML) methods can address this problem, but they require large datasets containing diverse deformations, and producing those deformations on physical objects can be difficult and costly. This thesis uses Blender, an open-source 3D creation and rendering tool, to imitate object deformations and automate the preparation of synthetic datasets. The utility of these datasets is improved using two methods: data augmentation, such as background randomization, and domain adaptation networks. The background randomization approach generalizes the image distribution across varied environments, whereas the domain adaptation approach yields a distribution more closely targeted to the real environment. This thesis demonstrates that synthetic data created in Blender can be effective for training deformation classification networks, and that the discrepancies between real and simulated environments can be mitigated to build models for sim-to-real deformation detection.
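As a brief illustration of the background randomization mentioned above, the sketch below uses Blender's Python API (bpy) to swap the world environment texture before each render. It is a minimal, hedged example rather than the thesis's actual pipeline: the backgrounds directory, output paths, and sample count are hypothetical placeholders.

```python
import bpy
import glob
import random

# Hypothetical folder of background/HDRI images (placeholder path).
backgrounds = glob.glob("/path/to/backgrounds/*.hdr")

# Enable the node-based world shader so the background can be textured.
world = bpy.context.scene.world
world.use_nodes = True
nodes = world.node_tree.nodes
links = world.node_tree.links

# Feed an Environment Texture node into the default Background shader.
env_node = nodes.new("ShaderNodeTexEnvironment")
bg_node = nodes["Background"]
links.new(env_node.outputs["Color"], bg_node.inputs["Color"])

# Render a handful of samples, each with a randomly chosen background.
for i in range(10):  # sample count is illustrative only
    env_node.image = bpy.data.images.load(random.choice(backgrounds))
    bpy.context.scene.render.filepath = f"/tmp/render_{i:04d}.png"
    bpy.ops.render.render(write_still=True)
```

In practice the same loop could also randomize lighting, camera pose, or the deformation parameters of the object itself before each render call.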