Drilling of composite structures is one of the most common operations in aircraft manufacturing. Currently, most aerospace drilling is performed either manually or through extensive programming and fine-tuning of conventional industrial robots. Fully autonomous robotic drilling cells driven by sensory systems could significantly improve the efficiency, reliability, and accuracy of aerospace composite manufacturing. To this end, the present study develops vision-based techniques that enable two main functionalities: (1) accurate measurement of robot pose and workpiece placement for robot error correction, and (2) automated inspection of drilled holes using deep learning.
Off-the-shelf industrial robots have relatively low absolute accuracy (errors on the order of 1 mm) and cannot readily satisfy the tight tolerance requirements of aerospace drilling. To achieve the best resolution, the vision system is calibrated over the robot's workspace against a laser tracker, and circular targets mounted on the spindle are tracked by this system. Several target localization methods are compared against the laser tracker's accuracy in distance and pose measurements of robot motion. Online robot error correction, based on relative motion toward part fixtures, is implemented, and a similar approach is used offline to correct errors between cooperative robots. Both the online and offline error correction methods are tested to demonstrate their accuracy and effectiveness.
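To illustrate the relative error-correction idea, the sketch below (hypothetical names, not the thesis implementation) estimates a translational offset between nominal fixture-target positions and their vision-measured counterparts, then shifts a programmed drill location accordingly. It assumes the local robot error is dominated by a rigid translation; rotational errors are ignored for brevity.

```python
import numpy as np

def estimate_offset(nominal, measured):
    """Estimate the rigid translational error between nominal target
    positions (from the robot program) and vision-measured positions.
    Assumes rotational error is negligible over the local work area."""
    nominal = np.asarray(nominal, dtype=float)
    measured = np.asarray(measured, dtype=float)
    # The least-squares translation is the mean of the residuals.
    return (measured - nominal).mean(axis=0)

def correct_drill_point(drill_point, nominal_targets, measured_targets):
    """Shift a programmed drill location by the estimated offset so the
    robot drills relative to the measured fixture, not the nominal one."""
    offset = estimate_offset(nominal_targets, measured_targets)
    return np.asarray(drill_point, dtype=float) + offset

# Example: three fixture targets, each measured ~0.8 mm off in x.
nominal = [[0, 0, 0], [100, 0, 0], [0, 100, 0]]
measured = [[0.8, 0.0, 0.0], [100.8, 0.0, 0.0], [0.8, 100.0, 0.0]]
corrected = correct_drill_point([50, 50, 0], nominal, measured)
# corrected -> [50.8, 50.0, 0.0]
```

In practice more targets and a full rigid-body (rotation plus translation) fit would be used, but the relative principle, drilling with respect to measured rather than nominal fixture positions, is the same.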
Vision-based inspection systems can efficiently inspect drilled holes in aerospace composites. However, detecting damage in images of Carbon Fiber Reinforced Polymers (CFRPs) is challenging because of their dark, textured surfaces. This study presents a fully autonomous system to detect and segment damage and cracks around drilled CFRP holes. The system comprises: 1) automated multi-light imaging to illuminate holes under varying lighting conditions, 2) image processing for hole-profile, damage-area, and crack segmentation, and 3) a deep Fully Convolutional Network (FCN) with a U-Net architecture for pixel-wise image segmentation. The final solution has been tested, and its ability to reliably detect the hole profile, damage area, and cracks has been demonstrated.
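The benefit of the multi-light imaging step can be sketched with a toy fusion routine (a numpy-only illustration under assumed behavior, not the thesis pipeline): features caused by a particular light direction, such as shadows in the woven texture, appear dark in only some images, whereas true damage stays dark under every light direction. Taking the per-pixel maximum across the stack therefore suppresses direction-dependent artifacts while preserving damage.

```python
import numpy as np

def fuse_multilight(images):
    """Fuse grayscale images of the same hole captured under different
    light directions (toy illustration). A pixel that is dark in only
    one lighting condition is an artifact of that light direction; the
    per-pixel maximum brightens it, while true damage, dark in every
    image, remains dark after fusion."""
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    return stack.max(axis=0)

def threshold_damage(fused, thresh=50.0):
    """Binary damage mask: pixels still darker than `thresh` after fusion."""
    return fused < thresh

# Toy example: a 4x4 hole neighborhood imaged under two light directions.
img_a = np.full((4, 4), 200.0)
img_b = np.full((4, 4), 200.0)
img_a[1, 1], img_b[1, 1] = 30.0, 20.0   # true damage: dark in both images
img_b[2, 3] = 10.0                      # texture shadow: dark in one image only
fused = fuse_multilight([img_a, img_b])
mask = threshold_damage(fused)          # only the (1, 1) damage pixel survives
```

A real pipeline would feed such fused (or stacked) images to the segmentation stages rather than apply a fixed global threshold, but the sketch shows why varying the illumination helps separate damage from CFRP surface texture.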
The vision-based measurement and inspection frameworks proposed in this thesis provide effective and robust tools for the autonomous robotic drilling of aerospace composites.