Fiberglass composites play a vital role in heavy-vehicle manufacturing due to their exceptional corrosion resistance and light weight. As part of the manufacturing process, trimming operations are performed after injection molding to bring parts to their final shape. The current practice, manual trimming, is inefficient, poses safety risks, and lacks consistency, presenting an ideal opportunity to introduce automation into the manufacturing cycle. However, conventional robotic automation, which relies on process repeatability, is not feasible here because of the high variation in fiberglass trimming arising from the high mix of parts and frequent design updates.
To overcome these challenges, this thesis presents a vision-based framework for autonomous trimming of fiberglass parts, eliminating the need for part-specific fixtures and extensive manual reprogramming of industrial robots. By leveraging advanced 3D vision technologies, the proposed framework enables autonomous trimming operations while ensuring efficiency and adaptability in the manufacturing process.
We begin by determining the precise location of each part, using a reference CAD model together with point cloud data captured from the actual part. We explore different environmental and sensor configurations for 3D imaging to determine the best approach for capturing point cloud data from multiple viewpoints, ensuring an accurate representation of the real part.
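The multi-viewpoint capture described above ultimately requires fusing each view's cloud into a single common frame. The sketch below illustrates this fusion with plain homogeneous transforms; the function names and the assumption that each sensor pose is known (e.g. from hand-eye calibration) are illustrative, not the thesis's actual pipeline.

```python
import numpy as np

def transform_cloud(points, T):
    """Apply a 4x4 homogeneous transform T to an (N, 3) point cloud."""
    h = np.hstack([points, np.ones((len(points), 1))])  # to homogeneous coords
    return (h @ T.T)[:, :3]

def merge_views(clouds, sensor_poses):
    """Fuse per-viewpoint clouds into one cloud in a common frame.

    sensor_poses[i] is the assumed-known 4x4 pose of viewpoint i in that
    common frame; each view is transformed there and concatenated.
    """
    return np.vstack([transform_cloud(c, T)
                      for c, T in zip(clouds, sensor_poses)])
```

In practice the fused cloud would also be filtered and downsampled before registration, but the frame composition is the core step.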
We then examine several techniques for automating part localization, including Iterative Closest Point (ICP), Random Sample Consensus (RANSAC), and deep learning methods, and evaluate which achieves the most accurate localization within the robot cell.
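To make the registration comparison concrete, here is a minimal point-to-point ICP sketch: alternate between nearest-neighbor matching and a closed-form (SVD/Kabsch) rigid fit. This is a textbook illustration under simplifying assumptions (full overlap, a reasonable initial pose), not the thesis's evaluated implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping points A onto B."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(source, target, iters=50, tol=1e-8):
    """Iteratively match nearest neighbors and re-fit a rigid transform."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        dists, idx = tree.query(src)          # current correspondences
        R, t = best_fit_transform(src, target[idx])
        src = src @ R.T + t                   # move source toward target
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dists.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```

RANSAC-based and learned registration differ mainly in how correspondences are proposed; the rigid-fit step above is shared by all three families.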
Once the part has been localized within the robot cell, the next step is to generate robot paths from the CAD model. We extract trim line information from the CAD model and apply a series of kinematic transformations to map the path onto the real part inside the robot cell. We also propose an alternative method in which the path is generated from visible templates present on the part itself rather than relying solely on CAD models.
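The kinematic mapping of the trim path can be sketched as a composition of homogeneous transforms: each CAD-frame waypoint pose is pre-multiplied by the part's estimated pose in the robot base frame. The function and frame names below are illustrative assumptions, not the thesis's notation.

```python
import numpy as np

def map_waypoints(T_base_part, waypoints_cad):
    """Map trim-path waypoint frames from the CAD frame to the robot base.

    T_base_part: 4x4 part pose in the base frame (e.g. from registration).
    waypoints_cad: iterable of 4x4 waypoint frames (tool position and
    orientation) expressed in the CAD frame.
    """
    return [T_base_part @ W for W in waypoints_cad]
```

Because the mapping is a single rigid composition, any error in the estimated part pose propagates directly into the executed path, which motivates the compensation layer described below.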
Finally, we present a final layer of error compensation that uses an on-board vision system to eliminate residual errors from the preceding steps.
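One simple compensation strategy, shown purely for illustration, is a constant-offset model: shift the planned path by the mean residual between where the on-board camera expects the trim edge and where it actually observes it. The thesis's compensation layer may well use a richer correction; this sketch only conveys the idea.

```python
import numpy as np

def compensate_path(path, nominal_edge, observed_edge):
    """Shift planned waypoints by the mean observed edge residual.

    path, nominal_edge, observed_edge: (N, 3) / (M, 3) arrays of points.
    Assumes a constant-offset error model (an illustrative simplification).
    """
    residual = (observed_edge - nominal_edge).mean(axis=0)
    return path + residual
```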
Our experimental findings demonstrate the applicability of deep learning-based point cloud registration for robust part localization. Moreover, we show that integrating our error compensation system substantially improves the precision of trim-path tracking. Together, these techniques streamline and automate part localization and path generation, enabling efficient and accurate operations within the robot cell.