Minimally invasive operations have gained popularity over open surgical procedures in recent years. These procedures require the surgeon to perform highly specialized tasks, including manipulating tools through small incisions in the skin while viewing images displayed on a screen. Effective training is therefore required before surgeons perform such procedures on patients.
In this thesis I explored a novel approach to creating a training system for arthroscopic surgery. Previously obtained CT images of a patient model and of the surgical tools are manipulated to create a library of fluoroscopy images. The surgical tools are tracked (a mechanical tracker and an electromagnetic tracker were used, one in each iteration of the system) in order to establish a spatial relationship between the patient model and the surgical tools. The position and orientation information from the tracking system is transformed into the image coordinate frame. Homologous points in the two images (of the surgical tools and of the patient model) are then used to co-register and overlay the two images and create a virtual fluoroscopy image.
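One common way to compute such a co-registration from homologous points is a least-squares rigid (Kabsch/Procrustes) fit of corresponding control points. The sketch below illustrates this idea only; the function and variable names are hypothetical, and the thesis does not specify that this exact algorithm was used.

```python
# Illustrative sketch: least-squares rigid registration from homologous points
# (Kabsch/Procrustes). Names are hypothetical, not taken from the thesis.
import numpy as np

def rigid_register(tracker_pts, image_pts):
    """Estimate rotation R and translation t mapping tracker_pts -> image_pts.

    Both inputs are (N, 3) arrays of corresponding (homologous) points.
    """
    c_src = tracker_pts.mean(axis=0)                     # centroid in tracker space
    c_dst = image_pts.mean(axis=0)                       # centroid in image space
    H = (tracker_pts - c_src).T @ (image_pts - c_dst)    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

# Example use: map a tracked tool-tip position into the image coordinate frame.
# R, t = rigid_register(control_pts_tracker, control_pts_image)
# tip_in_image = R @ tip_in_tracker + t
```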
The output images and overall system performance were found to be very good and closely comparable to those of a fluoroscopy system. Registration accuracy was evaluated using the root mean square target registration error (RMS TRE). For the system setup with the mechanical tracker, the RMS TRE was 2.0 mm, 2.1 mm, and 2.5 mm for 4, 5, and 6 control points, respectively. For the setup with the electromagnetic tracking system, the RMS TRE was 7.6 mm, 12.4 mm, and 11.3 mm for 5, 7, and 9 control points, respectively. The acceptable range of error for arthroscopy procedures has been proposed to be 1-2 mm. It was concluded that, by using a tracking system that is not prone to interference and allows a wide range of motion, this system can be developed to the point of manufacture and use in training new surgeons.
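For reference, RMS TRE is typically computed as the root mean square of the distances between registered target points and their known positions in the image frame, using points not employed as control points for the fit. A minimal sketch, with hypothetical names and assuming the rigid transform above, follows.

```python
# Illustrative sketch: root mean square target registration error (RMS TRE).
# Target points (held out from the registration) are mapped through the
# estimated transform and compared with their known image-space positions.
import numpy as np

def rms_tre(R, t, targets_tracker, targets_image):
    """RMS distance between registered target points and their true positions."""
    mapped = targets_tracker @ R.T + t                   # apply R, t to each (N, 3) row
    errors = np.linalg.norm(mapped - targets_image, axis=1)
    return np.sqrt(np.mean(errors ** 2))
```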