Upper-limb prosthetics are typically driven exclusively by biological signals, most commonly electromyography (EMG), with electrodes placed on the residual limb. In this approach, amputees must control each arm joint sequentially, in a proportional manner. Research has shown that such sequential control imposes a high cognitive burden on amputees, leading to high abandonment rates. This thesis presents a control system for upper-limb prosthetics that leverages a computer vision module capable of simultaneously predicting the objects in a scene, their segmentation masks, and a ranked list of candidate grasp locations. The proposed system shares control with the amputee, allowing them to play only a supervisory role, and offloads most of the work required to configure the wrist to the computer vision module. The overall system is evaluated in an object pick-up, transport, and drop-off experiment in realistic, cluttered environments. Results show that the proposed system enables the subject to successfully complete 95% of the trials and confirm the benefit of keeping the user in the control loop.
Keywords: prosthetics; deep learning; computer vision; robotic grasping