When a robotic visual servoing/tracking system loses sight of its target, the servo fails due to loss of input. To resolve this problem, a search method is required to generate efficient actions and bring the target back into the camera field of view (FoV) as soon as possible. For a high-dimensional platform such as a camera-mounted manipulator, an eye-in-hand system, such a search faces the difficult challenge of generating efficient actions online while satisfying visibility and kinematic constraints.
This work considers two common failure scenarios in visual servoing/tracking: the target leaving the camera FoV, and visual occlusions (hereafter, occlusions) disrupting the process. To handle the first scenario, a novel algorithm called lost target search (LTS) is introduced to plan efficient sensor actions online. To handle the second scenario, an extended algorithm called the lost target recovery algorithm (LTRA) allows a robot to look behind an occluder during active visual search and re-acquire its target in an online manner.
The overall algorithm is then implemented on a telepresence platform to evaluate the necessity and efficacy of autonomous occlusion handling for remote users. Occlusions can occur when users in remote locations are engaged in physical collaborative tasks, leading to frustration and inefficient collaboration between the parties. Therefore, two human-subjects experiments are conducted (N = 20 and N = 36, respectively) to investigate the following interlinked research questions: a) what are the impacts of occlusion on telepresence collaborations, and b) can autonomous occlusion handling improve the telepresence collaboration experience for remote users?
Results from the first experiment demonstrate that occlusions introduce a significant social interference that forces collaborators to reorient or reposition themselves. Results from the second experiment then indicate that the autonomous controller yields a remote user experience that is more comparable (in terms of vocal non-verbal behaviors, task performance, and perceived workload) to that of collaborations performed by two co-located parties.
These contributions represent a step toward making robots more autonomous and user-friendly when interacting with human co-workers, a necessary next step for the successful adoption of robots in human environments.