Unmanned aerial vehicles (UAVs) are renowned for their mobility and are often deployed in multi-task missions such as search and rescue and inspection. However, this manoeuvrability comes at the cost of teleoperation complexity: UAV piloting can be challenging for novices due to difficulties with depth perception and the control interface. This thesis proposes shared autonomy systems in which the pilot and an artificial intelligence share joint control over the UAV to cooperatively complete tasks. The UAV is assumed to operate in an unknown environment containing several points of interest. The objective of the pilot is to complete missions comprising multiple tasks, such as inspecting and landing on these points of interest. The artificial intelligence in the shared autonomy system is unaware of the pilot's goal, mission, or the global structure of the environment; it must instead infer the pilot's intent by observing their actions within the context of the locally observable environment. The assistant acts by fusing its output control actions with the pilot's joystick input to influence the UAV's dynamics, and by providing feedback cues to communicate with the pilot.
A total of three user studies were conducted to validate the proposed approaches with human participants. The proposed solutions comprise two modules: (1) a perception module, which encodes visual information into a compressed latent representation, and (2) a policy module, a reinforcement learning agent that augments the pilot's actions to ensure task success. The policy module is trained in simulation against simulated users drawn from a parametric model; despite being trained purely on synthetic data, the policy transfers directly to physical domains and assists human pilots without modification. In the final user study, an additional module is introduced that allows the reinforcement learning agent to provide communication feedback, alongside supplementary information displays using augmented reality to convey additional information.
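The two-module pipeline can be illustrated with a minimal sketch. The encoder, policy, fusion rule, and the blending weight `alpha` below are hypothetical stand-ins for exposition, not the thesis implementation: the real perception module is a learned encoder and the real policy is a trained reinforcement learning agent.

```python
import numpy as np

def encode_observation(image: np.ndarray, latent_dim: int = 8) -> np.ndarray:
    """Perception module stand-in: compress a visual observation into a
    low-dimensional latent vector via a fixed random projection."""
    flat = image.reshape(-1).astype(float)
    rng = np.random.default_rng(0)  # fixed seed so the projection is repeatable
    projection = rng.standard_normal((latent_dim, flat.size))
    return projection @ flat / flat.size

def policy_action(latent: np.ndarray, pilot_action: np.ndarray) -> np.ndarray:
    """Policy module stand-in: map the latent state and the pilot's input
    to a corrective action (here, a simple damping of the pilot command)."""
    return -0.5 * pilot_action + 0.1 * np.tanh(latent[: pilot_action.size])

def fuse_actions(pilot_action, assistant_action, alpha=0.6):
    """Blend pilot and assistant commands; alpha weights pilot authority."""
    return alpha * pilot_action + (1.0 - alpha) * assistant_action

# One control step: observe, encode, assist, fuse.
image = np.zeros((16, 16))                 # placeholder camera frame
pilot = np.array([1.0, 0.0, -0.5])         # joystick roll / pitch / thrust
z = encode_observation(image)
assist = policy_action(z, pilot)
command = fuse_actions(pilot, assist)      # command sent to the UAV
```

The fusion step is the key design choice: the assistant never fully overrides the pilot, it only shifts the executed command toward its own corrective action.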
The proposed approaches greatly increased task performance in the assisted conditions compared to the unassisted condition. All participants, regardless of skill level, were able to perform at a level exceeding the most experienced pilots whilst assisted. It was found that pilots in shared autonomy systems prefer non-invasive forms of assistance over aggressive ones, even when the aggressive forms achieve task success. However, without explicit communication from the assistant, participants found it difficult to perceive the assistant's intent, leading to unease about what was expected of them. Providing communication feedback cues alongside control assistance was therefore found to be the most preferred method of providing assistance.