Robots will become ubiquitously useful only when they can learn to perform different tasks in an autonomous, data-efficient, and generalizable way (across different body structures and environments). Biological systems, especially vertebrates, set a great example: they learn to perform multiple tasks after a relatively short and sparse trial-and-error process even though their bodies are particularly difficult to control.
Vertebrate bodies are hard to control (at least from an engineering perspective) because their musculotendon-based actuation makes them simultaneously nonlinear, under-determined, and over-determined. However, this anatomy provides important benefits, such as keeping the center of mass closer to the main body. Tendon-driven actuation plays an important role in the enviable functional versatility that vertebrates possess.
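To make the simultaneous under- and over-determination concrete, consider a hypothetical planar limb with two joints driven by three tendons through a moment-arm matrix R (the matrix values below are illustrative, not drawn from any specific anatomy): the torque-to-tension problem has infinitely many solutions, while tendon excursions cannot be commanded independently of joint angles.

```python
import numpy as np

# Illustrative moment-arm matrix R (2 joints x 3 tendons):
# joint torques tau = R @ f, where f are tendon tensions.
R = np.array([[1.0, -1.0,  0.5],
              [0.0,  1.0, -1.0]])

tau_desired = np.array([0.5, 0.2])

# Under-determined: more tendons than joints, so infinitely many tension
# vectors produce the same torque. Take the minimum-norm solution and add
# any multiple of a null-space direction of R.
f_min = np.linalg.pinv(R) @ tau_desired
null_dir = np.linalg.svd(R)[2][-1]   # last right-singular vector spans null(R)

for alpha in (0.0, 1.0, 2.0):
    f = f_min + alpha * null_dir
    # Every choice of alpha yields the same joint torques:
    assert np.allclose(R @ f, tau_desired)

# Over-determined: tendon excursions are kinematically coupled to joint
# motion (delta_l = -R.T @ delta_q), so an arbitrary set of tendon length
# changes is generally infeasible.
delta_q = np.array([0.1, -0.05])
delta_l = -R.T @ delta_q             # the only excursions consistent with delta_q
print(f_min, delta_l)
```

The same structure appears for any tendon-driven system with more tendons than kinematic degrees of freedom: controllers must resolve the tension redundancy while respecting the excursion constraints.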
It is possible to improve on the current state of robotics by drawing inspiration from useful mechanisms in vertebrate anatomy and control. Namely, robots can and should benefit from the principles of tendon-driven structures to efficiently and autonomously learn to control their bodies using sparse sampling, modular and hierarchical control structures, and artificial neural networks that map sensory inputs to actuation signals.
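As a minimal sketch of this idea (a toy one-degree-of-freedom plant and a small NumPy network, both hypothetical stand-ins rather than the dissertation's actual architecture), the following learns an inverse sensory-to-actuation map from a few dozen random "motor babbling" samples and then uses it to reach a target posture without any analytic model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy plant standing in for tendon-driven dynamics (assumed for
# illustration): joint angle is a nonlinear function of activation.
def plant(a):
    return np.sin(1.5 * a) + 0.3 * a

# 1) Sparse motor babbling: a handful of random activations and outcomes.
A = rng.uniform(-1.0, 1.0, size=(30, 1))
Q = plant(A)

# 2) Fit a small inverse model q -> a with a one-hidden-layer network
#    trained by full-batch gradient descent on squared error.
W1 = rng.normal(0, 1.0, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    H = np.tanh(Q @ W1 + b1)           # hidden layer
    pred = H @ W2 + b2                 # predicted activation
    err = pred - A
    gW2 = H.T @ err / len(A); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H**2)     # backpropagate through tanh
    gW1 = Q.T @ dH / len(A); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# 3) Use the learned map to reach a desired posture.
q_target = np.array([[0.5]])
a_cmd = np.tanh(q_target @ W1 + b1) @ W2 + b2
print("reached:", plant(a_cmd).item(), "target:", q_target.item())
```

The point of the sketch is the data regime, not the network: thirty interaction samples suffice here because the learned map is reused across targets, which is the same reason sparse trial-and-error can work on physical hardware without a simulator.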
In this dissertation, I have presented a new approach that enables robots to start learning without an explicit model of their body or the environment (and therefore without needing to bridge the Sim-to-Real gap), learn from limited experience, and adapt on the fly. This approach enables model-agnostic autonomy: robots can learn on the spot, directly from interactions with the physics of the world, while retaining many of the benefits that tendon-driven anatomies provide.