Dexterous robotic manipulation of non-rigid objects is a challenging but necessary problem to explore, as robots increasingly interact with complex environments in which such objects are frequently present. In particular, common manipulation tasks, such as molding clay to a target shape or picking fruits and vegetables for use in the kitchen, require a high-level understanding of the scene and objects. Commonly, the behavior of non-rigid objects is described by a model. However, well-established modeling techniques are difficult to apply in robotic tasks, since the objects and their properties are unknown in such unstructured environments.
This work proposes a sensing and modeling framework to measure the 3D shape deformation of non-rigid objects. Unlike traditional methods, this framework explores data-driven learning techniques focused on shape representation and deformation dynamics prediction using a graph-based approach. The proposal is validated experimentally, analyzing the performance of the representation model in capturing the current state of the non-rigid object shape. In addition, the performance of the prediction model is analyzed in terms of its ability to produce future states of the non-rigid object shape resulting from the manipulation actions of the robotic system. The results suggest that the representation model is able to produce graphs that closely capture the deformation behavior of the non-rigid object, whereas the prediction model produces visually plausible graphs when short-term predictions are required.