Motion control of robots in unstructured environments is a challenging task. The use of cameras as information-rich sensors shows promise in this setting. In particular, image-based visual predictive controllers have gained attention due to their optimality and constraint-handling capabilities. However, their performance deteriorates in the presence of uncertainties in the robotic platform, system model, and measurements. This work proposes a set of robust image-based visual predictive control methods that overcome the shortcomings of previous visual servoing methods in the presence of uncertainties. In this dissertation, we propose adaptive, stochastic, risk-averse, and learning-based visual servoing schemes that improve the performance and constraint compliance of visual servoing systems compared to their classical counterparts. The validity of the proposed control frameworks has been evaluated on a 6-DOF serial industrial manipulator and a model unmanned aerial vehicle through various experiments and simulations.