Autonomous mobile robots typically have limited on-board computational power and memory, yet they often require up-to-date information about their environment to make real-time decisions. A vision framework for such robots must therefore provide information about the robot's surroundings in real-time while remaining computationally efficient and memory conscious. This thesis presents the development of a novel real-time vision framework that incorporates direction-invariant person detection and recognition, as well as object detection and state recognition, using deep learning and computer vision algorithms. The framework was integrated into two mobile robot architectures and validated through real-world experiments on interactive mobile robots.