The last decade has witnessed the introduction of several driver assistance and active safety systems in modern vehicles. Considering only systems that depend on computer vision, several independent applications have emerged, such as lane tracking, road/traffic sign recognition, and pedestrian/vehicle detection. Although these methods can assist the driver with lane keeping, navigation, and collision avoidance with vehicles/pedestrians, conflicting warnings from the individual systems may expose the driver to greater risk through information overload, especially in cluttered city driving conditions. As a solution to this problem, the individual systems can be combined to form a higher level of knowledge about the traffic scenario in real time. Integrating these computer vision modules into a ‘context-aware’ vehicle is desirable both to resolve conflicts between sub-systems and to simplify in-vehicle computer vision system design at low cost. In this study, the video database is a subset of the UTDrive Corpus, which contains driver monitoring video, road scene video, driver audio, and CAN-Bus signals for vehicle dynamics. The corpus currently includes realistic driving data from 77 drivers under neutral and distracted conditions. A single monocular color camera serves as the sole sensor for both lane tracking and road sign recognition. Finally, higher-level traffic scene analysis is demonstrated, and the reliability and accuracy of the integrated system are reported.
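To make the fusion idea concrete, the sketch below shows one minimal way the outputs of independent lane-tracking and sign-recognition modules could be merged into prioritized, scene-level alerts that suppress lower-priority warnings. All class and function names here (`LaneEstimate`, `SignDetection`, `fuse_context`) are illustrative assumptions, not the interfaces of the actual system described in the paper.

```python
from dataclasses import dataclass

# Hypothetical per-frame outputs of the two vision modules;
# the field names and thresholds are assumptions for illustration.
@dataclass
class LaneEstimate:
    lateral_offset_m: float   # offset from lane center, meters (+ = right)
    departure_warning: bool   # raised by the lane-tracking module

@dataclass
class SignDetection:
    label: str                # e.g. "SPEED_LIMIT_50", "STOP"
    confidence: float         # detector confidence in [0, 1]

def fuse_context(lane: LaneEstimate,
                 signs: list[SignDetection],
                 speed_kmh: float) -> list[str]:
    """Combine module outputs into a short, prioritized alert list,
    limiting the number of simultaneous warnings to reduce overload."""
    alerts = []
    for s in signs:
        if s.confidence < 0.6:
            continue  # ignore weak detections (threshold is an assumption)
        if s.label.startswith("SPEED_LIMIT_"):
            limit = float(s.label.rsplit("_", 1)[1])
            if speed_kmh > limit:
                alerts.append(f"over speed limit ({limit:.0f} km/h)")
        elif s.label == "STOP":
            alerts.append("stop sign ahead")
    if lane.departure_warning:
        # Treat lane departure as safety-critical: report it first and
        # keep at most one additional sign-related alert.
        alerts = ["lane departure"] + alerts[:1]
    return alerts
```

For example, a driver drifting out of the lane at 60 km/h past a 50 km/h sign would receive the departure warning first, with the speed alert retained as the single secondary message, illustrating how a context layer can arbitrate between sub-systems rather than presenting all warnings at once.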