Technological change is an important part of the modern world. Systems must become more reliable and yield accurate data when they come into direct contact with their environment. Intelligent sensor processing helps researchers develop systems that can “see” and comprehend their environment. However, the ability of one isolated device to provide this kind of data is highly limited. Integrating independent sensors and resolving their disagreements helps overcome this limitation.
In the same manner, the ability of robotic systems to interact with their environment has been shaped by the state of modern sensor technologies. Moreover, robot mobility is necessary in modern industrial environments for a number of functions, including the transport and handling of individual parts as well as assembly. Intercepting maneuvering objects has been another important research problem in the field of autonomous robotic systems. Quite frequently these activities require the coordination of many distributed machines and their processors to sense external events and to integrate the gathered information in a meaningful way.
The main objective of this thesis is the development of a reconfigurable and reprogrammable sensing system for dynamic dispatching purposes in a surveillance environment using vision sensors. This competitive multisensor network consists of three main modules.
In a surveillance system, different sensors are used that are capable of detecting and tracking moving targets in their observation space. Thus, the proposed architecture includes a motion estimation unit as its first module, providing tracking and prediction capability for the system. This was made possible by implementing a Kalman filter. However, using vision sensors (cameras) required us to develop image acquisition and processing algorithms for data extraction and processing.
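To illustrate the motion estimation module, the following is a minimal sketch of a Kalman filter for 2-D target tracking. It assumes a constant-velocity motion model and direct position measurements; the thesis's actual state and noise models may differ, and all names here are illustrative.

```python
import numpy as np

def make_cv_kalman(dt=1.0, q=1e-2, r=1.0):
    """Constant-velocity model matrices for 2-D tracking (an assumed model).
    State x = [px, py, vx, vy]; measurement z = [px, py]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)   # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)    # position-only measurement
    Q = q * np.eye(4)                            # process noise covariance
    R = r * np.eye(2)                            # measurement noise covariance
    return F, H, Q, R

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the Kalman filter."""
    # Predict: propagate state and covariance through the motion model
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the new measurement z
    S = H @ P_pred @ H.T + R                     # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

The predicted state `F @ x` is exactly what the dispatching module would consume: it gives the target's expected position one time step ahead, before the next image is processed.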
For the dynamic dispatching unit, the second module, a preliminary algorithm was developed to continuously dispatch the reconfigurable sensors on-line as motion estimation results become available. The overall dispatching architecture is designed so that it can be replaced by any optimal sensing-system planning methodology.
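The thesis leaves the dispatching policy replaceable, so the sketch below shows only one plausible stand-in: a greedy nearest-sensor assignment driven by the predicted target positions. This is not the thesis's algorithm; it merely shows where a planning method plugs into the loop.

```python
import numpy as np

def dispatch_sensors(sensor_positions, predicted_targets):
    """Greedy on-line dispatching sketch (illustrative assumption):
    assign each predicted target the nearest still-free sensor.
    Any optimal planning methodology could replace this function."""
    free = set(range(len(sensor_positions)))
    assignment = {}
    for t_idx, target in enumerate(predicted_targets):
        if not free:
            break  # more targets than sensors; leftovers go unassigned
        best = min(free,
                   key=lambda s: np.linalg.norm(sensor_positions[s] - target))
        assignment[t_idx] = best
        free.remove(best)
    return assignment
```

Each time the motion estimator emits new predictions, this function would be re-run, which is what makes the dispatching continuous and on-line.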
Finally, as the third module, a complete vision unit, including data acquisition, processing, and fusion algorithms, was developed to process the images taken by multiple cameras and to fuse them into more reliable data. A centralized fusion architecture using a weighted least-squares criterion in a batch estimation approach was utilized in this module.
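For the fusion step, a minimal sketch of centralized weighted least-squares batch fusion follows, assuming each camera reports a position estimate z_i with covariance R_i for the same target. The fused estimate is x = (Σ R_i⁻¹)⁻¹ Σ R_i⁻¹ z_i, so cameras with lower measurement uncertainty receive higher weight; the thesis's actual measurement models may be richer than this.

```python
import numpy as np

def wls_fuse(measurements, covariances):
    """Centralized batch fusion sketch: combine redundant measurements
    z_i with covariances R_i by weighted least squares.
    Returns the fused estimate and its (reduced) covariance."""
    # Total information is the sum of per-sensor information matrices R_i^-1
    info = sum(np.linalg.inv(R) for R in covariances)
    # Each measurement contributes in proportion to its information
    weighted = sum(np.linalg.inv(R) @ z
                   for z, R in zip(measurements, covariances))
    P_fused = np.linalg.inv(info)
    return P_fused @ weighted, P_fused
```

Because the fused covariance is the inverse of the summed information, it is never larger than the best single camera's covariance, which is the sense in which fusion yields "more reliable data."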