This thesis investigates the limits of vibrotactile haptic feedback when interacting with 3D virtual scenes. In this study, the spatial locations of objects are mapped to the user's work volume using a Kinect sensor, and the location of the user's hand is determined through marker-based visual processing. The depth information is used to build a vibrotactile map on a haptic glove fitted with vibration motors. Users can perceive the location and dimensions of remote objects by moving their hand inside a scanning region. A marker-detection camera provides the position and orientation of the user's hand (glove), which determines the corresponding tactile message.
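The depth-to-vibrotactile mapping described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: the 3x3 motor grid, the depth range, and the linear nearest-surface-to-intensity mapping are all assumptions made for the example.

```python
import numpy as np

# Hypothetical parameters: a 3x3 grid of vibration motors on the glove,
# Kinect depth values in millimetres, and a linear depth-to-intensity map.
MOTOR_GRID = (3, 3)
DEPTH_MIN_MM = 500.0   # nearest depth, mapped to full vibration
DEPTH_MAX_MM = 2000.0  # farthest depth, mapped to no vibration

def depth_to_motor_intensities(depth_patch: np.ndarray) -> np.ndarray:
    """Map the depth patch under the hand to per-motor intensities in [0, 1].

    Each motor covers one cell of the patch; nearer surfaces vibrate harder.
    """
    rows, cols = MOTOR_GRID
    h, w = depth_patch.shape
    intensities = np.zeros(MOTOR_GRID)
    for r in range(rows):
        for c in range(cols):
            cell = depth_patch[r * h // rows:(r + 1) * h // rows,
                               c * w // cols:(c + 1) * w // cols]
            valid = cell[cell > 0]  # Kinect reports 0 for missing depth
            if valid.size == 0:
                continue  # no depth data: leave this motor off
            nearest = float(valid.min())
            # Linear map: DEPTH_MIN_MM -> 1.0, DEPTH_MAX_MM -> 0.0
            x = (DEPTH_MAX_MM - nearest) / (DEPTH_MAX_MM - DEPTH_MIN_MM)
            intensities[r, c] = float(np.clip(x, 0.0, 1.0))
    return intensities

# Example: a near object under the top-left motor, empty space elsewhere.
patch = np.full((60, 60), 2000.0)
patch[0:20, 0:20] = 500.0
duty = depth_to_motor_intensities(patch)
```

In a real system the resulting intensities would be sent to the glove's motor drivers (e.g. as PWM duty cycles) each time a new depth frame arrives.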
A preliminary study was conducted to explore how users perceive such haptic feedback. Factors such as the total number of objects detected, object-separation resolution, and dimension- and shape-based discrimination were evaluated. The preliminary results show that objects can be localized and counted with a high degree of success, and that users can classify groups of objects of different dimensions based on the perceived haptic feedback.