Disabling hearing loss has a profound impact on an individual's ability to interact and engage with those around them. Sign languages enable members of the Deaf community to communicate easily and effectively with one another. However, barriers arise when members of these populations attempt to communicate with those who are not proficient in their language. The primary objective of this research was to develop a robust, user-oriented sign language recognition (SLR) system. This goal was accomplished through a comprehensive literature review of the wearable sensor-based SLR field, a national questionnaire to gather insights from members of the Deaf community, and the development of a novel bio-acoustic sensor system to recognize sign language gestures. Our review of existing methods revealed many disadvantages of previous SLR systems that make them unsuitable for practical applications. The literature review also revealed a disconnect between researchers and Deaf individuals, who are routinely cited as the potential end-users of SLR devices. Using this knowledge, a questionnaire was developed to gather insights and perspectives on the SLR field directly from members of the Deaf community, family members and friends of Deaf individuals, and those who provide services targeted towards the Deaf community. Responses revealed design specifications, potential applications, differences between American Sign Language and English, and other key considerations. Synthesizing our knowledge of the SLR field with the insights gathered from the questionnaire, we developed an unobtrusive wrist-worn SLR prototype. The wearable system comprises four high-frequency accelerometers that capture mechanical micro-vibrations when placed over the major tendons of the wrist.
Machine learning models were compared to demonstrate the feasibility of this system for recognizing 15 American Sign Language gestures performed by multiple subjects. A series of validation experiments was carried out to demonstrate the effectiveness of the system. We also conducted feature-based experiments to determine optimal sensor placement and to assess the contribution of the high-frequency signal content. Future iterations of this research will involve testing the system on larger lexicons of signs and adapting deep learning models to recognize continuous sign language sentences.