dc.contributor.author | Liang, Rung-Huei | en_US |
dc.contributor.author | Ouhyoung, Ming | en_US |
dc.date.accessioned | 2014-10-21T07:37:54Z | |
dc.date.available | 2014-10-21T07:37:54Z | |
dc.date.issued | 1995 | en_US |
dc.identifier.issn | 1467-8659 | en_US |
dc.identifier.uri | http://dx.doi.org/10.1111/j.1467-8659.1995.cgf143-0067.x | en_US |
dc.description.abstract | Many modes of communication are used between humans and computers, and gesture is considered one of the most natural in a virtual reality system. Because of its intuitiveness and its potential to assist the hearing impaired and the speech impaired, we have developed a gesture recognition system. Given the worldwide use of ASL (American Sign Language), the system focuses on recognizing a continuous stream of ASL alphabet signs to spell a word, followed by speech synthesis, and adopts a simple, efficient windowed template matching strategy to achieve real-time, continuous recognition. In addition to the abduction and flex information in a gesture, we introduce the concept of a contact point into our system to resolve the intrinsic ambiguities of some ASL gestures. Five tact switches, serving as contact points and sensed by an analogue-to-digital board, are sewn onto a glove cover to enhance the functions of a traditional data glove. | en_US |
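For illustration only, the following is a minimal sketch of what a windowed template matching step over glove sensor frames might look like. The sensor layout, window size, thresholds, template values, and all function names are assumptions made for this example; they do not reproduce the authors' implementation.

# Illustrative sketch of windowed template matching over a stream of glove
# sensor frames; dimensions and thresholds are assumptions, not the paper's.
import numpy as np

SENSOR_DIM = 15      # e.g. 10 flex/abduction values + 5 contact-point switches (assumed layout)
WINDOW = 5           # frames averaged per recognition window (assumed)
MAX_DIST = 0.3       # acceptance threshold on normalized distance (assumed)

# Hypothetical per-letter templates; in practice these would be calibrated per user.
rng = np.random.default_rng(0)
TEMPLATES = {letter: rng.random(SENSOR_DIM) for letter in "ABC"}

def classify_window(frames):
    """Average the frames in the window and return the nearest letter
    template, or None if nothing is close enough (hand in transition)."""
    mean_frame = np.mean(frames, axis=0)
    best_letter, best_dist = None, float("inf")
    for letter, template in TEMPLATES.items():
        dist = np.linalg.norm(mean_frame - template) / np.sqrt(SENSOR_DIM)
        if dist < best_dist:
            best_letter, best_dist = letter, dist
    return best_letter if best_dist <= MAX_DIST else None

def spell(stream):
    """Slide a window over the frame stream and emit a letter whenever a
    stable match appears, suppressing immediate repeats of the same letter."""
    word, last = [], None
    for i in range(len(stream) - WINDOW + 1):
        letter = classify_window(stream[i:i + WINDOW])
        if letter is not None and letter != last:
            word.append(letter)
        last = letter
    return "".join(word)

# Example: a noisy stream that holds the 'B' hand shape for several frames.
stream = np.vstack([TEMPLATES["B"] + rng.normal(0, 0.01, SENSOR_DIM)
                    for _ in range(10)])
print(spell(stream))  # -> "B"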
dc.publisher | Blackwell Science Ltd and the Eurographics Association | en_US |
dc.title | A Real-time Continuous Alphabetic Sign Language to Speech Conversion VR System | en_US |
dc.description.seriesinformation | Computer Graphics Forum | en_US |
dc.description.volume | 14 | en_US |
dc.description.number | 3 | en_US |
dc.identifier.doi | 10.1111/j.1467-8659.1995.cgf143-0067.x | en_US |
dc.identifier.pages | 67-76 | en_US |