My Mantra

"Opportunities are only limited by the constraints imposed by oneself." Copyright 2003 - 2017

Thursday, September 11, 2008

Sign to Speech Project - Gesture Recognition

The goal of the Sign to Speech project is to create a system that identifies human gestures, in this case the single-hand sign language alphabet and the phrase “I Love You”, using a glove fitted with bend gauge sensors and an accelerometer to capture the electronic signal pattern of each letter and of the phrase. Biologists and sociologists are working together to define “gestures” and the “encoded patterns of gestures.” An overview of gesture recognition can be found at http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/COHEN/gesture_overview.html.
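As a rough sketch of the capture step, the Python fragment below shows how one window of glove readings might be collected and treated as the pattern for a single letter. The five bend channels, the ADC channel layout, and the read_adc() helper are illustrative assumptions, not details of the actual glove hardware.

from dataclasses import dataclass
from typing import List

@dataclass
class GloveSample:
    bend: List[float]   # one value per finger, 0.0 (straight) to 1.0 (fully bent)
    accel: List[float]  # x, y, z acceleration, roughly in g

def read_adc(channel: int) -> int:
    """Placeholder for the real analog-to-digital read; wiring and scaling are hardware-specific."""
    raise NotImplementedError

def read_sample() -> GloveSample:
    # Assumed layout: channels 0-4 are the bend gauges, channels 5-7 the accelerometer axes.
    bend = [read_adc(ch) / 1023.0 for ch in range(5)]
    accel = [(read_adc(ch) - 512) / 256.0 for ch in range(5, 8)]
    return GloveSample(bend=bend, accel=accel)

def capture_pattern(num_samples: int = 50) -> List[GloveSample]:
    """Collect a short window of samples; one window is the stored 'pattern' for a letter."""
    return [read_sample() for _ in range(num_samples)]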

A number of devices for gestural control of memory and display have been designed based upon the following block diagram.

Figure 1: Block Diagram of Architecture for Gestural Control of Memory and Display.

Further research examined servo and robotic systems to conceptualize local control of mechanisms based upon the following block diagram.


Figure 2: Block Diagram of Architecture for Local Control of Actuated Mechanisms.
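One way to picture the local control path in Figure 2 is a simple lookup from a recognized gesture to an actuator command, as in the Python sketch below. The gesture names, servo channels, angles, and send_servo_command() helper are placeholders for illustration rather than part of the actual design.

SERVO_COMMANDS = {
    "I Love You": (0, 90),   # (servo channel, target angle in degrees)
    "A": (1, 45),
    "B": (1, 135),
}

def send_servo_command(channel: int, angle: int) -> None:
    """Stand-in for the real servo interface, e.g. a serial link to a controller board."""
    print(f"servo {channel} -> {angle} degrees")

def actuate(gesture: str) -> None:
    """Look up a recognized gesture and drive the corresponding mechanism."""
    if gesture in SERVO_COMMANDS:
        channel, angle = SERVO_COMMANDS[gesture]
        send_servo_command(channel, angle)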

The issue that arises is the delineation between gestures, i.e., noting the start and end points of each gesture. The translation to speech is comparatively straightforward: compare the captured pattern to the stored pattern for each letter or the phrase, then display the resulting text on a console monitor or speak it through Apple’s PlainTalk or Say speech synthesis applications.
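The Python sketch below illustrates both steps under stated assumptions: delineating a gesture by watching for the frame-to-frame change to rise above and then fall below a motion threshold, matching the captured pattern to the nearest stored template, and speaking the result by shelling out to the say command. The threshold value, distance metric, and template format are placeholder choices, not the project's actual method.

import subprocess
from typing import Dict, List

MOTION_THRESHOLD = 0.05  # assumed: frame-to-frame change below this means the hand is "still"

def segment_gesture(stream: List[List[float]]) -> List[List[float]]:
    """Trim a raw sample stream to the span between the first and last moving frames."""
    moving = [i for i in range(1, len(stream))
              if max(abs(a - b) for a, b in zip(stream[i], stream[i - 1])) > MOTION_THRESHOLD]
    if not moving:
        return []
    return stream[moving[0] - 1:moving[-1] + 1]

def classify(pattern: List[float], templates: Dict[str, List[float]]) -> str:
    """Nearest-template match by summed absolute difference (a deliberately simple stand-in)."""
    def distance(template: List[float]) -> float:
        return sum(abs(a - b) for a, b in zip(pattern, template))
    return min(templates, key=lambda name: distance(templates[name]))

def speak(text: str) -> None:
    """Print the recognized text and speak it with the macOS `say` command."""
    print(text)
    subprocess.run(["say", text], check=True)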

References

Apple Computer, Inc. (2008). Apple Speech Recognition. http://www.apple.com/macosx/features/300.html#universalaccess

Thomas Baudel and Michel Beaudouin-Lafon. CHARADE: Remote control of objects using free-hand gestures. Communications of the ACM, 36(7):28-35, July 1993.

Charles J. Cohen, Lynn Conway, and Dan Koditschek. "Dynamical System Representation, Generation, and Recognition of Basic Oscillatory Motion Gestures," 2nd International Conference on Automatic Face- and Gesture-Recognition, Killington, Vermont, October 1996.

Jill Crisman and Charles E. Thorpe. SCARF: A color vision system that tracks roads and intersections. IEEE Transactions on Robotics and Automation, 9(1):49-58, February 1993.

Trevor J. Darrell and Alex P. Pentland. Space-time gestures. In IEEE Conference on Computer Vision and Pattern Recognition, New York, NY, June 1993.

E. D. Dickmanns and V. Graefe. Dynamic monocular machine vision. Machine Vision and Applications, pages 223-240, 1988.

David Kortenkamp, Eric Huber, and R. Peter Bonasso. Recognizing and interpreting gestures on a mobile robot. Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI ’96), 1996.

Pattie Maes, Trevor Darrell, Bruce Blumberg, and Alex Pentland. The Alive System: Full-body interaction with autonomous agents. In Computer Animation ’95 Conference, IEEE Press, Geneva, Switzerland, April 1995.

K. V. Mardia, N. M. Ghali, T. J. Hainsworth, M. Howes, and N. Sheehy. Techniques for online gesture recognition on workstations. Image and Vision Computing, 11(5):283-294, June 1993.

Kouichi Murakami and Hitomi Taguchi. Gesture recognition using recurrent neural networks. Journal of the ACM, 1(1):237-242, January 1991.

Alfred A. Rizzi, Louis L. Whitcomb, and D. E. Koditschek. Distributed Real-Time Control of a Spatial Robot Juggler. IEEE Computer, 25(5), May 1992.

Dean Rubine. Specifying gestures by example. Computer Graphics, 25(4):329-337, July 1991.

Thad Starner and Alex Pentland. Visual recognition of American Sign Language using Hidden Markov Models. IEEE International Symposium on Computer Vision, November 1995.

1 comment:

Lyr Lobo said...

Ed, there is a research project on this topic that would benefit from your expertise.

Let me know if you are interested. *cheers* I'll try to reach you through other channels as well.