Vision Based Hand Gesture Recognition
With the development of ubiquitous computing,
current user interaction approaches with keyboard, mouse and pen
are not sufficient. Due to the limitation of these devices the useable
command set is also limited. Direct use of hands as an input device is
an attractive method for providing natural Human Computer
Interaction which has evolved from text-based interfaces through 2D
graphical-based interfaces, multimedia-supported interfaces, to fully
fledged multi-participant Virtual Environment (VE) systems.
Imagine the human-computer interaction of the future: a 3D
application where you can move and rotate objects simply by
moving and rotating your hand, all without touching any input
device. In this
paper a review of vision based hand gesture recognition is presented.
The existing approaches are categorized into 3D model based
approaches and appearance based approaches, highlighting their
advantages and shortcomings and identifying the open issues.
In the future of Steven Spielberg's Minority Report, Tom
Cruise turns on a wall-sized digital display simply by
raising his hands, which are covered with black, wireless
gloves. Like an orchestra's conductor, he gestures in empty
space to pause, play, magnify and pull apart videos with
sweeping hand motions and turns of his wrist. Minority
Report takes place in the year 2054. The touchless technology
it demonstrates may arrive many decades sooner as is evident
from the attention that Vision Based Interfaces have gained in
the recent years.
3D Hand Model Based Approach
Three-dimensional hand model based approaches rely on a 3D
kinematic hand model with considerable degrees of freedom
(DOFs), and try to estimate the hand parameters by comparing
the input images with the possible 2D appearance projected by
the 3D hand model. Such an approach is ideal for realistic
interactions in virtual environments.
One of the earliest model based approaches to the problem
of bare hand tracking was proposed by Rehg and Kanade.
Their model-based hand tracking system, called DigitEyes, can
recover the state of a 27-DOF hand model from ordinary
grayscale images at speeds of up to 10 Hz. The hand tracking
problem is posed as an inverse problem: given an image frame
(e.g. an edge map), find the underlying parameters of the
model.
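This "compare projections, adjust parameters" loop can be sketched as a toy inverse problem. The two-link "finger" below is an illustrative stand-in for a full 27-DOF kinematic hand model; its link lengths and the simple coordinate-descent search are assumptions for exposition, not the DigitEyes method itself:

```python
import math

# Assumed link lengths for a toy two-joint "finger" (illustrative only).
L1, L2 = 1.0, 0.8

def project(theta1, theta2):
    """Forward model: project joint angles to 2D joint/fingertip
    positions (the model's predicted appearance in the image)."""
    x1, y1 = L1 * math.cos(theta1), L1 * math.sin(theta1)
    x2 = x1 + L2 * math.cos(theta1 + theta2)
    y2 = y1 + L2 * math.sin(theta1 + theta2)
    return [(x1, y1), (x2, y2)]

def residual(params, observed):
    """Sum of squared distances between projected model points and
    observed image features (e.g. edge locations)."""
    pred = project(*params)
    return sum((px - ox) ** 2 + (py - oy) ** 2
               for (px, py), (ox, oy) in zip(pred, observed))

def fit(observed, init=(0.0, 0.0), step=0.1, iters=200):
    """Inverse problem: search the model parameters that best explain
    the observed features, via simple coordinate descent."""
    params = list(init)
    for _ in range(iters):
        improved = False
        for i in range(len(params)):
            for delta in (step, -step):
                trial = list(params)
                trial[i] += delta
                if residual(trial, observed) < residual(params, observed):
                    params = trial
                    improved = True
        if not improved:
            step *= 0.5  # refine the search once no move helps
    return params

# Synthetic "image": features generated from known joint angles.
true_angles = (0.6, -0.4)
observed = project(*true_angles)
est = fit(observed)
```

A real system replaces the toy forward model with the full articulated hand and the exhaustive search with gradient-based state estimation, but the structure is the same: minimize the mismatch between the projected model and the image measurements.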
Hand gesture recognition finds applications in varied
domains including virtual environments, smart surveillance,
sign language translation, medical systems, etc. The following
section gives a brief overview of a few of these application
areas.
Hand gestures can be used for analyzing and annotating
video sequences of technical talks. One such system
automatically tracks and recognizes speaker gestures such as
pointing or writing. Given the constrained domain, a simple
"vocabulary" of actions is defined, which can easily be
recognized from the active contour's shape and motion. The
recognized actions provide a rich annotation of the sequence
that can be used to access a condensed version of the talk
from a web page.
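As a sketch of this kind of small motion-based vocabulary, the classifier below separates three illustrative actions by how straight the tracked hand trajectory is. The feature, thresholds, and labels are assumptions chosen for illustration; the system described above also exploits active-contour shape, not motion alone:

```python
import math

def classify_action(trajectory):
    """Map a tracked 2D hand trajectory [(x, y), ...] to one of a few
    illustrative action labels. Thresholds are assumed, not tuned."""
    xs = [p[0] for p in trajectory]
    ys = [p[1] for p in trajectory]
    # Total path length vs. net displacement of the hand.
    path = sum(math.hypot(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
               for i in range(len(xs) - 1))
    if path < 1.0:          # barely any movement
        return "still"
    net = math.hypot(xs[-1] - xs[0], ys[-1] - ys[0])
    straightness = net / path
    # Straight, directed strokes suggest pointing; long wandering
    # paths suggest writing.
    return "pointing" if straightness > 0.8 else "writing"
```

The point of the constrained-domain assumption is exactly this: with only a handful of actions to distinguish, even coarse shape and motion features suffice.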
In today’s digitized world, processing speeds have
increased dramatically, with computers being advanced to the
levels where they can assist humans in complex tasks. Yet,
input technologies seem to cause a major bottleneck in
performing some of the tasks, under-utilizing the available
resources and restricting the expressiveness of application use.
Hand gesture recognition comes to the rescue here. Computer
Vision methods for hand gesture interfaces must surpass
current performance in terms of robustness and speed to
achieve interactivity and usability. A review of vision-based
hand gesture recognition methods has been presented.
Considering the relative infancy of research related to
vision-based gesture recognition, remarkable progress has been
made. To continue this momentum, it is clear that further
research in the areas of feature extraction, classification
methods, and gesture representation is required to realize the
ultimate goal of humans interfacing with machines on their own
natural terms.