Hand Gesture Recognition Using Fusion of Features

Mandal, Swati (2017) Hand Gesture Recognition Using Fusion of Features. MTech thesis.

PDF (Full text restricted up to 22.01.2020)
Restricted to Repository staff only



As the world becomes increasingly digitized, the human-computer interface (HCI) finds wide scope in the digital world. Hand gesture recognition is one such domain, with numerous HCI applications such as automobile interfaces, the medical industry, gaming, and public services. Among gesture-based recognition systems, hand gestures are one of the easiest and most convenient ways to communicate with a computer. Hand gestures are widely used by deaf and mute people, for whom various sign languages exist; in this project, American Sign Language is used. This work is carried out in three steps: (i) preprocessing, (ii) feature extraction, and (iii) classification. The preprocessing step handles input images that are subject to illumination, rotation, and scaling variations. Extracting the hand region from the entire image is a challenging task; in this work, YCbCr color-space-based skin-color detection is performed on hand gesture images with both uniform and complex backgrounds. Morphological operations are also applied to the binary image to remove unwanted artifacts. For feature extraction, the Edge Orientation Histogram (EOH), Block-Based Features (BBF), and the Scale-Invariant Feature Transform (SIFT) are used. The first is a contour-based approach in which only the hand gesture boundary is considered for feature extraction. The second is a block-based algorithm in which the shape of the gesture image is used to generate the feature vector. The third, SIFT, is mainly used in object and scene recognition owing to its robustness to scaling and rotation. A fusion of the edge orientation histogram and block-based features is also performed to make the system more effective.
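The preprocessing stage described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: the ITU-R BT.601 RGB-to-YCbCr conversion is standard, but the Cb/Cr thresholds below are commonly cited skin-tone ranges assumed for the example (the thesis does not state its exact values), and the morphological opening uses a simple 3x3 square structuring element.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an RGB image (H, W, 3) to YCbCr using ITU-R BT.601 coefficients."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Binary skin mask from Cb/Cr thresholds (assumed ranges, not from the thesis)."""
    ycbcr = rgb_to_ycbcr(rgb)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

def binary_open(mask):
    """Morphological opening (erosion then dilation) with a 3x3 square element.
    np.roll wraps at the borders, which is acceptable for this sketch."""
    def shift_all(m, combine):
        out = m.copy()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out = combine(out, np.roll(np.roll(m, dy, axis=0), dx, axis=1))
        return out
    eroded = shift_all(mask, np.logical_and)    # pixel kept only if all neighbors set
    return shift_all(eroded, np.logical_or)     # then grown back by one pixel
```

A small synthetic patch of skin-like color is enough to exercise the pipeline: the mask fires inside the patch, stays clear on the black background, and survives the opening.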
The third step in hand gesture recognition is the classification stage. In this stage, the features extracted by all the feature extraction algorithms are used. The two classifiers employed in this project are Dynamic Time Warping (DTW), inspired by time-sequence matching, and a multi-class Support Vector Machine (SVM). The dataset is divided into two equal parts for training and testing the classifiers. The system is made applicable in real time by collecting a gesture database from multiple users under varying lighting conditions. The proposed work shows that the scale-invariant feature transform yields better results than the other two feature extraction algorithms. Among the classifiers, the Support Vector Machine proves to be the better choice compared with dynamic time warping, since DTW is more demanding in terms of computation time.
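The DTW classifier mentioned above can be illustrated with a minimal sketch: the classic dynamic-programming recurrence over two 1-D feature sequences, followed by nearest-template matching. This is an assumed textbook formulation, not the thesis code; `nearest_template` and its `(label, sequence)` template format are hypothetical names introduced for the example.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping between two 1-D sequences.
    Returns the minimal cumulative |a[i] - b[j]| cost over all warping paths."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible predecessor cells
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def nearest_template(query, templates):
    """1-NN classification: return the label of the closest template under DTW.
    `templates` is a list of (label, sequence) pairs."""
    return min(templates, key=lambda kv: dtw_distance(query, kv[1]))[0]
```

Because the warping path can stretch one sequence against the other, a repeated sample (e.g. `[1, 2, 2, 3]` against `[1, 2, 3]`) still aligns at zero cost, which is exactly why DTW suits gestures performed at different speeds; the quadratic DP table is also why it is slower than an SVM at test time.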

Item Type:Thesis (MTech)
Uncontrolled Keywords:American Sign Language; Edge Orientation Histogram; Block-Based Features; SIFT; Dynamic Time Warping; Support Vector Machine
Subjects:Engineering and Technology > Electronics and Communication Engineering > Signal Processing
Engineering and Technology > Electronics and Communication Engineering > Image Processing
Divisions: Engineering and Technology > Department of Electronics and Communication Engineering
ID Code:8878
Deposited By:Mr. Kshirod Das
Deposited On:29 Mar 2018 14:19
Last Modified:29 Mar 2018 14:19
Supervisor(s):Ari, Samit
