Gesture Based Computing


Gesture recognition is a technology that interprets human gestures and uses them to perform tasks on a device without physical touch, relying on various mathematical algorithms to interpret the gestures. Gesture-based interfaces have played a significant role in redefining human-machine interaction, giving us a flexible way of controlling devices. Gesture-based computing is mainly used in media and file browsing, simulation, and video games. The objective of this project is to develop a user interface that lets users communicate with the system using hand gestures; colored rings placed on the user's fingers are used for tracking.
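As a rough illustration of how colored-marker tracking can work (a sketch, not the project's actual implementation), the C# fragment below thresholds a frame for a marker color and computes the centroid of the matching pixels. The red-marker bounds and the use of a System.Drawing bitmap are assumptions made for the example.

```csharp
using System;
using System.Drawing;

class ColorRingTracker
{
    // Hypothetical RGB bounds for a red marker ring; real values would be
    // calibrated for the user's lighting (ideally in HSV space).
    static bool IsMarkerColor(Color c) =>
        c.R > 180 && c.G < 80 && c.B < 80;

    // Returns the centroid of marker-colored pixels, or null if the ring
    // was not found in the frame. GetPixel is slow; production code would
    // use Bitmap.LockBits or a vision library instead.
    static Point? FindRing(Bitmap frame)
    {
        long sumX = 0, sumY = 0, count = 0;
        for (int y = 0; y < frame.Height; y++)
            for (int x = 0; x < frame.Width; x++)
                if (IsMarkerColor(frame.GetPixel(x, y)))
                {
                    sumX += x;
                    sumY += y;
                    count++;
                }
        return count > 0
            ? new Point((int)(sumX / count), (int)(sumY / count))
            : (Point?)null;
    }
}
```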

Glove-based and vision-based approaches are the two main approaches to gesture recognition. In the glove-based approach, sensors placed on a glove provide the system with the orientation, position, and flex of the fingers using magnetic or inertial tracking devices. The first commercially available hand tracker was the DataGlove. Even though the DataGlove was efficient at capturing hand movement, it was expensive and cumbersome, and the number of cables attached to the user restricted movement. In the vision-based approach, no wires are needed: a number of cameras are placed on a fixed or mobile platform to capture the input images, usually at a frame rate of 30 Hz or more.

Kinect Sensor
This sensor was introduced by Microsoft and uses intuitive, relatively simple gestures to execute various tasks. The Kinect is a motion-sensing input device released in November 2010. It was originally developed for the Xbox 360; nowadays it is also used with Windows PCs for commercial purposes. Looking at the architecture of the Kinect sensor, it has a 3D camera that captures a stream of colored pixels along with the depth of each pixel: each pixel carries the distance from the sensor to the nearest object in that direction (a minimal depth-reading sketch follows the component list below). Skeleton tracking is generally handled by the SDK, with gesture recognition left to the developer, though multiple libraries exist to aid in the recognition of gestures. Speech recognition is done by the Microsoft Speech Platform SDK. The major components of the sensor are:

  • RGB Camera
  • Infrared emitter and depth sensor
  • Microphones
  • Three-axis accelerometer
  • Tilt motor
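
To make the depth pipeline concrete, here is a minimal C# sketch, assuming the Kinect for Windows SDK v1.x and its Microsoft.Kinect namespace, that opens the depth stream and reads the per-pixel distance in millimetres:

```csharp
using System;
using System.Linq;
using Microsoft.Kinect;   // Kinect for Windows SDK v1.x (an assumption)

class DepthDemo
{
    static void Main()
    {
        // Take the first connected sensor, if any.
        KinectSensor sensor = KinectSensor.KinectSensors
            .FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (sensor == null) return;

        sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
        sensor.DepthFrameReady += (s, e) =>
        {
            using (DepthImageFrame frame = e.OpenDepthImageFrame())
            {
                if (frame == null) return;
                var pixels = new DepthImagePixel[frame.PixelDataLength];
                frame.CopyDepthImagePixelDataTo(pixels);

                // Depth of the pixel at the centre of the image, in millimetres.
                int centre = (frame.Height / 2) * frame.Width + frame.Width / 2;
                Console.WriteLine($"Centre depth: {pixels[centre].Depth} mm");
            }
        };
        sensor.Start();
        Console.ReadLine();   // keep the app alive while frames arrive
        sensor.Stop();
    }
}
```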

The major steps involved in the development of an HCI system are:

  • Tracking the skeletal features of the user to detect the hand before recognizing and processing any gestures
  • Properly recognizing the gesture
  • Interpreting the gesture and executing the action associated with it (the first two steps are illustrated in the C# sketch below)

C# (with WPF) or C++ is typically used to develop Kinect-enabled applications.
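
As a rough sketch of the first two steps above, the following C# fragment (again assuming the Kinect for Windows SDK v1.x) enables skeleton tracking, locates the right-hand joint, and applies a toy rule-based gesture test. The "hand raised above the head" rule is only an illustrative stand-in for a real recognizer:

```csharp
using System;
using System.Linq;
using Microsoft.Kinect;   // Kinect for Windows SDK v1.x (an assumption)

class HandGestureDemo
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.KinectSensors
            .FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (sensor == null) return;

        sensor.SkeletonStream.Enable();
        sensor.SkeletonFrameReady += (s, e) =>
        {
            using (SkeletonFrame frame = e.OpenSkeletonFrame())
            {
                if (frame == null) return;
                var skeletons = new Skeleton[frame.SkeletonArrayLength];
                frame.CopySkeletonDataTo(skeletons);

                foreach (var skel in skeletons.Where(
                    k => k.TrackingState == SkeletonTrackingState.Tracked))
                {
                    // Joint positions are in metres, with Y pointing up.
                    SkeletonPoint hand = skel.Joints[JointType.HandRight].Position;
                    SkeletonPoint head = skel.Joints[JointType.Head].Position;

                    // Toy rule-based "gesture": right hand raised above the head.
                    if (hand.Y > head.Y)
                        Console.WriteLine("Hand-raise gesture detected");
                }
            }
        };
        sensor.Start();
        Console.ReadLine();
        sensor.Stop();
    }
}
```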

