User-adaptive hand gesture recognition system with interactive training


The paper proposes a simple, economical, and efficient real-time visual hand gesture recognition system for human-computer interaction. The system is designed to work in difficult lighting conditions, for example when an image is projected onto a wall (conferences, offices, etc.). Given its low computational complexity, the system can be used in place of a mouse. The naturalness of the interaction makes prior training unnecessary, and the robustness of the algorithms used guarantees a high level of reliability. The architecture of the system is described, and experimental results obtained in a large number of tests with different, untrained users are presented and discussed.

Vision-based gesture recognition acquires gesture information in a non-contact manner. There are two types of gesture recognition: isolated and continuous. The former aims to classify videos or other gesture sequences (e.g., RGB-D or skeleton data) that contain only one isolated gesture instance per sequence. In this study, we review existing research on visual gesture recognition, grouped into the following families: static gestures, dynamic gestures, approaches based on dedicated sensors (Kinect, Leap Motion, etc.), work applying gesture recognition to robots, and work addressing gesture recognition at the browser level. Following that, we look at the most common JavaScript-based deep learning frameworks. We then present the idea of defining a process for improving user interface control based on gesture recognition, in order to streamline the implementation of this mechanism.

The development of user interfaces drives changes in Human-Computer Interaction (HCI). Despite tremendous development in input devices, many people still find interaction with computers an uncomfortable experience. Efforts should be made to adapt computers to our natural means of communication: speech and body language. Hand gestures are a widely used mode of non-verbal interaction, and their naturalness and intuitiveness have been a great motivating factor for researchers in HCI to develop more promising means of interaction between humans and computers. The aim of this paper is to propose a real-time vision system for visual interaction with computer environments through hand gesture recognition, using general-purpose hardware and low-cost sensors, such as an ordinary computer and a USB web camera, so that any user could make use of it in the office or at home. The gesture recognition system uses image processing...
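A common first step in low-cost webcam pipelines like the one described above is segmenting the hand from the background by skin color. The abstract does not specify the method used; as an illustrative sketch, the following applies one well-known explicit RGB skin rule (the Kovac thresholds) with NumPy — the thresholds are one standard choice, not the paper's:

```python
import numpy as np

def skin_mask(rgb):
    """Boolean mask of skin-colored pixels using a common explicit
    RGB rule (Kovac thresholds); returns True where a pixel looks
    like skin under uniform daylight."""
    rgb = rgb.astype(np.int32)  # avoid uint8 wrap-around in subtractions
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (
        (r > 95) & (g > 40) & (b > 20)
        & (rgb.max(axis=-1) - rgb.min(axis=-1) > 15)  # enough color spread
        & (np.abs(r - g) > 15) & (r > g) & (r > b)    # red-dominant hue
    )

# Example: a 2x2 RGB image with one skin-like pixel (top-left).
img = np.array([[[200, 120, 100], [0, 0, 255]],
                [[50, 50, 50], [255, 255, 255]]], dtype=np.uint8)
print(skin_mask(img))  # only the [0, 0] entry is True
```

In a real system the mask would then be cleaned with morphological operations and the largest connected component taken as the hand region.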

Gesture is a mode of Human-Computer Interaction, often used to enable impaired users to communicate with others. Its main aim is to convey human gestures to a computing device. Gestures are efficient for natural and intuitive human-computer interaction; to achieve this, computers should be able to visually recognize hand gestures from video input. We propose a new architecture to solve the problem of real-time gesture recognition. The fundamental idea is to use a combination of hand gestures to control various functionalities. This involves two parts: first, tracking the movement of the hand, and second, tracking its orientation. This paper presents a robust and efficient technique for gesture recognition. The OpenCV library provides a useful foundation for object detection; furthermore, it provides a training utility, HaarTraining, with which we can train our own object classifiers for the gesture detection system. Our working environment is Visual Studio on Windows 7. The objective of this project is to develop an application that recognizes hand gestures with reasonable accuracy, thereby creating a remote control compatible with Windows Media Player.
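The Haar-cascade classifiers produced by HaarTraining owe their speed to the integral image (summed-area table), which lets any rectangle sum — and hence any Haar-like feature — be evaluated in constant time. A minimal NumPy sketch of that underlying mechanism (not the paper's code):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum of img[top:top+h, left:left+w] via at most four lookups.
    A Haar-like feature is just a difference of such rectangle sums."""
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = np.arange(16).reshape(4, 4)   # toy 4x4 "image"
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))     # sum of img[1:3, 1:3] = 5+6+9+10 = 30
```

Because each feature costs only a handful of lookups regardless of its size, a cascade can evaluate thousands of features per window in real time.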

The use of physical controllers such as a mouse or a keyboard for human-computer interaction hinders natural interfaces, because it places a barrier between the user and the computer. Our aim is to resolve this by creating an application that controls some basic features of a computer using hand gestures captured through an integrated webcam. A hand gesture recognition system detects gestures and translates them into specific actions to make our work easier. This is pursued using OpenCV to capture the gestures, interfaced through Django, React.js and Electron. The YOLO object detection algorithm is used to train the system, and the gestures are saved in a database. The main expected result is that users will be able to control the basic functions of the system using hand gestures, providing them greater comfort.
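The final step such a system needs is translating recognized gesture labels into system actions. The gesture names and actions below are illustrative assumptions, not the paper's actual vocabulary; in the described pipeline, labels emitted by the YOLO detector would be fed into a dispatcher along these lines:

```python
from typing import Callable, Dict

class GestureDispatcher:
    """Maps recognized gesture labels to action callbacks;
    unknown gestures are ignored rather than raising errors."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[], str]] = {}

    def register(self, gesture: str, handler: Callable[[], str]) -> None:
        self._handlers[gesture] = handler

    def dispatch(self, gesture: str) -> str:
        handler = self._handlers.get(gesture)
        return handler() if handler else "ignored"

# Hypothetical bindings for two basic system functions.
dispatcher = GestureDispatcher()
dispatcher.register("open_palm", lambda: "volume_up")
dispatcher.register("fist", lambda: "volume_down")

print(dispatcher.dispatch("open_palm"))  # → volume_up
print(dispatcher.dispatch("wave"))       # → ignored
```

Keeping the mapping in a table like this (which could equally be loaded from the database the abstract mentions) lets users rebind gestures without touching the recognition code.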