The Study and Implementation of Natural User Interface using Kinect

Abstract

In this paper, the natural user interface (NUI), its adaptations, and its implementations are presented. An NUI is a human-machine interface that has evolved from the way humans interact with each other and with their surrounding environment. Gesture, color, and speech recognition are used in an NUI to identify the user and his or her actions, which can subsequently be interpreted to perform the relevant task. The task can belong to any of the various applications of NUI, such as the control of heavy machinery, robots, computers, etc. During the study it was found that the best approach to designing an NUI is to combine a depth-capturing system, an image-capturing system, and an audio-capturing system; this combination is available in the form of a commercial device, the Microsoft Kinect sensor. The sensor is also advantageous because it provides access to raw digital data streams and is equipped with a configurable audio digital signal processor (DSP). The control of machines requires a communication interface between the controlling unit and the machine; the TI MSP430 LaunchPad toolkit is used for this purpose. The toolkit is economical, as it includes a programmer, on-board USB, two MSP430 microcontrollers, and a free integrated development environment (IDE). The main aim of this work is to develop, test, and demonstrate a robot platform driven by gesture-based, color-based, and speech-based NUI. The paper comprises a study of basic user interfaces, focusing primarily on the NUI. The approach to selecting suitable sensors, their libraries, the actuator interface, and the communication interface, together with their implementation, is presented. Finally, the results of testing the developed robot platform are demonstrated.
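As an illustration of the communication interface described above, the following is a minimal sketch of MSP430G2553 firmware (CCS-style C) that receives single-byte drive commands over the LaunchPad's UART and switches GPIO pins that could drive a robot's motor H-bridge. The command bytes ('F', 'B', 'L', 'R') and the P2.0..P2.3 pin assignments are illustrative assumptions, not the protocol used in the paper; on the PC side, the gesture, color, or speech recognizer would write such bytes to the LaunchPad's virtual COM port.

```c
#include <msp430g2553.h>

/* Assumed H-bridge wiring on port 2 (hypothetical; adjust to hardware). */
#define LEFT_FWD   BIT0   /* P2.0 */
#define LEFT_REV   BIT1   /* P2.1 */
#define RIGHT_FWD  BIT2   /* P2.2 */
#define RIGHT_REV  BIT3   /* P2.3 */

static void drive(unsigned char pins)
{
    /* Clear all motor pins, then set the requested ones. */
    P2OUT = (P2OUT & ~(LEFT_FWD | LEFT_REV | RIGHT_FWD | RIGHT_REV)) | pins;
}

int main(void)
{
    WDTCTL  = WDTPW + WDTHOLD;          /* stop watchdog timer */
    BCSCTL1 = CALBC1_1MHZ;              /* run DCO at calibrated 1 MHz */
    DCOCTL  = CALDCO_1MHZ;

    P2DIR |= LEFT_FWD | LEFT_REV | RIGHT_FWD | RIGHT_REV;  /* motor pins as outputs */
    drive(0);                           /* motors off at start */

    P1SEL  |= BIT1 + BIT2;              /* P1.1 = UCA0RXD, P1.2 = UCA0TXD */
    P1SEL2 |= BIT1 + BIT2;
    UCA0CTL1 |= UCSSEL_2;               /* UART clock = SMCLK (1 MHz) */
    UCA0BR0   = 104;                    /* 1 MHz / 9600 baud */
    UCA0BR1   = 0;
    UCA0MCTL  = UCBRS0;                 /* modulation for 9600 baud */
    UCA0CTL1 &= ~UCSWRST;               /* release USCI from reset */
    IE2 |= UCA0RXIE;                    /* enable RX interrupt */

    __bis_SR_register(LPM0_bits + GIE); /* sleep; wake on each received byte */
    return 0;
}

#pragma vector = USCIAB0RX_VECTOR
__interrupt void uart_rx_isr(void)
{
    switch (UCA0RXBUF) {                /* one ASCII byte per command */
    case 'F': drive(LEFT_FWD | RIGHT_FWD); break;  /* forward   */
    case 'B': drive(LEFT_REV | RIGHT_REV); break;  /* backward  */
    case 'L': drive(RIGHT_FWD);            break;  /* turn left */
    case 'R': drive(LEFT_FWD);             break;  /* turn right */
    default:  drive(0);                    break;  /* stop on any other byte */
    }
}
```

Under these assumptions, a host program would send, for example, 'F' when a forward gesture is recognized and any other byte to stop, so all recognition logic stays on the PC while the microcontroller remains a simple command decoder.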

Publication
In INDICON 2012, IEEE.