Journal for Research | Volume 04 | Issue 01 | March 2018
ISSN: 2395-7549
All rights reserved by www.journal4research.org 48
HCI based Application for Playing Computer
Games
Mr. K. S. Chandrasekaran, Assistant Professor, Department of Computer Science & Engineering, Saranathan College of Engineering-620012, India
A. Varun Ignatius, UG Student, Department of Computer Science & Engineering, Saranathan College of Engineering-620012, India
R. Vasanthaguru, UG Student, Department of Computer Science & Engineering, Saranathan College of Engineering-620012, India
B. Vengataramanan, UG Student, Department of Computer Science & Engineering, Saranathan College of Engineering-620012, India
B. Vishnu Shankar, UG Student, Department of Computer Science & Engineering, Saranathan College of Engineering-620012, India
Abstract
This paper describes a command interface for games based on hand gestures and voice commands, defined by postures, movement and location. The system uses computer vision and requires no sensors or markers on the user. For voice commands, a speech recognizer identifies the input from the user, stores it and passes the corresponding command to the game, where the action takes place. We propose a simple architecture for performing real-time colour detection and motion tracking using a webcam. The next step is to track the motion of the specified colours, and the resulting actions are given as input commands to the system. We specify the colour blue for motion tracking and green for the mouse pointer. Speech recognition is the process of automatically recognizing a certain word spoken by a particular speaker based on individual information included in the speech waves. This application helps reduce hardware requirements and can also be implemented in other electronic devices.
Keywords: Computer Vision, Gesture Recognition, Voice Command, Human Computer Interaction
_______________________________________________________________________________________________________
I. INTRODUCTION
Computer games are one of the most successful application domains in the history of interactive systems, even with conventional input devices like the mouse and keyboard. Existing devices, the mouse for instance, seriously limit the way humans interact with computers. Introducing HCI techniques to the gaming world would revolutionize the way humans play games. The system is a command interface for games based on hand gestures and voice commands, defined by postures, movement and location. The proposed system uses a simple webcam and a PC for recognizing the input from the user, and thus uses natural hand movements for playing games. This effectively reduces the cost of implementing HCI on conventional PCs. We propose a simple architecture for performing real-time colour detection and motion tracking using a webcam. Since many colours are detected, it is important to distinguish the specified colours reliably. The next step is to track the motion of the specified colours; the resulting actions are given as input commands to the system.
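The colour-detection idea above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the real system uses OpenCV's built-in functions, and the thresholds below are hypothetical values, but the logic reduces to classifying each pixel by colour and locating the matching region.

```python
# Hedged sketch of colour-pointer detection; thresholds are illustrative.

def classify_pixel(r, g, b):
    """Label a pixel as the blue pointer, the green pointer, or background."""
    if b > 150 and b > r + 50 and b > g + 50:
        return "blue"    # motion-tracking pointer colour
    if g > 150 and g > r + 50 and g > b + 50:
        return "green"   # mouse-pointer colour
    return "background"

def centroid(frame, label):
    """Average (x, y) of all pixels in `frame` matching `label`, or None."""
    points = [(x, y)
              for y, row in enumerate(frame)
              for x, (r, g, b) in enumerate(row)
              if classify_pixel(r, g, b) == label]
    if not points:
        return None
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

# A tiny 2x2 "frame": one blue pixel at (1, 0), the rest background.
frame = [[(0, 0, 0), (10, 10, 255)],
         [(0, 0, 0), (0, 0, 0)]]
print(centroid(frame, "blue"))  # -> (1.0, 0.0)
```

Tracking then amounts to recomputing the centroid on each captured frame and interpreting its movement.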
Speech technology is very popular right now, and speech recognition is the process of automatically recognizing a certain word spoken by a particular speaker based on individual information included in the speech waves. Using speech recognition, we can give commands to a computer and the computer will perform the given task. The main objective of this project is to construct and develop a system that executes operating-system commands using a speech recognition system capable of recognizing and responding to speech input rather than traditional means of input (e.g. keyboard and mouse), thus saving the user time and effort. The proposed system makes it easier to increase the interaction between people and computers through speech recognition, especially for those who suffer from health problems; for example, it helps physically challenged persons. This application helps reduce hardware requirements and can also be implemented in other electronic devices. In this work, we use a Tic-Tac-Toe game as the demonstration application.
II. RELATED WORK
A few works have been proposed recently to use free hand gestures in games using computer vision. A multimodal multiplayer
gaming system combines a small number of postures, their location on a table-based interaction system and speech commands to
interact with games and discusses results of using this platform to interact with popular games. In this study, a colour pointer has
been used for object recognition and tracking. Instead of conventional fingertips, a colour pointer has been used to make object detection easy and fast. Other tools facilitate the use of gesture recognition for applications in general, not only games.
Speech technology is very popular right now; speech recognition is in high demand and has many useful applications. Users of conventional systems rely on pervasive devices such as a mouse and keyboard to interact with the system. Furthermore, people with physical challenges find conventional systems hard to use. A good system places minimal restrictions on how its user interacts with it. The speed of typing and handwriting is usually about one word per second, so speaking may be the fastest form of communication with a computer. Applications with voice recognition can also be very helpful for handicapped people who have difficulties with typing.
III. SPECIFIC REQUIREMENTS
For hand gestures, we use OpenCV. In this study, a colour pointer has been used for object recognition and tracking, with blue and green as the pointer colours. Colour detection is performed using built-in functions in OpenCV.
OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision, originally developed by Intel's research centre. OpenCV is written in C++ and its primary interface is in C++, but it still retains a less comprehensive though extensive older C interface. OpenCV's application areas include facial recognition systems, gesture recognition, human-computer interaction (HCI) and mobile robotics. OpenCV is an open-source computer vision and machine learning software library, built to provide common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and modify the code. It has C, C++, Python, Java and MATLAB interfaces and supports Windows, Linux, Android and Mac OS. OpenCV leans mostly towards real-time vision applications and takes advantage of MMX and SSE instructions when available. Full-featured CUDA and OpenCL interfaces are being actively developed. There are over 500 algorithms and roughly ten times as many functions that compose or support those algorithms. OpenCV is written natively in C++ and has a template interface that works seamlessly with STL containers.
For voice input, we use the Sphinx package. CMU Sphinx, also called Sphinx for short, is the general term for a group of speech recognition systems. SPHINX was the first large-vocabulary, speaker-independent, continuous-speech recognizer, using multiple codebooks of various LPC-derived features. Two types of HMMs are used in SPHINX: context-independent phone models and function-word-dependent phone models. On a task using a bigram grammar, SPHINX achieved high word accuracy. This demonstrates the feasibility of speaker-independent continuous-speech recognition and the appropriateness of hidden Markov models for such a task. The Sphinx family includes a series of speech recognizers, and the speech decoders come with acoustic models and sample applications. The available resources also include software for acoustic model training.
Both of these components are used in the Tic-Tac-Toe game.
IV. DESCRIPTION
Hand Gestures
In the Tic-Tac-Toe game, OpenCV works as follows: the user wears one colour on a finger for movement and another colour for click actions. The user moves the hand in front of the webcam, and the images are captured by the OpenCV tool.
The captured images undergo segmentation by background removal, and the segmented image is processed by the recognizer. The recognized movement is mapped to the mouse tracker, and if the second colour is identified, the click action takes place.
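The mapping from a tracked fingertip to the on-screen cursor can be illustrated with a small sketch. This is a hedged example under stated assumptions: the function name and the 640x480 frame and 1920x1080 screen resolutions are illustrative, not the paper's values.

```python
# Illustrative sketch: map a tracked centroid from webcam frame coordinates
# to screen coordinates for the mouse tracker.

def to_screen(cx, cy, frame_size=(640, 480), screen_size=(1920, 1080)):
    """Scale a frame-space point to screen space, mirroring x so the
    on-screen cursor follows the hand as in a mirror."""
    fw, fh = frame_size
    sw, sh = screen_size
    mirrored_x = fw - 1 - cx          # webcam view is left-right flipped
    return (round(mirrored_x * sw / fw), round(cy * sh / fh))

print(to_screen(0, 0))      # -> (1917, 0): frame top-left maps to screen top-right
print(to_screen(320, 240))  # -> (957, 540): frame centre maps near screen centre
```

When the second colour is detected near the cursor position, a click event would be issued at that screen coordinate.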
Both the use of gestures and having games as an application bring specific requirements to an interface, and analyzing these requirements was one of the most important steps in designing Gestures. Gestures are most often used to relay singular commands or actions to the system, rather than tasks that may require continuous control, such as navigation. Therefore, it is recommended that gestures be part of a multimodal interface. This also brings other advantages, such as decoupling different tasks into different interaction modalities, which may reduce the user's cognitive load, even though gestures have been used for other interaction tasks, including navigation, in the past. All this leads to the requirement that the vocabulary of gestures in each context of the interface, while small, must be as simply and quickly modifiable as possible. Systems that require retraining for each set of possible gestures, for instance, could prove problematic in this case, unless such training could be easily automated. The interface should also accept small variations of each gesture. Demanding that postures and movements be precise, while possibly making the recognition task easier, makes the interaction considerably harder to use and learn, demanding not only that the user remember the gestures and their meanings but also train to perform them precisely, greatly reducing usability. It could be argued that, for particular games, reducing the usability could actually be part of the challenge presented to the player (the challenge could be remembering a large number of gestures, or learning how to execute them precisely, for instance). While the discussion of whether that is good game design practice is beyond the scope of this paper, Gestures opts for the more general goal of increasing usability as much as possible. This agrees with the principle that, for home and entertainment applications, ease of learning, reduction of user errors, satisfaction and low cost are among the most important design goals. The system should also allow playing at home with minimal setup time. Players prefer games where they can be introduced to the action as soon as possible, even while still learning the game and the interface. Therefore, the system should not require specific background or lighting conditions, complex
calibration or repeated training. Allowing the use of the gesture-based interface with conventional games is also advantageous to the user, providing new options to enjoy a larger number of games. From the developer's point of view, the system should be as easy as possible to integrate into a game, without requiring specific knowledge of areas such as computer vision or machine learning.
The Abstract Framework
Figure 1 shows a UML activity diagram representing the Gestures object flow model. Gestures is responsible for the gesture model, while GestureAnalysis and GestureRecognition define the interfaces for the classes that will implement gesture analysis and recognition. To these activities are added image capture and segmentation. GestureCapture provides an interface for capturing 2D images from one or multiple cameras or from prerecorded video streams (mostly for testing). The images must have the same size, but not necessarily the same colour depth. A device could provide, for instance, one or more colour images and a grayscale image representing a dense depth map. GestureSegmentation should usually find in the original image(s) one or both hands and possibly the head (to determine relative hand position).
Fig. 1: Hand Gesture Process
Figure 1 shows that the usual flow of information in Gestures in each time step is as follows: one or more images serve as input
to the image capture model, which makes these images available as an OpenCV Image object. The segmentation uses this image
and provides a segmented image as an object of the same class (and same image size, but not necessarily color depth). Based on
the segmented image, the analysis provides a collection of features as a GestureFeatureCol object which is in turn used by the
recognition to output a gesture.
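The data flow above can be sketched as a chain of small classes. This is an illustrative stub under stated assumptions: the class names follow the paper's framework (GestureCapture, GestureSegmentation, GestureAnalysis, GestureRecognition), but the method names and bodies are hypothetical placeholders, not the framework's actual code.

```python
# Hedged sketch of the Gestures pipeline: capture -> segmentation ->
# analysis (feature collection) -> recognition. Bodies are toy stand-ins.

class GestureCapture:
    def capture(self):
        return "raw_image"            # stands in for an OpenCV Image object

class GestureSegmentation:
    def segment(self, image):
        return f"segmented({image})"  # e.g. background removal

class GestureAnalysis:
    def analyse(self, segmented):
        # Returns the feature collection (GestureFeatureCol): here, a dict
        # of identifier -> value, mirroring named features.
        return {"hand_centroid": (120, 80)}

class GestureRecognition:
    def recognise(self, features):
        return "move_cursor" if "hand_centroid" in features else "none"

def pipeline():
    image = GestureCapture().capture()
    segmented = GestureSegmentation().segment(image)
    features = GestureAnalysis().analyse(segmented)
    return GestureRecognition().recognise(features)

print(pipeline())  # -> move_cursor
```

Each stage only depends on the previous stage's output type, which is what lets implementations be swapped independently.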
GestureFeatureCol is a collection of GestureFeature objects. GestureFeature contains an identifier string describing the feature and either a scalar, an array of values (most often used) or an image (useful, for instance, for features in the frequency domain). GestureFeature already defines several identifiers for the features most often found in the gesture recognition literature, to facilitate the interface between analysis and recognition, but user-created identifiers may also be used. Input is an optional module that accompanies, but is actually separate from, Gestures. It is responsible for facilitating, in a very simple way, both multimodal input and integration with games or engines not necessarily aware of Gestures.
It simply translates its input, which is a description (a numerical or string ID or an XML description, for instance) that may be supplied either by Gestures or by any other system (and here lies the possibility of multimodal interaction), into another type of input, such as a system input (like a key-down event) or input data for a particular game engine. In one of the tests, for instance, gestures are used for commands and a dancing mat is used for navigation. Because this architecture consists mostly of interfaces, it is possible to create a single class that, through multiple inheritance, implements the entire system functionality. This is usually considered a bad practice in object orientation and is actually one of the reasons why aggregation is preferred to inheritance. There are design patterns that could have been used to force the use of aggregation and avoid multiple inheritance, but Gestures opts to allow it for a reason. Gesture recognition may be a very costly task in terms of processing and must be done in real time for the purpose of interaction. Many algorithms may be better optimized for speed when performing more than one task (such as segmentation and analysis) together.
Furthermore, analysis and recognition are very tightly coupled in some algorithms, and forcing their separation could be difficult. So, while it is usually recommended to avoid multiple inheritance and to implement each task in a different class, making it much easier to exchange one module for another or to develop modules in parallel and in teams, the option to do otherwise exists, and for good reason.
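The trade-off just described can be made concrete with a small sketch: one class implements both the segmentation and analysis interfaces through multiple inheritance so the two steps can share intermediate work. Everything here is illustrative (interface and method names are assumptions), not the framework's real API.

```python
# Hedged sketch: a combined module implementing two pipeline interfaces
# via multiple inheritance, reusing work between stages for speed.

class GestureSegmentation:
    def segment(self, image):
        raise NotImplementedError

class GestureAnalysis:
    def analyse(self, segmented):
        raise NotImplementedError

class FastSegmenterAnalyser(GestureSegmentation, GestureAnalysis):
    """Performs segmentation and feature analysis in one pass."""

    def segment(self, image):
        # Toy "segmentation": threshold a 1-D list of pixel intensities.
        self._mask = [px > 128 for px in image]
        return self._mask

    def analyse(self, segmented):
        # Reuses the mask computed during segmentation.
        return {"foreground_pixels": sum(segmented)}

module = FastSegmenterAnalyser()
mask = module.segment([0, 200, 255, 50])
print(module.analyse(mask))  # -> {'foreground_pixels': 2}
```

Callers that only know the GestureSegmentation or GestureAnalysis interface can still use the combined object, which is the point of keeping the interfaces even when one class implements both.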
Speech Recognition
For voice input, we use the Sphinx library. Sphinx provides built-in functions that detect the speech input given by a particular user. In this case, we give the position number as input for the Tic-Tac-Toe game through a voice command. The speech recognizer identifies the user's input and stores it, so that it can be used as input to the Tic-Tac-Toe game.
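Turning a recognized position word into a board move can be sketched as follows. This is a hedged, illustrative sketch: the word list, board representation and function name are assumptions, not the paper's code.

```python
# Illustrative sketch: apply a spoken position (1-9, row-major) as a
# Tic-Tac-Toe move on a 3x3 board.

WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
         "six": 6, "seven": 7, "eight": 8, "nine": 9}

def apply_move(board, spoken_word, player):
    """Place `player`'s mark at the spoken position number."""
    pos = WORDS.get(spoken_word.lower())
    if pos is None:
        raise ValueError(f"unrecognized command: {spoken_word}")
    row, col = divmod(pos - 1, 3)   # position 5 -> centre cell (1, 1)
    if board[row][col] != " ":
        raise ValueError("cell already taken")
    board[row][col] = player
    return board

board = [[" "] * 3 for _ in range(3)]
apply_move(board, "five", "X")
print(board[1][1])  # -> X
```

Restricting the recognizer's grammar to these nine words also keeps recognition fast and reliable.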
Fig. 2: Speech Recognition Process
For voice input, the human voice is sampled at a rate of 16,000 samples per second and should be given in live mode. It should be noted that the system may have difficulty recognizing accented English, so it is recommended to give input as a native English speaker would. Platform speed directly affected our choice of a speech recognition system for this work. The recognizer is used to recognize the voice commands spoken by the user, with the voice input given through the microphone. The speech recognition process decodes this input, identifies the command and generates output accordingly. The dictionary file is a separate file that describes the phonetics of each word. It is a collection of pre-defined words that are relevant to various activities, with each word separated into syllables. The output from the recognizer (i.e. the recognition result) is cross-referenced with the dictionary file, and the correct word is returned as the result. The decoder module then parses this word and converts it into the equivalent text. This text is then used by the command executor to execute the required functions. The decoder module is built into the Sphinx system. The Microphone class provided by Sphinx for microphone control in Java offers the essential functions for working with a microphone connected to the system. The startRecording() function in the Microphone class can be used to capture audio from the connected microphone. The recognize() function in the Recognizer class returns a Result object, which can then be converted into a text command.
Speech recognition is the process of identifying the speech in recorded audio using the phonetics dictionary. The phonetics for each word are stored in the dictionary file. The recognize() function in the Recognizer class uses the grammar file to recognize any speech in the audio recorded by the microphone.
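The dictionary cross-referencing step can be illustrated with a toy lookup. This is a sketch only: the phonetic entries are loosely styled after CMU-dictionary notation but are illustrative, and the function is a hypothetical stand-in for the Sphinx decoder.

```python
# Hedged sketch: map a recognized phoneme string to a dictionary word,
# mirroring the cross-referencing with the dictionary file described above.

PHONETIC_DICT = {
    "W AH N": "one",
    "T UW": "two",
    "TH R IY": "three",
}

def decode(phonemes):
    """Return the dictionary word for a phoneme string, or a marker
    when the phonemes match no known word."""
    word = PHONETIC_DICT.get(phonemes)
    return word if word is not None else "<unknown>"

print(decode("T UW"))    # -> two
print(decode("Z Z Z"))   # -> <unknown>
```

A real Sphinx dictionary file lists one such phoneme sequence per word; the recognizer searches over these sequences rather than doing an exact lookup.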
A different, well-studied approach is the analysis of the human voice as a means of human-computer interaction. Speech processing has been studied since the 1960s and is very well researched, particularly the sub-field of speech recognition. Speaker recognition: the goal of a speaker recognition system is to determine to whom a recorded voice sample belongs. To achieve this, possible users of the system first need to enroll their voices, which are used to calculate a so-called model of each user. This model is later used to perform the matching when the system needs to decide the owner of a recorded voice section. Such a system can be built text-dependent, meaning that the user needs to speak the same phrase during enrollment and usage; this is similar to setting a password for a computer, which is then required to gain access. A recognition system can also operate text-independently, so that the user may speak a different phrase during enrollment and usage. This is more challenging to perform but provides more flexibility in how the system can be used.
Speaker recognition is mainly used for two tasks. Speaker verification assumes that a user claims a certain identity; the
system is then used to verify or refute this claim. A possible use case is controlling access to a building; here, the identity claim could be provided by a secondary system such as smart cards or fingerprints. Speaker identification does not assume a prior identity claim and tries to determine to whom a certain voice belongs. These systems can be built either for a closed group, where all possible users are known to the system beforehand, or for an open group, meaning that not all possible users of the system are already enrolled. Most of the time the open-group case is handled by building a model for an "unknown speaker": if the recorded voice sample fits this unknown model, the speaker is not yet enrolled. The system can then either ignore the voice or build a new model for this speaker, so that the speaker can later be identified.
Speech Recognition
The idea behind speech recognition is to provide a means of transcribing spoken phrases into written text. Such a system has many versatile capabilities, from controlling home appliances as well as lighting and heating in a home automation system, where only certain commands and keywords need to be recognized, to full speech transcription for note keeping or dictation. Many approaches exist to achieve this goal. The simplest technique is to build a model for every word that needs to be recognized; as discussed under pattern matching, this is not feasible for bigger vocabularies such as a whole language. Apart from constraining the accepted phrases, it must be considered whether the system will be used only by a single individual (so-called speaker-dependent) or should perform equally well for a broad spectrum of people (speaker-independent).
Speech Processing
Although speaker recognition and speech recognition systems achieve different goals, both are built from the same general
structure. This structure can be divided into the following stages:
 Signal generation models the process of speaking in the human body.
 Signal capturing & preconditioning deals with the digitalization of voice. This stage can also apply certain filters and
techniques to reduce noise or echo.
 Feature extraction takes the captured signal and extracts only the information that is of interest to the system.
 Pattern matching then tries to determine to what word or phrase the extracted features belong and concludes on the system output.
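The four stages above can be sketched end to end on a toy signal. This is a hedged illustration under stated assumptions: the DC-offset removal, per-frame energy features and nearest-template matching are simplistic stand-ins for real preconditioning, feature extraction and pattern matching, and all numbers are illustrative.

```python
# Illustrative pipeline: capture (given samples) -> preconditioning ->
# feature extraction -> pattern matching against stored templates.

def precondition(samples):
    """Crude preconditioning: remove the DC offset from the signal."""
    mean = sum(samples) / len(samples)
    return [s - mean for s in samples]

def frame_energies(samples, frame_len=4):
    """Feature extraction: energy of each fixed-length frame."""
    return [sum(s * s for s in samples[i:i + frame_len])
            for i in range(0, len(samples), frame_len)]

def match(features, templates):
    """Pattern matching: nearest template by squared distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda word: dist(features, templates[word]))

signal = [1, -1, 1, -1, 3, -3, 3, -3]      # "captured" toy waveform
feats = frame_energies(precondition(signal))
templates = {"quiet": [4, 4], "loud": [4, 36]}
print(match(feats, templates))  # -> loud
```

Real systems replace the energy features with richer representations (e.g. LPC-derived or cepstral features, as SPHINX uses) and the nearest-template step with HMM decoding, but the stage boundaries are the same.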
V. CONCLUSION
The proposed system architecture will completely revolutionize the way gaming applications are operated. The system makes use of a web camera, which is an integral part of any standard system, eliminating the need for additional peripheral devices. Our endeavour in object detection and image processing with OpenCV for implementing the gaming console proved to be practically successful. Here, a person's motions are tracked and interpreted as commands. Most gaming applications require additional hardware, which is often very costly. The motive was to create this technology in the cheapest possible way under a standardized operating system, using a set of wearable gesture interfaces. This technique can be further extended to other services as well. For speech recognition, the proposed system provides better performance and a better user-interaction experience, and it reduces the time involved in conventional interactive methods. Voice recognition is irrefutably the future of human-computer interaction, and our study proposes a new approach to utilize it for an enhanced user experience and to extend the ability of computer interaction to motor-impaired users.

 
Gesture control algorithm for personal computers
Gesture control algorithm for personal computersGesture control algorithm for personal computers
Gesture control algorithm for personal computerseSAT Journals
 

Similaire à HCI BASED APPLICATION FOR PLAYING COMPUTER GAMES | J4RV4I1014 (20)

Sixth sense technology
Sixth sense technologySixth sense technology
Sixth sense technology
 
Controlling Computer using Hand Gestures
Controlling Computer using Hand GesturesControlling Computer using Hand Gestures
Controlling Computer using Hand Gestures
 
HGR-thesis
HGR-thesisHGR-thesis
HGR-thesis
 
A Survey Paper on Controlling Computer using Hand Gestures
A Survey Paper on Controlling Computer using Hand GesturesA Survey Paper on Controlling Computer using Hand Gestures
A Survey Paper on Controlling Computer using Hand Gestures
 
1.Usability Engineering.pptx
1.Usability Engineering.pptx1.Usability Engineering.pptx
1.Usability Engineering.pptx
 
HAND GESTURE RECOGNITION.ppt (1).pptx
HAND GESTURE RECOGNITION.ppt (1).pptxHAND GESTURE RECOGNITION.ppt (1).pptx
HAND GESTURE RECOGNITION.ppt (1).pptx
 
IRJET- Sign Language Interpreter
IRJET- Sign Language InterpreterIRJET- Sign Language Interpreter
IRJET- Sign Language Interpreter
 
Sign Language Detection using Action Recognition
Sign Language Detection using Action RecognitionSign Language Detection using Action Recognition
Sign Language Detection using Action Recognition
 
Sign Language Recognition using Mediapipe
Sign Language Recognition using MediapipeSign Language Recognition using Mediapipe
Sign Language Recognition using Mediapipe
 
SIXTH SENSE TECHNOLOGY REPORT
SIXTH SENSE TECHNOLOGY REPORTSIXTH SENSE TECHNOLOGY REPORT
SIXTH SENSE TECHNOLOGY REPORT
 
Real Time Head & Hand Tracking Using 2.5D Data
Real Time Head & Hand Tracking Using 2.5D Data Real Time Head & Hand Tracking Using 2.5D Data
Real Time Head & Hand Tracking Using 2.5D Data
 
Demonstration of visual based and audio-based hci system
Demonstration of visual based and audio-based hci systemDemonstration of visual based and audio-based hci system
Demonstration of visual based and audio-based hci system
 
Gestures Based Sign Interpretation System using Hand Glove
Gestures Based Sign Interpretation System using Hand GloveGestures Based Sign Interpretation System using Hand Glove
Gestures Based Sign Interpretation System using Hand Glove
 
IRJET- Virtual Vision for Blinds
IRJET- Virtual Vision for BlindsIRJET- Virtual Vision for Blinds
IRJET- Virtual Vision for Blinds
 
Wake-up-word speech recognition using GPS on smart phone
Wake-up-word speech recognition using GPS on smart phoneWake-up-word speech recognition using GPS on smart phone
Wake-up-word speech recognition using GPS on smart phone
 
A Translation Device for the Vision Based Sign Language
A Translation Device for the Vision Based Sign LanguageA Translation Device for the Vision Based Sign Language
A Translation Device for the Vision Based Sign Language
 
Virtual Mouse Control Using Hand Gestures
Virtual Mouse Control Using Hand GesturesVirtual Mouse Control Using Hand Gestures
Virtual Mouse Control Using Hand Gestures
 
“SKYE : Voice Based AI Desktop Assistant”
“SKYE : Voice Based AI Desktop Assistant”“SKYE : Voice Based AI Desktop Assistant”
“SKYE : Voice Based AI Desktop Assistant”
 
HAND GESTURE BASED SPEAKING SYSTEM FOR THE MUTE PEOPLE
HAND GESTURE BASED SPEAKING SYSTEM FOR THE MUTE PEOPLEHAND GESTURE BASED SPEAKING SYSTEM FOR THE MUTE PEOPLE
HAND GESTURE BASED SPEAKING SYSTEM FOR THE MUTE PEOPLE
 
Gesture control algorithm for personal computers
Gesture control algorithm for personal computersGesture control algorithm for personal computers
Gesture control algorithm for personal computers
 

Plus de Journal For Research

Design and Analysis of Hydraulic Actuator in a Typical Aerospace vehicle | J4...
Design and Analysis of Hydraulic Actuator in a Typical Aerospace vehicle | J4...Design and Analysis of Hydraulic Actuator in a Typical Aerospace vehicle | J4...
Design and Analysis of Hydraulic Actuator in a Typical Aerospace vehicle | J4...Journal For Research
 
Experimental Verification and Validation of Stress Distribution of Composite ...
Experimental Verification and Validation of Stress Distribution of Composite ...Experimental Verification and Validation of Stress Distribution of Composite ...
Experimental Verification and Validation of Stress Distribution of Composite ...Journal For Research
 
Image Binarization for the uses of Preprocessing to Detect Brain Abnormality ...
Image Binarization for the uses of Preprocessing to Detect Brain Abnormality ...Image Binarization for the uses of Preprocessing to Detect Brain Abnormality ...
Image Binarization for the uses of Preprocessing to Detect Brain Abnormality ...Journal For Research
 
A Research Paper on BFO and PSO Based Movie Recommendation System | J4RV4I1016
A Research Paper on BFO and PSO Based Movie Recommendation System | J4RV4I1016A Research Paper on BFO and PSO Based Movie Recommendation System | J4RV4I1016
A Research Paper on BFO and PSO Based Movie Recommendation System | J4RV4I1016Journal For Research
 
IoT based Digital Agriculture Monitoring System and Their Impact on Optimal U...
IoT based Digital Agriculture Monitoring System and Their Impact on Optimal U...IoT based Digital Agriculture Monitoring System and Their Impact on Optimal U...
IoT based Digital Agriculture Monitoring System and Their Impact on Optimal U...Journal For Research
 
A REVIEW PAPER ON BFO AND PSO BASED MOVIE RECOMMENDATION SYSTEM | J4RV4I1015
A REVIEW PAPER ON BFO AND PSO BASED MOVIE RECOMMENDATION SYSTEM | J4RV4I1015A REVIEW PAPER ON BFO AND PSO BASED MOVIE RECOMMENDATION SYSTEM | J4RV4I1015
A REVIEW PAPER ON BFO AND PSO BASED MOVIE RECOMMENDATION SYSTEM | J4RV4I1015Journal For Research
 
A REVIEW ON DESIGN OF PUBLIC TRANSPORTATION SYSTEM IN CHANDRAPUR CITY | J4RV4...
A REVIEW ON DESIGN OF PUBLIC TRANSPORTATION SYSTEM IN CHANDRAPUR CITY | J4RV4...A REVIEW ON DESIGN OF PUBLIC TRANSPORTATION SYSTEM IN CHANDRAPUR CITY | J4RV4...
A REVIEW ON DESIGN OF PUBLIC TRANSPORTATION SYSTEM IN CHANDRAPUR CITY | J4RV4...Journal For Research
 
A REVIEW ON LIFTING AND ASSEMBLY OF ROTARY KILN TYRE WITH SHELL BY FLEXIBLE G...
A REVIEW ON LIFTING AND ASSEMBLY OF ROTARY KILN TYRE WITH SHELL BY FLEXIBLE G...A REVIEW ON LIFTING AND ASSEMBLY OF ROTARY KILN TYRE WITH SHELL BY FLEXIBLE G...
A REVIEW ON LIFTING AND ASSEMBLY OF ROTARY KILN TYRE WITH SHELL BY FLEXIBLE G...Journal For Research
 
LABORATORY STUDY OF STRONG, MODERATE AND WEAK SANDSTONES | J4RV4I1012
LABORATORY STUDY OF STRONG, MODERATE AND WEAK SANDSTONES | J4RV4I1012LABORATORY STUDY OF STRONG, MODERATE AND WEAK SANDSTONES | J4RV4I1012
LABORATORY STUDY OF STRONG, MODERATE AND WEAK SANDSTONES | J4RV4I1012Journal For Research
 
DESIGN ANALYSIS AND FABRICATION OF MANUAL RICE TRANSPLANTING MACHINE | J4RV4I...
DESIGN ANALYSIS AND FABRICATION OF MANUAL RICE TRANSPLANTING MACHINE | J4RV4I...DESIGN ANALYSIS AND FABRICATION OF MANUAL RICE TRANSPLANTING MACHINE | J4RV4I...
DESIGN ANALYSIS AND FABRICATION OF MANUAL RICE TRANSPLANTING MACHINE | J4RV4I...Journal For Research
 
AN OVERVIEW: DAKNET TECHNOLOGY - BROADBAND AD-HOC CONNECTIVITY | J4RV4I1009
AN OVERVIEW: DAKNET TECHNOLOGY - BROADBAND AD-HOC CONNECTIVITY | J4RV4I1009AN OVERVIEW: DAKNET TECHNOLOGY - BROADBAND AD-HOC CONNECTIVITY | J4RV4I1009
AN OVERVIEW: DAKNET TECHNOLOGY - BROADBAND AD-HOC CONNECTIVITY | J4RV4I1009Journal For Research
 
CHATBOT FOR COLLEGE RELATED QUERIES | J4RV4I1008
CHATBOT FOR COLLEGE RELATED QUERIES | J4RV4I1008CHATBOT FOR COLLEGE RELATED QUERIES | J4RV4I1008
CHATBOT FOR COLLEGE RELATED QUERIES | J4RV4I1008Journal For Research
 
AN INTEGRATED APPROACH TO REDUCE INTRA CITY TRAFFIC AT COIMBATORE | J4RV4I1002
AN INTEGRATED APPROACH TO REDUCE INTRA CITY TRAFFIC AT COIMBATORE | J4RV4I1002AN INTEGRATED APPROACH TO REDUCE INTRA CITY TRAFFIC AT COIMBATORE | J4RV4I1002
AN INTEGRATED APPROACH TO REDUCE INTRA CITY TRAFFIC AT COIMBATORE | J4RV4I1002Journal For Research
 
A REVIEW STUDY ON GAS-SOLID CYCLONE SEPARATOR USING LAPPLE MODEL | J4RV4I1001
A REVIEW STUDY ON GAS-SOLID CYCLONE SEPARATOR USING LAPPLE MODEL | J4RV4I1001A REVIEW STUDY ON GAS-SOLID CYCLONE SEPARATOR USING LAPPLE MODEL | J4RV4I1001
A REVIEW STUDY ON GAS-SOLID CYCLONE SEPARATOR USING LAPPLE MODEL | J4RV4I1001Journal For Research
 
IMAGE SEGMENTATION USING FCM ALGORITM | J4RV3I12021
IMAGE SEGMENTATION USING FCM ALGORITM | J4RV3I12021IMAGE SEGMENTATION USING FCM ALGORITM | J4RV3I12021
IMAGE SEGMENTATION USING FCM ALGORITM | J4RV3I12021Journal For Research
 
USE OF GALVANIZED STEELS FOR AUTOMOTIVE BODY- CAR SURVEY RESULTS AT COASTAL A...
USE OF GALVANIZED STEELS FOR AUTOMOTIVE BODY- CAR SURVEY RESULTS AT COASTAL A...USE OF GALVANIZED STEELS FOR AUTOMOTIVE BODY- CAR SURVEY RESULTS AT COASTAL A...
USE OF GALVANIZED STEELS FOR AUTOMOTIVE BODY- CAR SURVEY RESULTS AT COASTAL A...Journal For Research
 
UNMANNED AERIAL VEHICLE FOR REMITTANCE | J4RV3I12023
UNMANNED AERIAL VEHICLE FOR REMITTANCE | J4RV3I12023UNMANNED AERIAL VEHICLE FOR REMITTANCE | J4RV3I12023
UNMANNED AERIAL VEHICLE FOR REMITTANCE | J4RV3I12023Journal For Research
 
SURVEY ON A MODERN MEDICARE SYSTEM USING INTERNET OF THINGS | J4RV3I12024
SURVEY ON A MODERN MEDICARE SYSTEM USING INTERNET OF THINGS | J4RV3I12024SURVEY ON A MODERN MEDICARE SYSTEM USING INTERNET OF THINGS | J4RV3I12024
SURVEY ON A MODERN MEDICARE SYSTEM USING INTERNET OF THINGS | J4RV3I12024Journal For Research
 
AN IMPLEMENTATION FOR FRAMEWORK FOR CHEMICAL STRUCTURE USING GRAPH GRAMMAR | ...
AN IMPLEMENTATION FOR FRAMEWORK FOR CHEMICAL STRUCTURE USING GRAPH GRAMMAR | ...AN IMPLEMENTATION FOR FRAMEWORK FOR CHEMICAL STRUCTURE USING GRAPH GRAMMAR | ...
AN IMPLEMENTATION FOR FRAMEWORK FOR CHEMICAL STRUCTURE USING GRAPH GRAMMAR | ...Journal For Research
 

Plus de Journal For Research (20)

Design and Analysis of Hydraulic Actuator in a Typical Aerospace vehicle | J4...
Design and Analysis of Hydraulic Actuator in a Typical Aerospace vehicle | J4...Design and Analysis of Hydraulic Actuator in a Typical Aerospace vehicle | J4...
Design and Analysis of Hydraulic Actuator in a Typical Aerospace vehicle | J4...
 
Experimental Verification and Validation of Stress Distribution of Composite ...
Experimental Verification and Validation of Stress Distribution of Composite ...Experimental Verification and Validation of Stress Distribution of Composite ...
Experimental Verification and Validation of Stress Distribution of Composite ...
 
Image Binarization for the uses of Preprocessing to Detect Brain Abnormality ...
Image Binarization for the uses of Preprocessing to Detect Brain Abnormality ...Image Binarization for the uses of Preprocessing to Detect Brain Abnormality ...
Image Binarization for the uses of Preprocessing to Detect Brain Abnormality ...
 
A Research Paper on BFO and PSO Based Movie Recommendation System | J4RV4I1016
A Research Paper on BFO and PSO Based Movie Recommendation System | J4RV4I1016A Research Paper on BFO and PSO Based Movie Recommendation System | J4RV4I1016
A Research Paper on BFO and PSO Based Movie Recommendation System | J4RV4I1016
 
IoT based Digital Agriculture Monitoring System and Their Impact on Optimal U...
IoT based Digital Agriculture Monitoring System and Their Impact on Optimal U...IoT based Digital Agriculture Monitoring System and Their Impact on Optimal U...
IoT based Digital Agriculture Monitoring System and Their Impact on Optimal U...
 
A REVIEW PAPER ON BFO AND PSO BASED MOVIE RECOMMENDATION SYSTEM | J4RV4I1015
A REVIEW PAPER ON BFO AND PSO BASED MOVIE RECOMMENDATION SYSTEM | J4RV4I1015A REVIEW PAPER ON BFO AND PSO BASED MOVIE RECOMMENDATION SYSTEM | J4RV4I1015
A REVIEW PAPER ON BFO AND PSO BASED MOVIE RECOMMENDATION SYSTEM | J4RV4I1015
 
A REVIEW ON DESIGN OF PUBLIC TRANSPORTATION SYSTEM IN CHANDRAPUR CITY | J4RV4...
A REVIEW ON DESIGN OF PUBLIC TRANSPORTATION SYSTEM IN CHANDRAPUR CITY | J4RV4...A REVIEW ON DESIGN OF PUBLIC TRANSPORTATION SYSTEM IN CHANDRAPUR CITY | J4RV4...
A REVIEW ON DESIGN OF PUBLIC TRANSPORTATION SYSTEM IN CHANDRAPUR CITY | J4RV4...
 
A REVIEW ON LIFTING AND ASSEMBLY OF ROTARY KILN TYRE WITH SHELL BY FLEXIBLE G...
A REVIEW ON LIFTING AND ASSEMBLY OF ROTARY KILN TYRE WITH SHELL BY FLEXIBLE G...A REVIEW ON LIFTING AND ASSEMBLY OF ROTARY KILN TYRE WITH SHELL BY FLEXIBLE G...
A REVIEW ON LIFTING AND ASSEMBLY OF ROTARY KILN TYRE WITH SHELL BY FLEXIBLE G...
 
LABORATORY STUDY OF STRONG, MODERATE AND WEAK SANDSTONES | J4RV4I1012
LABORATORY STUDY OF STRONG, MODERATE AND WEAK SANDSTONES | J4RV4I1012LABORATORY STUDY OF STRONG, MODERATE AND WEAK SANDSTONES | J4RV4I1012
LABORATORY STUDY OF STRONG, MODERATE AND WEAK SANDSTONES | J4RV4I1012
 
DESIGN ANALYSIS AND FABRICATION OF MANUAL RICE TRANSPLANTING MACHINE | J4RV4I...
DESIGN ANALYSIS AND FABRICATION OF MANUAL RICE TRANSPLANTING MACHINE | J4RV4I...DESIGN ANALYSIS AND FABRICATION OF MANUAL RICE TRANSPLANTING MACHINE | J4RV4I...
DESIGN ANALYSIS AND FABRICATION OF MANUAL RICE TRANSPLANTING MACHINE | J4RV4I...
 
AN OVERVIEW: DAKNET TECHNOLOGY - BROADBAND AD-HOC CONNECTIVITY | J4RV4I1009
AN OVERVIEW: DAKNET TECHNOLOGY - BROADBAND AD-HOC CONNECTIVITY | J4RV4I1009AN OVERVIEW: DAKNET TECHNOLOGY - BROADBAND AD-HOC CONNECTIVITY | J4RV4I1009
AN OVERVIEW: DAKNET TECHNOLOGY - BROADBAND AD-HOC CONNECTIVITY | J4RV4I1009
 
LINE FOLLOWER ROBOT | J4RV4I1010
LINE FOLLOWER ROBOT | J4RV4I1010LINE FOLLOWER ROBOT | J4RV4I1010
LINE FOLLOWER ROBOT | J4RV4I1010
 
CHATBOT FOR COLLEGE RELATED QUERIES | J4RV4I1008
CHATBOT FOR COLLEGE RELATED QUERIES | J4RV4I1008CHATBOT FOR COLLEGE RELATED QUERIES | J4RV4I1008
CHATBOT FOR COLLEGE RELATED QUERIES | J4RV4I1008
 
AN INTEGRATED APPROACH TO REDUCE INTRA CITY TRAFFIC AT COIMBATORE | J4RV4I1002
AN INTEGRATED APPROACH TO REDUCE INTRA CITY TRAFFIC AT COIMBATORE | J4RV4I1002AN INTEGRATED APPROACH TO REDUCE INTRA CITY TRAFFIC AT COIMBATORE | J4RV4I1002
AN INTEGRATED APPROACH TO REDUCE INTRA CITY TRAFFIC AT COIMBATORE | J4RV4I1002
 
A REVIEW STUDY ON GAS-SOLID CYCLONE SEPARATOR USING LAPPLE MODEL | J4RV4I1001
A REVIEW STUDY ON GAS-SOLID CYCLONE SEPARATOR USING LAPPLE MODEL | J4RV4I1001A REVIEW STUDY ON GAS-SOLID CYCLONE SEPARATOR USING LAPPLE MODEL | J4RV4I1001
A REVIEW STUDY ON GAS-SOLID CYCLONE SEPARATOR USING LAPPLE MODEL | J4RV4I1001
 
IMAGE SEGMENTATION USING FCM ALGORITM | J4RV3I12021
IMAGE SEGMENTATION USING FCM ALGORITM | J4RV3I12021IMAGE SEGMENTATION USING FCM ALGORITM | J4RV3I12021
IMAGE SEGMENTATION USING FCM ALGORITM | J4RV3I12021
 
USE OF GALVANIZED STEELS FOR AUTOMOTIVE BODY- CAR SURVEY RESULTS AT COASTAL A...
USE OF GALVANIZED STEELS FOR AUTOMOTIVE BODY- CAR SURVEY RESULTS AT COASTAL A...USE OF GALVANIZED STEELS FOR AUTOMOTIVE BODY- CAR SURVEY RESULTS AT COASTAL A...
USE OF GALVANIZED STEELS FOR AUTOMOTIVE BODY- CAR SURVEY RESULTS AT COASTAL A...
 
UNMANNED AERIAL VEHICLE FOR REMITTANCE | J4RV3I12023
UNMANNED AERIAL VEHICLE FOR REMITTANCE | J4RV3I12023UNMANNED AERIAL VEHICLE FOR REMITTANCE | J4RV3I12023
UNMANNED AERIAL VEHICLE FOR REMITTANCE | J4RV3I12023
 
SURVEY ON A MODERN MEDICARE SYSTEM USING INTERNET OF THINGS | J4RV3I12024
SURVEY ON A MODERN MEDICARE SYSTEM USING INTERNET OF THINGS | J4RV3I12024SURVEY ON A MODERN MEDICARE SYSTEM USING INTERNET OF THINGS | J4RV3I12024
SURVEY ON A MODERN MEDICARE SYSTEM USING INTERNET OF THINGS | J4RV3I12024
 
AN IMPLEMENTATION FOR FRAMEWORK FOR CHEMICAL STRUCTURE USING GRAPH GRAMMAR | ...
AN IMPLEMENTATION FOR FRAMEWORK FOR CHEMICAL STRUCTURE USING GRAPH GRAMMAR | ...AN IMPLEMENTATION FOR FRAMEWORK FOR CHEMICAL STRUCTURE USING GRAPH GRAMMAR | ...
AN IMPLEMENTATION FOR FRAMEWORK FOR CHEMICAL STRUCTURE USING GRAPH GRAMMAR | ...
 

Dernier

Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Christo Ananth
 
chapter 5.pptx: drainage and irrigation engineering
chapter 5.pptx: drainage and irrigation engineeringchapter 5.pptx: drainage and irrigation engineering
chapter 5.pptx: drainage and irrigation engineeringmulugeta48
 
Thermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.pptThermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.pptDineshKumar4165
 
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...ranjana rawat
 
notes on Evolution Of Analytic Scalability.ppt
notes on Evolution Of Analytic Scalability.pptnotes on Evolution Of Analytic Scalability.ppt
notes on Evolution Of Analytic Scalability.pptMsecMca
 
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756dollysharma2066
 
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordCCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordAsst.prof M.Gokilavani
 
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptxBSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptxfenichawla
 
Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01KreezheaRecto
 
Double Revolving field theory-how the rotor develops torque
Double Revolving field theory-how the rotor develops torqueDouble Revolving field theory-how the rotor develops torque
Double Revolving field theory-how the rotor develops torqueBhangaleSonal
 
University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdfKamal Acharya
 
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELLPVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELLManishPatel169454
 
UNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its PerformanceUNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its Performancesivaprakash250
 
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Bookingdharasingh5698
 
Call Girls In Bangalore ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Bangalore ☎ 7737669865 🥵 Book Your One night StandCall Girls In Bangalore ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Bangalore ☎ 7737669865 🥵 Book Your One night Standamitlee9823
 
Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)simmis5
 
Bhosari ( Call Girls ) Pune 6297143586 Hot Model With Sexy Bhabi Ready For ...
Bhosari ( Call Girls ) Pune  6297143586  Hot Model With Sexy Bhabi Ready For ...Bhosari ( Call Girls ) Pune  6297143586  Hot Model With Sexy Bhabi Ready For ...
Bhosari ( Call Girls ) Pune 6297143586 Hot Model With Sexy Bhabi Ready For ...tanu pandey
 

Dernier (20)

Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
 
chapter 5.pptx: drainage and irrigation engineering
chapter 5.pptx: drainage and irrigation engineeringchapter 5.pptx: drainage and irrigation engineering
chapter 5.pptx: drainage and irrigation engineering
 
Thermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.pptThermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.ppt
 
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
 
notes on Evolution Of Analytic Scalability.ppt
notes on Evolution Of Analytic Scalability.pptnotes on Evolution Of Analytic Scalability.ppt
notes on Evolution Of Analytic Scalability.ppt
 
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756
 
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
 
Roadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and RoutesRoadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and Routes
 
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordCCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
 
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptxBSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
 
Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01
 
Double Revolving field theory-how the rotor develops torque
Double Revolving field theory-how the rotor develops torqueDouble Revolving field theory-how the rotor develops torque
Double Revolving field theory-how the rotor develops torque
 
University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdf
 
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELLPVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
 
UNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its PerformanceUNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its Performance
 
Water Industry Process Automation & Control Monthly - April 2024
Water Industry Process Automation & Control Monthly - April 2024Water Industry Process Automation & Control Monthly - April 2024
Water Industry Process Automation & Control Monthly - April 2024
 
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
 
Call Girls In Bangalore ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Bangalore ☎ 7737669865 🥵 Book Your One night StandCall Girls In Bangalore ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Bangalore ☎ 7737669865 🥵 Book Your One night Stand
 
Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)
 
Bhosari ( Call Girls ) Pune 6297143586 Hot Model With Sexy Bhabi Ready For ...
Bhosari ( Call Girls ) Pune  6297143586  Hot Model With Sexy Bhabi Ready For ...Bhosari ( Call Girls ) Pune  6297143586  Hot Model With Sexy Bhabi Ready For ...
Bhosari ( Call Girls ) Pune 6297143586 Hot Model With Sexy Bhabi Ready For ...
 

HCI BASED APPLICATION FOR PLAYING COMPUTER GAMES | J4RV4I1014

I. INTRODUCTION

Computer games are one of the most successful application domains in the history of interactive systems, even with conventional input devices such as the mouse and keyboard. Existing input devices, the mouse for instance, seriously limit the way humans interact with computers, and introducing HCI techniques to the gaming world would change the way humans play games. The proposed system is a command interface for games based on hand gestures and voice commands defined by postures, movement and location. It uses a simple webcam and a PC to recognize input from the user, so natural hand movements can be used to play games. This effectively reduces the cost of implementing HCI on conventional PCs.

We propose a simple architecture for performing real-time colour detection and motion tracking using a webcam. Since many colours are present in a scene, it is important to distinguish the specified colours reliably. The next step is to track the motion of the specified colours, and the resulting actions are given as input commands to the system.

Speech recognition is the process of automatically recognizing a word spoken by a particular speaker based on individual information contained in the speech waves. Using speech recognition, we can give commands to the computer and the computer will perform the given task. The main objective of this project is to build a system that executes operating-system commands through speech recognition, i.e. a system capable of recognizing and responding to speech input rather than relying on traditional means of input (e.g. keyboard, mouse), thus saving the user time and effort. The proposed system also makes interaction between people and computers easier, especially for those who suffer from health problems; for example, it can help physically challenged persons.
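The voice-command path described above can be sketched as a small dispatch table: the recognizer (e.g. Sphinx) is assumed to supply a decoded word, which is normalized and looked up against known commands. The command names and the tic-tac-toe cell actions below are hypothetical illustrations, not the system's actual vocabulary.

```python
# Minimal sketch of the voice-command dispatch described above.
# The recognizer (e.g. CMU Sphinx) is assumed to hand us a decoded word;
# the command names and cell-addressing scheme below are hypothetical.

def normalize(word):
    """Lower-case and strip the decoded word so minor variations still match."""
    return word.strip().lower()

def dispatch(word, commands):
    """Look up the decoded word and run its action.
    Unknown words are ignored rather than raising, since recognizers misfire."""
    action = commands.get(normalize(word))
    return action() if action else None

# Hypothetical game actions for a tic-tac-toe board (cells 0-8).
board = [""] * 9
commands = {
    "one":  lambda: board.__setitem__(0, "X") or "placed at 1",
    "five": lambda: board.__setitem__(4, "X") or "placed at 5",
    "quit": lambda: "quit",
}

print(dispatch(" FIVE ", commands))  # case/whitespace variations still match
```

Keeping the table as plain data means new spoken commands can be added without touching the recognizer itself.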
This application will help reduce the hardware requirement and can be implemented in other electronic devices as well. In this work, we use a TIC-TAC-TOE game as the target application.

II. RELATED WORK

A few works have recently proposed the use of free-hand gestures in games using computer vision. A multimodal multiplayer gaming system combines a small number of postures, their location on a table-based interaction system and speech commands to interact with games, and discusses the results of using this platform with popular games. In this study, a colour pointer has been used for object recognition and tracking: instead of the bare fingertips, a colour pointer makes object detection easy and fast. Other tools facilitate the use of gesture recognition for applications in general, not only games.

Speech technology is very popular right now; speech recognition is in high demand and has many useful applications. Users of conventional systems rely on pervasive devices such as a mouse and keyboard to interact with the system, and people with physical challenges often find conventional systems hard to use. A good system places minimal restrictions on interaction with its user. The speed of typing and handwriting is usually about one word per second, so speaking may be the fastest form of communication with a computer. Applications with voice recognition can also be very helpful for handicapped people who have difficulty typing.

III. SPECIFIC REQUIREMENTS

For hand gestures, we use OpenCV. In this study, a colour pointer has been used for object recognition and tracking; blue and green are used as the pointer colours, and colour detection is performed using built-in OpenCV functions. OpenCV (Open Source Computer Vision) is a library of programming functions aimed mainly at real-time computer vision, originally developed by Intel's research centre. OpenCV is written in C++ and its primary interface is in C++, but it still retains a less comprehensive though extensive older C interface. Its application areas include facial recognition systems, gesture recognition, human-computer interaction (HCI) and mobile robotics. OpenCV is an open-source computer vision and machine learning software library, built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products.
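The colour-pointer test described above is typically done with `cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)` followed by `cv2.inRange(hsv, lo, hi)`; the sketch below writes out the per-pixel check in plain Python so the logic is visible. The HSV ranges are illustrative assumptions, not calibrated values.

```python
# Sketch of the blue/green colour-pointer test. In OpenCV this is
# cv2.cvtColor + cv2.inRange on the whole frame; here a single BGR pixel
# is classified so the thresholding logic is explicit. The HSV ranges
# below are illustrative assumptions, not calibrated values.

import colorsys

# Hue on OpenCV's 0-179 scale; saturation and value on 0-255.
BLUE_RANGE  = ((100, 150, 50), (130, 255, 255))   # motion-tracking pointer
GREEN_RANGE = ((40, 150, 50),  (80, 255, 255))    # mouse-pointer colour

def to_hsv(b, g, r):
    """Convert a BGR pixel (0-255 channels) to OpenCV-style HSV."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return (int(h * 179), int(s * 255), int(v * 255))

def in_range(hsv, lo, hi):
    return all(lo[i] <= hsv[i] <= hi[i] for i in range(3))

def classify_pixel(b, g, r):
    """Return which pointer colour (if any) a BGR pixel belongs to."""
    hsv = to_hsv(b, g, r)
    if in_range(hsv, *BLUE_RANGE):
        return "blue"
    if in_range(hsv, *GREEN_RANGE):
        return "green"
    return None
```

In practice the ranges would be tuned to the actual markers and lighting; HSV is used precisely because the hue channel is fairly stable under brightness changes.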
Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and modify the code. It has C, C++, Python, Java and MATLAB interfaces and supports Windows, Linux, Android and Mac OS. OpenCV leans mostly towards real-time vision applications and takes advantage of MMX and SSE instructions when available. Full-featured CUDA and OpenCL interfaces are being actively developed. There are over 500 algorithms and about 10 times as many functions that compose or support those algorithms. OpenCV is written natively in C++ and has a template interface that works seamlessly with STL containers.

For voice input, we use the Sphinx package. CMU Sphinx, also called Sphinx for short, is the general term for a group of speech recognition systems. SPHINX was the first large-vocabulary speaker-independent continuous-speech recognizer, using multiple codebooks of various LPC-derived features. Two types of HMMs are used in SPHINX: context-independent phone models and function-word-dependent phone models. On a task using a bigram grammar, SPHINX achieved high word accuracy. This demonstrates the feasibility of speaker-independent continuous-speech recognition and the appropriateness of hidden Markov models for such a task. The package includes a series of speech recognizers; the speech decoders come with acoustic models and sample applications, and the available resources also include software for acoustic model training. These two components are used in the Tic-Tac-Toe game.

IV. DESCRIPTION

Hand Gestures

In the Tic-Tac-Toe game, OpenCV works as follows: the user wears one colour on a finger for movement and another colour for click actions. The user moves the hand in front of the webcam and the images are captured through OpenCV. The captured images undergo segment analysis by background removal, and the segmented image is processed by the recognizer.
The recognized movement is mapped to the mouse tracker, and when the other colour is identified, the click action takes place. Both the use of gestures and having games as the application bring specific requirements to an interface, and analyzing these requirements was one of the most important steps in designing Gestures. Gestures are most often used to relay singular commands or actions to the system, rather than tasks that may require continuous control, such as navigation. Therefore, it is recommended that gestures be part of a multimodal interface. This also brings other advantages, such as decoupling different tasks into different interaction modalities, which may reduce the user's cognitive load; gestures have, however, also been used for other interaction tasks in the past, including navigation. All this leads to the requirement that the vocabulary of gestures in each context of the interface, while small, must be as simply and quickly modifiable as possible. Systems that require retraining for each set of possible gestures, for instance, could prove problematic in this case, unless such training could be easily automated. The interface should also accept small variations in each gesture. Demanding that postures and movements be precise, while possibly making the recognition task easier, makes the interaction considerably harder to use and learn: the user must not only remember the gestures and their meanings but also train to perform them precisely, greatly reducing usability. It could be argued that, for particular games, reducing usability could actually be part of the challenge presented to the player (the challenge could be remembering a large number of gestures, or learning how to execute them precisely, for instance). While whether that is good game design practice is beyond the scope of this paper, Gestures opts for the more general goal of increasing usability as much as possible.
This agrees with the principle that, for home and entertainment applications, ease of learning, reduction of user errors, satisfaction and low cost are among the most important design goals. The system should also allow playing at home with minimal setup time. Players prefer games where they are introduced to the action as soon as possible, even while still learning the game and the interface. Therefore, the system should not require a specific background or lighting conditions, complex
calibration or repeated training. Allowing the use of the gesture-based interface with conventional games is also advantageous to the user, providing new options to enjoy a larger number of games. From the developer's point of view, the system should be as easy as possible to integrate into a game, without requiring specific knowledge of areas such as computer vision or machine learning.

The Abstract Framework

Figure 1 shows a UML activity diagram representing the Gestures object flow model. Gesture is responsible for the gesture model, while GestureAnalysis and GestureRecognition define the interfaces for the classes that implement gesture analysis and recognition. To these activities are added image capture and segmentation. GestureCapture provides an interface for capturing 2D images from one or multiple cameras or from prerecorded video streams (mostly for testing). The images must have the same size, but not necessarily the same colour depth: a device could provide, for instance, one or more colour images and a grayscale image representing a dense depth map. GestureSegmentation should usually find in the original image(s) one or both hands and possibly the head (to determine relative hand position).

Fig. 1: Hand Gesture Process

Figure 1 shows that the usual flow of information in Gestures at each time step is as follows: one or more images serve as input to the image capture module, which makes these images available as an OpenCV Image object. The segmentation uses this image and provides a segmented image as an object of the same class (with the same image size, but not necessarily the same colour depth). Based on the segmented image, the analysis provides a collection of features as a GestureFeatureCol object, which is in turn used by the recognition to output a gesture. GestureFeatureCol is a collection of GestureFeature objects.
GestureFeature contains an identifier string describing the feature and either a scalar, an array of values (most often used) or an image (useful, for instance, for features in the frequency domain). GestureFeature already defines identifiers for the features most often found in the gesture recognition literature, to facilitate the interface between analysis and recognition, but user-created identifiers may also be used.

Input is an optional module that accompanies, but is actually separate from, Gestures. It is responsible for facilitating, in a very simple way, both multimodal input and integration with games or engines not necessarily aware of Gestures. It simply translates its input, a description (a numerical or string ID or an XML description, for instance) that may be supplied either by Gestures or by any other system (and here lies the possibility of multimodal interaction), into another type of input, such as a system input (like a key-down event) or input data for a particular game engine. In one of the tests, for instance, gestures are used for commands and a dancing mat is used for navigation.

Because this architecture consists mostly of interfaces, it is possible to create a single class that, through multiple inheritance, implements the entire system functionality. This is usually considered bad practice in object orientation and is actually one of the reasons why aggregation is preferred to inheritance. There are design patterns that could have been used to force the use of aggregation and avoid multiple inheritance, but Gestures allows it for a reason: gesture recognition may be a very costly task in terms of processing and must be done in real time for the purpose of interaction, and many algorithms may be better optimized for speed when performing more than one task (such as segmentation and analysis) together.
Furthermore, analysis and recognition are very tightly coupled in some algorithms, and forcing their separation could be difficult. So, while it is usually recommended to avoid multiple inheritance and to implement each task in a different class, making it much easier to exchange one module for another or to develop modules in parallel and in teams, the option to do otherwise exists, and for good reason.
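The abstract framework described above can be sketched as a set of interfaces. The class names follow the text; the method names, signatures and the use of Python are assumptions for illustration, not the framework's actual API.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field

@dataclass
class GestureFeature:
    """One named feature; the value may be a scalar, an array or an image."""
    identifier: str
    value: object

@dataclass
class GestureFeatureCol:
    """Collection of features passed from analysis to recognition."""
    features: list = field(default_factory=list)

class GestureCapture(ABC):
    @abstractmethod
    def capture(self):
        """Return one or more same-sized images from cameras or video."""

class GestureSegmentation(ABC):
    @abstractmethod
    def segment(self, image):
        """Return a segmented image (hands and possibly head)."""

class GestureAnalysis(ABC):
    @abstractmethod
    def analyze(self, segmented) -> GestureFeatureCol:
        """Extract a feature collection from the segmented image."""

class GestureRecognition(ABC):
    @abstractmethod
    def recognize(self, features: GestureFeatureCol) -> str:
        """Map a feature collection to a gesture identifier."""
```

Because these are only interfaces, a single class could implement several of them at once (the multiple-inheritance option discussed above), or each stage could live in its own class for easier module exchange.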
Speech Recognition

For voice input, we use the Sphinx library. Sphinx provides built-in functions that detect the speech input given by a particular user. In this case, we give the position number as input to the Tic-Tac-Toe game through a voice command. The speech recognizer recognizes the user input and stores it, so that it can be used as an input in the Tic-Tac-Toe game.

Fig. 2: Speech Recognition Process

For voice input, the human voice is sampled at a rate of 16,000 samples per second and should be given in live mode. It should be noted that the system may have difficulty recognizing accented English, so it is recommended to give input as native English speakers would. Platform speed directly affected our choice of speech recognition system for this work. The recognizer is used to recognize the voice commands spoken by the user; the voice input is given through the microphone. The speech recognition process decodes this input, identifies the command and generates output accordingly. The dictionary file is a separate file that describes the phonetics of each word: a collection of pre-defined words relevant to the various activities, with each word separated into syllables. The output from the recognizer (i.e., the recognition result) is cross-referenced with the dictionary file and the correct word is returned as the result. The decoder module then parses this word and converts it into the equivalent text. This text is then used by the command executor to execute the required functions. The decoder module is built into the Sphinx system. The Microphone class from Sphinx provides the essential functions for working with a microphone connected to the system.
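The dictionary cross-reference and command translation described above can be sketched without any speech library. Everything here is an illustrative assumption: the phoneme entries only imitate the spirit of a Sphinx dictionary file, and the word-to-cell mapping for the Tic-Tac-Toe board is invented for the example.

```python
# Illustrative phonetic dictionary (word -> phoneme sequence), in the
# spirit of a Sphinx dictionary file; entries are simplified assumptions.
PHONETIC_DICT = {
    "ONE":   ["W", "AH", "N"],
    "TWO":   ["T", "UW"],
    "THREE": ["TH", "R", "IY"],
}

# Assumed mapping from a spoken position word to a (row, column) cell.
POSITIONS = {"ONE": (0, 0), "TWO": (0, 1), "THREE": (0, 2)}

def decode_command(recognized_word):
    """Cross-reference the recognizer output with the dictionary and
    translate it into a board cell, as the decoder/command executor does."""
    word = recognized_word.upper()
    if word not in PHONETIC_DICT:
        return None            # not in the vocabulary: ignore the input
    return POSITIONS.get(word)

print(decode_command("two"))   # → (0, 1)
```

A full game would extend both tables to all nine positions and feed the returned cell to the move-making routine.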
The startRecording() function in the Microphone class can be used to capture audio from the microphone connected to the system. It returns an object of the Result class, which can be converted into a text command using the recognize() function of the Recognizer class. Speech recognition is the process of identifying the speech in the recorded audio using the phonetics dictionary; the phonetics for each word are stored in the dictionary file. The recognize() function in the Recognizer class uses the grammar file to recognize any speech in the audio recorded by the microphone.

A different approach that has been studied very thoroughly is the analysis of the human voice as a means of human-computer interaction. Speech processing has been studied since the 1960s and is very well researched, in particular speech recognition.

Speaker Recognition

The goal of a speaker recognition system is to determine to whom a recorded voice sample belongs. To achieve this, possible users of the system first need to enroll their voice, which is used to calculate a so-called model of the user. This model is later used to perform the matching when the system needs to decide the owner of a recorded voice section. Such a system can be built text-dependent, meaning that the user needs to speak the same phrase during enrollment and usage of the system; this is similar to setting a password on a computer, which is then required to gain access. A recognition system can also operate text-independent, so that the user may speak a different phrase during enrollment and usage. This is more challenging but provides a more flexible way of using the system. Speaker recognition is mainly used for two tasks. Speaker verification assumes that a user claims to be of a certain identity. The
system is then used to verify or refute this claim. A possible use case is access control for a building; here, the identity claim could be provided by a secondary system such as smart cards or fingerprints. Speaker identification does not assume a prior identity claim and tries to determine to whom a certain voice belongs. These systems can be built either for a closed group, where all possible users are known to the system beforehand, or for an open group, meaning that not all possible users of the system are already enrolled. Most of the time this is achieved by building a model for an "unknown speaker": if the recorded voice sample fits this "unknown" model, the speaker is not yet enrolled. The system can then either ignore the voice or build a new model for this speaker, so that he can later be identified.

Speech Recognition

The idea behind speech recognition is to transcribe spoken phrases into written text. Such a system has many versatile capabilities, from controlling home appliances as well as light and heating in a home automation system, where only certain commands and keywords need to be recognized, to full speech transcription for note keeping or dictation. Many approaches exist to achieve this goal. The simplest technique is to build a model for every word that needs to be recognized; as shown in the section on pattern matching, this is not feasible for larger vocabularies such as a whole language. Apart from constraining the accepted phrases, it must be considered whether the system is used only by a single individual (speaker-dependent) or should perform equally well for a broad spectrum of people (speaker-independent).
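A toy illustration of the whole-word pattern-matching idea above: each vocabulary word gets one template feature vector, and recognition picks the nearest template. Real recognizers model phoneme sequences with HMMs instead; the vectors, the distance measure and the vocabulary here are purely illustrative assumptions.

```python
import math

# One assumed template feature vector per vocabulary word.
TEMPLATES = {
    "one":   [0.9, 0.1, 0.4],
    "two":   [0.2, 0.8, 0.1],
    "three": [0.5, 0.3, 0.9],
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize_word(features):
    """Whole-word pattern matching: return the word of the nearest template."""
    return min(TEMPLATES, key=lambda w: euclidean(TEMPLATES[w], features))

print(recognize_word([0.85, 0.15, 0.35]))   # → "one"
```

The sketch also shows why this approach does not scale: one template per word is workable for a handful of commands, but not for a whole language.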
Speech Processing

Although speaker recognition and speech recognition systems achieve different goals, both are built from the same general structure, which can be divided into the following stages:
- Signal generation models the process of speaking in the human body.
- Signal capturing & preconditioning deals with the digitalization of the voice. This stage can also apply filters and techniques to reduce noise or echo.
- Feature extraction takes the captured signal and extracts only the information that is of interest to the system.
- Pattern matching then tries to determine to which word or phrase the extracted features belong and concludes on the system output.

V. CONCLUSION

The proposed system architecture will completely revolutionize the way gaming applications are operated. The system makes use of a web camera, an integral part of any standard system, eliminating the need for additional peripheral devices. Our endeavour of object detection and image processing in OpenCV for the implementation of the gaming console proved practically successful: a person's motions are tracked and interpreted as commands. Most gaming applications require additional hardware, which is often very costly; our motive was to build this technology in the cheapest possible way, under a standard operating system, using a set of wearable gesture interfaces. This technique can be further extended to other services as well. In speech recognition, the proposed system provides better performance and experience in user interaction, and reduces the time involved in conventional interactive methods. Voice recognition is irrefutably the future of human-computer interaction, and our study proposes a new approach to utilize it for an enhanced user experience and to extend the ability of computer interaction to motor-impaired users.

REFERENCES

[1] D. Bowman, E. Kruijff, J. LaViola, I.
Poupyrev, 3D User Interfaces: Theory and Practice, Addison-Wesley, 2005.
[2] T. Starner, B. Leibe, B. Singletary, J. Pair, "Mind-warping: towards creating a compelling collaborative augmented reality game", Proc. of the 5th International Conference on Intelligent User Interfaces (IUI '00), pp. 256-259, 2000.
[3] M. Snider, "Microsoft unveils hands-free gaming", USA Today, June 2009. [Online]. Available: http://www.usatoday.com/tech/gaming/2009-06-01-hands-free-microsoft_N.htm
[4] V. Dobnik, "Surgeons may err less by playing video games", 2004. [Online]. Available: http://www.msnbc.msn.com/id/4685909
[5] S. Laakso, M. Laakso, "Design of a Body-Driven Multiplayer Game System", ACM Computers in Entertainment, vol. 4, no. 4, 2006.
[6] A. Camurri, B. Mazzarino, G. Volpe, "Analysis of Expressive Gesture: The EyesWeb Expressive Gesture Processing Library", in Gesture-Based Communication in Human-Computer Interaction, Springer, pp. 469-470, 2004.
[7] B. Shneiderman, Designing the User Interface: Strategies for Effective Human-Computer Interaction, Addison-Wesley, 1998.
[8] A. Cinneide, Linear Prediction: The Technique, Its Solution and Application to Speech, Dublin Institute of Technology, 2008.
[9] R. Gutierrez-Osuna, CSCE 689-604: Special Topics in Speech Processing, Texas A&M University, 2011, courses.cs.tamu.edu/rgutier/csce689 s11.
[10] J. Schalkwyk et al., "Google Search by Voice: A Case Study", in Advances in Speech Recognition, Springer, 2010.
[11] L. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition", Proceedings of the IEEE, vol. 77, no. 2, pp. 257-286, 1989.
[12] J. P. Campbell Jr., "Speaker recognition: a tutorial", Proceedings of the IEEE, vol. 85, no. 9, pp. 1437-1462, 1997.