2. Signs-to-Speech: A Mobile App for Translating Sign Language to Speech Using Convolutional Neural Network is a mobile app that captures hand sign language, converts it letter by letter, and reads the resulting words aloud as audio.
3. OBJECTIVES
a. Gather and build an image dataset of the American Sign Language (ASL) alphabet.
b. Train a CNN model to classify the sign language alphabet on the gathered image dataset.
c. Compare the performance of the CNN model when trained with the Adam and SGD optimization algorithms.
d. Develop an Android application using the trained CNN model to translate real-life sign language letters into speech. (A training sketch for objectives b–d follows this list.)
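A minimal sketch of objectives b and c, assuming a Keras-style pipeline; the directory name asl_dataset/, the 64x64 grayscale input size, and the layer configuration are illustrative assumptions, not details from the source:

```python
# Minimal CNN sketch for ASL alphabet classification (objectives b and c).
# Assumptions (not from the source): 64x64 grayscale images, 26 letter
# classes, and a dataset laid out as asl_dataset/<letter>/*.png.
import tensorflow as tf

IMG_SIZE = (64, 64)
NUM_CLASSES = 26  # A-Z; the real class count may differ (e.g., J and Z use motion)

def build_model(optimizer):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(*IMG_SIZE, 1)),
        tf.keras.layers.Rescaling(1.0 / 255),          # normalize pixel values
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer=optimizer,
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Objective a: load the gathered image dataset with an 80/20 split.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "asl_dataset", validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, color_mode="grayscale")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "asl_dataset", validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, color_mode="grayscale")

# Objective c: train identical models with Adam and SGD and compare
# their validation accuracy.
for name in ("adam", "sgd"):
    history = build_model(name).fit(train_ds, validation_data=val_ds, epochs=10)
    print(name, "best val accuracy:", max(history.history["val_accuracy"]))
```

For objective d, the trained model would typically be exported to an on-device format such as TensorFlow Lite for the Android app, though the source does not name the deployment path.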
4. Ahmed KASAPBASI (2021)
More than 5% of the world's population is affected by hearing impairment.
To overcome the challenges faced by these individuals, various sign
languages have been developed as an easy and efficient means of
communication. Sign language relies on signs and gestures that convey meaning during communication.
Ankit Ojha (2020)
Communication between deaf people and the general public has become a difficult task nowadays, as society lacks a good translator for it, and having an app for it on our mobile phones would be a dream come true.
8. CONCLUSION
1. An image dataset of the ASL alphabet was gathered and built by the researchers and used in this study.
2. The CNN model trained on the gathered image dataset is able to classify the hand sign language alphabet.
3. Of the SGD and Adam optimization algorithms used to train on the dataset, Adam achieved the highest validation accuracy.
4. The Android application developed with the trained CNN model translates real-life sign language letters into speech with exemplary results.
RECOMMENDATION
1. The project would be more helpful if a spacing gesture were supported so that it could construct sentences (see the sketch after this list).
2. Use more data with high-quality images for better accuracy.
3. The project would be more helpful if it could identify whole sign language words, so that users would not need to spell them letter by letter.
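As an illustration of recommendation 1, a hedged sketch of how classified letters could be buffered into words and spoken when a dedicated "space" sign is detected. The SPACE class, the class and method names, and the use of pyttsx3 (an offline Python text-to-speech library) are all hypothetical; the actual app runs on Android.

```python
# Hypothetical letter-buffering sketch for recommendation 1: accumulate
# classified letters and speak the whole word once a "space" sign arrives.
import pyttsx3

class LetterToSpeech:
    def __init__(self):
        self.engine = pyttsx3.init()   # offline text-to-speech engine
        self.buffer = []

    def on_letter(self, letter: str) -> None:
        """Called each time the CNN classifies a hand sign as a letter."""
        if letter == "SPACE":          # hypothetical extra class for spacing
            self._speak_buffer()
        else:
            self.buffer.append(letter)

    def _speak_buffer(self) -> None:
        word = "".join(self.buffer)
        if word:
            self.engine.say(word)      # read the completed word aloud
            self.engine.runAndWait()
        self.buffer.clear()

tts = LetterToSpeech()
for sign in ["H", "I", "SPACE"]:       # stand-in for real classifier output
    tts.on_letter(sign)                # speaks "HI" when SPACE arrives
```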