IRJET - For(E)Sight :A Perceptive Device to Assist Blind People

IRJET Journal
IRJET JournalFast Track Publications

https://irjet.net/archives/V7/i3/IRJET-V7I3567.pdf

International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 07 Issue: 03 | Mar 2020 www.irjet.net p-ISSN: 2395-0072
© 2020, IRJET | Impact Factor value: 7.34 | ISO 9001:2008 Certified Journal | Page 2842
For(E)Sight : A Perceptive Device to Assist Blind People
Jajini M1, Udith Swetha S2, Gayathri k3, Harishma A4
1 Assistant Professor, Velammal College of Engineering and Technology
Viraganoor, Madurai, Tamil Nadu.
2,3,4Student, Electrical and Electronics Engineering, Velammal College of Engineering and Technology
Viraganoor, Madurai, Tamil Nadu.
---------------------------------------------------------------***---------------------------------------------------------------
Abstract- Blind mobility is one of the major challenges
faced by the visually impaired people in their daily lives.
Many electronic gadgets have been developed to help them
but they are reluctant to use those gadgets as they are
expensive, large in size and very complex to operate. This
paper presents a novel device called FOR(E)SIGHT to
facilitate normal life to the visually impaired people.
FOR(E)SIGHT is specifically designed to assist visually
challenged people in identifying the objects, text ,action with
the help of ultrasonic sensor and Camera modules as it
converts them to voice with the help of RASPBERRY PI 3 and
OPENCV. FOR(E)SIGHT helps the blind person to
communicate with the mute people effectively, as it
identifies the gesture language of mute using RASPBERRY PI
3 and its camera module. It also locates the position of
visually challenged people with the help of GPS of mobiles
interfaced with RASPBERRY PI 3. While performing the
above mentioned processes, the data is collected and stored
in cloud. The components used in this device are of low cost,
small size and easy incorporation, making it possible for the
device to be widely used as consumer and user friendly
device.
Keywords- Object detection, OPENCV, Deep learning,
Text detection, Gesture.
1. INTRODUCTION
Visual impairment is a prevalent disorder among people of
different age groups. The spread of visual impairment
throughout the world is an issue of greater significance.
According to the World Health Organization (WHO), about
8 percent of the population in the eastern Mediterranean
region is suffering from vision problems, including
blindness, poor vision and some sort of visual impairment.
They face many challenges and difficulties. It includes the
complexity of moving in complete autonomy and the
ability to identify objects. According to Foulke "Mobility is
the ability to travel safely, comfortably, gracefully and
independently through the world."Mobility is an
important key to personal independence. As visually
challenged people face a problem for safe and independent
mobility, a great deal of work has been devoted to the
development of technical aids in this field to enhance their
safe travel. Outdoor Travelling can be very demanding for
blind people. Nonetheless, if a blind person is able to move
outdoors, they gain confidence and will be more likely to
travel outdoors often leading to improved quality of their
life. Travelling can be made easier if the visually
challenged people are able to do the following tasks. They
are:
- Identify objects that lie as an obstacle in
the path.
- Identify any form of texts that convey
traffic rules.
Most blind people and people with vision difficulties were
not in a position to complete their studies. Special schools
for people with special needs are not available everywhere
and most of them are private and expensive thus it also
becomes a major obstacle for them to live a scholastic life.
Communication between people with different defects has
been impossible decades ago. Particularly the
communication between visually challenged people and
deaf-mute people is a challenging task. If complete
progress is made to enable such communication, the
people with impairment forget their inability and can live
an ambitious life in this competitive world.
The rest of the paper is organized as follows. Section II will
provide an overview of the various related works in the
area of using deep learning based computer vision
techniques for implementing smart devices for the visually
impaired. The detail of the methodology used is discussed
in section III. Proposed system design and implementation
will be presented in Section IV. Experimental results are
presented in Section V. Future scope is discussed in
Section VI. Conclusion is provided at section VII.
2. RELATED WORK
The application of technology to aid the blind has been a
post-World War II development. Navigation and
positioning techniques in mobile robotics have been
explored for blind assistance to improve mobility. Many
International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 07 Issue: 03 | Mar 2020 www.irjet.net p-ISSN: 2395-0072
© 2020, IRJET | Impact Factor value: 7.34 | ISO 9001:2008 Certified Journal | Page 2843
systems rely on heavy, complex, and expensive handheld
devices or ambient infrastructure, which limit their
portability and feasibility for daily uses. Various electronic
devices emerged to replace traditional tools with
augmented functions of distance measurement and
obstacle avoidance, however, the functionality or
flexibility of the devices are still very limited. They act as
aids for vision substitution. Some of such devices are:
2.1 Long cane:
The most popular mobility aid is, of course, the long cane.
It is designed primarily as a mobility tool to detect objects
in the path of user. Its height depends on the height of the
user.
2.2 eSight 3:
It was developed by CNET. It has High resolution camera
for capturing image and video to help people with poor
vision. The demerit is that it does not improve vision as it
just an aid. Further improvements are made to produce a
Water proof version of eSight 3.
2.3 Aira:
It was developed by SumanKanuganti. Aira uses smart
glasses to scan the environment Aira agents help users to
interpret their surroundings by smart glasses. But it takes
large waiting time for connecting it to the Aira agents in
order to be able to sense to include language translation
features.
2.4 Google Glasses:
It was developed by Google Inc. Google Glasses show
information without using hands gestures. Users can
communicate with the Internet via normal voice
commands. It can capture images and videos, get
directions, send messages, audio calling and real- time
translation using word lens app. But It is expensive
(1,349.99$).Efforts are made to reduce costs to make it
more affordable for the consumers.
2.5 Oton Glass:
It was developed by Keisuke Shimakage, Japan. It has
Support for dyslexic People as it Converts images to words
and then to audio. It looks like a normal glasses and it also
supports English and Japanese languages. But it supports
only people with reading difficulty and no support for
blind people. It can be improved to support blind people
also by including proximity sensors.
3. METHODOLOGY
FOR(E)SIGHT uses OPENCV(OPENSOURCE COMPUTER
VISION). Computer Vision Technology has played a vital
role in helping visually challenged people to carry out
their routine activities without much dependency on other
people. Smart devices are the solution which enables blind
or visually challenged people to live normal life.
FOR(E)SIGHT is an attempt in this direction to build a
novel smart device which has the ability to extract and
recognize object, text, action captured from an image and
convert it to speech. Detection is achieved using the
OpenCV software and open source Optical Character
Recognition (OCR) tools Tesseract based on Deep Learning
techniques. The image is converted into text and the
recognized text is further processed to convert to an
audible signal for the user. The novelty of the
implemented solution lies in providing the basic facilities
which is cost effective, small-sized and accurate. This
solution can be potentially used for both educational and
commercial applications. It consists of a Raspberry Pi 3 B+
microcontroller which processes the image captured from
a camera super-imposed on the glasses of the blind
person.
Fig.1
The block diagram indicates the basic process taking place
in FOR(E)SIGHT.(Fig.1)
The components used in FOR(E)SIGHT can be classified
into hardware and Software. They are listed and described
below,
3.1 Hardware Requirements:
It includes
1) Raspberry Pi 3B+
2) camera module
3) ultrasonic sensor
4) Headphones.
1) Raspberry Pi 3B+:
Raspberry Pi 3B+ is a credit card-sized computer that
contains several important functions on a single chip. It is
CAMERA
SUPPLY
RASPBERRY PI
USER
SPEECH
ULTRASONIC
SENSOR
International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 07 Issue: 03 | Mar 2020 www.irjet.net p-ISSN: 2395-0072
© 2020, IRJET | Impact Factor value: 7.34 | ISO 9001:2008 Certified Journal | Page 2844
a system on a chip (SoC).It needs to be connected with a
keyboard, mouse, display, power supply, SD card and
installed operating system. It is an embedded system that
can run as a no-frills PC, a pocket coding computer, a hub
for homemade hardware and more. It includes 40 GPIO
(General Purpose Input/output) pins to control various
sensors and actuators. Raspberry Pi 3 used for many
purposes such as education, coding, and building
hardware projects. It uses 1Python through which it can
accomplish many important tasks. The Raspberry Pi 3
uses Broadcom BCM2837 SoC Multimedia processor. The
Raspberry Pi’s CPU is the 4x ARM Cortex-A53, 1.2GHz
processor. It has internal memory of 1GB LPDDR RAM.
2) Camera Module:
The Raspberry Pi camera is 5MP and has a resolution of
2592x1944 pixels. It is used to capture stationary objects.
On the other hand IPcam can be used to capture actions.
IPcam of mobilephones can also be used for this purpose.
The IPcam has a view angle of 60° with a fixed focus. It can
capture images with maximum resolutions of 1289x720
pixels. It is compatible with most operating systems
platforms such as Linux, Windows, and MacOS.
3) Ultrasonic Sensor:
The ultrasonic sensor is used to measure the distance
using ultrasonic waves. It emits the ultrasonic waves and
receives back the echo. So, by measuring the time, the
ultrasonic sensor will measure the distance to the object.
It can sense distances in the range from 2-400 cm. In
FOR(E)SIGHT, the ultrasonic sensor measures the distance
between the camera and an object or text or action to
decode it into an image. It was observed based on
experimentation that the distance to the object should be
from 10 cm to 150 cm to capture a clear image.
4) Headphones:
Wired headphones was used since Raspberry Pi 3 B+ come
with Audio jack of 3.5mm.The headphones will be used to
help the user listen to the text that is been converted to
audio after it has been captured by the camera or to listen
to the translation of the text. The headphone used is small,
lightweight so the user will not feel uncomfortable
wearing them.
1
The code is available at :https://github.com/Udithswetha/foresight/tree/master/.gitignore
3.2 Software Requirements:
1) OpenCV Libraries:
OpenCV is a library of programming functions for real-
time computer vision. The library is cross-platform that is,
it is available on different platforms including Windows,
Linux, OS X, Android, iOSetc and it supports a wide variety
of programming languages like C++, Python ,Java etc.
OpenCV is free for use under the open-source BSD license
It performs real-time applications like image processing,
Human–computer interaction (HCI), Facial recognition
system, Gesture recognition, Mobile robotics, Motion
tracking, Motion understanding, Object identification,
Segmentation and recognition, Augmented reality and
many more.
2) OCR:
OCR (Optical Character Recognition) is used to convert
typed, printed or handwritten text into machine-encoded
text. There are some OCR software engines which try to
recognize any text in images such as Tesseract and EAST.
FOR(E)SIGHT uses Tesseract version 4, because it is the
best open source OCR engines. It is used directly, or using
an API to extract data from images. It supports a wide
variety of languages. It is open source software that has a
framework for building efficient speech synthesis systems.
3) TTS API:
Last function is text to voice conversion. In order to
implement this task, TTS (Text-to-Speech) is installed. It is
a python library that interfaced with certain API .TTS has
many features such as conversion of text to voice, provide
error pronunciation using customizable text pre-
processors.
4. PROPOSED SYSTEM
FOR(E)SIGHT is a device that is designed to use the
computer vision technology to capture an image and
extract English text and convert it into audio signal with
the help of speech synthesis. The overall system can be
viewed in three stages. They are,
4.1 Primary Unit:
1) Sensing:
The first unit is sensing unit. It comprises of ultrasonic
sensor and camera. Whenever the object or text or action
is noticed within a limited region of space, the camera
captures the image.
International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 07 Issue: 03 | Mar 2020 www.irjet.net p-ISSN: 2395-0072
© 2020, IRJET | Impact Factor value: 7.34 | ISO 9001:2008 Certified Journal | Page 2845
2) Image acquisition:
In this step, the camera captures the images of the object
or text or action. The quality of the image captured
depends on the camera used.
4.2 Processing unit:
It is a technique through which data of Image is obtained
in the form of numbers. It is done using OpenCV and OCR
tools. The libraries which are commonly used for this
are numpy, matplotlib and pillow.
This step consists of
1) Gray scale conversion:
The image is converted to gray scale because OpenCV
functions tend to work on the gray scale image.
NumPy includes array objects for representing vectors,
matrices, images and linear algebra functions. The array
object allows the important operations such as matrix
multiplication, solving equation systems, vector
multiplication, transposition and normalization, that are
needed to align and warp the images, model the variations
and classify the images. Arrays in NumPy are multi-
dimensional and can represent vectors, matrices, and
images. After reading images to NumPy arrays,
mathematical operations are performed.
2) Supplementary Process:
Noise is removed using bilateral filter. Canny edge
detection is performed on the gray scale image for better
detection of the contours. The warping and cropping of the
image are performed to remove the unwanted
background. In the end, Thresholding is done to make the
image look like a scanned document to allow the OCR to
convert the image to text efficiently.
NO
YES
FOR(E)SIGHT implements a sequential flow of algorithms.
The flow of the process is depicted.
4.3 Final unit:
The last step is to convert the obtained Text to speech
(TTS). TTS program is created in python by installing the
TTS engine (Espeak).The quality of the spoken voice
depends on the speech engine used.
5. EXPERIMENTAL RESULTS
The figure (Fig.2) shows the final prototype of
FOR(E)SIGHT.
Fig.2
It is noted that the hardware circuit (Raspberry Pi) is
placed on the glass head piece and the glasses can be worn
on the face which consists of the cameraand the ultrasonic
sensor. The sensor senses the object or text or gesture and
the pi camera captures it as an image. Though it is not
highly comfortable, it is comparatively cost effective and
beneficial.
5.1 Object Detection:
The main goal was to detect the presence of any obstacle
in front of the user by using ultrasonic sensing and image
processing techniques. FOR(E)SIGHT has successfully
completed the task.(Fig.3)
START
SENSE OBSTACLE
DIST<
10
CAPTURE AS IMAGE
PROCESS IMAGE-TEXT
SSTEXT
TEXT-> SPEECH
STOP
International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 07 Issue: 03 | Mar 2020 www.irjet.net p-ISSN: 2395-0072
© 2020, IRJET | Impact Factor value: 7.34 | ISO 9001:2008 Certified Journal | Page 2846
Fig.4
5.2 Text Detection:
In this result, the text detectorTessaract that was used in
this solutionwas checked whether it was working well.
The test showed mostly good results on big clear texts. It is
found that the recognition depends on the clarity of the
text in the picture, its font theme, size, and spacing
between the words. (Fig.5)
Fig.5
5.3 Gesture Detection:
In this test, the aim is to check the gesture is identified
with the help OCR tools.FOR(E)SIGHT came out with
positive results by detecting the gestures. But it is found
that FOR(E)SIGHT identifies only the still images of
gestures.(Fig.6)
Fig.6
6. FUTURE SCOPE
However, there are some limitations in the proposed
solution which can be addressed in the future
implementations. Hence, the following features are
recommended to be incorporated in the future versions.
- Time taken for each process can be
reduced for a better usage.
- Translation module can be developed, in
order to cater to a wide variety of users,
as it would be highly beneficial if it were a
multi-lingual featured device.
- To improve the direction and warning
messages to the user, GPS-based
navigation and alert system can be
included.
- Recognizing more amounts of gestures
would be helpful for performing more
tasks.
- High space visibility can be provided by
including a wide angle camera 2700
degrees as compared to 600 currently
used.
- To provide for more real-time experience,
video processing instead of still images
can be used instead of still images.
7. CONCLUSION
A novel device FOR(E)SIGHT for helping the visually
challenged has been proposed in this paper. It is able to
capture an image, extract and recognize the embedded
text and convert it into speech. Despite certain limitations,
the proposed design and implementation served as a
practical demonstration of how open source hardware and
software tools can be integrated to provide a solution
which is cost effective, portable and user-friendly. Thus
the device discussed in this paper has helped the visually
challenged by providing maximum benefits at a minimal
costs.
International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 07 Issue: 03 | Mar 2020 www.irjet.net p-ISSN: 2395-0072
© 2020, IRJET | Impact Factor value: 7.34 | ISO 9001:2008 Certified Journal | Page 2847
REFERENCES
[1] Saransh Sharma, Samyak Jain, Khushboo,” A Static
Hand Gesture and Face Recognition System For Blind
People”, 6th International Conference on Signal
Processing and Integrated Networks (SPIN), pp.534-
539, 2019.
[2] Jinqiang Bai, Shiguo Lian, Zhaoxiang Liu, Kai Wang,
Dijun Liu,” Virtual-blind Road following based
wearable navigation device for blind people ”, IEEE
Transactions on consumer electronics, vol. 31, pp.
132-140, February 8, 2018.
[3] Arun Kumar,Anurag Pandey,” Hand Gestures and
Speech Mapping of a Digital Camera For Differently
Abled”, IEEE 6th International Conference on
Advanced Computing, pp.392-397, 2016.
[4] H. Zhang and C. Ye, “An indoor navigation aid for the
visually impaired,”in 2016 IEEE Int. Conf. Robot.
Biomimetics (ROBIO), Qingdao, pp.467-472, 2016.
[5] Hanene Elleuch, Ali Wali, Anis Samet, Adel M. Alimi, “A
static hand gesture recognition system for real time
mobile device monitoring”, 2015 15th International
Conference on Intelligent Systems Design and
Applications (ISDA), pp. 195 – 200, 2015.
[6] Hongsheng he, Van li, Vong Guan, and Jindong Tan, ”
Wearable Ego-motion Tracking for blind navigation in
indoor environments”, IEEE transactions on
Automation science and engineering, vol. 12, no.4,
pp.1181- 1190, October 2015.
[7] X. C. Yin, X. Yin, K. Huang, H. W. Hao, "Robust text
detection in natural scene images " IEEE transactions
on Pattern Analysis and Machine Intelligence, 36(5),
pp. 970-980, 2014.
[8] Samleo L. Joseph, Jizhong Xiao, Xiaochen Zhang,
Bhupesh Chawda, Kanika Narang, Nitendra Rajput,
Sameep Mehta, and L. Venkata Subramaniam,” Being
Aware of the World: Toward Using Social Media to
Support the blind With Navigation”, IEEE Transactions
on Human-machine systems, vol. 33, no. 4, pp.1-7,
November 30, 2014.
[9] H. He and J. Tan, “Adaptive ego-motion tracking using
visual-inertial sensors for wearable blind navigation,”
in ACM Proceedings of Wireless Health Conerence,
pp.1-7, 2014.
[10] L. Neumann, J. Matas, "Scene text localization and
recognition with oriented stroke detection",
Proceedings of IEEE International Conference on
Computer Vision, pp. 97-104, 2013.
[11] Bissacco, M. Cummins, Y. Netzer, H. Neven, "Reading
text in uncontrolled conditions", Proceedings of the
IEEE International Conference on Computer Vision,
pp. 785-792, 2013.
[12] Y. T. Chucai Yi, “Text extraction from scene images by
character appearance and structure modeling,”
Computer Visual Image Understanding, vol. 117, no. 2,
pp. 182–194, 2013.
[13] World Health Organization—Visual Impairment and
Blindness. [Online]. Available:
http://www.who.int/mediacentre/factsheets/fs282/
en/ (2013).
[14] L. Neumann, J. Matas, "Real-time scene text
localization and recognition ", Proceedings of IEEE
International Conference onComputer Vision and
Pattern Recognition (CVPR), pp. 3538-3545, 2012.
[15] Dimitrios Dakopoulos and Nikolaos G. Bourbakis,”
Wearable obstacle avoidance electronic travel aids for
blind: A survey,” IEEE Transactions on systems, man,
and cybernetics-part c: applications and reviews”, vol.
40, no. 1, pp. 25-35, January 2010.
[16] David J. Calder,” Assistive technology interfaces for the
Blind”, 3rd IEEE International conference on Digital
Ecosystems and Technologies,pp.318-323,2009.
[17] B. Andò ,” A Smart Multisensor Approach to Assist
Blind People in Specific Urban Navigation Tasks”, IEEE
Transactions on Neural Systems and Rehabilitation
Engineering, vol. 16, no. 6, pp 592-594, December
2008.
[18] Hui Tang and David J. Beebe,” An Oral tactile interface
for blind navigation”, IEEE Transactions on neural
systems and rehabilitation engineering, vol. 14, no.1,
pp.116-123, March 2006.
[19] Mohammad Shorif Uddin and Tadayoshi Shioyama,”
Detection of Pedestrian Crossing Using Bipolarity
Feature-An Image-Based Technique”, IEEE
Transactions on intelligent transportation systems,
vol. 6, no. 4, pp.439-445, December 2005
[20] Y. J. Szeto, “A navigational aid for persons with severe
visual impairments: A project in progress,” presented
at the Proceedings of the 25th Annual International
Conference on IEEE Engineering Medical Biology
Society., Cancun, Mexico, Sep. 17–21, 2003.
[21] Andò, “Electronic sensory systems for the visually
impaired,” IEEE Magazine on Instrumentation and
Measurement, vol. 6, no. 2, pp. 62–67, Jun. 2003.

Recommandé

IRJET- Healthy Beat par
IRJET- Healthy BeatIRJET- Healthy Beat
IRJET- Healthy BeatIRJET Journal
30 vues6 diapositives
Visual, navigation and communication aid for visually impaired person par
Visual, navigation and communication aid for visually impaired person Visual, navigation and communication aid for visually impaired person
Visual, navigation and communication aid for visually impaired person IJECEIAES
24 vues8 diapositives
IRJET- Wide Angle View for Visually Impaired par
IRJET- Wide Angle View for Visually ImpairedIRJET- Wide Angle View for Visually Impaired
IRJET- Wide Angle View for Visually ImpairedIRJET Journal
22 vues4 diapositives
IRJET- Voice Assistant for Visually Impaired People par
IRJET-  	  Voice Assistant for Visually Impaired PeopleIRJET-  	  Voice Assistant for Visually Impaired People
IRJET- Voice Assistant for Visually Impaired PeopleIRJET Journal
236 vues2 diapositives
IRJET- Artificial Vision for Blind Person par
IRJET- Artificial Vision for Blind PersonIRJET- Artificial Vision for Blind Person
IRJET- Artificial Vision for Blind PersonIRJET Journal
50 vues4 diapositives
IRJET- A Survey on Indoor Navigation for Blind People par
IRJET- A Survey on Indoor Navigation for Blind PeopleIRJET- A Survey on Indoor Navigation for Blind People
IRJET- A Survey on Indoor Navigation for Blind PeopleIRJET Journal
56 vues3 diapositives

Contenu connexe

Tendances

Smart Cane for Blind Person Assisted with Android Application and Save Our So... par
Smart Cane for Blind Person Assisted with Android Application and Save Our So...Smart Cane for Blind Person Assisted with Android Application and Save Our So...
Smart Cane for Blind Person Assisted with Android Application and Save Our So...Dr. Amarjeet Singh
92 vues6 diapositives
IRJET- Smart Wheelchair with Object Detection using Deep Learning par
IRJET- Smart Wheelchair with Object Detection using Deep LearningIRJET- Smart Wheelchair with Object Detection using Deep Learning
IRJET- Smart Wheelchair with Object Detection using Deep LearningIRJET Journal
33 vues3 diapositives
IRJET- Indoor Shopping System for Visually Impaired People par
IRJET- Indoor Shopping System for Visually Impaired PeopleIRJET- Indoor Shopping System for Visually Impaired People
IRJET- Indoor Shopping System for Visually Impaired PeopleIRJET Journal
41 vues4 diapositives
ASSISTIVE TECHNOLOGY FOR STUDENTS WITH VISUAL IMPAIRMENT ANDAUTISTIC DISORDER... par
ASSISTIVE TECHNOLOGY FOR STUDENTS WITH VISUAL IMPAIRMENT ANDAUTISTIC DISORDER...ASSISTIVE TECHNOLOGY FOR STUDENTS WITH VISUAL IMPAIRMENT ANDAUTISTIC DISORDER...
ASSISTIVE TECHNOLOGY FOR STUDENTS WITH VISUAL IMPAIRMENT ANDAUTISTIC DISORDER...Hamzah Meraj, Faculty of Architecture, Jamia Millia Islamia, New delhi
7K vues33 diapositives
Obstacle Detection and Navigation system for Visually Impaired using Smart Shoes par
Obstacle Detection and Navigation system for Visually Impaired using Smart ShoesObstacle Detection and Navigation system for Visually Impaired using Smart Shoes
Obstacle Detection and Navigation system for Visually Impaired using Smart ShoesIRJET Journal
85 vues4 diapositives
IRJET- Advanced Voice Operating Wheelchair using Arduino par
IRJET- Advanced Voice Operating Wheelchair using ArduinoIRJET- Advanced Voice Operating Wheelchair using Arduino
IRJET- Advanced Voice Operating Wheelchair using ArduinoIRJET Journal
28 vues10 diapositives

Tendances(20)

Smart Cane for Blind Person Assisted with Android Application and Save Our So... par Dr. Amarjeet Singh
Smart Cane for Blind Person Assisted with Android Application and Save Our So...Smart Cane for Blind Person Assisted with Android Application and Save Our So...
Smart Cane for Blind Person Assisted with Android Application and Save Our So...
IRJET- Smart Wheelchair with Object Detection using Deep Learning par IRJET Journal
IRJET- Smart Wheelchair with Object Detection using Deep LearningIRJET- Smart Wheelchair with Object Detection using Deep Learning
IRJET- Smart Wheelchair with Object Detection using Deep Learning
IRJET Journal33 vues
IRJET- Indoor Shopping System for Visually Impaired People par IRJET Journal
IRJET- Indoor Shopping System for Visually Impaired PeopleIRJET- Indoor Shopping System for Visually Impaired People
IRJET- Indoor Shopping System for Visually Impaired People
IRJET Journal41 vues
Obstacle Detection and Navigation system for Visually Impaired using Smart Shoes par IRJET Journal
Obstacle Detection and Navigation system for Visually Impaired using Smart ShoesObstacle Detection and Navigation system for Visually Impaired using Smart Shoes
Obstacle Detection and Navigation system for Visually Impaired using Smart Shoes
IRJET Journal85 vues
IRJET- Advanced Voice Operating Wheelchair using Arduino par IRJET Journal
IRJET- Advanced Voice Operating Wheelchair using ArduinoIRJET- Advanced Voice Operating Wheelchair using Arduino
IRJET- Advanced Voice Operating Wheelchair using Arduino
IRJET Journal28 vues
Godeye An Efficient System for Blinds par ijtsrd
Godeye An Efficient System for BlindsGodeye An Efficient System for Blinds
Godeye An Efficient System for Blinds
ijtsrd56 vues
IRJET- Navigation and Camera Reading System for Visually Impaired par IRJET Journal
IRJET- Navigation and Camera Reading System for Visually ImpairedIRJET- Navigation and Camera Reading System for Visually Impaired
IRJET- Navigation and Camera Reading System for Visually Impaired
IRJET Journal14 vues
IRJET-Human Face Detection and Identification using Deep Metric Learning par IRJET Journal
IRJET-Human Face Detection and Identification using Deep Metric LearningIRJET-Human Face Detection and Identification using Deep Metric Learning
IRJET-Human Face Detection and Identification using Deep Metric Learning
IRJET Journal28 vues
An HCI Principles based Framework to Support Deaf Community par IJEACS
An HCI Principles based Framework to Support Deaf CommunityAn HCI Principles based Framework to Support Deaf Community
An HCI Principles based Framework to Support Deaf Community
IJEACS113 vues
Multi-modal Asian Conversation Mobile Video Dataset for Recognition Task par IJECEIAES
Multi-modal Asian Conversation Mobile Video Dataset for Recognition TaskMulti-modal Asian Conversation Mobile Video Dataset for Recognition Task
Multi-modal Asian Conversation Mobile Video Dataset for Recognition Task
IJECEIAES10 vues
IRJET- Gesture Drawing Android Application for Visually-Impaired People par IRJET Journal
IRJET- Gesture Drawing Android Application for Visually-Impaired PeopleIRJET- Gesture Drawing Android Application for Visually-Impaired People
IRJET- Gesture Drawing Android Application for Visually-Impaired People
IRJET Journal25 vues
IRJET- Review on Raspberry Pi based Assistive Communication System for Blind,... par IRJET Journal
IRJET- Review on Raspberry Pi based Assistive Communication System for Blind,...IRJET- Review on Raspberry Pi based Assistive Communication System for Blind,...
IRJET- Review on Raspberry Pi based Assistive Communication System for Blind,...
IRJET Journal62 vues
Wearable Computing and Human Computer Interfaces par Jeffrey Funk
Wearable Computing and Human Computer InterfacesWearable Computing and Human Computer Interfaces
Wearable Computing and Human Computer Interfaces
Jeffrey Funk6.1K vues
Assistance Application for Visually Impaired - VISION par IJSRED
Assistance Application for Visually  Impaired - VISIONAssistance Application for Visually  Impaired - VISION
Assistance Application for Visually Impaired - VISION
IJSRED27 vues
Navigation Assistance for Visually Challenged People par IRJET Journal
Navigation Assistance for Visually Challenged PeopleNavigation Assistance for Visually Challenged People
Navigation Assistance for Visually Challenged People
IRJET Journal92 vues
Face recognition smart cane using haar-like features and eigenfaces par TELKOMNIKA JOURNAL
Face recognition smart cane using haar-like features and eigenfacesFace recognition smart cane using haar-like features and eigenfaces
Face recognition smart cane using haar-like features and eigenfaces
IRJET- An Ameliorated Methodology of Smart Assistant to Physically Challenged... par IRJET Journal
IRJET- An Ameliorated Methodology of Smart Assistant to Physically Challenged...IRJET- An Ameliorated Methodology of Smart Assistant to Physically Challenged...
IRJET- An Ameliorated Methodology of Smart Assistant to Physically Challenged...
IRJET Journal9 vues
02 smart wheelchair11 (edit a) par IAESIJEECS
02 smart wheelchair11 (edit a)02 smart wheelchair11 (edit a)
02 smart wheelchair11 (edit a)
IAESIJEECS59 vues

Similaire à IRJET - For(E)Sight :A Perceptive Device to Assist Blind People

An Assistive System for Visually Impaired People par
An Assistive System for Visually Impaired PeopleAn Assistive System for Visually Impaired People
An Assistive System for Visually Impaired PeopleIRJET Journal
13 vues3 diapositives
IRJET- VI Spectacle – A Visual Aid for the Visually Impaired par
IRJET- VI Spectacle – A Visual Aid for the Visually ImpairedIRJET- VI Spectacle – A Visual Aid for the Visually Impaired
IRJET- VI Spectacle – A Visual Aid for the Visually ImpairedIRJET Journal
13 vues3 diapositives
IRJET- An Aid for Visually Impaired Pedestrain par
IRJET- An Aid for Visually Impaired PedestrainIRJET- An Aid for Visually Impaired Pedestrain
IRJET- An Aid for Visually Impaired PedestrainIRJET Journal
37 vues8 diapositives
Smart Stick for Blind People with Live Video Feed par
Smart Stick for Blind People with Live Video FeedSmart Stick for Blind People with Live Video Feed
Smart Stick for Blind People with Live Video FeedIRJET Journal
234 vues5 diapositives
Smart Gloves for Blind par
Smart Gloves for BlindSmart Gloves for Blind
Smart Gloves for BlindIRJET Journal
1.2K vues4 diapositives
DRISHTI – A PORTABLE PROTOTYPE FOR VISUALLY IMPAIRED par
DRISHTI – A PORTABLE PROTOTYPE FOR VISUALLY IMPAIREDDRISHTI – A PORTABLE PROTOTYPE FOR VISUALLY IMPAIRED
DRISHTI – A PORTABLE PROTOTYPE FOR VISUALLY IMPAIREDIRJET Journal
15 vues5 diapositives

Similaire à IRJET - For(E)Sight :A Perceptive Device to Assist Blind People(20)

An Assistive System for Visually Impaired People par IRJET Journal
An Assistive System for Visually Impaired PeopleAn Assistive System for Visually Impaired People
An Assistive System for Visually Impaired People
IRJET Journal13 vues
IRJET- VI Spectacle – A Visual Aid for the Visually Impaired par IRJET Journal
IRJET- VI Spectacle – A Visual Aid for the Visually ImpairedIRJET- VI Spectacle – A Visual Aid for the Visually Impaired
IRJET- VI Spectacle – A Visual Aid for the Visually Impaired
IRJET Journal13 vues
IRJET- An Aid for Visually Impaired Pedestrain par IRJET Journal
IRJET- An Aid for Visually Impaired PedestrainIRJET- An Aid for Visually Impaired Pedestrain
IRJET- An Aid for Visually Impaired Pedestrain
IRJET Journal37 vues
Smart Stick for Blind People with Live Video Feed par IRJET Journal
Smart Stick for Blind People with Live Video FeedSmart Stick for Blind People with Live Video Feed
Smart Stick for Blind People with Live Video Feed
IRJET Journal234 vues
DRISHTI – A PORTABLE PROTOTYPE FOR VISUALLY IMPAIRED par IRJET Journal
DRISHTI – A PORTABLE PROTOTYPE FOR VISUALLY IMPAIREDDRISHTI – A PORTABLE PROTOTYPE FOR VISUALLY IMPAIRED
DRISHTI – A PORTABLE PROTOTYPE FOR VISUALLY IMPAIRED
IRJET Journal15 vues
IRJET-Voice Assisted Blind Stick using Ultrasonic Sensor par IRJET Journal
IRJET-Voice Assisted Blind Stick using Ultrasonic SensorIRJET-Voice Assisted Blind Stick using Ultrasonic Sensor
IRJET-Voice Assisted Blind Stick using Ultrasonic Sensor
IRJET Journal312 vues
RASPBERRY PI BASED SMART WALKING STICK FOR VISUALLY IMPAIRED PERSON par IRJET Journal
RASPBERRY PI BASED SMART WALKING STICK FOR VISUALLY IMPAIRED PERSONRASPBERRY PI BASED SMART WALKING STICK FOR VISUALLY IMPAIRED PERSON
RASPBERRY PI BASED SMART WALKING STICK FOR VISUALLY IMPAIRED PERSON
IRJET Journal19 vues
IRJET- IoT based Portable Hand Gesture Recognition System par IRJET Journal
IRJET-  	  IoT based Portable Hand Gesture Recognition SystemIRJET-  	  IoT based Portable Hand Gesture Recognition System
IRJET- IoT based Portable Hand Gesture Recognition System
IRJET Journal80 vues
ARTIFICIAL INTELLIGENCE BASED SMART NAVIGATION SYSTEM FOR BLIND PEOPLE par IRJET Journal
ARTIFICIAL INTELLIGENCE BASED SMART NAVIGATION SYSTEM FOR BLIND PEOPLEARTIFICIAL INTELLIGENCE BASED SMART NAVIGATION SYSTEM FOR BLIND PEOPLE
ARTIFICIAL INTELLIGENCE BASED SMART NAVIGATION SYSTEM FOR BLIND PEOPLE
IRJET Journal11 vues
SMART BLIND STICK USING VOICE MODULE par IRJET Journal
SMART BLIND STICK USING VOICE MODULESMART BLIND STICK USING VOICE MODULE
SMART BLIND STICK USING VOICE MODULE
IRJET Journal27 vues
ALTERNATE EYES FOR BLIND advanced wearable for visually impaired people par IRJET Journal
ALTERNATE EYES FOR BLIND advanced wearable for visually impaired peopleALTERNATE EYES FOR BLIND advanced wearable for visually impaired people
ALTERNATE EYES FOR BLIND advanced wearable for visually impaired people
IRJET Journal2 vues
IRJET- Sign Language Converter for Deaf and Dumb People par IRJET Journal
IRJET- Sign Language Converter for Deaf and Dumb PeopleIRJET- Sign Language Converter for Deaf and Dumb People
IRJET- Sign Language Converter for Deaf and Dumb People
IRJET Journal60 vues
Soundscape for Visually Impaired par IRJET Journal
Soundscape for Visually ImpairedSoundscape for Visually Impaired
Soundscape for Visually Impaired
IRJET Journal89 vues
IRJET- Email Framework for Blinds using Speech Recognition par IRJET Journal
IRJET- Email Framework for Blinds using Speech RecognitionIRJET- Email Framework for Blinds using Speech Recognition
IRJET- Email Framework for Blinds using Speech Recognition
IRJET Journal18 vues
IRJET- Object Detection and Recognition for Blind Assistance par IRJET Journal
IRJET- Object Detection and Recognition for Blind AssistanceIRJET- Object Detection and Recognition for Blind Assistance
IRJET- Object Detection and Recognition for Blind Assistance
IRJET Journal626 vues
INDOOR AND OUTDOOR NAVIGATION ASSISTANCE SYSTEM FOR VISUALLY IMPAIRED PEOPLE ... par IRJET Journal
INDOOR AND OUTDOOR NAVIGATION ASSISTANCE SYSTEM FOR VISUALLY IMPAIRED PEOPLE ...INDOOR AND OUTDOOR NAVIGATION ASSISTANCE SYSTEM FOR VISUALLY IMPAIRED PEOPLE ...
INDOOR AND OUTDOOR NAVIGATION ASSISTANCE SYSTEM FOR VISUALLY IMPAIRED PEOPLE ...
IRJET Journal258 vues

Plus de IRJET Journal

SOIL STABILIZATION USING WASTE FIBER MATERIAL par
SOIL STABILIZATION USING WASTE FIBER MATERIALSOIL STABILIZATION USING WASTE FIBER MATERIAL
SOIL STABILIZATION USING WASTE FIBER MATERIALIRJET Journal
20 vues7 diapositives
Sol-gel auto-combustion produced gamma irradiated Ni1-xCdxFe2O4 nanoparticles... par
Sol-gel auto-combustion produced gamma irradiated Ni1-xCdxFe2O4 nanoparticles...Sol-gel auto-combustion produced gamma irradiated Ni1-xCdxFe2O4 nanoparticles...
Sol-gel auto-combustion produced gamma irradiated Ni1-xCdxFe2O4 nanoparticles...IRJET Journal
8 vues7 diapositives
Identification, Discrimination and Classification of Cotton Crop by Using Mul... par
Identification, Discrimination and Classification of Cotton Crop by Using Mul...Identification, Discrimination and Classification of Cotton Crop by Using Mul...
Identification, Discrimination and Classification of Cotton Crop by Using Mul...IRJET Journal
7 vues5 diapositives
“Analysis of GDP, Unemployment and Inflation rates using mathematical formula... par
“Analysis of GDP, Unemployment and Inflation rates using mathematical formula...“Analysis of GDP, Unemployment and Inflation rates using mathematical formula...
“Analysis of GDP, Unemployment and Inflation rates using mathematical formula...IRJET Journal
13 vues11 diapositives
MAXIMUM POWER POINT TRACKING BASED PHOTO VOLTAIC SYSTEM FOR SMART GRID INTEGR... par
MAXIMUM POWER POINT TRACKING BASED PHOTO VOLTAIC SYSTEM FOR SMART GRID INTEGR...MAXIMUM POWER POINT TRACKING BASED PHOTO VOLTAIC SYSTEM FOR SMART GRID INTEGR...
MAXIMUM POWER POINT TRACKING BASED PHOTO VOLTAIC SYSTEM FOR SMART GRID INTEGR...IRJET Journal
14 vues6 diapositives
Performance Analysis of Aerodynamic Design for Wind Turbine Blade par
Performance Analysis of Aerodynamic Design for Wind Turbine BladePerformance Analysis of Aerodynamic Design for Wind Turbine Blade
Performance Analysis of Aerodynamic Design for Wind Turbine BladeIRJET Journal
7 vues5 diapositives

Plus de IRJET Journal(20)

SOIL STABILIZATION USING WASTE FIBER MATERIAL par IRJET Journal
SOIL STABILIZATION USING WASTE FIBER MATERIALSOIL STABILIZATION USING WASTE FIBER MATERIAL
SOIL STABILIZATION USING WASTE FIBER MATERIAL
IRJET Journal20 vues
Sol-gel auto-combustion produced gamma irradiated Ni1-xCdxFe2O4 nanoparticles... par IRJET Journal
Sol-gel auto-combustion produced gamma irradiated Ni1-xCdxFe2O4 nanoparticles...Sol-gel auto-combustion produced gamma irradiated Ni1-xCdxFe2O4 nanoparticles...
Sol-gel auto-combustion produced gamma irradiated Ni1-xCdxFe2O4 nanoparticles...
IRJET Journal8 vues
Identification, Discrimination and Classification of Cotton Crop by Using Mul... par IRJET Journal
Identification, Discrimination and Classification of Cotton Crop by Using Mul...Identification, Discrimination and Classification of Cotton Crop by Using Mul...
Identification, Discrimination and Classification of Cotton Crop by Using Mul...
IRJET Journal7 vues
“Analysis of GDP, Unemployment and Inflation rates using mathematical formula... par IRJET Journal
“Analysis of GDP, Unemployment and Inflation rates using mathematical formula...“Analysis of GDP, Unemployment and Inflation rates using mathematical formula...
“Analysis of GDP, Unemployment and Inflation rates using mathematical formula...
IRJET Journal13 vues
MAXIMUM POWER POINT TRACKING BASED PHOTO VOLTAIC SYSTEM FOR SMART GRID INTEGR... par IRJET Journal
MAXIMUM POWER POINT TRACKING BASED PHOTO VOLTAIC SYSTEM FOR SMART GRID INTEGR...MAXIMUM POWER POINT TRACKING BASED PHOTO VOLTAIC SYSTEM FOR SMART GRID INTEGR...
MAXIMUM POWER POINT TRACKING BASED PHOTO VOLTAIC SYSTEM FOR SMART GRID INTEGR...
IRJET Journal14 vues
Performance Analysis of Aerodynamic Design for Wind Turbine Blade par IRJET Journal
Performance Analysis of Aerodynamic Design for Wind Turbine BladePerformance Analysis of Aerodynamic Design for Wind Turbine Blade
Performance Analysis of Aerodynamic Design for Wind Turbine Blade
IRJET Journal7 vues
Heart Failure Prediction using Different Machine Learning Techniques par IRJET Journal
Heart Failure Prediction using Different Machine Learning TechniquesHeart Failure Prediction using Different Machine Learning Techniques
Heart Failure Prediction using Different Machine Learning Techniques
IRJET Journal7 vues
Experimental Investigation of Solar Hot Case Based on Photovoltaic Panel par IRJET Journal
Experimental Investigation of Solar Hot Case Based on Photovoltaic PanelExperimental Investigation of Solar Hot Case Based on Photovoltaic Panel
Experimental Investigation of Solar Hot Case Based on Photovoltaic Panel
IRJET Journal3 vues
Metro Development and Pedestrian Concerns par IRJET Journal
Metro Development and Pedestrian ConcernsMetro Development and Pedestrian Concerns
Metro Development and Pedestrian Concerns
IRJET Journal2 vues
Mapping the Crashworthiness Domains: Investigations Based on Scientometric An... par IRJET Journal
Mapping the Crashworthiness Domains: Investigations Based on Scientometric An...Mapping the Crashworthiness Domains: Investigations Based on Scientometric An...
Mapping the Crashworthiness Domains: Investigations Based on Scientometric An...
IRJET Journal3 vues
Data Analytics and Artificial Intelligence in Healthcare Industry par IRJET Journal
Data Analytics and Artificial Intelligence in Healthcare IndustryData Analytics and Artificial Intelligence in Healthcare Industry
Data Analytics and Artificial Intelligence in Healthcare Industry
IRJET Journal3 vues
DESIGN AND SIMULATION OF SOLAR BASED FAST CHARGING STATION FOR ELECTRIC VEHIC... par IRJET Journal
DESIGN AND SIMULATION OF SOLAR BASED FAST CHARGING STATION FOR ELECTRIC VEHIC...DESIGN AND SIMULATION OF SOLAR BASED FAST CHARGING STATION FOR ELECTRIC VEHIC...
DESIGN AND SIMULATION OF SOLAR BASED FAST CHARGING STATION FOR ELECTRIC VEHIC...
IRJET Journal44 vues
Efficient Design for Multi-story Building Using Pre-Fabricated Steel Structur... par IRJET Journal
Efficient Design for Multi-story Building Using Pre-Fabricated Steel Structur...Efficient Design for Multi-story Building Using Pre-Fabricated Steel Structur...
Efficient Design for Multi-story Building Using Pre-Fabricated Steel Structur...
IRJET Journal10 vues
Development of Effective Tomato Package for Post-Harvest Preservation par IRJET Journal
Development of Effective Tomato Package for Post-Harvest PreservationDevelopment of Effective Tomato Package for Post-Harvest Preservation
Development of Effective Tomato Package for Post-Harvest Preservation
IRJET Journal4 vues
“DYNAMIC ANALYSIS OF GRAVITY RETAINING WALL WITH SOIL STRUCTURE INTERACTION” par IRJET Journal
“DYNAMIC ANALYSIS OF GRAVITY RETAINING WALL WITH SOIL STRUCTURE INTERACTION”“DYNAMIC ANALYSIS OF GRAVITY RETAINING WALL WITH SOIL STRUCTURE INTERACTION”
“DYNAMIC ANALYSIS OF GRAVITY RETAINING WALL WITH SOIL STRUCTURE INTERACTION”
IRJET Journal5 vues
Understanding the Nature of Consciousness with AI par IRJET Journal
Understanding the Nature of Consciousness with AIUnderstanding the Nature of Consciousness with AI
Understanding the Nature of Consciousness with AI
IRJET Journal12 vues
Augmented Reality App for Location based Exploration at JNTUK Kakinada par IRJET Journal
Augmented Reality App for Location based Exploration at JNTUK KakinadaAugmented Reality App for Location based Exploration at JNTUK Kakinada
Augmented Reality App for Location based Exploration at JNTUK Kakinada
IRJET Journal6 vues
Smart Traffic Congestion Control System: Leveraging Machine Learning for Urba... par IRJET Journal
Smart Traffic Congestion Control System: Leveraging Machine Learning for Urba...Smart Traffic Congestion Control System: Leveraging Machine Learning for Urba...
Smart Traffic Congestion Control System: Leveraging Machine Learning for Urba...
IRJET Journal18 vues
Enhancing Real Time Communication and Efficiency With Websocket par IRJET Journal
Enhancing Real Time Communication and Efficiency With WebsocketEnhancing Real Time Communication and Efficiency With Websocket
Enhancing Real Time Communication and Efficiency With Websocket
IRJET Journal4 vues
Textile Industrial Wastewater Treatability Studies by Soil Aquifer Treatment ... par IRJET Journal
Textile Industrial Wastewater Treatability Studies by Soil Aquifer Treatment ...Textile Industrial Wastewater Treatability Studies by Soil Aquifer Treatment ...
Textile Industrial Wastewater Treatability Studies by Soil Aquifer Treatment ...
IRJET Journal4 vues

Dernier

DevOps-ITverse-2023-IIT-DU.pptx par
DevOps-ITverse-2023-IIT-DU.pptxDevOps-ITverse-2023-IIT-DU.pptx
DevOps-ITverse-2023-IIT-DU.pptxAnowar Hossain
9 vues45 diapositives
Design of machine elements-UNIT 3.pptx par
Design of machine elements-UNIT 3.pptxDesign of machine elements-UNIT 3.pptx
Design of machine elements-UNIT 3.pptxgopinathcreddy
32 vues31 diapositives
START Newsletter 3 par
START Newsletter 3START Newsletter 3
START Newsletter 3Start Project
5 vues25 diapositives
Control Systems Feedback.pdf par
Control Systems Feedback.pdfControl Systems Feedback.pdf
Control Systems Feedback.pdfLGGaming5
5 vues39 diapositives
Pull down shoulder press final report docx (1).pdf par
Pull down shoulder press final report docx (1).pdfPull down shoulder press final report docx (1).pdf
Pull down shoulder press final report docx (1).pdfComsat Universal Islamabad Wah Campus
12 vues25 diapositives
Activated sludge process .pdf par
Activated sludge process .pdfActivated sludge process .pdf
Activated sludge process .pdf8832RafiyaAltaf
9 vues32 diapositives

Dernier(20)

Design of machine elements-UNIT 3.pptx par gopinathcreddy
Design of machine elements-UNIT 3.pptxDesign of machine elements-UNIT 3.pptx
Design of machine elements-UNIT 3.pptx
gopinathcreddy32 vues
Control Systems Feedback.pdf par LGGaming5
Control Systems Feedback.pdfControl Systems Feedback.pdf
Control Systems Feedback.pdf
LGGaming55 vues
Update 42 models(Diode/General ) in SPICE PARK(DEC2023) par Tsuyoshi Horigome
Update 42 models(Diode/General ) in SPICE PARK(DEC2023)Update 42 models(Diode/General ) in SPICE PARK(DEC2023)
Update 42 models(Diode/General ) in SPICE PARK(DEC2023)
Machine Element II Course outline.pdf par odatadese1
Machine Element II Course outline.pdfMachine Element II Course outline.pdf
Machine Element II Course outline.pdf
odatadese18 vues
Literature review and Case study on Commercial Complex in Nepal, Durbar mall,... par AakashShakya12
Literature review and Case study on Commercial Complex in Nepal, Durbar mall,...Literature review and Case study on Commercial Complex in Nepal, Durbar mall,...
Literature review and Case study on Commercial Complex in Nepal, Durbar mall,...
AakashShakya1266 vues
zincalume water storage tank design.pdf par 3D LABS
zincalume water storage tank design.pdfzincalume water storage tank design.pdf
zincalume water storage tank design.pdf
3D LABS5 vues
Investigation of Physicochemical Changes of Soft Clay around Deep Geopolymer ... par AltinKaradagli
Investigation of Physicochemical Changes of Soft Clay around Deep Geopolymer ...Investigation of Physicochemical Changes of Soft Clay around Deep Geopolymer ...
Investigation of Physicochemical Changes of Soft Clay around Deep Geopolymer ...

IRJET - For(E)Sight :A Perceptive Device to Assist Blind People

  • 1. International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056 Volume: 07 Issue: 03 | Mar 2020 www.irjet.net p-ISSN: 2395-0072 © 2020, IRJET | Impact Factor value: 7.34 | ISO 9001:2008 Certified Journal | Page 2842 For(E)Sight : A Perceptive Device to Assist Blind People Jajini M1, Udith Swetha S2, Gayathri k3, Harishma A4 1 Assistant Professor, Velammal College of Engineering and Technology Viraganoor, Madurai, Tamil Nadu. 2,3,4Student, Electrical and Electronics Engineering, Velammal College of Engineering and Technology Viraganoor, Madurai, Tamil Nadu. ---------------------------------------------------------------***--------------------------------------------------------------- Abstract- Blind mobility is one of the major challenges faced by the visually impaired people in their daily lives. Many electronic gadgets have been developed to help them but they are reluctant to use those gadgets as they are expensive, large in size and very complex to operate. This paper presents a novel device called FOR(E)SIGHT to facilitate normal life to the visually impaired people. FOR(E)SIGHT is specifically designed to assist visually challenged people in identifying the objects, text ,action with the help of ultrasonic sensor and Camera modules as it converts them to voice with the help of RASPBERRY PI 3 and OPENCV. FOR(E)SIGHT helps the blind person to communicate with the mute people effectively, as it identifies the gesture language of mute using RASPBERRY PI 3 and its camera module. It also locates the position of visually challenged people with the help of GPS of mobiles interfaced with RASPBERRY PI 3. While performing the above mentioned processes, the data is collected and stored in cloud. The components used in this device are of low cost, small size and easy incorporation, making it possible for the device to be widely used as consumer and user friendly device. Keywords- Object detection, OPENCV, Deep learning, Text detection, Gesture. 1. INTRODUCTION Visual impairment is a prevalent disorder among people of different age groups. The spread of visual impairment throughout the world is an issue of greater significance. According to the World Health Organization (WHO), about 8 percent of the population in the eastern Mediterranean region is suffering from vision problems, including blindness, poor vision and some sort of visual impairment. They face many challenges and difficulties. It includes the complexity of moving in complete autonomy and the ability to identify objects. According to Foulke "Mobility is the ability to travel safely, comfortably, gracefully and independently through the world."Mobility is an important key to personal independence. As visually challenged people face a problem for safe and independent mobility, a great deal of work has been devoted to the development of technical aids in this field to enhance their safe travel. Outdoor Travelling can be very demanding for blind people. Nonetheless, if a blind person is able to move outdoors, they gain confidence and will be more likely to travel outdoors often leading to improved quality of their life. Travelling can be made easier if the visually challenged people are able to do the following tasks. They are: - Identify objects that lie as an obstacle in the path. - Identify any form of texts that convey traffic rules. Most blind people and people with vision difficulties were not in a position to complete their studies. Special schools for people with special needs are not available everywhere and most of them are private and expensive thus it also becomes a major obstacle for them to live a scholastic life. Communication between people with different defects has been impossible decades ago. Particularly the communication between visually challenged people and deaf-mute people is a challenging task. If complete progress is made to enable such communication, the people with impairment forget their inability and can live an ambitious life in this competitive world. The rest of the paper is organized as follows. Section II will provide an overview of the various related works in the area of using deep learning based computer vision techniques for implementing smart devices for the visually impaired. The detail of the methodology used is discussed in section III. Proposed system design and implementation will be presented in Section IV. Experimental results are presented in Section V. Future scope is discussed in Section VI. Conclusion is provided at section VII. 2. RELATED WORK The application of technology to aid the blind has been a post-World War II development. Navigation and positioning techniques in mobile robotics have been explored for blind assistance to improve mobility. Many
  • 2. International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056 Volume: 07 Issue: 03 | Mar 2020 www.irjet.net p-ISSN: 2395-0072 © 2020, IRJET | Impact Factor value: 7.34 | ISO 9001:2008 Certified Journal | Page 2843 systems rely on heavy, complex, and expensive handheld devices or ambient infrastructure, which limit their portability and feasibility for daily uses. Various electronic devices emerged to replace traditional tools with augmented functions of distance measurement and obstacle avoidance, however, the functionality or flexibility of the devices are still very limited. They act as aids for vision substitution. Some of such devices are: 2.1 Long cane: The most popular mobility aid is, of course, the long cane. It is designed primarily as a mobility tool to detect objects in the path of user. Its height depends on the height of the user. 2.2 eSight 3: It was developed by CNET. It has High resolution camera for capturing image and video to help people with poor vision. The demerit is that it does not improve vision as it just an aid. Further improvements are made to produce a Water proof version of eSight 3. 2.3 Aira: It was developed by SumanKanuganti. Aira uses smart glasses to scan the environment Aira agents help users to interpret their surroundings by smart glasses. But it takes large waiting time for connecting it to the Aira agents in order to be able to sense to include language translation features. 2.4 Google Glasses: It was developed by Google Inc. Google Glasses show information without using hands gestures. Users can communicate with the Internet via normal voice commands. It can capture images and videos, get directions, send messages, audio calling and real- time translation using word lens app. But It is expensive (1,349.99$).Efforts are made to reduce costs to make it more affordable for the consumers. 2.5 Oton Glass: It was developed by Keisuke Shimakage, Japan. It has Support for dyslexic People as it Converts images to words and then to audio. It looks like a normal glasses and it also supports English and Japanese languages. But it supports only people with reading difficulty and no support for blind people. It can be improved to support blind people also by including proximity sensors. 3. METHODOLOGY FOR(E)SIGHT uses OPENCV(OPENSOURCE COMPUTER VISION). Computer Vision Technology has played a vital role in helping visually challenged people to carry out their routine activities without much dependency on other people. Smart devices are the solution which enables blind or visually challenged people to live normal life. FOR(E)SIGHT is an attempt in this direction to build a novel smart device which has the ability to extract and recognize object, text, action captured from an image and convert it to speech. Detection is achieved using the OpenCV software and open source Optical Character Recognition (OCR) tools Tesseract based on Deep Learning techniques. The image is converted into text and the recognized text is further processed to convert to an audible signal for the user. The novelty of the implemented solution lies in providing the basic facilities which is cost effective, small-sized and accurate. This solution can be potentially used for both educational and commercial applications. It consists of a Raspberry Pi 3 B+ microcontroller which processes the image captured from a camera super-imposed on the glasses of the blind person. Fig.1 The block diagram indicates the basic process taking place in FOR(E)SIGHT.(Fig.1) The components used in FOR(E)SIGHT can be classified into hardware and Software. They are listed and described below, 3.1 Hardware Requirements: It includes 1) Raspberry Pi 3B+ 2) camera module 3) ultrasonic sensor 4) Headphones. 1) Raspberry Pi 3B+: Raspberry Pi 3B+ is a credit card-sized computer that contains several important functions on a single chip. It is CAMERA SUPPLY RASPBERRY PI USER SPEECH ULTRASONIC SENSOR
  • 3. International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056 Volume: 07 Issue: 03 | Mar 2020 www.irjet.net p-ISSN: 2395-0072 © 2020, IRJET | Impact Factor value: 7.34 | ISO 9001:2008 Certified Journal | Page 2844 a system on a chip (SoC).It needs to be connected with a keyboard, mouse, display, power supply, SD card and installed operating system. It is an embedded system that can run as a no-frills PC, a pocket coding computer, a hub for homemade hardware and more. It includes 40 GPIO (General Purpose Input/output) pins to control various sensors and actuators. Raspberry Pi 3 used for many purposes such as education, coding, and building hardware projects. It uses 1Python through which it can accomplish many important tasks. The Raspberry Pi 3 uses Broadcom BCM2837 SoC Multimedia processor. The Raspberry Pi’s CPU is the 4x ARM Cortex-A53, 1.2GHz processor. It has internal memory of 1GB LPDDR RAM. 2) Camera Module: The Raspberry Pi camera is 5MP and has a resolution of 2592x1944 pixels. It is used to capture stationary objects. On the other hand IPcam can be used to capture actions. IPcam of mobilephones can also be used for this purpose. The IPcam has a view angle of 60° with a fixed focus. It can capture images with maximum resolutions of 1289x720 pixels. It is compatible with most operating systems platforms such as Linux, Windows, and MacOS. 3) Ultrasonic Sensor: The ultrasonic sensor is used to measure the distance using ultrasonic waves. It emits the ultrasonic waves and receives back the echo. So, by measuring the time, the ultrasonic sensor will measure the distance to the object. It can sense distances in the range from 2-400 cm. In FOR(E)SIGHT, the ultrasonic sensor measures the distance between the camera and an object or text or action to decode it into an image. It was observed based on experimentation that the distance to the object should be from 10 cm to 150 cm to capture a clear image. 4) Headphones: Wired headphones was used since Raspberry Pi 3 B+ come with Audio jack of 3.5mm.The headphones will be used to help the user listen to the text that is been converted to audio after it has been captured by the camera or to listen to the translation of the text. The headphone used is small, lightweight so the user will not feel uncomfortable wearing them. 1 The code is available at :https://github.com/Udithswetha/foresight/tree/master/.gitignore 3.2 Software Requirements: 1) OpenCV Libraries: OpenCV is a library of programming functions for real- time computer vision. The library is cross-platform that is, it is available on different platforms including Windows, Linux, OS X, Android, iOSetc and it supports a wide variety of programming languages like C++, Python ,Java etc. OpenCV is free for use under the open-source BSD license It performs real-time applications like image processing, Human–computer interaction (HCI), Facial recognition system, Gesture recognition, Mobile robotics, Motion tracking, Motion understanding, Object identification, Segmentation and recognition, Augmented reality and many more. 2) OCR: OCR (Optical Character Recognition) is used to convert typed, printed or handwritten text into machine-encoded text. There are some OCR software engines which try to recognize any text in images such as Tesseract and EAST. FOR(E)SIGHT uses Tesseract version 4, because it is the best open source OCR engines. It is used directly, or using an API to extract data from images. It supports a wide variety of languages. It is open source software that has a framework for building efficient speech synthesis systems. 3) TTS API: Last function is text to voice conversion. In order to implement this task, TTS (Text-to-Speech) is installed. It is a python library that interfaced with certain API .TTS has many features such as conversion of text to voice, provide error pronunciation using customizable text pre- processors. 4. PROPOSED SYSTEM FOR(E)SIGHT is a device that is designed to use the computer vision technology to capture an image and extract English text and convert it into audio signal with the help of speech synthesis. The overall system can be viewed in three stages. They are, 4.1 Primary Unit: 1) Sensing: The first unit is sensing unit. It comprises of ultrasonic sensor and camera. Whenever the object or text or action is noticed within a limited region of space, the camera captures the image.
  • 4. International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056 Volume: 07 Issue: 03 | Mar 2020 www.irjet.net p-ISSN: 2395-0072 © 2020, IRJET | Impact Factor value: 7.34 | ISO 9001:2008 Certified Journal | Page 2845 2) Image acquisition: In this step, the camera captures the images of the object or text or action. The quality of the image captured depends on the camera used. 4.2 Processing unit: It is a technique through which data of Image is obtained in the form of numbers. It is done using OpenCV and OCR tools. The libraries which are commonly used for this are numpy, matplotlib and pillow. This step consists of 1) Gray scale conversion: The image is converted to gray scale because OpenCV functions tend to work on the gray scale image. NumPy includes array objects for representing vectors, matrices, images and linear algebra functions. The array object allows the important operations such as matrix multiplication, solving equation systems, vector multiplication, transposition and normalization, that are needed to align and warp the images, model the variations and classify the images. Arrays in NumPy are multi- dimensional and can represent vectors, matrices, and images. After reading images to NumPy arrays, mathematical operations are performed. 2) Supplementary Process: Noise is removed using bilateral filter. Canny edge detection is performed on the gray scale image for better detection of the contours. The warping and cropping of the image are performed to remove the unwanted background. In the end, Thresholding is done to make the image look like a scanned document to allow the OCR to convert the image to text efficiently. NO YES FOR(E)SIGHT implements a sequential flow of algorithms. The flow of the process is depicted. 4.3 Final unit: The last step is to convert the obtained Text to speech (TTS). TTS program is created in python by installing the TTS engine (Espeak).The quality of the spoken voice depends on the speech engine used. 5. EXPERIMENTAL RESULTS The figure (Fig.2) shows the final prototype of FOR(E)SIGHT. Fig.2 It is noted that the hardware circuit (Raspberry Pi) is placed on the glass head piece and the glasses can be worn on the face which consists of the cameraand the ultrasonic sensor. The sensor senses the object or text or gesture and the pi camera captures it as an image. Though it is not highly comfortable, it is comparatively cost effective and beneficial. 5.1 Object Detection: The main goal was to detect the presence of any obstacle in front of the user by using ultrasonic sensing and image processing techniques. FOR(E)SIGHT has successfully completed the task.(Fig.3) START SENSE OBSTACLE DIST< 10 CAPTURE AS IMAGE PROCESS IMAGE-TEXT SSTEXT TEXT-> SPEECH STOP
  • 5. International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056 Volume: 07 Issue: 03 | Mar 2020 www.irjet.net p-ISSN: 2395-0072 © 2020, IRJET | Impact Factor value: 7.34 | ISO 9001:2008 Certified Journal | Page 2846 Fig.4 5.2 Text Detection: In this result, the text detectorTessaract that was used in this solutionwas checked whether it was working well. The test showed mostly good results on big clear texts. It is found that the recognition depends on the clarity of the text in the picture, its font theme, size, and spacing between the words. (Fig.5) Fig.5 5.3 Gesture Detection: In this test, the aim is to check the gesture is identified with the help OCR tools.FOR(E)SIGHT came out with positive results by detecting the gestures. But it is found that FOR(E)SIGHT identifies only the still images of gestures.(Fig.6) Fig.6 6. FUTURE SCOPE However, there are some limitations in the proposed solution which can be addressed in the future implementations. Hence, the following features are recommended to be incorporated in the future versions. - Time taken for each process can be reduced for a better usage. - Translation module can be developed, in order to cater to a wide variety of users, as it would be highly beneficial if it were a multi-lingual featured device. - To improve the direction and warning messages to the user, GPS-based navigation and alert system can be included. - Recognizing more amounts of gestures would be helpful for performing more tasks. - High space visibility can be provided by including a wide angle camera 2700 degrees as compared to 600 currently used. - To provide for more real-time experience, video processing instead of still images can be used instead of still images. 7. CONCLUSION A novel device FOR(E)SIGHT for helping the visually challenged has been proposed in this paper. It is able to capture an image, extract and recognize the embedded text and convert it into speech. Despite certain limitations, the proposed design and implementation served as a practical demonstration of how open source hardware and software tools can be integrated to provide a solution which is cost effective, portable and user-friendly. Thus the device discussed in this paper has helped the visually challenged by providing maximum benefits at a minimal costs.
  • 6. International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056 Volume: 07 Issue: 03 | Mar 2020 www.irjet.net p-ISSN: 2395-0072 © 2020, IRJET | Impact Factor value: 7.34 | ISO 9001:2008 Certified Journal | Page 2847 REFERENCES [1] Saransh Sharma, Samyak Jain, Khushboo,” A Static Hand Gesture and Face Recognition System For Blind People”, 6th International Conference on Signal Processing and Integrated Networks (SPIN), pp.534- 539, 2019. [2] Jinqiang Bai, Shiguo Lian, Zhaoxiang Liu, Kai Wang, Dijun Liu,” Virtual-blind Road following based wearable navigation device for blind people ”, IEEE Transactions on consumer electronics, vol. 31, pp. 132-140, February 8, 2018. [3] Arun Kumar,Anurag Pandey,” Hand Gestures and Speech Mapping of a Digital Camera For Differently Abled”, IEEE 6th International Conference on Advanced Computing, pp.392-397, 2016. [4] H. Zhang and C. Ye, “An indoor navigation aid for the visually impaired,”in 2016 IEEE Int. Conf. Robot. Biomimetics (ROBIO), Qingdao, pp.467-472, 2016. [5] Hanene Elleuch, Ali Wali, Anis Samet, Adel M. Alimi, “A static hand gesture recognition system for real time mobile device monitoring”, 2015 15th International Conference on Intelligent Systems Design and Applications (ISDA), pp. 195 – 200, 2015. [6] Hongsheng he, Van li, Vong Guan, and Jindong Tan, ” Wearable Ego-motion Tracking for blind navigation in indoor environments”, IEEE transactions on Automation science and engineering, vol. 12, no.4, pp.1181- 1190, October 2015. [7] X. C. Yin, X. Yin, K. Huang, H. W. Hao, "Robust text detection in natural scene images " IEEE transactions on Pattern Analysis and Machine Intelligence, 36(5), pp. 970-980, 2014. [8] Samleo L. Joseph, Jizhong Xiao, Xiaochen Zhang, Bhupesh Chawda, Kanika Narang, Nitendra Rajput, Sameep Mehta, and L. Venkata Subramaniam,” Being Aware of the World: Toward Using Social Media to Support the blind With Navigation”, IEEE Transactions on Human-machine systems, vol. 33, no. 4, pp.1-7, November 30, 2014. [9] H. He and J. Tan, “Adaptive ego-motion tracking using visual-inertial sensors for wearable blind navigation,” in ACM Proceedings of Wireless Health Conerence, pp.1-7, 2014. [10] L. Neumann, J. Matas, "Scene text localization and recognition with oriented stroke detection", Proceedings of IEEE International Conference on Computer Vision, pp. 97-104, 2013. [11] Bissacco, M. Cummins, Y. Netzer, H. Neven, "Reading text in uncontrolled conditions", Proceedings of the IEEE International Conference on Computer Vision, pp. 785-792, 2013. [12] Y. T. Chucai Yi, “Text extraction from scene images by character appearance and structure modeling,” Computer Visual Image Understanding, vol. 117, no. 2, pp. 182–194, 2013. [13] World Health Organization—Visual Impairment and Blindness. [Online]. Available: http://www.who.int/mediacentre/factsheets/fs282/ en/ (2013). [14] L. Neumann, J. Matas, "Real-time scene text localization and recognition ", Proceedings of IEEE International Conference onComputer Vision and Pattern Recognition (CVPR), pp. 3538-3545, 2012. [15] Dimitrios Dakopoulos and Nikolaos G. Bourbakis,” Wearable obstacle avoidance electronic travel aids for blind: A survey,” IEEE Transactions on systems, man, and cybernetics-part c: applications and reviews”, vol. 40, no. 1, pp. 25-35, January 2010. [16] David J. Calder,” Assistive technology interfaces for the Blind”, 3rd IEEE International conference on Digital Ecosystems and Technologies,pp.318-323,2009. [17] B. Andò ,” A Smart Multisensor Approach to Assist Blind People in Specific Urban Navigation Tasks”, IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 16, no. 6, pp 592-594, December 2008. [18] Hui Tang and David J. Beebe,” An Oral tactile interface for blind navigation”, IEEE Transactions on neural systems and rehabilitation engineering, vol. 14, no.1, pp.116-123, March 2006. [19] Mohammad Shorif Uddin and Tadayoshi Shioyama,” Detection of Pedestrian Crossing Using Bipolarity Feature-An Image-Based Technique”, IEEE Transactions on intelligent transportation systems, vol. 6, no. 4, pp.439-445, December 2005 [20] Y. J. Szeto, “A navigational aid for persons with severe visual impairments: A project in progress,” presented at the Proceedings of the 25th Annual International Conference on IEEE Engineering Medical Biology Society., Cancun, Mexico, Sep. 17–21, 2003. [21] Andò, “Electronic sensory systems for the visually impaired,” IEEE Magazine on Instrumentation and Measurement, vol. 6, no. 2, pp. 62–67, Jun. 2003.