International Research Journal of Engineering and Technology (IRJET) | e-ISSN: 2395-0056 | p-ISSN: 2395-0072
Volume: 03 Issue: 01 | Jan-2016 | www.irjet.net
© 2016, IRJET | Impact Factor value: 4.45 | ISO 9001:2008 Certified Journal
Visual Product Identification For Blind Peoples
Prajakata Bhunje1, Anuja Deshmukh2, Dhanshree Phalake3, Amruta Bandal4
1 Prajakta Nitin Bhunje, Computer Engg., DGOI FOE, Maharashtra, India
2 Anuja Ramesh Deshmukh, Computer Engg., DGOI FOE, Maharashtra, India
3 Dhanshree Viajysinh Phalake, Computer Engg., DGOI FOE, Maharashtra, India
4 Amruta Avinash Bandal, Computer Engg., DGOI FOE, Maharashtra, India
---------------------------------------------------------------------***---------------------------------------------------------------------
Abstract - We propose a camera-based assistive text reading framework to help blind persons read text labels and product packaging from hand-held objects in their daily lives. To isolate the object from cluttered backgrounds or other surrounding objects in the camera view, we first propose an efficient and effective motion-based method to define a region of interest (ROI) in the video by asking the user to shake the object. This method extracts the moving object region with a mixture-of-Gaussians-based background subtraction method. In the extracted ROI, text localization and recognition are conducted to acquire text information. To automatically localize the text regions from the object ROI, we propose a novel text localization algorithm that learns gradient features of stroke orientations and distributions of edge pixels in an Adaboost model. Text characters in the localized text regions are then binarized and recognized by off-the-shelf optical character recognition (OCR) software. The recognized text codes are output to blind users as speech. Performance of the proposed text localization algorithm is quantitatively evaluated on the ICDAR-2003 and ICDAR-2011 Robust Reading Datasets. Experimental results demonstrate that our algorithm achieves state-of-the-art performance. The proof-of-concept prototype is also evaluated on a dataset collected using 10 blind persons to evaluate the effectiveness of the system's hardware. We explore user interface issues and assess the robustness of the algorithm in extracting and reading text from different objects with complex backgrounds.
Keywords: blindness; assistive devices; text reading; hand-held objects; text region localization; stroke orientation; distribution of edge pixels; OCR
I. INTRODUCTION
Reading is obviously essential in today’s society. Printed
text is everywhere in the form of reports, receipts, bank
statements, restaurant menus, classroom handouts, product
packages, instructions on medicine bottles, etc. While optical aids, video magnifiers, and screen readers can help blind users and those with low vision access documents, there are
few devices that can provide good access to common hand-
held objects such as product packages, and objects printed
with text such as prescription medication bottles. The ability
of people who are blind or have significant visual impairments
to read printed labels and product packages will enhance
independent living, and foster economic and social self-
sufficiency. Today, there are already a few systems that have some promise for portable use, but they cannot handle product labeling. For example, portable bar code readers designed to
help blind people identify different products in an extensive
product database can enable users who are blind to access
information about these products through speech and Braille.
But a big limitation is that it is very hard for blind users to
find the position of the bar code and to correctly point the bar
code reader at the bar code. Some reading-assistive systems, such as pen scanners, might be employed in these and similar
situations. Such systems integrate optical character
recognition (OCR) software to offer the function of scanning
and recognition of text and some have integrated voice output.
However, these systems are generally designed for and
perform best with document images with simple backgrounds,
standard fonts, a small range of font sizes, and well-organized
characters rather
than commercial product boxes with multiple decorative
patterns. Most state-of-the-art OCR software cannot directly
handle scene images with complex backgrounds. A number of
portable reading assistants have been designed specifically for
the visually impaired. KReader Mobile runs on a cell phone
and allows the user to read mail, receipts, fliers and many
other documents [13]. However, the document to be read must
be nearly flat, placed on a clear, dark surface (i.e., a non-
cluttered background), and contain mostly text. Furthermore,
KReader Mobile accurately reads black print on a white background, but has problems recognizing colored text or text on a colored background. It cannot read text with complex backgrounds or text printed on cylinders with warped or incomplete images (such as soup cans or medicine bottles). Moreover, these systems require a blind user to manually
localize areas of interest and text regions on the objects in
most cases.
Fig. 1. Example of printed text.
Although a number of reading assistants have been designed specifically for the visually impaired, to our knowledge no existing reading assistant can read text from the kinds of challenging patterns and backgrounds found on many everyday commercial products. As shown in Fig. 1, such text information can appear in multiple scales, fonts, colors, and orientations. To assist blind persons to read text from these kinds of hand-held objects, we have conceived of a camera-based assistive text reading framework to track the object of interest within the camera view and extract printed text information from the object. Our proposed algorithm can effectively handle complex backgrounds and multiple patterns, and extract text information from both hand-held objects and nearby signage, as shown in Fig. 2. In assistive reading systems for blind persons, it is very challenging for users to position the object of interest within the center of the camera's view. As of now, there are still no acceptable solutions. We approach the problem in stages. To make sure the hand-held object appears in the camera view, we use a camera with a sufficiently wide angle to accommodate users with only approximate aim. This may often result in other text objects appearing in the camera's view (for example, while shopping at a supermarket). To extract the hand-held object from the camera image, we develop a motion-based method to obtain a region of interest (ROI) of the object. Then we perform text recognition only in this ROI.
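The paper itself is not accompanied by source code; purely as an illustrative sketch of this motion-based ROI idea, the snippet below uses OpenCV's built-in mixture-of-Gaussians background subtractor to find the largest moving region while the user shakes the object. The function name, history length, and area threshold are assumptions made for illustration, not the authors' implementation.

```python
# Sketch (not the authors' code): isolate the shaken hand-held object as an ROI
# using OpenCV's mixture-of-Gaussians background subtraction.
import cv2

def detect_shaken_roi(video_path, min_area=2000):
    """Return the bounding box (x, y, w, h) of the largest moving region, or None."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)
    accumulated = None

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)              # foreground = moving pixels
        mask = cv2.medianBlur(mask, 5)              # suppress sensor noise
        accumulated = mask if accumulated is None else cv2.bitwise_or(accumulated, mask)

    cap.release()
    if accumulated is None:
        return None

    # The largest connected moving blob is taken as the object of interest.
    contours, _ = cv2.findContours(accumulated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) >= min_area]
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))
```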
II. MOTIVATION
Today, there are already a few systems that have some promise for portable use, but they cannot handle product labeling. For example, portable bar code readers designed to help blind people identify different products in an extensive product database can enable users who are blind to access information about these products through speech and Braille. But a big limitation is that it is very hard for blind users to find the position of the bar code and to correctly point the bar code reader at the bar code. Some reading-assistive systems, such as pen scanners, might be employed in these and similar situations. Such systems integrate OCR software to offer the function of scanning and recognition of text, and some have integrated voice output.
Fig. 2. Example of text localization.
Two of the biggest challenges to independence for blind
individuals are difficulties in accessing printed material [1]
and the stressors associated with safe and efficient navigation.
Access to printed documents has been greatly improved by the
development and proliferation of adaptive technologies such
as screen-reading programs, optical character recognition
software, text-to-speech engines, and electronic Braille
displays. By contrast, difficulty accessing room numbers,
street signs, store names, bus numbers, maps, and other
printed information related to navigation remains a major
challenge for blind travelers. Imagine trying to find room 257 in a large university building without being able to read the room numbers or access the "you are here" map at the building's entrance. Braille signage certainly helps in identifying a room, but it is difficult for blind people to find Braille signs. In addition, only a modest fraction of the more than 3 million visually impaired people in the United States read Braille; estimates put the number of Braille readers between 15,000 and 85,000.
III. OBJECTIVES
This paper presents a prototype system of assistive text
reading. The system framework consists of three functional
components: scene capture, data processing, and audio output.
The scene capture component collects scenes containing
objects of interest in the form of images or video. In our
prototype, it corresponds to a camera attached to a pair of
sunglasses. The data processing component is used for
deploying our proposed algorithms, including
1) object-of-interest detection to selectively extract the image of the object held by the blind user from the cluttered background or other neutral objects in the camera view; and
2) text localization to obtain image regions containing text,
and text recognition to transform image-based text
information into readable codes. We use a mini laptop as the
processing device in our current prototype system. The audio output component informs the blind user of the recognized text codes.
To extract text information from complex backgrounds with multiple and variable text patterns, we propose a text localization algorithm that combines rule-based layout analysis with learning-based text classifier training. The algorithm defines novel feature maps based on stroke orientations and edge distributions, which in turn generate representative and discriminative text features to distinguish text characters from background outliers.
IV. APPLICATIONS
1. Image capturing and pre-processing
2. Automatic text extraction
3. Text recognition and audio output
4. Prototype System Evaluation
V. SYSTEM DESIGN
This paper presents a prototype system of assistive text reading. The system framework consists of three functional components: scene capture, data processing, and audio output.
Fig. 3. Block diagram of text reading.
Fig. 4. Flowchart of the proposed framework to read text from hand-held objects for blind users.
The scene capture component collects scenes containing objects of interest in the form of images or video. In our prototype, it corresponds to a camera attached to a pair of sunglasses. The data processing component is used for deploying our proposed algorithms, including
1. object-of-interest detection to selectively extract the image of the object held by the blind user from the cluttered background or other neutral objects in the camera view; and
2. text localization to obtain image regions containing text, and text recognition to transform image-based text information into readable codes.
We use a mini laptop as the processing device in our current prototype system. The audio output component informs the blind user of the recognized text codes.
Fig. 5. Two examples of text localization and recognition from camera-captured images.
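To make the three-component design concrete, the following sketch wires the stages together as plain Python functions. It is a hypothetical skeleton, not the authors' published code: the stage implementations are passed in as callables, pytesseract is assumed as a stand-in for the off-the-shelf OCR engine, and pyttsx3 stands in for the Microsoft Speech SDK mentioned later.

```python
# Hypothetical end-to-end skeleton: scene capture -> object-of-interest detection
# -> text localization -> OCR -> speech output. Stage implementations are injected.
import cv2
import pytesseract   # assumed off-the-shelf OCR engine
import pyttsx3       # assumed text-to-speech stand-in for the Microsoft Speech SDK

def read_handheld_object(frame, detect_roi, localize_text):
    """frame: BGR image from the scene-capture camera.
    detect_roi(frame) -> (x, y, w, h) of the hand-held object, or None.
    localize_text(roi) -> list of (x, y, w, h) text-region boxes inside the ROI."""
    box = detect_roi(frame)                       # object-of-interest detection
    if box is None:
        return []
    x, y, w, h = box
    roi = frame[y:y + h, x:x + w]

    lines = []
    for tx, ty, tw, th in localize_text(roi):     # text localization
        patch = cv2.cvtColor(roi[ty:ty + th, tx:tx + tw], cv2.COLOR_BGR2GRAY)
        text = pytesseract.image_to_string(patch).strip()   # text recognition
        if text:
            lines.append(text)

    engine = pyttsx3.init()                       # audio output component
    engine.say(". ".join(lines))
    engine.runAndWait()
    return lines
```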
VI. MODULES
A. IMAGE CAPTURING AND PREPROCESSING
The video is captured using a webcam, and frames from the video are separated and pre-processed. First, frames are continuously acquired from the camera and prepared for processing. Once the object of interest is extracted from the camera image, it is converted to a grayscale image. A Haar cascade classifier is used to detect characters on the object. Working with a cascade classifier involves two major stages: training and detection. Training requires a set of samples of two types: positive and negative. To extract the hand-held object of interest from other objects in the camera view, users are asked to shake the hand-held object containing the text they wish to identify, and a motion-based method is then employed to localize the object against the cluttered background.
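As a hedged illustration of the detection stage of such a cascade (the training stage with positive and negative samples is performed offline with OpenCV's cascade-training tools), the snippet below loads a trained cascade and scans a grayscale ROI for candidate character regions. The cascade file name and detection parameters are placeholders, not files or values supplied by the paper.

```python
# Sketch of cascade-based detection on a grayscale ROI. "char_cascade.xml" is a
# placeholder for a cascade trained offline from positive/negative samples.
import cv2

def detect_with_cascade(gray_roi, cascade_path="char_cascade.xml"):
    cascade = cv2.CascadeClassifier(cascade_path)
    if cascade.empty():
        raise IOError("could not load cascade: " + cascade_path)
    # Returns a list of (x, y, w, h) candidate boxes found in the grayscale ROI.
    return cascade.detectMultiScale(gray_roi, scaleFactor=1.1, minNeighbors=3,
                                    minSize=(12, 12))

# Usage: gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY); boxes = detect_with_cascade(gray)
```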
B. AUTOMATIC TEXT EXTRACTION
To handle complex backgrounds, two novel feature maps extract text features based on stroke orientations and edge distributions, respectively. Here, a stroke is defined as a uniform region with bounded width and significant extent. These feature maps are combined to build an Adaboost-based text classifier.
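The exact feature maps are not reproduced here; the sketch below only approximates the idea, describing each candidate patch by a gradient-orientation histogram (a rough proxy for stroke orientation) and a grid of edge-pixel densities, then training an Adaboost text/non-text classifier with scikit-learn. Feature sizes, thresholds, and parameters are illustrative assumptions, not the authors' design.

```python
# Rough approximation of the idea (not the authors' exact features): gradient-
# orientation statistics plus edge-pixel distribution per patch, then Adaboost.
import cv2
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def patch_features(gray_patch, bins=8, grid=4):
    gray = cv2.resize(gray_patch, (32, 32))
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag, ang = cv2.cartToPolar(gx, gy)                 # gradient magnitude / orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    hist /= (hist.sum() + 1e-6)                        # stroke-orientation histogram

    edges = cv2.Canny(gray, 50, 150)
    cells = [c for row in np.array_split(edges, grid)
             for c in np.array_split(row, grid, axis=1)]
    density = np.array([float(np.count_nonzero(c)) / c.size for c in cells])  # edge distribution
    return np.concatenate([hist, density])

def train_text_classifier(patches, labels):
    """patches: list of grayscale patches; labels: 1 = text, 0 = background."""
    X = np.vstack([patch_features(p) for p in patches])
    clf = AdaBoostClassifier(n_estimators=200)
    clf.fit(X, np.asarray(labels))
    return clf
```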
C. TEXT REGION LOCALIZATION
Text localization is then performed on the camera-captured image. The Cascade-Adaboost classifier confirms the existence of text information in an image patch, but it cannot process whole images directly, so heuristic layout analysis is performed to extract candidate image patches prepared for text classification. Text information in the image usually appears in the form of horizontal text strings containing no fewer than three character members.
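A minimal sketch of this layout rule, assuming candidate character boxes are already available from the classifier, is shown below: boxes with similar heights and aligned vertical centers are chained left to right, and only strings with at least three members are kept. The tolerance values are illustrative, not taken from the paper.

```python
# Minimal sketch of the heuristic layout rule: keep only horizontal strings of at
# least three roughly aligned, similar-height candidate character boxes.
def group_horizontal_strings(boxes, min_members=3, y_tol=0.5, gap_factor=1.5):
    """boxes: list of (x, y, w, h) candidate character boxes."""
    boxes = sorted(boxes, key=lambda b: b[0])
    strings, used = [], [False] * len(boxes)

    for i, (x, y, w, h) in enumerate(boxes):
        if used[i]:
            continue
        line = [(x, y, w, h)]
        used[i] = True
        for j in range(i + 1, len(boxes)):
            if used[j]:
                continue
            bx, by, bw, bh = boxes[j]
            px, py, pw, ph = line[-1]
            same_row = abs((by + bh / 2.0) - (py + ph / 2.0)) < y_tol * max(ph, bh)
            close = bx - (px + pw) < gap_factor * max(pw, bw)
            similar = 0.5 < bh / float(ph) < 2.0
            if same_row and close and similar:
                line.append(boxes[j])
                used[j] = True
        if len(line) >= min_members:            # "no fewer than three character members"
            xs = [b[0] for b in line]; ys = [b[1] for b in line]
            xe = [b[0] + b[2] for b in line]; ye = [b[1] + b[3] for b in line]
            strings.append((min(xs), min(ys), max(xe) - min(xs), max(ye) - min(ys)))
    return strings
```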
D. TEXT RECOGNITION AND AUDIO OUTPUT
Text recognition is performed by off-the-shelf OCR prior to output of informative words from the localized text regions. A text region labels the minimum rectangular area for the accommodation of the characters inside it, so the border of the text region touches the edge boundary of the text characters. However, our experiments show that OCR performs better if text regions are first assigned proper margin areas and binarized to segment text characters from the background. The recognized text codes are recorded in script files. The Microsoft Speech Software Development Kit is then employed to load these files and produce the audio output of the text information. Blind users can adjust speech rate, volume, and tone according to their preferences.
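A sketch of this recognition-and-output step is given below, with pytesseract assumed as the off-the-shelf OCR engine and pyttsx3 as a stand-in for the Microsoft Speech SDK; the margin size, Otsu binarization, and file name are illustrative choices rather than the prototype's exact settings.

```python
# Sketch of the recognition/output step: pad the localized text region with a margin,
# binarize it (Otsu), run off-the-shelf OCR, record the result in a script file, and
# speak it. pytesseract and pyttsx3 are assumed stand-ins for the OCR engine and the
# Microsoft Speech SDK used in the prototype.
import cv2
import pytesseract
import pyttsx3

def recognize_and_speak(gray_image, box, margin=10, script_path="recognized.txt"):
    x, y, w, h = box
    H, W = gray_image.shape[:2]
    x0, y0 = max(0, x - margin), max(0, y - margin)          # add margin around region
    x1, y1 = min(W, x + w + margin), min(H, y + h + margin)
    patch = gray_image[y0:y1, x0:x1]

    # Binarize to separate text characters from the background before OCR.
    _, binary = cv2.threshold(patch, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(binary).strip()

    with open(script_path, "a", encoding="utf-8") as f:       # recorded in a script file
        f.write(text + "\n")

    engine = pyttsx3.init()
    engine.setProperty("rate", 150)                           # speech rate is adjustable
    engine.say(text)
    engine.runAndWait()
    return text
```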
VII. CONCLUSIONS AND FUTURE WORK
In this paper, we have described a prototype system to read printed text on hand-held objects for assisting blind persons. In order to solve the common aiming problem for
blind users, we have proposed a motion-based method to
detect the object of interest while the blind user simply shakes
the object for a couple of seconds. This method can
effectively distinguish the object of interest from background
or other objects in the camera view. To extract text regions
from complex backgrounds, we have proposed a novel text
localization algorithm based on models of stroke orientation
and edge distributions. The corresponding feature maps
estimate the global structural feature of text at every pixel.
Block patterns are defined to project the proposed feature
maps of an image patch into a feature vector. Adjacent
character grouping is performed to calculate candidates of text
patches prepared for text classification. An Adaboost learning model is employed to localize text in camera-captured images. Off-the-shelf OCR is used to perform word recognition on the localized text regions, and the result is transformed into audio output for blind users. Our future work will extend our localization algorithm to process text strings with fewer than three characters
and to design more robust block patterns for text feature
extraction. We will also extend our algorithm to handle non-
horizontal text strings. Furthermore, we will address the
significant human interface issues associated with reading text
by blind users.
VIII. REFERENCES
[1] 10 facts about blindness and visual impairment, World Health Organization: Blindness and Visual Impairment, 2009.
[2] Advance Data Reports from the National Health Interview Survey, 2008.
[3] International Workshop on Camera-Based Document Analysis and Recognition (CBDAR 2005, 2007, 2009, 2011).
[4] X. Chen and A. L. Yuille, Detecting and reading text in natural scenes, in CVPR, Vol. 2, pp. II-366 - II-373, 2004.
[5] X. Chen, J. Yang, J. Zhang, and A. Waibel, Automatic detection and recognition of signs from natural scenes, IEEE Transactions on Image Processing, Vol. 13, No. 1, pp. 87-99, 2004.
[6] D. Dakopoulos and N. G. Bourbakis, Wearable obstacle avoidance electronic travel aids for blind: a survey, IEEE Transactions on Systems, Man, and Cybernetics, Vol. 40, No. 1, pp. 25-35, 2010.
[7] B. Epshtein, E. Ofek, and Y. Wexler, Detecting text in natural scenes with stroke width transform, in CVPR, pp. 2963-2970, 2010.
[8] Y. Freund and R. Schapire, Experiments with a new boosting algorithm, in Int. Conf. on Machine Learning, pp. 148-156, 1996.
[9] N. Giudice and G. Legge, Blind navigation and the role of technology, in The engineering handbook of smart technology for aging, disability.
More Related Content

What's hot

Data representation and visualization ppt
Data representation and visualization pptData representation and visualization ppt
Data representation and visualization pptSindhujaCSEngineerin
 
Voice based email for blinds
Voice based email for blindsVoice based email for blinds
Voice based email for blindsArjun AJ
 
BIG DATA-Seminar Report
BIG DATA-Seminar ReportBIG DATA-Seminar Report
BIG DATA-Seminar Reportjosnapv
 
Waste Management Using IOT
Waste Management Using IOTWaste Management Using IOT
Waste Management Using IOTMariamKhan120
 
“Medical Robotics - Perception & Reality - The R&D challenge” - Yossi Bar @Pr...
“Medical Robotics - Perception & Reality - The R&D challenge” - Yossi Bar @Pr...“Medical Robotics - Perception & Reality - The R&D challenge” - Yossi Bar @Pr...
“Medical Robotics - Perception & Reality - The R&D challenge” - Yossi Bar @Pr...Product of Things
 
Edge Artificial Intelligence in smart city development
Edge Artificial Intelligence in smart city developmentEdge Artificial Intelligence in smart city development
Edge Artificial Intelligence in smart city developmentPromiseElechi1
 
Human robot interaction
Human robot interactionHuman robot interaction
Human robot interactionPrakashSoft
 
Face spoofing detection using texture analysis
Face spoofing detection  using texture analysisFace spoofing detection  using texture analysis
Face spoofing detection using texture analysisSREEKUTTY SREEKUMAR
 
Smart Objects
Smart ObjectsSmart Objects
Smart Objectsamenard
 
5g wireless systems
5g wireless systems5g wireless systems
5g wireless systemsAmar
 
digital twin seminar 1.pptx
digital twin seminar 1.pptxdigital twin seminar 1.pptx
digital twin seminar 1.pptxMacZain
 
The sixth sense technology complete ppt
The sixth sense technology complete pptThe sixth sense technology complete ppt
The sixth sense technology complete pptatinav242
 
Speech Recognition by Iqbal
Speech Recognition by IqbalSpeech Recognition by Iqbal
Speech Recognition by IqbalIqbal
 
Sensitiv Imago
Sensitiv ImagoSensitiv Imago
Sensitiv ImagoSamchuchoo
 
Wearable devices in health - using Google Glass, Apple Watch & Health Bands i...
Wearable devices in health - using Google Glass, Apple Watch & Health Bands i...Wearable devices in health - using Google Glass, Apple Watch & Health Bands i...
Wearable devices in health - using Google Glass, Apple Watch & Health Bands i...Gaurav Gupta
 
Sensitive skin
Sensitive skinSensitive skin
Sensitive skinVivek Jha
 

What's hot (20)

Data representation and visualization ppt
Data representation and visualization pptData representation and visualization ppt
Data representation and visualization ppt
 
COBOTS.ppt
COBOTS.pptCOBOTS.ppt
COBOTS.ppt
 
Voice based email for blinds
Voice based email for blindsVoice based email for blinds
Voice based email for blinds
 
BIG DATA-Seminar Report
BIG DATA-Seminar ReportBIG DATA-Seminar Report
BIG DATA-Seminar Report
 
Waste Management Using IOT
Waste Management Using IOTWaste Management Using IOT
Waste Management Using IOT
 
Haptic Technology
Haptic Technology Haptic Technology
Haptic Technology
 
“Medical Robotics - Perception & Reality - The R&D challenge” - Yossi Bar @Pr...
“Medical Robotics - Perception & Reality - The R&D challenge” - Yossi Bar @Pr...“Medical Robotics - Perception & Reality - The R&D challenge” - Yossi Bar @Pr...
“Medical Robotics - Perception & Reality - The R&D challenge” - Yossi Bar @Pr...
 
Edge Artificial Intelligence in smart city development
Edge Artificial Intelligence in smart city developmentEdge Artificial Intelligence in smart city development
Edge Artificial Intelligence in smart city development
 
Human robot interaction
Human robot interactionHuman robot interaction
Human robot interaction
 
Face spoofing detection using texture analysis
Face spoofing detection  using texture analysisFace spoofing detection  using texture analysis
Face spoofing detection using texture analysis
 
Smart Objects
Smart ObjectsSmart Objects
Smart Objects
 
5g wireless systems
5g wireless systems5g wireless systems
5g wireless systems
 
digital twin seminar 1.pptx
digital twin seminar 1.pptxdigital twin seminar 1.pptx
digital twin seminar 1.pptx
 
The sixth sense technology complete ppt
The sixth sense technology complete pptThe sixth sense technology complete ppt
The sixth sense technology complete ppt
 
Speech Recognition by Iqbal
Speech Recognition by IqbalSpeech Recognition by Iqbal
Speech Recognition by Iqbal
 
Sensitiv Imago
Sensitiv ImagoSensitiv Imago
Sensitiv Imago
 
Smart kitchen
Smart kitchenSmart kitchen
Smart kitchen
 
Satrack
SatrackSatrack
Satrack
 
Wearable devices in health - using Google Glass, Apple Watch & Health Bands i...
Wearable devices in health - using Google Glass, Apple Watch & Health Bands i...Wearable devices in health - using Google Glass, Apple Watch & Health Bands i...
Wearable devices in health - using Google Glass, Apple Watch & Health Bands i...
 
Sensitive skin
Sensitive skinSensitive skin
Sensitive skin
 

Similar to Visual Product Identification For Blind Peoples

Product Recognition using Label and Barcodes
Product Recognition using Label and BarcodesProduct Recognition using Label and Barcodes
Product Recognition using Label and BarcodesIRJET Journal
 
Product Label Reading System for visually challenged people
Product Label Reading System for visually challenged peopleProduct Label Reading System for visually challenged people
Product Label Reading System for visually challenged peopleIRJET Journal
 
IRJET- Review on Text Recognization of Product for Blind Person using MATLAB
IRJET-  Review on Text Recognization of Product for Blind Person using MATLABIRJET-  Review on Text Recognization of Product for Blind Person using MATLAB
IRJET- Review on Text Recognization of Product for Blind Person using MATLABIRJET Journal
 
IRJET- Text Recognization of Product for Blind Person using MATLAB
IRJET- Text Recognization of Product for Blind Person using MATLABIRJET- Text Recognization of Product for Blind Person using MATLAB
IRJET- Text Recognization of Product for Blind Person using MATLABIRJET Journal
 
IRJET- Design and Development of Tesseract-OCR Based Assistive System to Conv...
IRJET- Design and Development of Tesseract-OCR Based Assistive System to Conv...IRJET- Design and Development of Tesseract-OCR Based Assistive System to Conv...
IRJET- Design and Development of Tesseract-OCR Based Assistive System to Conv...IRJET Journal
 
An Assistive System for Visually Impaired People
An Assistive System for Visually Impaired PeopleAn Assistive System for Visually Impaired People
An Assistive System for Visually Impaired PeopleIRJET Journal
 
IRJET- Review on Portable Camera based Assistive Text and Label Reading f...
IRJET-  	  Review on Portable Camera based Assistive Text and Label Reading f...IRJET-  	  Review on Portable Camera based Assistive Text and Label Reading f...
IRJET- Review on Portable Camera based Assistive Text and Label Reading f...IRJET Journal
 
IRJET- Indoor Shopping System for Visually Impaired People
IRJET- Indoor Shopping System for Visually Impaired PeopleIRJET- Indoor Shopping System for Visually Impaired People
IRJET- Indoor Shopping System for Visually Impaired PeopleIRJET Journal
 
IRJET - Number/Text Translate from Image
IRJET -  	  Number/Text Translate from ImageIRJET -  	  Number/Text Translate from Image
IRJET - Number/Text Translate from ImageIRJET Journal
 
An approach for text detection and reading of product label for blind persons
An approach for text detection and reading of product label for blind personsAn approach for text detection and reading of product label for blind persons
An approach for text detection and reading of product label for blind personsVivek Chamorshikar
 
IRJET- Navigation and Camera Reading System for Visually Impaired
IRJET- Navigation and Camera Reading System for Visually ImpairedIRJET- Navigation and Camera Reading System for Visually Impaired
IRJET- Navigation and Camera Reading System for Visually ImpairedIRJET Journal
 
IRJET- Text Reading for Visually Impaired Person using Raspberry Pi
IRJET- Text Reading for Visually Impaired Person using Raspberry PiIRJET- Text Reading for Visually Impaired Person using Raspberry Pi
IRJET- Text Reading for Visually Impaired Person using Raspberry PiIRJET Journal
 
Smart Assistant for Blind Humans using Rashberry PI
Smart Assistant for Blind Humans using Rashberry PISmart Assistant for Blind Humans using Rashberry PI
Smart Assistant for Blind Humans using Rashberry PIijtsrd
 
IRJET - Smart Vision System for Visually Impaired People
IRJET -  	  Smart Vision System for Visually Impaired PeopleIRJET -  	  Smart Vision System for Visually Impaired People
IRJET - Smart Vision System for Visually Impaired PeopleIRJET Journal
 
IRJET- Sign Language Interpreter
IRJET- Sign Language InterpreterIRJET- Sign Language Interpreter
IRJET- Sign Language InterpreterIRJET Journal
 
Finger reader thesis and seminar report
Finger reader thesis and seminar reportFinger reader thesis and seminar report
Finger reader thesis and seminar reportSarvesh Meena
 
IRJETDevice for Location Finder and Text Reader for Visually Impaired People
IRJETDevice for Location Finder and Text Reader for Visually Impaired PeopleIRJETDevice for Location Finder and Text Reader for Visually Impaired People
IRJETDevice for Location Finder and Text Reader for Visually Impaired PeopleIRJET Journal
 
IRJET- Device for Location Finder and Text Reader for Visually Impaired P...
IRJET-  	  Device for Location Finder and Text Reader for Visually Impaired P...IRJET-  	  Device for Location Finder and Text Reader for Visually Impaired P...
IRJET- Device for Location Finder and Text Reader for Visually Impaired P...IRJET Journal
 
F2R Analyzer Using Machine Learning and Deep Learning
F2R Analyzer Using Machine Learning and Deep LearningF2R Analyzer Using Machine Learning and Deep Learning
F2R Analyzer Using Machine Learning and Deep LearningIRJET Journal
 

Similar to Visual Product Identification For Blind Peoples (20)

Product Recognition using Label and Barcodes
Product Recognition using Label and BarcodesProduct Recognition using Label and Barcodes
Product Recognition using Label and Barcodes
 
Product Label Reading System for visually challenged people
Product Label Reading System for visually challenged peopleProduct Label Reading System for visually challenged people
Product Label Reading System for visually challenged people
 
IRJET- Review on Text Recognization of Product for Blind Person using MATLAB
IRJET-  Review on Text Recognization of Product for Blind Person using MATLABIRJET-  Review on Text Recognization of Product for Blind Person using MATLAB
IRJET- Review on Text Recognization of Product for Blind Person using MATLAB
 
IRJET- Text Recognization of Product for Blind Person using MATLAB
IRJET- Text Recognization of Product for Blind Person using MATLABIRJET- Text Recognization of Product for Blind Person using MATLAB
IRJET- Text Recognization of Product for Blind Person using MATLAB
 
IRJET- Design and Development of Tesseract-OCR Based Assistive System to Conv...
IRJET- Design and Development of Tesseract-OCR Based Assistive System to Conv...IRJET- Design and Development of Tesseract-OCR Based Assistive System to Conv...
IRJET- Design and Development of Tesseract-OCR Based Assistive System to Conv...
 
An Assistive System for Visually Impaired People
An Assistive System for Visually Impaired PeopleAn Assistive System for Visually Impaired People
An Assistive System for Visually Impaired People
 
IRJET- Review on Portable Camera based Assistive Text and Label Reading f...
IRJET-  	  Review on Portable Camera based Assistive Text and Label Reading f...IRJET-  	  Review on Portable Camera based Assistive Text and Label Reading f...
IRJET- Review on Portable Camera based Assistive Text and Label Reading f...
 
IRJET- Indoor Shopping System for Visually Impaired People
IRJET- Indoor Shopping System for Visually Impaired PeopleIRJET- Indoor Shopping System for Visually Impaired People
IRJET- Indoor Shopping System for Visually Impaired People
 
IRJET - Number/Text Translate from Image
IRJET -  	  Number/Text Translate from ImageIRJET -  	  Number/Text Translate from Image
IRJET - Number/Text Translate from Image
 
An approach for text detection and reading of product label for blind persons
An approach for text detection and reading of product label for blind personsAn approach for text detection and reading of product label for blind persons
An approach for text detection and reading of product label for blind persons
 
IRJET- Navigation and Camera Reading System for Visually Impaired
IRJET- Navigation and Camera Reading System for Visually ImpairedIRJET- Navigation and Camera Reading System for Visually Impaired
IRJET- Navigation and Camera Reading System for Visually Impaired
 
IRJET- Healthy Beat
IRJET- Healthy BeatIRJET- Healthy Beat
IRJET- Healthy Beat
 
IRJET- Text Reading for Visually Impaired Person using Raspberry Pi
IRJET- Text Reading for Visually Impaired Person using Raspberry PiIRJET- Text Reading for Visually Impaired Person using Raspberry Pi
IRJET- Text Reading for Visually Impaired Person using Raspberry Pi
 
Smart Assistant for Blind Humans using Rashberry PI
Smart Assistant for Blind Humans using Rashberry PISmart Assistant for Blind Humans using Rashberry PI
Smart Assistant for Blind Humans using Rashberry PI
 
IRJET - Smart Vision System for Visually Impaired People
IRJET -  	  Smart Vision System for Visually Impaired PeopleIRJET -  	  Smart Vision System for Visually Impaired People
IRJET - Smart Vision System for Visually Impaired People
 
IRJET- Sign Language Interpreter
IRJET- Sign Language InterpreterIRJET- Sign Language Interpreter
IRJET- Sign Language Interpreter
 
Finger reader thesis and seminar report
Finger reader thesis and seminar reportFinger reader thesis and seminar report
Finger reader thesis and seminar report
 
IRJETDevice for Location Finder and Text Reader for Visually Impaired People
IRJETDevice for Location Finder and Text Reader for Visually Impaired PeopleIRJETDevice for Location Finder and Text Reader for Visually Impaired People
IRJETDevice for Location Finder and Text Reader for Visually Impaired People
 
IRJET- Device for Location Finder and Text Reader for Visually Impaired P...
IRJET-  	  Device for Location Finder and Text Reader for Visually Impaired P...IRJET-  	  Device for Location Finder and Text Reader for Visually Impaired P...
IRJET- Device for Location Finder and Text Reader for Visually Impaired P...
 
F2R Analyzer Using Machine Learning and Deep Learning
F2R Analyzer Using Machine Learning and Deep LearningF2R Analyzer Using Machine Learning and Deep Learning
F2R Analyzer Using Machine Learning and Deep Learning
 

More from IRJET Journal

TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...
TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...
TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...IRJET Journal
 
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTURE
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTURESTUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTURE
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTUREIRJET Journal
 
A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...
A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...
A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...IRJET Journal
 
Effect of Camber and Angles of Attack on Airfoil Characteristics
Effect of Camber and Angles of Attack on Airfoil CharacteristicsEffect of Camber and Angles of Attack on Airfoil Characteristics
Effect of Camber and Angles of Attack on Airfoil CharacteristicsIRJET Journal
 
A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...
A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...
A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...IRJET Journal
 
Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-...
Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-...Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-...
Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-...IRJET Journal
 
Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape...
Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape...Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape...
Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape...IRJET Journal
 
A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg...
A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg...A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg...
A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg...IRJET Journal
 
A REVIEW ON MACHINE LEARNING IN ADAS
A REVIEW ON MACHINE LEARNING IN ADASA REVIEW ON MACHINE LEARNING IN ADAS
A REVIEW ON MACHINE LEARNING IN ADASIRJET Journal
 
Long Term Trend Analysis of Precipitation and Temperature for Asosa district,...
Long Term Trend Analysis of Precipitation and Temperature for Asosa district,...Long Term Trend Analysis of Precipitation and Temperature for Asosa district,...
Long Term Trend Analysis of Precipitation and Temperature for Asosa district,...IRJET Journal
 
P.E.B. Framed Structure Design and Analysis Using STAAD Pro
P.E.B. Framed Structure Design and Analysis Using STAAD ProP.E.B. Framed Structure Design and Analysis Using STAAD Pro
P.E.B. Framed Structure Design and Analysis Using STAAD ProIRJET Journal
 
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...IRJET Journal
 
Survey Paper on Cloud-Based Secured Healthcare System
Survey Paper on Cloud-Based Secured Healthcare SystemSurvey Paper on Cloud-Based Secured Healthcare System
Survey Paper on Cloud-Based Secured Healthcare SystemIRJET Journal
 
Review on studies and research on widening of existing concrete bridges
Review on studies and research on widening of existing concrete bridgesReview on studies and research on widening of existing concrete bridges
Review on studies and research on widening of existing concrete bridgesIRJET Journal
 
React based fullstack edtech web application
React based fullstack edtech web applicationReact based fullstack edtech web application
React based fullstack edtech web applicationIRJET Journal
 
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...IRJET Journal
 
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.IRJET Journal
 
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...IRJET Journal
 
Multistoried and Multi Bay Steel Building Frame by using Seismic Design
Multistoried and Multi Bay Steel Building Frame by using Seismic DesignMultistoried and Multi Bay Steel Building Frame by using Seismic Design
Multistoried and Multi Bay Steel Building Frame by using Seismic DesignIRJET Journal
 
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...IRJET Journal
 

More from IRJET Journal (20)

TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...
TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...
TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...
 
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTURE
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTURESTUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTURE
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTURE
 
A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...
A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...
A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...
 
Effect of Camber and Angles of Attack on Airfoil Characteristics
Effect of Camber and Angles of Attack on Airfoil CharacteristicsEffect of Camber and Angles of Attack on Airfoil Characteristics
Effect of Camber and Angles of Attack on Airfoil Characteristics
 
A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...
A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...
A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...
 
Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-...
Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-...Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-...
Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-...
 
Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape...
Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape...Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape...
Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape...
 
A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg...
A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg...A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg...
A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg...
 
A REVIEW ON MACHINE LEARNING IN ADAS
A REVIEW ON MACHINE LEARNING IN ADASA REVIEW ON MACHINE LEARNING IN ADAS
A REVIEW ON MACHINE LEARNING IN ADAS
 
Long Term Trend Analysis of Precipitation and Temperature for Asosa district,...
Long Term Trend Analysis of Precipitation and Temperature for Asosa district,...Long Term Trend Analysis of Precipitation and Temperature for Asosa district,...
Long Term Trend Analysis of Precipitation and Temperature for Asosa district,...
 
P.E.B. Framed Structure Design and Analysis Using STAAD Pro
P.E.B. Framed Structure Design and Analysis Using STAAD ProP.E.B. Framed Structure Design and Analysis Using STAAD Pro
P.E.B. Framed Structure Design and Analysis Using STAAD Pro
 
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...
 
Survey Paper on Cloud-Based Secured Healthcare System
Survey Paper on Cloud-Based Secured Healthcare SystemSurvey Paper on Cloud-Based Secured Healthcare System
Survey Paper on Cloud-Based Secured Healthcare System
 
Review on studies and research on widening of existing concrete bridges
Review on studies and research on widening of existing concrete bridgesReview on studies and research on widening of existing concrete bridges
Review on studies and research on widening of existing concrete bridges
 
React based fullstack edtech web application
React based fullstack edtech web applicationReact based fullstack edtech web application
React based fullstack edtech web application
 
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...
 
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.
 
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...
 
Multistoried and Multi Bay Steel Building Frame by using Seismic Design
Multistoried and Multi Bay Steel Building Frame by using Seismic DesignMultistoried and Multi Bay Steel Building Frame by using Seismic Design
Multistoried and Multi Bay Steel Building Frame by using Seismic Design
 
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...
 

Recently uploaded

Thermal Engineering Unit - I & II . ppt
Thermal Engineering  Unit - I & II . pptThermal Engineering  Unit - I & II . ppt
Thermal Engineering Unit - I & II . pptDineshKumar4165
 
data_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdfdata_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdfJiananWang21
 
Vivazz, Mieres Social Housing Design Spain
Vivazz, Mieres Social Housing Design SpainVivazz, Mieres Social Housing Design Spain
Vivazz, Mieres Social Housing Design Spaintimesproduction05
 
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdfONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdfKamal Acharya
 
UNIT-II FMM-Flow Through Circular Conduits
UNIT-II FMM-Flow Through Circular ConduitsUNIT-II FMM-Flow Through Circular Conduits
UNIT-II FMM-Flow Through Circular Conduitsrknatarajan
 
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingUNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingrknatarajan
 
Call for Papers - International Journal of Intelligent Systems and Applicatio...
Call for Papers - International Journal of Intelligent Systems and Applicatio...Call for Papers - International Journal of Intelligent Systems and Applicatio...
Call for Papers - International Journal of Intelligent Systems and Applicatio...Christo Ananth
 
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELLPVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELLManishPatel169454
 
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordCCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordAsst.prof M.Gokilavani
 
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...Call Girls in Nagpur High Profile
 
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptxBSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptxfenichawla
 
Intze Overhead Water Tank Design by Working Stress - IS Method.pdf
Intze Overhead Water Tank  Design by Working Stress - IS Method.pdfIntze Overhead Water Tank  Design by Working Stress - IS Method.pdf
Intze Overhead Water Tank Design by Working Stress - IS Method.pdfSuman Jyoti
 
chapter 5.pptx: drainage and irrigation engineering
chapter 5.pptx: drainage and irrigation engineeringchapter 5.pptx: drainage and irrigation engineering
chapter 5.pptx: drainage and irrigation engineeringmulugeta48
 
Glass Ceramics: Processing and Properties
Glass Ceramics: Processing and PropertiesGlass Ceramics: Processing and Properties
Glass Ceramics: Processing and PropertiesPrabhanshu Chaturvedi
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxAsutosh Ranjan
 
Booking open Available Pune Call Girls Pargaon 6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Pargaon  6297143586 Call Hot Indian Gi...Booking open Available Pune Call Girls Pargaon  6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Pargaon 6297143586 Call Hot Indian Gi...Call Girls in Nagpur High Profile
 
Extrusion Processes and Their Limitations
Extrusion Processes and Their LimitationsExtrusion Processes and Their Limitations
Extrusion Processes and Their Limitations120cr0395
 
AKTU Computer Networks notes --- Unit 3.pdf
AKTU Computer Networks notes ---  Unit 3.pdfAKTU Computer Networks notes ---  Unit 3.pdf
AKTU Computer Networks notes --- Unit 3.pdfankushspencer015
 

Recently uploaded (20)

Thermal Engineering Unit - I & II . ppt
Thermal Engineering  Unit - I & II . pptThermal Engineering  Unit - I & II . ppt
Thermal Engineering Unit - I & II . ppt
 
data_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdfdata_management_and _data_science_cheat_sheet.pdf
data_management_and _data_science_cheat_sheet.pdf
 
Vivazz, Mieres Social Housing Design Spain
Vivazz, Mieres Social Housing Design SpainVivazz, Mieres Social Housing Design Spain
Vivazz, Mieres Social Housing Design Spain
 
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar ≼🔝 Delhi door step de...
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar  ≼🔝 Delhi door step de...Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar  ≼🔝 Delhi door step de...
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar ≼🔝 Delhi door step de...
 
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdfONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
 
UNIT-II FMM-Flow Through Circular Conduits
UNIT-II FMM-Flow Through Circular ConduitsUNIT-II FMM-Flow Through Circular Conduits
UNIT-II FMM-Flow Through Circular Conduits
 
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingUNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
 
Call for Papers - International Journal of Intelligent Systems and Applicatio...
Call for Papers - International Journal of Intelligent Systems and Applicatio...Call for Papers - International Journal of Intelligent Systems and Applicatio...
Call for Papers - International Journal of Intelligent Systems and Applicatio...
 
(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7
(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7
(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7
 
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELLPVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
 
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordCCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
 
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...
 
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptxBSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
 
Intze Overhead Water Tank Design by Working Stress - IS Method.pdf
Intze Overhead Water Tank  Design by Working Stress - IS Method.pdfIntze Overhead Water Tank  Design by Working Stress - IS Method.pdf
Intze Overhead Water Tank Design by Working Stress - IS Method.pdf
 
chapter 5.pptx: drainage and irrigation engineering
chapter 5.pptx: drainage and irrigation engineeringchapter 5.pptx: drainage and irrigation engineering
chapter 5.pptx: drainage and irrigation engineering
 
Glass Ceramics: Processing and Properties
Glass Ceramics: Processing and PropertiesGlass Ceramics: Processing and Properties
Glass Ceramics: Processing and Properties
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptx
 
Booking open Available Pune Call Girls Pargaon 6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Pargaon  6297143586 Call Hot Indian Gi...Booking open Available Pune Call Girls Pargaon  6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Pargaon 6297143586 Call Hot Indian Gi...
 
Extrusion Processes and Their Limitations
Extrusion Processes and Their LimitationsExtrusion Processes and Their Limitations
Extrusion Processes and Their Limitations
 
AKTU Computer Networks notes --- Unit 3.pdf
AKTU Computer Networks notes ---  Unit 3.pdfAKTU Computer Networks notes ---  Unit 3.pdf
AKTU Computer Networks notes --- Unit 3.pdf
 

Visual Product Identification For Blind Peoples

  • 1. International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395 -0056 Volume: 03 Issue: 01 | Jan-2016 www.irjet.net p-ISSN: 2395-0072 © 2016, IRJET | Impact Factor value: 4.45 | ISO 9001:2008 Certified Journal | Page 1231 Visual Product Identification For Blind Peoples Prajakata Bhunje1, Anuja Deshmukh2, Dhanshree Phalake3, Amruta Bandal4 1 Prajakta Nitin Bhunje, Computer Engg., DGOI FOE,Maharashtra, India 2 Anuja Ramesh Deshmukh, Computer Engg., DGOI FOE,Maharashtra, India 3 Dhanshree Viajysinh Phalake, Computer Engg., DGOI FOE,Maharashtra, India 4 Amruta Avinash Bandal, Computer Engg., DGOI FOE,Maharashtra, India ---------------------------------------------------------------------***--------------------------------------------------------------------- Abstract - We propose a camera-based assistive text reading framework to help blind persons ad text labels and product packaging from hand-held objects in their daily lives. To isolate the object from cluttered backgrounds or other surrounding objects in the camera view, we rest propose an ancient and active motion-based method to dine a region of interest (ROI) in the video by asking the user to shake the object. This method extracts moving object region by a mixture-of- Gaussians based background subtraction method. In the extracted ROI, text localization and recognition are conducted to acquire text information. To automatically localize the text regions from the object ROI, we propose a novel text localization algorithm by learning gradient features of stroke orientations and distributions of edge pixels in an Ad boost model. Text characters in the localized text regions are then binaries and recognized by o -the-shelf optical character recognition (OCR) software. The recognized text codes are output to blind users in speech. Performance of the proposed text localization algorithm is quantitatively evaluated on ICDAR-2003 and ICDAR-2011 Robust Reading Datasets. Experimental results demonstrate that our algorithm achieves the state-of-the-arts. The proof-of-concept prototype is also evaluated on a dataset collected using 10 blind persons, to evaluate ether activeness of the systems hardware. We explore user interface issues, and assess robustness of the algorithm in extracting and reading text from di erent objects with complex backgrounds. Keywords: blindness; assistive devices; text reading; hand-held objects, text region localization; stroke orientation; distribution of edge pixels; OCR; Key Words I. INTRODUCTION Reading is obviously essential in today’s society. Printed text is everywhere in the form of reports, receipts, bank statements, restaurant menus, classroom handouts, product packages, instructions on medicine bottles, etc. And while optical aids, video mangier and screen readers can help blind users and those with low vision to access documents, there are few devices that can provide good access to common hand- held objects such as product packages, and objects printed with text such as prescription medication bottles. The ability of people who are blind or have significant visual impairments to read printed labels and product packages will enhance independent living, and foster economic and social self- sufficiency. Today there are already a few systems that have some promise for portable use, but they cannot handle product labeling. 
For example portable bar code readers designed to help blind people identify different products in an extensive product database can enable users who are blind to access information about these products through speech and Braille. But a big limitation is that it is very hard for blind users to find the position of the bar code and to correctly point the bar code reader at the bar code. Some reading-assistive systems such as pen scanners, might be employed in these and similar situations. Such systems integrate optical character recognition (OCR) software to offer the function of scanning and recognition of text and some have integrated voice output. However, these systems are generally designed for and perform best with document images with simple backgrounds, standard fonts, a small range of font sizes, and well-organized characters rather than commercial product boxes with multiple decorative patterns. Most state-of-the-art OCR software cannot directly handle scene images with complex backgrounds. A number of portable reading assistants have been designed specifically for the visually impaired. KReader Mobile runs on a cell phone and allows the user to read mail, receipts, fliers and many other documents [13]. However, the document to be read must
Furthermore, KReader Mobile accurately reads black print on a white background, but has problems recognizing colored text or text on a colored background. It cannot read text with complex backgrounds, or text printed on cylinders with warped or incomplete images (such as soup cans or medicine bottles). Moreover, these systems require a blind user to manually localize the areas of interest and text regions on the objects in most cases.

Fig. 1. Example of printed text.

Although a number of reading assistants have been designed specifically for the visually impaired, to our knowledge no existing reading assistant can read text from the kinds of challenging patterns and backgrounds found on many everyday commercial products. As shown in Fig. 1, such text information can appear in multiple scales, fonts, colors, and orientations. To assist blind persons in reading text from these kinds of hand-held objects, we have conceived a camera-based assistive text reading framework to track the object of interest within the camera view and extract printed text information from the object. Our proposed algorithm can effectively handle complex backgrounds and multiple text patterns, and extract text information from both hand-held objects and nearby signage, as shown in Fig. 2.

In assistive reading systems for blind persons, it is very challenging for users to position the object of interest within the center of the camera's view, and as of now there are still no acceptable solutions. We approach the problem in stages. To make sure the hand-held object appears in the camera view, we use a camera with a sufficiently wide angle to accommodate users with only approximate aim. This may often result in other text objects appearing in the camera's view (for example, while shopping at a supermarket). To extract the hand-held object from the camera image, we develop a motion-based method to obtain a region of interest (ROI) of the object. Then we perform text recognition only in this ROI.
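The paper describes this motion-based ROI detection only at the algorithmic level. As a rough illustration, the Python sketch below uses OpenCV's MOG2 mixture-of-Gaussians background subtractor as a stand-in for the mixture-of-Gaussians model mentioned above; the function name extract_shaken_roi, the warm-up length, and the 30% persistence threshold are illustrative assumptions, not the authors' implementation.

import cv2
import numpy as np

def extract_shaken_roi(video_path, warmup_frames=30):
    # Sketch: isolate the hand-held object that the user shakes in front of the
    # camera, using a mixture-of-Gaussians background subtractor (OpenCV MOG2).
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200,
                                                    varThreshold=25,
                                                    detectShadows=False)
    accumulated = None
    frame_count = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg_mask = subtractor.apply(frame)
        if frame_count >= warmup_frames:
            # Accumulate foreground evidence produced by the shaking motion.
            if accumulated is None:
                accumulated = np.zeros(fg_mask.shape, dtype=np.float32)
            accumulated += (fg_mask > 0).astype(np.float32)
        frame_count += 1
    cap.release()
    if accumulated is None:
        return None
    # Keep pixels that were foreground in a sizable share of the shaken frames
    # (the 30% threshold is an arbitrary illustrative choice).
    shaken_frames = frame_count - warmup_frames
    motion_mask = (accumulated > 0.3 * shaken_frames).astype(np.uint8) * 255
    motion_mask = cv2.morphologyEx(motion_mask, cv2.MORPH_CLOSE,
                                   np.ones((15, 15), np.uint8))
    # OpenCV 4.x: findContours returns (contours, hierarchy).
    contours, _ = cv2.findContours(motion_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # The largest moving region is taken as the hand-held object ROI.
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return (x, y, w, h)

The returned bounding box would then restrict the text localization and recognition steps described in the following sections to the hand-held object alone.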
II. MOTIVATION

Today, there are already a few systems that have some promise for portable use, but they cannot handle product labeling. For example, portable bar code readers designed to help blind people identify different products in an extensive product database can enable users who are blind to access information about these products through speech and Braille. A major limitation, however, is that it is very hard for blind users to find the position of the bar code and to point the bar code reader at it correctly. Some reading-assistive systems, such as pen scanners, might be employed in these and similar situations. Such systems integrate OCR software to scan and recognize text, and some have integrated voice output.

Fig. 2. Example of text localization.

Two of the biggest challenges to independence for blind individuals are difficulties in accessing printed material [1] and the stressors associated with safe and efficient navigation. Access to printed documents has been greatly improved by the development and proliferation of adaptive technologies such as screen-reading programs, optical character recognition software, text-to-speech engines, and electronic Braille displays. By contrast, difficulty in accessing room numbers, street signs, store names, bus numbers, maps, and other printed information related to navigation remains a major challenge for blind travelers. Imagine trying to find room N257 in a large university building without being able to read the room numbers or access the "you are here" map at the building's entrance. Braille signage certainly helps in identifying a room, but it is difficult for blind people to find Braille signs. In addition, only a modest fraction of the more than 3 million visually impaired people in the United States read Braille; estimates put the number of Braille readers between 15,000 and 85,000.

III. OBJECTIVES

This paper presents a prototype system for assistive text reading. The system framework consists of three functional components: scene capture, data processing, and audio output. The scene capture component collects scenes containing objects of interest in the form of images or video; in our prototype, it corresponds to a camera attached to a pair of sunglasses. The data processing component deploys our proposed algorithms, including 1) object-of-interest detection, to selectively extract the image of the object held by the blind user from the cluttered background or other neutral objects in the camera view, and 2) text localization, to obtain image regions containing text, followed by text recognition to transform image-based text information into readable codes. We use a mini laptop as the processing device in our current prototype system. The audio output component informs the blind user of the recognized text codes.

To extract text information from complex backgrounds with multiple and variable text patterns, we propose a text localization algorithm that combines rule-based layout analysis and learning-based text classifier training. The algorithm defines novel feature maps based on stroke orientations and edge distributions, which in turn generate representative and discriminative text features to distinguish text characters from background outliers.
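The stroke-orientation and edge-distribution features and the Adaboost classifier are described above only at a high level. The simplified Python sketch that follows shows one way such block-pattern features might be computed and fed to an off-the-shelf AdaBoost classifier from scikit-learn; the 4x4 grid, the eight orientation bins, and the function names are illustrative assumptions, not the authors' exact feature maps.

import cv2
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def patch_features(gray_patch, grid=4, orientation_bins=8):
    # Block-pattern features: a gradient (stroke) orientation histogram plus an
    # edge-density value for each cell of a grid laid over a grayscale patch.
    patch = cv2.resize(gray_patch, (64, 32))
    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = cv2.magnitude(gx, gy)
    orientation = np.arctan2(gy, gx)        # proxy for stroke orientation
    edges = cv2.Canny(patch, 100, 200)      # proxy for edge-pixel distribution
    h, w = patch.shape
    features = []
    for i in range(grid):
        for j in range(grid):
            ys = slice(i * h // grid, (i + 1) * h // grid)
            xs = slice(j * w // grid, (j + 1) * w // grid)
            hist, _ = np.histogram(orientation[ys, xs], bins=orientation_bins,
                                   range=(-np.pi, np.pi),
                                   weights=magnitude[ys, xs])
            features.extend(hist / (hist.sum() + 1e-6))        # orientation pattern
            features.append(float((edges[ys, xs] > 0).mean()))  # edge density
    return np.asarray(features, dtype=np.float32)

def train_text_classifier(patches, labels):
    # patches: grayscale candidate patches; labels: 1 for text, 0 for background.
    X = np.stack([patch_features(p) for p in patches])
    clf = AdaBoostClassifier(n_estimators=200)
    clf.fit(X, labels)
    return clf

In practice, the positive samples would be cropped text patches and the negative samples would be background patches drawn from the same scenes.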
IV. APPLICATIONS

1. Image capturing and pre-processing
2. Automatic text extraction
3. Text recognition and audio output
4. Prototype system evaluation

V. SYSTEM DESIGN

The prototype system of assistive text reading consists of three functional components: scene capture, data processing, and audio output (Fig. 3 and Fig. 4).

Fig. 3. Block diagram of text reading.

Fig. 4. Flowchart of the proposed framework to read text from hand-held objects for blind users.

The scene capture component collects scenes containing objects of interest in the form of images or video; in our prototype, it corresponds to a camera attached to a pair of sunglasses. The data processing component deploys our proposed algorithms, including 1) object-of-interest detection, to selectively extract the image of the object held by the blind user from the cluttered background or other neutral objects in the camera view, and 2) text localization, to obtain image regions containing text, followed by text recognition to transform image-based text information into readable codes. We use a mini laptop as the processing device in our current prototype system. The audio output component informs the blind user of the recognized text codes.

Fig. 5. Two examples of text localization and recognition from camera-captured images.
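To make the three-component design concrete, the sketch below wires the stages together as pluggable functions; the class name, stage signatures, and spoken prompts are hypothetical and only indicate how scene capture, data processing, and audio output could hand results to one another under the design described above.

from dataclasses import dataclass
from typing import Callable, Optional, Tuple

import numpy as np

# Hypothetical stage signatures mirroring the three components described above.
CaptureFn = Callable[[], np.ndarray]                                    # scene capture
DetectFn = Callable[[np.ndarray], Optional[Tuple[int, int, int, int]]]  # object ROI
ReadFn = Callable[[np.ndarray], str]                                    # localization + OCR
SpeakFn = Callable[[str], None]                                         # audio output

@dataclass
class AssistiveReader:
    capture: CaptureFn
    detect_object: DetectFn
    read_text: ReadFn
    speak: SpeakFn

    def run_once(self) -> None:
        frame = self.capture()
        roi = self.detect_object(frame)
        if roi is None:
            self.speak("No object detected. Please shake the object.")
            return
        x, y, w, h = roi
        text = self.read_text(frame[y:y + h, x:x + w])
        self.speak(text if text else "No readable text found.")

# Example wiring (all stage names are hypothetical):
# reader = AssistiveReader(capture=grab_frame,
#                          detect_object=extract_shaken_roi_from_frame,
#                          read_text=localize_and_recognize,
#                          speak=speak_aloud)
# reader.run_once()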
VI. MODULES

A. IMAGE CAPTURING AND PREPROCESSING
The video is captured using a web-cam, and the frames from the video are separated and passed to pre-processing. First, frames are obtained continuously from the camera and adapted for processing. Once the object of interest is extracted from the camera image, it is converted into a gray image. A Haar cascade classifier is used for recognizing characters on the object. Working with a cascade classifier involves two major stages: training and detection. Training requires a set of samples of two types: positive and negative. To extract the hand-held object of interest from other objects in the camera view, we ask users to shake the hand-held object containing the text they wish to identify, and then employ a motion-based method to localize the object against the cluttered background.

B. AUTOMATIC TEXT EXTRACTION
In order to handle complex backgrounds, two novel feature maps extract text features based on stroke orientations and edge distributions, respectively. Here, a stroke is defined as a uniform region with bounded width and significant extent. These feature maps are combined to build an Adaboost-based text classifier.

C. TEXT REGION LOCALIZATION
Text localization is then performed on the camera-captured image. The cascade-Adaboost classifier confirms the existence of text information in an image patch, but it cannot handle whole images, so heuristic layout analysis is performed to extract candidate image patches prepared for text classification. Text information in an image usually appears in the form of horizontal text strings containing no fewer than three character members.

D. TEXT RECOGNITION AND AUDIO OUTPUT
Text recognition is performed by off-the-shelf OCR prior to the output of informative words from the localized text regions. A text region labels the minimum rectangular area that accommodates the characters inside it, so the border of the text region touches the edge boundary of the text characters. However, our experiments show that OCR performs better when text regions are first assigned proper margin areas and binarized to segment text characters from the background. The recognized text codes are recorded in script files. We then employ the Microsoft Speech Software Development Kit to load these files and produce audio output of the text information. Blind users can adjust speech rate, volume, and tone according to their preferences.
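Module D combines off-the-shelf OCR with speech output. The sketch below substitutes pytesseract (a common off-the-shelf OCR wrapper) and pyttsx3 (which drives the Windows speech API, standing in here for the Microsoft Speech SDK) to illustrate the margin padding, binarization, recognition, and speech steps; the function name, margin size, and speech rate are illustrative assumptions.

import cv2
import pytesseract   # assumed stand-in for the off-the-shelf OCR engine
import pyttsx3       # assumed stand-in for the Microsoft Speech SDK output

def recognize_and_speak(image, text_boxes, margin=10, rate=150):
    # image: BGR frame; text_boxes: (x, y, w, h) boxes from text localization.
    engine = pyttsx3.init()
    engine.setProperty("rate", rate)       # blind users can adjust speech rate
    recognized = []
    h, w = image.shape[:2]
    for (x, y, bw, bh) in text_boxes:
        # Add a margin so character strokes do not touch the region border.
        x0, y0 = max(0, x - margin), max(0, y - margin)
        x1, y1 = min(w, x + bw + margin), min(h, y + bh + margin)
        region = cv2.cvtColor(image[y0:y1, x0:x1], cv2.COLOR_BGR2GRAY)
        # Otsu binarization to segment text characters from the background.
        _, binary = cv2.threshold(region, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        text = pytesseract.image_to_string(binary).strip()
        if text:
            recognized.append(text)
    for line in recognized:
        engine.say(line)
    engine.runAndWait()
    return recognized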
VII. CONCLUSIONS AND FUTURE WORK

In this paper, we have described a prototype system to read printed text on hand-held objects for assisting blind persons. To solve the common aiming problem for blind users, we have proposed a motion-based method to detect the object of interest while the blind user simply shakes the object for a couple of seconds. This method can effectively distinguish the object of interest from the background or other objects in the camera view. To extract text regions from complex backgrounds, we have proposed a novel text localization algorithm based on models of stroke orientation and edge distributions. The corresponding feature maps estimate the global structural features of text at every pixel. Block patterns are defined to project the proposed feature maps of an image patch into a feature vector. Adjacent character grouping is performed to calculate candidate text patches prepared for text classification. An Adaboost learning model is employed to localize text in camera-captured images. Off-the-shelf OCR is used to perform word recognition on the localized text regions, and the results are transformed into audio output for blind users.

Our future work will extend the localization algorithm to process text strings with fewer than three characters and to design more robust block patterns for text feature extraction. We will also extend the algorithm to handle non-horizontal text strings. Furthermore, we will address the significant human interface issues associated with reading text by blind users.

VIII. REFERENCES

[1] 10 Facts About Blindness and Visual Impairment, World Health Organization: Blindness and Visual Impairment, 2009.
[2] Advance Data Reports from the National Health Interview Survey, 2008.
[3] International Workshop on Camera-Based Document Analysis and Recognition (CBDAR 2005, 2007, 2009, 2011).
[4] X. Chen and A. L. Yuille, "Detecting and reading text in natural scenes," in CVPR, vol. 2, pp. II-366 to II-373, 2004.
[5] X. Chen, J. Yang, J. Zhang, and A. Waibel, "Automatic detection and recognition of signs from natural scenes," IEEE Transactions on Image Processing, vol. 13, no. 1, pp. 87-99, 2004.
[6] D. Dakopoulos and N. G. Bourbakis, "Wearable obstacle avoidance electronic travel aids for blind: a survey," IEEE Transactions on Systems, Man, and Cybernetics, vol. 40, no. 1, pp. 25-35, 2010.
[7] B. Epshtein, E. Ofek, and Y. Wexler, "Detecting text in natural scenes with stroke width transform," in CVPR, pp. 2963-2970, 2010.
[8] Y. Freund and R. Schapire, "Experiments with a new boosting algorithm," in Int. Conf. on Machine Learning, pp. 148-156, 1996.
[9] N. Giudice and G. Legge, "Blind navigation and the role of technology," in The Engineering Handbook of Smart Technology for Aging, Disability, and Independence.