BOHR International Journal of Data Mining and Big Data
2020, Vol. 1, No. 1, pp. 5–9
Copyright © 2020 BOHR Publishers
https://doi.org/10.54646/bijdmbd.002
www.bohrpub.com
Implementation of Web Application for Disease Prediction Using AI
Manasvi Srivastava, Vikas Yadav and Swati Singh∗
IILM Academy of Higher Learning, College of Engineering and Technology, Greater Noida, Uttar Pradesh, India
∗Corresponding Author: swati.singh@iilm.edu
Abstract. The Internet is the largest source of information created by humanity. It contains a variety of materials available in various formats such as text, audio, video and much more. Web scraping is one way to gather it: a set of strategies by which we obtain information from a website instead of copying the data manually. Many Web-based data extraction methods are designed to solve specific problems and work on ad-hoc domains. Various tools and technologies have been developed to facilitate Web scraping. Unfortunately, the appropriateness and ethics of using these Web scraping tools are often overlooked. There are hundreds of web scraping programs available today, most of them designed for Java, Python and Ruby, including both open-source and commercial software. Web-based software such as Yahoo Pipes, Google Web Scrapers and the OutWit extension for Firefox are the best tools for beginners in web scraping. Web extraction essentially replaces the manual extraction and editing process and provides an easy and better way to collect data from a web page, convert it into the desired format, and save it to a local or archive directory. In this paper, among the various kinds of scraping, we focus on those techniques that extract the content of a Web page. In particular, we apply scraping techniques to a variety of diseases together with their symptoms and precautions.
Keywords: Web scraping, Disease, Legality, Software, Symptoms.
Introduction
Web scraping is a process for downloading and extracting important data by scanning a web page. Web scrapers are most useful when page content needs to be collected, searched, or transformed. The collected information is then copied to a spreadsheet or stored in a database for further analysis. For the ultimate purpose of analysis, the data needs to pass through progressively different stages, for example, starting with collection against a specification, then editing, cleaning, remodeling, applying different models and algorithms, and finally producing the end result. There are two ways to extract data from websites: the first is the manual extraction process and the second is the automatic extraction process. Web scrapers compile site information in the same way a person would, by fetching a web page of the site, finding the relevant information, and moving on to the next web page. Each website has a different structure, which is why web scrapers are usually designed to search through one particular website. Web scraping can help in finding any kind of targeted information. We then have the opportunity to find, analyze, and use the information in the way we need. Web scraping therefore paves the way for data acquisition, speeds up automation, and makes the extracted data easier to access by rendering it in a CSV pattern. Web scraping often extracts a lot of data from websites, for example, for monitoring consumer interests, price monitoring (e.g., price checking), advancing AI models, data collection, tracking issues, and so on. So there is no doubt that web scraping is a systematic way to get large amounts of data from websites. It mainly requires two stages: crawling and extraction. A crawler is an algorithm, designed by a person, that goes through the web looking for the specific information needed by following links online. A scraper is a specific tool designed to extract data from sites.
The web scraper works as follows: if the patient is suffering from any kind of illness, he adds his symptoms and problems; the crawl then starts, scanning the database of diseases provided on the website, and shows the disease that best matches the patient's symptoms. When those specific diseases are shown, it also displays the precautionary measures that the patient needs to take in order to overcome them and to treat the infection.
Overview of Web Scraping
Web scraping is a great way to extract unstructured data from websites and convert it into organized data that can be stored and analyzed in a database. Web scraping is also known as web data extraction, web harvesting, or screen scraping, and it is a form of data mining. The whole purpose of the web scraping process is to extract information from websites and convert it into an understandable format such as spreadsheets, a database, or a comma-separated values (CSV) file, as shown in Figure 1. Data such as item prices, stock prices, various reports, market prices, and product details can be collected with web scraping. Extracting website-based information helps you make effective decisions for your business.
Figure 1. Web scraping structure.
Practices of Web Scraping
• Data scraping
• Research
• Web mash up—integrate data from multiple sources
• Extract business details from business directory websites such as Yelp and Yellow Pages
• Collect government data
• Market Analysis
A Web data scraper is a software agent, also known as a Web robot, that mimics the browsing interaction between Web servers and a person using an ordinary Web browser. Step by step, the robot accesses as many Websites as it needs, parses their content to find the interesting data, extracts it, and structures that content as desired. Scraping APIs and frameworks address the most common web data scraping tasks involved in achieving specific retrieval goals, as described in the following text:
Hypertext Transfer Protocol (HTTP)
This method is used to extract data from static and dynamic web pages. Data can be retrieved by sending HTTP requests to a remote web server through a socket connection.
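As an illustration, here is a minimal Python sketch of this approach using the requests library; the URL and header values are placeholders, not part of the original system:

import requests

# Hypothetical target page; any static disease-information URL would do.
URL = "https://example.com/diseases/influenza"

# Send an HTTP GET request to the remote web server.
response = requests.get(URL, headers={"User-Agent": "disease-scraper/0.1"}, timeout=10)
response.raise_for_status()  # fail loudly on 4xx/5xx status codes

html = response.text  # raw HTML, ready for a parser
print(html[:200])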
Hyper Text Markup Language (HTML)
Query languages for data, such as XQuery and Hyper Text Query Language (HTQL), can be used to parse HTML pages and to retrieve and modify content on the page.
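As a sketch of the same idea, XPath queries through the widely used lxml library can retrieve page content much as XQuery or HTQL would; the HTML snippet and tag names below are invented for illustration:

from lxml import html

# A toy page standing in for a scraped disease entry.
page = html.fromstring("""
<html><body>
  <h1 class="disease">Influenza</h1>
  <ul id="symptoms"><li>Fever</li><li>Cough</li></ul>
</body></html>
""")

# XPath queries select and extract content from the parsed tree.
name = page.xpath('//h1[@class="disease"]/text()')[0]
symptoms = page.xpath('//ul[@id="symptoms"]/li/text()')
print(name, symptoms)  # Influenza ['Fever', 'Cough']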
Release Structure
The main purpose is to convert the extracted content into a structured representation for further analysis and retention. Although this last step lies at the edge of Web scraping proper, some tools handle post-processing of the results, providing in-memory data formats and text-based outputs such as strings or files (XML or CSV files).
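For example, extracted records can be persisted as a CSV file with Python's standard library; the field names here are hypothetical:

import csv

# Hypothetical records produced by the extraction step.
rows = [
    {"disease": "Influenza", "symptoms": "fever; cough", "precautions": "rest; fluids"},
    {"disease": "Migraine", "symptoms": "headache; nausea", "precautions": "avoid triggers"},
]

with open("diseases.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["disease", "symptoms", "precautions"])
    writer.writeheader()    # column names as the first row
    writer.writerows(rows)  # one comma-separated line per record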
Literature Survey
Python has a rich set of libraries available for downloading digital content from the web. Among the libraries available, the following three are the most popular: BeautifulSoup, lxml and RegEx. Statistical research performed on the available data sets indicates that RegEx was able to deliver the requested information in an average time of 153.6 ms. However, RegEx has limitations when extracting data from web pages with nested HTML tags; because of this demerit, RegEx is used only for simple data extraction. Libraries such as BeautifulSoup and lxml are able to extract content from web pages in a complex environment, yielding average response times of 457.66 ms and 203 ms, respectively.
The main purpose of data analysis is to get useful information from data and make decisions based on that analysis. Web scraping refers to the collection of data from the web and is also known as data scraping. For the purpose of analysis, the data can be processed in several steps such as cleaning, editing, etc. Scrapy is the most widely used tool for obtaining the information needed by the user. The main purpose of using Scrapy is to extract data from its sources. Scrapy, which crawls the web and is based on the Python programming language, is very helpful in finding the data we need, using the URLs required to scrape the data from its sources. The web scraper is a useful API to retrieve data from a website. Scrapy provides all the necessary tools to extract data from a website, process the data according to user needs, and store the data in a specific format as defined by the user.
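A minimal Scrapy spider along these lines; the start URL and CSS selectors are placeholders, not the project's actual sources:

import scrapy

class DiseaseSpider(scrapy.Spider):
    name = "diseases"
    # Hypothetical listing page of diseases.
    start_urls = ["https://example.com/diseases"]

    def parse(self, response):
        # Selectors are illustrative; real pages need their own.
        for entry in response.css("div.disease"):
            yield {
                "name": entry.css("h2::text").get(),
                "symptoms": entry.css("ul.symptoms li::text").getall(),
            }
        # Follow the pagination link so Scrapy keeps crawling the site.
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)

Running it with "scrapy runspider disease_spider.py -o diseases.csv" stores the scraped items in CSV.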
The Internet largely consists of web pages that include a large number of descriptive elements: text, audio, graphics, video, etc. Web scraping is mainly responsible for the collection of raw data from such websites. It is a process in which data extraction is automated and performed very quickly, and it enables us to extract the specific data requested by the user. The most popular approach is to build a custom web data extractor using any known language.
Experimental Work
Technology Used
Firebase
For the database we have used Cloud Firestore from Firebase. It is a real-time NoSQL database that stores key-value data in the form of collections and documents.
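A sketch of writing one such document with the firebase-admin Python SDK; the credential path, collection name, and field names are assumptions for illustration:

import firebase_admin
from firebase_admin import credentials, firestore

# A service-account key file is assumed for this sketch.
cred = credentials.Certificate("serviceAccountKey.json")
firebase_admin.initialize_app(cred)
db = firestore.client()

# Each disease is a document of key-value fields inside a collection.
db.collection("diseases").document("influenza").set({
    "symptoms": ["fever", "cough", "fatigue"],
    "precautions": ["rest", "fluids"],
})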
TensorFlow
TensorFlow is used to train the model on the dataset and to make predictions. There are various algorithms for model training; in our project we use linear regression.
JavaScript Frameworks
• NodeJS
Node.js is an open-source, cross-platform JavaScript runtime environment that runs on the V8 engine and executes JavaScript code outside a web browser. Our application code is written for Node.js because it is fast and platform-independent.
• ElectronJS
Electron is a framework for building native applications with web technologies such as JavaScript, HTML, and CSS. As Electron lets us ship a web application as a desktop application, it helps us reuse our code and thus reduces development time.
• ReactJS
React makes it less painful to create interactive UIs. Design a simple view for each state in your app, and React will efficiently update and render the relevant components as your data changes.
React can also render on the server using Node and can power mobile applications using React Native.
Python
Python is a high-level, interpreted programming language.
In this project, various libraries such as pandas, NumPy, and BeautifulSoup are used to create our database. Pandas and NumPy are used to filter and process the data needed to train our model, extracting it from the separate data sources.
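As a sketch of this filtering step, with hypothetical file and column names:

import numpy as np
import pandas as pd

# Hypothetical raw CSV assembled from the scraped sources.
df = pd.read_csv("raw_diseases.csv")

# Typical cleaning: drop duplicates and rows missing a disease label.
df = df.drop_duplicates().dropna(subset=["disease"])

# Normalize free-text symptom strings for consistency.
df["symptoms"] = df["symptoms"].str.lower().str.strip()

# One-hot encode the symptom lists into a NumPy feature matrix.
features = df["symptoms"].str.get_dummies(sep=";")
X = features.to_numpy(dtype=np.float32)
print(X.shape)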
Compatibility
OS X
Only 64-bit binaries are provided for OS X, and the lowest supported version is OS X 10.9.
Windows
Electron supports Windows 7 and later; older versions of the OS are not supported.
Both x86 and amd64 (x64) binaries are provided for Windows, while the ARM version of Windows is not supported.
Software Used
– VSCode
Visual Studio Code is a freeware source-code editor developed by Microsoft for Windows, Linux and macOS. Features include debugging support, syntax highlighting, intelligent code completion, snippets, code refactoring, and embedded Git.
– Google Colab notebook
Colaboratory, or Colab for short, is a product of Google Research that allows developers to write and execute Python code through their browser. Google Colab is an excellent tool for deep learning activities. It is a hosted Jupyter notebook that needs no setup and has an excellent free version, providing free access to Google computing resources such as GPUs and TPUs.
– PyCharm
PyCharm is an integrated development environment used for computer programming, especially for the Python language, developed by the Czech company JetBrains.
Data Source
As we could not find a dataset covering more than 40 diseases, we created our own. The dataset we used for the training and testing process was compiled from various sources; one of them is listed below.
– https://github.com/DiseaseOntology/HumanDiseaseOntology
Use of Scrapy
Scrapy is a framework for crawling websites and extracting structured data, which can be used for a wide range of supporting applications such as data mining, information processing, or archiving reported data. Although Scrapy was originally intended for web scraping, it can also be used to extract data through APIs (for example, Amazon AWS) or as a general-purpose web crawler. Scrapy is written in Python. Let us take a wiki example related to one of the problems crawlers face. A simple online photo gallery can offer several options to users, specified by HTTP GET parameters in the URL. If there are four ways to sort images, three thumbnail-size options, two file formats, and an option to disable user-provided content, then the same set of content can be accessed with many different URLs, all of which may be linked on the site. This artificially crafted combination creates a problem for crawlers, as they must sort through an endless combination of minor URL changes in order to retrieve unique content.
Methodology
The method used by the project to collect all the required data is to extract it from various sources such as the CDC's databases and Kaggle resources. The extracted data is then analyzed using scripts written in the Python language according to the project requirements. Pandas and NumPy are widely used to perform the various operations on the dataset.
After sorting the data according to each need, it is uploaded to the database. For the database we have used Cloud Firestore, as it is a real-time NoSQL database with extensive API support.
Further, TensorFlow is used in the project to train our model according to our needs.
In this project we predict the disease from the given symptoms.
Training data set: 70%
Test data set: 30%
TensorFlow supports linear regression, which is used to predict diseases based on the given symptoms.
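A minimal sketch of this setup; the data here is randomly generated as a stand-in for the prepared dataset, the 70/30 split mirrors the figures above, and scikit-learn's splitter is used for brevity:

import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

# Placeholder data: 200 samples with 40 one-hot symptom features.
X = np.random.rand(200, 40).astype(np.float32)
y = np.random.rand(200, 1).astype(np.float32)  # encoded disease label/score

# 70% for training, 30% for testing, as described above.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# A single Dense layer with no activation is linear regression: h(x) = w.x + b.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(40,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="sgd", loss="mse")  # mean squared error cost

model.fit(X_train, y_train, epochs=50, verbose=0)
print("test MSE:", model.evaluate(X_test, y_test, verbose=0))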
Coding
The project front end is written using ReactJS and TypeScript. We have used the Material-UI kit, a React implementation of Google's Material Design, to speed up our development process.
To package our app, Electron is used. Our system supports macOS and Windows. Many of today's cross-platform desktop applications are written with the help of Electron.
Testing
The project is tested using Electron's test framework, Spectron.
The project was exercised in the browser; the generated output turned out to be consistent, and the generated analysis is approximately accurate.
Electron's standard workflow with Spectron can involve engineers writing unit tests in the standard TDD format and then writing integration tests to ensure that acceptance criteria are met before a feature is approved for use. Continuous integration servers can ensure that all these tests pass before changes are incorporated into production.
Algorithm Used
Linear regression is a standard statistical method that allows us to learn a function or relationship from a given set of continuous data. For example, we are given some corresponding x and y data points, and we need to learn the relationship between them, called the hypothesis.
In the case of linear regression, the hypothesis is a straight line, i.e.,

h(x) = wᵀx + b

where the vector w is called the weights and the scalar b is called the bias. The weights and bias are called the model parameters.
All we need to do is estimate the values of w and b from the given data set such that the resulting hypothesis produces the minimum cost J, defined by the following cost function:

J(w, b) = (1/2m) Σᵢ₌₁..m (h(xᵢ) − yᵢ)²

where m is the number of data points in the given data. This cost function is also called the Mean Squared Error.
To find the values of the parameters at which J is minimized, we use a widely used optimization algorithm called Gradient Descent.
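A minimal Python sketch of batch gradient descent for the cost J defined above (an illustrative stand-in, not the authors' original pseudocode listing):

import numpy as np

def gradient_descent(X, y, lr=0.01, epochs=1000):
    """Fit h(x) = X.w + b by minimizing the mean squared error J."""
    m, n = X.shape
    w, b = np.zeros(n), 0.0
    for _ in range(epochs):
        error = X @ w + b - y          # h(x_i) - y_i for every data point
        grad_w = (X.T @ error) / m     # dJ/dw
        grad_b = error.mean()          # dJ/db
        w -= lr * grad_w               # step against the gradient
        b -= lr * grad_b
    return w, b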
Result Discussion
The overall results of the project are useful for predicting diseases from the given symptoms. The script that was written to extract the data can be reused later to compile and format it according to need.
Users can enter symptoms by typing them themselves or by selecting them from the given options. The trained model then predicts the disease accordingly. Users are able to create their own medical profile, where they can submit their medical records and prescribed medication; this greatly helps us to feed our database and better predict disease over time, as some of these diseases occur only during a particular season.
Moreover, the analysis performed suggested very similar diseases, but the training model is limited by the size of the database.
Conclusions and Future Scope
The use of the Python program also emphasizes understanding the use of pattern matching and regular expressions for web scraping. The database is compiled from factual reports, from government outlets down to local media, where it is considered reliable. A team of experts and analysts validating information against a continuously updated list of more than 5,000 items would likely make this the site that collects data most effectively. User-provided inputs are analyzed, the website is scraped accordingly, and the output is presented as the user interacts with the user interface. Output is generated in the form of text. This method is a simple and straightforward way to identify a disease from its symptoms and provides vigilance against that disease.
For future work, we plan features that aim to show the medication that a patient can take for treatment. Also, we are looking to link this website to various hospitals and pharmacies for ease of use.
References
[1] Thivaharan, S., Srivatsun, G. and Sarathambekai, S., “A Survey on Python Libraries Used for Social Media Content Scraping”, Proceedings of the International Conference on Smart Electronics and Communication (ICOSEC 2020), IEEE Xplore Part Number: CFP20V90-ART, ISBN: 978-1-7281-5461-9, pp. 361–366.
[2] Shreya Upadhyay, Vishal Pant, Shivansh Bhasin and Mahantesh K. Pattanshetti, “Articulating the Construction of a Web Scraper for Massive Data Extraction”, IEEE, 2017.
[3] Amruta Kulkarni, Deepa Kalburgi and Poonam Ghuli, “Design of Predictive Model for Healthcare Assistance Using Voice Recognition”, 2nd IEEE International Conference on Computational Systems and Information Technology for Sustainable Solutions, 2017, pp. 61–64.
[4] Dimitri Dojchinovski, Andrej Ilievski and Marjan Gusev, “Interactive Home Healthcare System with Integrated Voice Assistant”, MIPRO 2019, pp. 284–288.
[5] Mohammad Shahnawaz, Prashant Singh, Prabhat Kumar and Dr. Anuradha Konidena, “Grievance Redressal System”, International Journal of Data Mining and Big Data, 2020, Vol. 1, No. 1, pp. 1–4.
Hyperlink of research paper
1. https://www.sciencedirect.com/science/article/pii/S2352914817302253