A Review on Multiclass Brain Tumor Detection using
Convolutional Neural Network and Support Vector Machines
KAMLESH MANKAR1, SHREYA MANGEKAR2, SAURABH SONKAR3, RAMESH SANTRA4, V.V. CHAKOLE5
-------------------------------------------------------------------------***------------------------------------------------------------------------
ABSTRACT:
The role of clinical diagnosis in modern healthcare has grown in importance. Brain cancer is a major concern in the
realm of medical imaging since it is among the deadliest illnesses worldwide. Brain tumour evaluation and prognosis may
benefit from magnetic resonance imaging-based early and accurate diagnosis. Medical images need to be identified,
segmented, and categorised so that radiologists may use computer-aided diagnostic techniques to help them find
brain tumours. Radiologists find the process of manually detecting brain tumours to be tedious and prone to errors;
as a result, an automated technique is urgently needed. Therefore, an approach for accurately detecting and
categorising brain tumours is introduced. The suggested procedure consists of five stages. First, linear contrast
stretching is applied to the original image to emphasise its edges. Second, a deep neural network architecture
tailored specifically to brain tumour segmentation is created. Third, transfer learning is used to train a modified
version of the MobileNetV2 architecture for feature extraction. Fourth, the best features are selected with a
controlled, entropy-based technique. Finally, meningioma, glioma, and pituitary images are categorised using a
multiclass support vector machine (M-SVM).
KEYWORDS
Biomedical image processing; brain tumor; deep learning; linear contrast stretching; segmentation.
I. INTRODUCTION
Treatment costs for patients with brain tumours are
currently among the highest of all cancer types. Brain tumours
may develop in persons of any age due to the rapid
growth of certain cell types. A brain tumour is an
abnormal growth of tissue that develops in the brain or
central spine and disrupts normal brain function [1].
These big tumour cells may be classified as malignant
(cancerous) or benign (non-cancerous) based on their
location, size, and surface area [2]. Tumours are also
described as primary or secondary according to where
the cancer cells develop; cells within the primary
tumour region are typically deemed benign. Tumours
that originate in brain tissue are considered primary
brain tumours, and they are curable or at least
manageable with the right treatment.
Malignancies that originate in other organs and spread
to the brain are called secondary tumours. Only via
proper surgery or radiation treatment [3] can the
affected patient's tumour be treated. Due to the
danger posed by brain tumours to healthy brain tissue,
monitoring their development is crucial for patient
survival [4].
Meningiomas are tumours that can affect the brain and
spinal cord; they arise from the three meningeal layers
that cover them [5]. Meningiomas often manifest
as asymmetrical lobar masses with sharp margins [6].
Survival rates for people with meningioma depend on
factors such as tumour size, where the tumour is located,
and the patient's age. Symptoms of meningioma include
obsessive clinginess, frequent headaches, and limb
weakness. Tumors of benign meningiomas are less than
2 mm in diameter, but those of malignant meningiomas
may be as large as 5 cm in diameter [7]. Most malignant
meningiomas are curable if detected and treated early.
Since several different types of MRI may be used for the
detection of brain cancer, magnetic resonance imaging
(MRI) has become one of the most common techniques
for making this diagnosis [8]. Since brain tumours are
potentially life-threatening, it is crucial that they be
diagnosed and treated correctly. Detecting the illness in
its earliest stages with a whole brain scan is the only way
to avoid putting patients at risk. Different types of brain
tissue may be detected by various MRI techniques, each
of which has its own unique settling time [9]. Due to
their unpredictable form and location, brain tumours
might be difficult to identify using only one MRI
modality. When trying to locate tumours, contrasting
data from several MRI techniques is crucial [10]. T1-
weighted MRI can be used to differentiate tumour tissue
from healthy tissue, T2-weighted MRI can be used to
outline areas of oedema, producing clear image regions,
T1-Gd (contrast-enhanced) MRI can show a bright signal
at the tumour edge, and FLAIR MRI can
suppress the signal from free water to differentiate
cerebrospinal fluid (CSF) from areas of oedema.
Calculating area, determining uncertainty in
segmentation area, and tumour segmentation are
challenging activities [11] because of the structural
complexity and unpredictability of brain tumours, high
volatility, and intrinsic features of MRI data, i.e.,
fluctuation of tumour size and form. Manual tumour
segmentation is a time-consuming operation, and
different doctors may produce varying segmentation
outcomes owing to differences in tumour form and size.
Meningiomas, on the other hand, are rather easy to
separate, whereas gliomas and glioblastomas are more
challenging [12]. It is crucial, then, to provide an
automated segmentation approach to facilitate this
laborious process.
Manual brain tumour detection and monitoring require
a great deal of time, and human error is common [13].
The manual process therefore needs to be replaced with
an automated one. Present approaches, which rely on
labelling methods to identify diseased regions in the
brain, cannot detect internal peripheral pixels and are
thus poorly suited to brain tumour detection. MRI is
favoured over computed tomography (CT) because the
contrast agent shows the affected region more clearly.
Therefore, MRI modalities are utilised in a wide variety
of approaches to diagnose brain cancer.
Many different approaches have been proposed in recent
years for the automatic classification of brain tumours;
these can be broadly categorised as either Machine
Learning (ML) or Deep Learning (DL) approaches,
depending on their focus on feature fusion, feature
selection, or the underlying learning mechanism. Feature
selection and feature extraction are cornerstones of
classification in ML approaches [14,15]. Deep learning
techniques, on the other hand, learn by automatically
extracting characteristics from images. The latest deep
learning (DL) techniques, convolutional neural networks
(CNNs) in particular, boast impressive precision and are
widely used for MRI analysis and other forms of medical
image analysis [16,17,18]. Compared with traditional ML
approaches, however, they have drawbacks: the need for
a large training dataset, high time complexity, low
accuracy when only small datasets are available, and the
expensive GPUs that ultimately increase the cost to the
user. These drawbacks can be mitigated through the use
of transfer learning [19], but they do not disappear.
Furthermore, knowing enough about various
parameters, training techniques, and topologies might
make picking the right deep learning model seem like an
overwhelming undertaking. Support Vector Machine
(SVM), Random Forest (RF), fuzzy C-means (FCM),
Convolutional Neural Network (CNN), Naïve Bayes (NB),
K-Nearest Neighbor (KNN), Sequential Minimal
Optimization (SMO), and Decision Tree (DT) are just
some of the machine learning-based classifiers that have
been used for brain tumour classification and detection.
The CNN implementation is simpler, with reduced
computational and spatial complexity. The small dataset
needed for training, the low computing cost, and the
ease of adoption by non-experts have attracted
considerable academic interest to these classifiers in
general.
These are some of the projected improvements brought
about by the novel approach of segmenting and
classifying brain tumours:
 In the pre-processing stage, a linear contrast stretching
technique is applied to enhance the original image's edge
features;
 A 17-layer convolutional neural network (CNN)
architecture, developed from scratch, is specifically
designed for brain tumour segmentation;
 Transfer learning from a modified version of
MobileNetV2 is used to obtain the deep features
required;
 An entropy-controlled technique selects the best
features according to their entropy value, and a
multi-class SVM classifier categorises the final features; a
thorough statistical analysis and comparison with
state-of-the-art approaches is carried out to validate the
stability of the proposed methodology (a combined
sketch of these processing steps is given after this list).
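For illustration only, the following Python sketch shows how the pre-processing, deep feature extraction, entropy-controlled selection, and M-SVM stages listed above could be wired together. The function names, the MobileNetV2 input size, the number of retained features, and the SVM kernel are assumptions rather than the authors' settings.

import numpy as np
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from sklearn.svm import SVC

def linear_contrast_stretch(img):
    # Stretch grey levels to the full [0, 255] range to emphasise edges.
    lo, hi = float(img.min()), float(img.max())
    return (img - lo) / max(hi - lo, 1e-6) * 255.0

def entropy_rank(features, k=256):
    # Indices of the k feature columns with the highest Shannon entropy.
    p = features / (features.sum(axis=0, keepdims=True) + 1e-9)
    ent = -(p * np.log2(p + 1e-9)).sum(axis=0)
    return np.argsort(ent)[::-1][:k]

# MobileNetV2 backbone used purely as a frozen deep feature extractor.
backbone = MobileNetV2(weights="imagenet", include_top=False, pooling="avg",
                       input_shape=(224, 224, 3))

def deep_features(images):
    # images: (N, 224, 224, 3) array of contrast-stretched RGB slices.
    return backbone.predict(preprocess_input(images), verbose=0)

# Assumed inputs: X_train/X_test image arrays and labels y_train/y_test
# (0 = meningioma, 1 = glioma, 2 = pituitary).
F_train = deep_features(np.stack([linear_contrast_stretch(x) for x in X_train]))
F_test = deep_features(np.stack([linear_contrast_stretch(x) for x in X_test]))
keep = entropy_rank(F_train)                              # entropy-controlled selection
msvm = SVC(kernel="rbf", decision_function_shape="ovo")   # multiclass SVM (M-SVM)
msvm.fit(F_train[:, keep], y_train)
print("test accuracy:", msvm.score(F_test[:, keep], y_test))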
II. RELATED WORKS
Modern medical practice often uses MR imaging for the
diagnosis of brain cancer [8,14]. This section takes a
close look at existing work on the detection and
categorisation of brain tumours.
Numerous studies on brain tumour detection,
segmentation, and classification have been conducted in
recent years. Even after many publications emphasising
its significance [20,21,22], the medical community
continues to stress the topic's relevance. This study
presents a methodology for the identification and
delineation of brain tumours. Differentiating brain
pictures may be done using either generative or
discriminating techniques [17,23] for the diagnosis of
brain cancers. Brain tumour identification using fuzzy
logic and the U-NET CNN architecture was shown by
Maqsood et al. [4]. It included using contrast
enhancement, a fuzzy logic-based edge detection
approach, and U-NET CNN classification. For this
purpose, a contrast enhancement technique is first
applied to the source images for pre-processing, a fuzzy
logic-based edge detection technique is then used to
find the edges in the contrast-enhanced images, and
lastly a dual-tree complex wavelet transform is applied
at different scale levels. To differentiate between
meningioma and non-meningioma in brain imaging,
features are derived from the decomposed sub-band
images and then classified using the U-NET CNN
approach. The suggested technique outperformed
several state-of-the-art algorithms by a wide margin
(98.59% accuracy).
Sobhaninia et al. [24] trained a LinkNet-based CNN
model for segmentation of brain MRI, combining several
views to improve the model's performance; the final
result was a Dice score of 0.79. Johnpeter et al. [25] used
an adaptive neuro-fuzzy inference classification
approach to identify and locate cancers in brain MRI,
although the network is rather complex. Without
employing edge detection on brain images, their strategy
enhanced tumour regions using histogram equalisation
and achieved an accuracy of 98.80%.
Using attention modules and the hypercolumn technique,
Togacar et al. [26] created a network called BrainMRNet.
The original photos were first subjected to pre-
processing, and then they were sent to the attention
module. The convolutional layer receives the picture and
processes it based on the attention module, which
controls the focal regions of the image. The BrainMRNet
model's convolutional layers make extensive use of the
hypercolumn strategy. By using this technique, they were
able to increase accuracy to 96.05%, since the
characteristics collected from each successive layer were
saved in the array structure of the final layer. To classify brain
tumours, Kibriya et al. [27] established an approach
based on the fusion of many features. To address the
data issue, min-max normalisation is first applied to the
original images, followed by extensive data
augmentation of the pre-processed images. A final
accuracy of 97.7% was obtained by combining support
vector machine (SVM) and k-nearest neighbour (KNN)
classifiers trained on features from the GoogLeNet and
ResNet18 deep CNN models. Brain tumours may be
detected and
classified using a CNN developed by Sajjad et al. [28].
The authors were able to achieve a 94.58% accuracy rate
by using a Cascade CNN algorithm for segmenting the
tumours in the brain and a fine-tuned version of VGG19
for classifying them. Shanthakumar [29] employed
watershed segmentation on brain MRI scans to localise
tumours. The accuracy of tumour segmentation was
improved to 94.52% using this segmentation approach,
which makes use of a preset labelling scheme to achieve
this result. Separating tumour regions in MR images of
the brain is shown to be possible by Prastawa et al. [30].
Although this approach has an 88.17% success rate, it is
only able to identify the outside, abnormal boundaries of
the tumour area, and not the interior boundary.
For the purpose of classifying brain tumours, Gumaei et
al. [31] suggested a hybrid feature extraction technique
based on a regularised extreme learning machine
(RELM). Preprocessing is performed using the min-max
normalisation contrast enhancement approach, feature
extraction is performed using a hybrid PCA-NGIST
method, and brain tumour classification is performed
using the RELM method. The overall precision of this job
was 94.23%. When applied to contrast-enhanced magnetic
resonance imaging (CE-MRI), a fine-tuned pre-trained
VGG19 model improved outcomes for Swati et al. [32],
who reported an average accuracy of 94.82%. After
addressing the issue of overfitting using the ResNet50
CNN model and global average pooling, Kumar et al. [33]
suggested a brain tumour technique with an average
accuracy of 97.48%. These game-changing advances
have garnered widespread attention in the field of
medical image analysis. Brain image categorisation using
machine learning together with an analysis of brain
structure was proposed by Veeramuthu et al. [4].
Decomposing the picture and subsequently extracting its
characteristics is made easier with the aid of Multi Level
Discrete Wavelet Transform. In order to classify the
severity of the illness in the brain picture, a PNN-RBF
training and classification approach is used. The hybrid
strategy was created by Sanjeev et al. [5]. This mixed
method utilises discrete wavelet transformation (DWT)
to filter out irrelevant information, a genetic algorithm to
narrow the feature set, and a support vector machine
(SVM) to classify different types of brain tumours. To
enhance the effectiveness of motor imagery
categorization, Gopal et al. [6] suggested a strategy based
on feed forward backpropagation of the neural network
(FFBPNN). Artificial neural networks (ANN), fuzzy
clustering techniques (FCM), support vector machines
(SVM), decision trees (DT), K-Nearest Neighbors (KNN),
and Bayesian classification are only some of the
approaches available for classifying medical pictures.
This ANN, as well as SVM and KNN, are examples of
supervised learning methods. Unsupervised learning
methods, such as K-means clustering and the Self-
Organizing Map, form another category and may be used
to group data into meaningful clusters.
The shift from manually crafted features to machine-
learned ones has been slow, however. Several methods
for learning features were in use prior to AlexNet's
breakthrough; these methods are thoroughly analysed
by Bengio et al. [7]. Principal component analysis, image
patch clustering, and dictionary techniques are just a few
examples. In a section titled Global Training of Deep
Models, Moosa et al. [8] deploy CNNs that are trained
end to end. In this review, we focus on these
foundational models and leave out more superficial ones
that apply traditional feature learning techniques to
medical images. Refer to
Ravi et al. (2017) [9] for a more comprehensive review
of deep learning's role in health informatics, including a
short discussion of its application to the interpretation of
medical images.
Shen et al. (2017) [10] revealed the results of a deep
learning-specific study of medical picture analysis. While
they certainly cover a lot of ground, we believe that
certain crucial points have been overlooked. Medical
image segmentation is essential for prompt treatment
planning, and it is used for the categorization and
identification of brain tumours from MR images.
Methods for MRI Classification
There are many types of brain tumour. Imaging of the
brain with MRI (Magnetic Resonance Imaging) is the
standard diagnostic procedure for evaluating tumours of
the head. Traditional machine learning methods often
assign a classification to a brain tumour based on an
arbitrary attribute or the discretion of a radiologist. In
this study, ensemble modelling with the SVM and CNN
classifiers [5] is applied to MRI scans of the brain to
distinguish between benign and malignant tumours.
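As a rough illustration of such an SVM-and-CNN ensemble (one possible scheme, not necessarily the one in [5]), the class probabilities of a trained Keras CNN and of a probability-calibrated SVM can simply be averaged; the variable names, the 50/50 weighting, and the benign/malignant label coding below are assumptions.

import numpy as np
from sklearn.svm import SVC

# Assumed inputs: cnn_model (trained Keras classifier), X_img_test (image array),
# X_feat_train/X_feat_test (feature vectors for the SVM), y_train (labels).
svm = SVC(kernel="rbf", probability=True).fit(X_feat_train, y_train)

p_cnn = cnn_model.predict(X_img_test)        # (N, 2) softmax probabilities
p_svm = svm.predict_proba(X_feat_test)       # (N, 2) calibrated probabilities
p_ens = 0.5 * p_cnn + 0.5 * p_svm            # soft-voting ensemble
y_pred = p_ens.argmax(axis=1)                # 0 = benign, 1 = malignant (assumed)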
In addition, when it comes to recognising brain tumours,
threshold-based segmentation tends to produce blurred
edges and boundaries.
Using ResNet-50 and transfer learning (TL), a deep
learning model was created for detecting and diagnosing
brain tumours; the accuracy rate of the experiments is
95%. Researchers have also used block-wise transfer
learning with fivefold cross-validation, likewise reaching
95 percent accuracy; their technique was tested on a
benchmark contrast-enhanced MRI (CE-MRI) dataset
constructed from T1-weighted MR images. Classifying
MRI scans of the brain with Google's neural network
architecture achieved a classification accuracy of 98%,
with a support vector machine-based method utilised as
the classifier [7]. CNN is applied both for classification
and for feature extraction; this structure uses two
convolutional layers and two fully connected layers. In a
study using nine deep learning models, Transfer
Learning was employed to identify brain tumours with
an accuracy of 97.39% [7].
Another work applied deep learning to MR data,
classifying MRI scans with an accuracy of 98.71 percent
using the suggested approach; despite the small size of
the study, the findings were striking, and the CNN
architectures performed well. VGG also achieved 96
percent accuracy, while ResNet50 and InceptionV3
achieved 89 percent and 75 percent respectively [8].
Modern CNN architectures are reported to run at high
speed while maintaining a 98.24 percent accuracy rate.
Multi-scale analysis of MRI scans of brain tumours using
CNN is highly recommended. They tested the suggested
model on the MRI image dataset, and found that it had a
97.3 percent accuracy in classifying the images [9]. In
order to classify brain tumours, the CNN model uses two
convolutional layers and two fully connected layers to
gather relevant data for feature extraction. Classifying
brain tumours, they had a 97% success rate [10].
Researchers employed a convolutional neural network
ResNet34 model and a transfer learning approach to
categorise MRI scans of the brain into healthy and
diseased categories [20]. They used data augmentation
of the brain tumour images to increase the number of
samples and reached 100% accuracy [11].
The ANN model was combined with the optimization
strategy of the Gray Wolf Optimizer (GWO). Using GWO-
ANN, they were able to achieve a 98.91 percent accuracy
in their classifications. A deep ResNet-50 CNN trained on
brain MR images was also demonstrated [12]; with the
help of the recommended data augmentation technique,
the model's accuracy increased to 97.48%. Using an MRI of
the brain as training data, a Capsnet CNN model was
proposed with 90.89 percent accuracy [13]. An ensemble
model consisting of three separate convolutional neural
network classifiers reached 98% accuracy [14].
The research found that transfer learning might be
utilised to categorise brain tumours. A CNN with one of
four possible architectures—DenseNet-2, VGG-16, VGG-
19, or ResNet-50—was chosen for this task. The
investigation used the Figshare dataset of 3064 magnetic
resonance imaging (MRI) scans to differentiate between
three different types of brain tumour, and the produced
model was refined on this public test bed; the results
demonstrated that the publicly available Figshare
dataset promotes knowledge exchange.
The ResNet-50 model performed best, with an accuracy
of about 99.02 percent. The team picked up where they
left off in 2020, continuing to use the same dataset in an
attempt to improve the diagnostic accuracy for
brain cancer. CNNs are suggested to include two
convolution layers for feature extraction and two fully
linked layers for classification [16]. A total of 97.39% of
cases of brain cancer were correctly diagnosed using this
CNN system. Using the data at hand, researchers were
able to identify several subtypes of brain tumours. A
variety of classifiers including KNN, ANN, RF, and LDA
were applied. Accuracy of 95.56 percent was attained by
combining the KNN model with the NLBP feature
extraction method [17]. When identifying and classifying
brain tumours, the shortcomings of the aforementioned
transfer learning methods (intrusiveness, complexity,
and susceptibility to sampling errors) must be overcome,
and there is a systematic dearth of research on the
effectiveness and dependability of such methods. This
has led to the
development of transfer learning models for use in the
diagnosis and classification of malignant brain tumours.
Images were classified using deep learning, a unique and
powerful classification approach, and tumour types were
classified using faster region-based CNN (faster R-CNN).
Khairandish et. al. [1] provided an explanation of how
brain tumors actually behave, and with the aid of many
methodologies and the analysis of research studies using
a variety of criteria, it offers a clear image of this stage.
The examination is conducted in relation to the dataset,
proposed model, model performance, and type of
method used in each paper. The accuracies reported by
the publications under study ranged from 79% to 97.7%.
They employed Convolutional Neural Network, K-Nearest
Neighbour, K-Means, and Random Forest algorithms, in
that sequence (highest frequency of use to lowest), with
the Convolutional Neural Network giving the highest
accuracy, in the range of 79-97.7%. Someswararao et al.
[2] developed
a novel method for detecting tumours in MR images
using machine learning techniques, particularly the CNN
model. This study combined a CNN classification task,
determining whether or not a subject has a brain
tumour, with a computer vision step that automatically
crops the brain from MRI scans. Other techniques used
were Convolutional Neural Network and K-Means
Clustering, with the highest accuracy, around 90%, given
by the Convolutional Neural Network. Choudhury et al.
[3] proposed a new CNN-based
system that can distinguish between different brain MRI
images and label them as tumorous or not. The model's
accuracy was 96.08%, and its f-score was 97.3. The
model uses a CNN with three layers and only a few pre-
processing steps to yield results in 35 epochs. The goal of
this study is to emphasize the significance of predictive
therapy and diagnostic machine learning applications.
Other techniques used were Support Vector Machine,
Convolutional Neural Network, k-Nearest Neighbour,
boosted trees, random forest, and decision trees. To
detect, categorise, and segment brain tumours efficiently
and accurately, automatic or semi-automatic techniques
are required. In that research, CNN was used to identify
and categorise data with the help of a proposed
automated segmentation technique. Convolutional
neural networks, conditional random fields, support
vector machines, and genetic algorithms are some of the
other methods; CNN has the best efficiency and accuracy,
at around 91% and
92.7%, respectively. In [1], GLCM features and a
Multilayer Perceptron (MLP) are used to interpret MRI
images. An MLP is a feedforward network with one or
more hidden layers between the input and output layers.
Segmentation with thresholding, feature vector
extraction with GLCM over four angles (using energy,
entropy, contrast, and variance), and model learning are
all components of the suggested method that make use
of this neural network technology.
Images are filtered or equalised prior to thresholding.
Feature extraction is a form of data reduction, and the
extracted characteristics are used as training data for a
neural classifier. Twenty magnetic resonance imaging
(MRI) scans of the head are utilised for testing the
suggested ISO algorithm. After the histogram has been
equalised, segmentation is used to extract the tumour
areas from the whole picture, allowing for a more
accurate assessment of the tumour's location within the
MRI. The retrieved images are then used to make the
final decision.
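A minimal scikit-image sketch of the GLCM feature extraction described above is given below; the pixel distance of 1, the 256 grey levels, and the exact formulas used for entropy and variance are assumptions, since they are not specified in [1].

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_uint8):
    # Grey-level co-occurrence matrix over four directions, distance 1.
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    glcm = graycomatrix(gray_uint8, distances=[1], angles=angles,
                        levels=256, symmetric=True, normed=True)
    energy = graycoprops(glcm, "energy").mean()
    contrast = graycoprops(glcm, "contrast").mean()
    # Entropy and variance are not built into graycoprops, so compute them directly.
    p = glcm.astype(np.float64)
    entropy = -(p * np.log2(p + 1e-12)).sum(axis=(0, 1)).mean()
    i = np.arange(256).reshape(-1, 1, 1, 1)
    mu = (i * p).sum(axis=(0, 1))
    variance = (((i - mu) ** 2) * p).sum(axis=(0, 1)).mean()
    return np.array([energy, entropy, contrast, variance])  # feature vector for the MLP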
In this study, the Support Vector Machine (SVM)
technique is used for detecting brain tumours in MRI
data. The SVM is a statistically based supervised learning
system. Each picture is represented with the discrete
wavelet transform (DWT), and a Simulink model is
employed for the SVM classification. The prototype
shown in this article combines fast performance with
excellent detection accuracy. To classify tumours, the
appropriate pictures for analysis must first be selected
(pre-processing). Once the tumour's size and form have
been identified, the next step is feature extraction.
Following this, the picture data are used to train a
support vector machine, and SVM classification is then
performed on the DICOM-format images. Finally, a
diagnosis of tumour is made. A positive predictive value
(PPV) of 81.48 percent and a negative predictive value
(NPV) of 82 percent are obtained, corresponding to 22
true positives, 5 false positives, 5 false negatives, and 22
true negatives.
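The Simulink implementation is not reproduced here; the following is a hedged Python sketch of the same idea, representing each scan with a one-level 2-D discrete wavelet transform and training a binary SVM on the resulting feature vectors (wavelet choice, split ratio, and variable names are assumptions).

import numpy as np
import pywt
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def dwt_features(gray):
    # One-level 2-D Haar DWT; concatenate the four sub-bands as a feature vector.
    cA, (cH, cV, cD) = pywt.dwt2(gray, "haar")
    return np.concatenate([band.ravel() for band in (cA, cH, cV, cD)])

# Assumed inputs: scans (list of 2-D grayscale arrays) and labels (0 = no tumour, 1 = tumour).
X = np.stack([dwt_features(s) for s in scans])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))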
In [3], the CNN technique is used to detect tumours in
MRI images. Brain tumour identification relies on MRI
scans, which are processed to increase precision.
Neurons and convolution layers are the basis of a
convolutional neural network (CNN). A system's
behaviour may be more succinctly represented via
clustering, which is used to find naturally occurring
groups in huge data sets. The goal of cluster analysis is to
reveal hidden patterns in large datasets. To locate the
areas of malignancy in imaging, patch extraction is used.
CNN's architecture is tailored to exploit the two-
dimensional nature of an input picture. Segmentation,
detection, and extraction are all components of post-
processing for MRI scans. Among the system's benefits is
improved segmentation. The accuracy rate for MRI scans
is 88%. Applying a neural network improves accuracy.
In [4], the MRI images are identified with the help of a
Recurrent Neural Network (RNN). To scale
up and scale down the network's nodes, the BP NN
activation function was first used. The number of nodes
in the hidden layer was set to 270 and then brought back
down to 230 using the log sigmoid function. Finally,
we've reached the optimal performance for RNN thanks
to a bump in the node count to 300. For optimal
efficiency, an Elman network is used. As the number of
nodes is increased, so does the performance error.
Elman networks, when used in the recognition process,
were shown to be both quick and accurate compared to
other ANN systems; the Elman network achieved
88.14%, compared with 76.47% for the alternative network.
III. PROBLEM STATEMENT
Image segmentation and classification face a number of
challenges, such as the lack of a universally applicable
standard model, and picking the best approach for any
given image is difficult. There is thus no universally
accepted method for picture recognition and
categorisation, which remains a major roadblock for AI
vision systems. Existing methods also often fail to take
into account the classification of images across different
clinical disorders, illness types, or disease stages, and a
high concentration of pure nodes in a system makes it
vulnerable to overfitting.
Developers proposed a deep learning concept to identify
brain tumours automatically using MRI scans, and then
analysed the results to determine how well it worked.
The Contribution of Proposed Work
 By using a novel boosted adaptive anisotropic
diffusion filter followed by histogram equalization, the
image is enhanced (a stand-in sketch of this enhancement
step is given after this list).
 Segmentation is done in two phases. The first phase is
brain-region extraction; in the second, the tumour area
is extracted using a hybrid deformable model with a
fuzzy approach and superpixel-based adaptive clustering.
 The features are extracted based on texture and the
tetrolet transform, and the extracted features are
combined by using the Harris hawks optimization
algorithm.
 The suggested technique is intended to distinguish
between normal and abnormal brain MRI images by
means of the CNN classifier.
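The boosted adaptive anisotropic diffusion filter itself is not specified here, so the sketch below substitutes a classic Perona-Malik diffusion step followed by OpenCV histogram equalisation as a stand-in for the enhancement phase; the iteration count, kappa, and gamma values are assumptions.

import numpy as np
import cv2

def perona_malik(img, n_iter=10, kappa=30.0, gamma=0.2):
    # Classic anisotropic diffusion (stand-in for the boosted adaptive filter).
    u = img.astype(np.float32)
    for _ in range(n_iter):
        # Finite differences toward the four neighbours.
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u, 1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        # Edge-stopping conductance (Perona-Malik g1 function).
        cN, cS = np.exp(-(dN / kappa) ** 2), np.exp(-(dS / kappa) ** 2)
        cE, cW = np.exp(-(dE / kappa) ** 2), np.exp(-(dW / kappa) ** 2)
        u += gamma * (cN * dN + cS * dS + cE * dE + cW * dW)
    return u

def enhance(gray_uint8):
    smoothed = np.clip(perona_malik(gray_uint8), 0, 255).astype(np.uint8)
    return cv2.equalizeHist(smoothed)   # histogram equalisation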
Clusters of improperly developing brain tissue are called
tumours, and they may have devastating effects on the
central nervous system.
Furthermore, the proliferation of tumour cells might
cause abnormal brain function. It is also important to
note that many different kinds of tumours lead to the
gradual enlargement and eventual death of brain cells
[1]. However, the survival rate and the number of
treatment choices for people with brain tumours greatly
improve if they are diagnosed at an early stage. While
benign tumours develop more slowly and pose less risk,
classifying them from a large number of MRI images is a
time-consuming and labour-intensive process. High-
quality medical pictures may be obtained with the use of
magnetic resonance imaging (MRI). Medical
professionals often use this imaging method to spot
problems in the brain and to reveal the development of
malignancies over time. Automatic medical analysis [2]
relies heavily on MRI images. They improve
the visual representation of the various brain areas by
providing anatomical specifics. Researchers have
developed a number of ways for identifying and
classifying brain tumours using MRI images. There is a
wide spectrum of methods, from more traditional
medical image processing to cutting-edge machine
learning methods.
Deep learning (DL) is a kind of machine learning that can
learn from unstructured and unlabelled data without
human supervision. In recent years, DL approaches and
models have proven effective in solving a wide variety of
difficult issues that need a high degree of accuracy and
rely on hierarchical feature extraction and data-driven
self-learning. Among its many uses, deep learning has
been put to work in areas such as pattern recognition,
object detection, voice recognition, and decision-making
[3]. The largest obstacle facing DL is the massive
quantity of training data required. As an example, in the
field of healthcare, there is a lack of publicly accessible
medical data that may be used to train deep learning
models.
The main reason for this is the concern for personal
information security. Because of this, the medical
industry has relied heavily on transfer learning to make
up for the paucity of data. This is an example of transfer
learning, in which a deep learning model that was first
trained for one task is then applied to a new issue. This is
often done in situations when there is insufficient
training data [4]. In this research, we apply transfer
learning to create a deep learning model that can identify
and classify brain tumours in MRI scans. The proposed
model is constructed from three pre-trained deep
networks:
➢ Here, we show off a CNN-based, fully-automatic
method for identifying and classifying brain tumours.
To further extract deep information from brain MR
images, we use pre-trained models. On a dataset of brain
MRI images with 2 classifications (normal/tumor), we
tested 3 pre-trained models and an ensemble of these
pre-trained and CNN models.
➢ The present study uses densenet121 and
densenet169 with transfer learning on Kaggle datasets
to distinguish between benign and malignant brain
tumours. Tumor detection, classification, and growth
rate estimation are all enhanced by MRI.
A. Data Pre-processing step
Training dataset size is very important for deep
convolutional neural networks. The first version of the
model was built using the ImageDataGenerator of
Keras/TensorFlow. In order to train the proposed system
on a sufficient number of MRI pictures, the image
dataset is augmented with random modifications
(rotations, height and width shifts, brightness changes,
etc.). Because the data augmentation parameters are
carefully selected, the recommended classifier is never
exposed to exactly the same picture twice.
B. Data augmentation
There is a strong correlation between the amount of
data used during training and the final performance of
deep convolutional neural networks. To better train the
proposed system on MRI pictures, the original dataset is
augmented with additional MRI images generated by
random transformations (rotations, scale adjustments,
and luminance changes) using the ImageDataGenerator
class of Keras/TensorFlow. The augmentation is
configured so that the suggested classifier never sees
exactly the same picture twice, which improves the
model's generalisation abilities.
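A minimal Keras sketch of this augmentation set-up is shown below, assuming MRI slices organised into class sub-folders under dataset/train; the transformation ranges and the folder layout are illustrative assumptions, not the authors' exact values.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=15,            # random rotations
    width_shift_range=0.1,        # width (breadth) shifts
    height_shift_range=0.1,       # height shifts
    brightness_range=(0.8, 1.2),  # brightness changes
    horizontal_flip=True,
    rescale=1.0 / 255,
)

train_flow = augmenter.flow_from_directory(
    "dataset/train", target_size=(64, 64), color_mode="grayscale",
    class_mode="binary", batch_size=32)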
C. Resizing and Cropping
The removal of the image background around the brain
is the initial stage in this process [19]. The approach
shown here uses OpenCV to pinpoint the extreme points
of a bounding box around the brain. It is worth noting
that the MRI images utilised in this research come in a
range of sizes depending on their source. To ensure
uniformity, the images are resized to 64x64x1.
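A hedged OpenCV sketch of this crop-and-resize step follows: threshold the slice, keep the largest contour, crop to its extreme points, and resize to 64x64. The blur kernel, threshold value, and morphology iterations are assumed values.

import cv2
import numpy as np

def crop_brain(gray_uint8, out_size=(64, 64)):
    blur = cv2.GaussianBlur(gray_uint8, (5, 5), 0)
    _, mask = cv2.threshold(blur, 45, 255, cv2.THRESH_BINARY)
    mask = cv2.erode(mask, None, iterations=2)
    mask = cv2.dilate(mask, None, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)       # largest connected region = brain
    left = tuple(c[c[:, :, 0].argmin()][0])      # extreme points of the contour
    right = tuple(c[c[:, :, 0].argmax()][0])
    top = tuple(c[c[:, :, 1].argmin()][0])
    bottom = tuple(c[c[:, :, 1].argmax()][0])
    cropped = gray_uint8[top[1]:bottom[1], left[0]:right[0]]
    return cv2.resize(cropped, out_size)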
D. Data Splitting
In this analysis, the data are partitioned into smaller
subsets so that the suggested approach can be trained,
validated, and tested in three phases. The first subset,
containing about 80% of the whole dataset, is used to fit
the model. The remaining portion is divided equally
between validation and testing.
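A short scikit-learn sketch of this split is shown below; the equal division of the held-out 20% into validation and test halves (10% each) is an assumption based on the text.

from sklearn.model_selection import train_test_split

# Assumed inputs: X (feature or image array) and y (labels).
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)          # 80% for training
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=42)  # 10% validation, 10% test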
E. Convolutional Neural Network
CNN is well known for its strong performance at
classifying images. A CNN automatically extracts features
from the data it is given. It is a well-known DL
architecture with feedforward connections between
layers, and its deep structure helps these networks learn
complex functions that a simple neural network cannot
learn [20].
CNN is the workhorse of computer vision and can be
used to classify objects, monitor activities, and analyse
medical images. Its preprocessing is simpler and lighter
than that of other neural classifiers because of its
internal filters. A typical CNN architecture includes the
following components: (i) convolution, (ii) pooling,
(iii) activation, (iv) a dense layer, and (v) a classification
output.
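A minimal Keras sketch of such an architecture (convolution, pooling, activation, dense, and classification layers) is given below; the 64x64x1 input follows the resizing step described earlier, while the filter counts and other hyperparameters are assumptions.

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),   # convolution + activation
    layers.MaxPooling2D((2, 2)),                    # pooling
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),           # dense layer
    layers.Dense(2, activation="softmax"),          # classification output
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])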
F. Transfer Learning
Transfer learning (TL) is a deep learning strategy in
which an existing model trained on a large dataset is
adapted to a new model trained on a different dataset
for a related problem. In general, CNNs perform better
with more data than with less, so TL is useful in CNN
settings where the datasets are small. The field of TL has
expanded in recent years, finding applications in object
detection, medical imaging, and image categorisation
[21]. Large datasets, such as ImageNet, were utilised to
train the base models so that they can extract useful
characteristics, and these models are then applied to
smaller datasets, such as brain MRI data. The benefits of
TL include shorter training procedures, reduced
overfitting, the ability to train with fewer data, and
improved performance. EfficientNetB0, DenseNet121,
and DenseNet169 were utilised in our study.
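As an illustrative sketch rather than the authors' exact configuration, a DenseNet121 backbone pre-trained on ImageNet can be frozen and topped with a small classification head; the dropout rate, input size, and two-class output are assumptions.

from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

base = DenseNet121(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                      # freeze the pre-trained features

model = models.Sequential([
    base,
    layers.Dropout(0.3),
    layers.Dense(2, activation="softmax"),  # normal vs tumour (assumed head)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])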
Conclusion:
In terms of classification, convolutional neural networks
(CNNs) have been extensively explored in both academia
and industry. In this research paper, we review the
effectiveness of deep neural network models for
classification algorithms linked to the diagnosis of brain
cancer. The extension applied to the ResNet model
shows that deep learning algorithms developed for
natural image processing can provide superior efficiency
in the analysis of medical data. The proposed system was
designed to identify brain tumours in Magnetic
Resonance Imaging (MRI) scans of the brain. The
pipeline has several phases: a novel boosted adaptive
anisotropic diffusion filter is used for noise removal,
segmentation extracts the tumour portion from the
brain image, textural feature extraction is applied, and
classification uses CNN classifiers; the proposed system
achieved 98.3% accuracy. A future enhancement is to
work with perfusion-based MRI images, which are
considerably more complex.
REFERENCES:
1. Saba, T., Mohamed, A.S., El-Affendi, M., Amin, J., &
Sharif, M. (2020). Brain tumor detection using fusion
of hand crafted and deep learning features. Cognitive
Systems Research, 59, 221-230.
2. Sharif, M., Amin, J., Nisar, M.W., Anjum, M.A.,
Muhammad, N., & Shad, S. A. (2020). A unified patch
based method for brain tumor detection using
features fusion. Cognitive Systems Research, 59,
273-286.
3. Toğaçar, M., Ergen, B., &Cömert, Z. (2020).
BrainMRNet: Brain tumor detection using magnetic
resonance images with a novel convolutional neural
network model. Medical Hypotheses, 134, 109531.
4. Veeramuthu, A., Meenakshi, S., &Darsini, V.P. (2015).
Brain image classification using learning machine
approach and brain structure analysis. Procedia
Computer Science, 50, 388-394.
www.sciencedirect.com
5. Kumar, S., Dabas, C., &Godara, S. (2017).
Classification of brain MRI tumor images: A hybrid
approach. Procedia computer science, 122, 510-517.
www.sciencedirect.com
6. Jana, G.C., Swetapadma, A., &Pattnaik, P.K. (2018).
Enhancing the performance of motor imagery
classification to design a robust brain computer
interface using feed forward back-propagation
neural network.Ain Shams Engineering Journal, 9(4),
2871-2878. www.sciencedirect.com
7. Sidhu, H.S., Benigno, S., Ganeshan, B., Dikaios, N.,
Johnston, E.W., Allen, C., & Taylor, S.A. (2017).
Textural analysis of multiparametric MRI detects
transition zone prostate cancer. European radiology,
27(6), 2348-2358.
8. Anaraki, A.K., Ayati, M., &Kazemi, F. (2019). Magnetic
resonance imagingbased brain tumor grades
classification and grading via convolutional neural
networks and genetic algorithms. Biocybernetics
and Biomedical Engineering, 39(1), 63-74.
9. Fabelo, H., Ortega, S., Lazcano, R., Madroñal, D., M
Callicó, G., Juárez, E., &Piñeiro, J.F. (2018). An
intraoperative visualization system using
hyperspectral imaging to aid in brain tumor
delineation. Sensors, 18(2), 430.
10. Zhang, J., Shen, X., Zhuo, T., & Zhou, H. (2017). Brain
tumor segmentation based on refined fully
convolutional neural networks with a hierarchical
dice loss. arXiv preprint arXiv:1712.09093.
11. Wang, N., Shang, Y., Chen, Y., Yang, M., Zhang, Q., Liu,
Y., &Gui, Z. (2018). A hybrid model for image
denoising combining modified isotropic diffusion
model and modified Perona-Malik model. IEEE
Access, 6, 33568-33582.
12. Chang, Y., Jung, C., Ke, P., Song, H., & Hwang, J.
(2018). Automatic contrast-limited adaptive
histogram equalization with dual gamma
correction.IEEE Access, 6, 11782-11792.
13. Haleem, M.S., Han, L., van Hemert, J., Li, B., Fleming,
A., Pasquale, L.R., & Song, B.J. (2018). A novel
adaptive deformable model for automated optic disc
and cup segmentation to aid glaucoma diagnosis.
Journal of medical systems, 42(1), 20.
14. Wang, Y., Karimi, H.R., Shen, H., Fang, Z., & Liu, M.
(2018). Fuzzy-modelbased sliding mode control of
nonlinear descriptor systems. IEEE transactions on
cybernetics, 49(9), 3409-3419.
15. Uziel, R., Ronen, M., &Freifeld, O. (2019). Bayesian
Adaptive Superpixel Segmentation. In Proceedings
of the IEEE International Conference on Computer
Vision, 8470-8479.
16. Pradhan, J., Kumar, S., Pal, A.K., & Banka, H. (2018).
A hierarchical CBIR framework using adaptive
tetrolet transform and novel histograms from color
and shape features. Digital Signal Processing, 82,
258-281.
17. Yang, F., Dogan, N., Stoyanova, R., & Ford, J.C. (2018).
Evaluation of radiomic texture feature error due to
MRI acquisition and reconstruction: a simulation
study utilizing ground truth. PhysicaMedica, 50, 26-
36.
18. Sihwail, R., Omar, K., Ariffin, K.A.Z., &Tubishat, M.
(2020). Improved harrishawks optimization using
elite opposition-based learning and novel search
mechanism for feature selection. IEEE Access, 8,
121127- 121145.
19. Chuang, T.Y., Han, J.Y., Jhan, D.J., & Yang, M.D.
(2020). Geometric Recognition of Moving Objects in
Monocular Rotating Imagery Using Faster R-CNN.
Remote Sensing, 12(12), 1908.
20. Ćiprijanović, A., Snyder, G.F., Nord, B., & Peek, J.E.
(2020). DeepMerge: Classifying high-redshift
merging galaxies with deep neural networks.
Astronomy and Computing, 32, 100390.
21. Sarhan, A.M. (2020). Detection and Classification of
Brain Tumor in MRI Images Using Wavelet
Transform and Convolutional Neural Network.
Journal of Advances in Medicine and Medical
Research, 15- 26.
22. Singh, S.P., Wang, L., Gupta, S., Goli, H.,
Padmanabhan, P., &Gulyás, B. (2020). 3D Deep
Learning on Medical Images: A Review. arXiv
preprint arXiv:2004.00218.
23. Chaubey, N.K., &Jayanthi, P. (2020). Disease
Diagnosis and Treatment Using Deep Learning
Algorithms for the Healthcare System. In
Applications of Deep Learning and Big IoT on
Personalized Healthcare Services,IGI Global, 99-114.
24. Joseph, T., Kalaiselvan, S.A., Aswathy, S.U.,
Radhakrishnan, R., &Shamna, A. R. (2020). A
multimodal biometric authentication scheme based
on feature fusion for improving security in cloud
environment. Journal of Ambient Intelligence and
Humanized Computing, 1-9.
AIRCANVAS[1].pdf mini project for btech students
 
A Study of Urban Area Plan for Pabna Municipality
A Study of Urban Area Plan for Pabna MunicipalityA Study of Urban Area Plan for Pabna Municipality
A Study of Urban Area Plan for Pabna Municipality
 
Design For Accessibility: Getting it right from the start
Design For Accessibility: Getting it right from the startDesign For Accessibility: Getting it right from the start
Design For Accessibility: Getting it right from the start
 
2016EF22_0 solar project report rooftop projects
2016EF22_0 solar project report rooftop projects2016EF22_0 solar project report rooftop projects
2016EF22_0 solar project report rooftop projects
 
Air Compressor reciprocating single stage
Air Compressor reciprocating single stageAir Compressor reciprocating single stage
Air Compressor reciprocating single stage
 
Employee leave management system project.
Employee leave management system project.Employee leave management system project.
Employee leave management system project.
 

A Review on Multiclass Brain Tumor Detection using Convolutional Neural Network and Support Vector Machines

Symptoms of meningioma include obsessive clinginess, frequent headaches, and limb weakness. Benign meningiomas are less than 2 mm in diameter, whereas malignant meningiomas may grow as large as 5 cm in diameter [7]. Most malignant meningiomas are curable if detected and treated early.

Since several different types of MRI sequence may be used for the detection of brain cancer, magnetic resonance imaging (MRI) has become one of the most common techniques for making this diagnosis [8]. Since brain tumours are potentially life-threatening, it is crucial that they be diagnosed and treated correctly. Detecting the illness in its earliest stages with a whole brain scan is the only way to avoid putting patients at risk. Different types of brain tissue may be detected by various MRI techniques, each of which has its own settling time [9]. Due to their unpredictable form and location, brain tumours can be difficult to identify using only one MRI modality, so contrasting data from several MRI techniques is crucial when trying to locate tumours [10]. T1-weighted MRI can be used to differentiate tumour tissue from healthy tissue; T2-weighted MRI can outline areas of oedema, resulting in clear image areas; contrast-enhanced T1-Gd MRI can show a bright signal at the tumour edge; and FLAIR MRI can
use water-molecule signal suppression to differentiate cerebrospinal fluid (CSF) from areas of oedema.

Calculating area, determining uncertainty in the segmented area, and tumour segmentation are challenging activities [11] because of the structural complexity and unpredictability of brain tumours, their high variability, and the intrinsic characteristics of MRI data, i.e., the fluctuation of tumour size and form. Manual tumour segmentation is a time-consuming operation, and doctors may notice variations in segmentation outcomes owing to differences in tumour form and shape. Meningiomas are comparatively easy to separate, whereas gliomas and glioblastomas are more challenging [12]. It is crucial, then, to provide an automated segmentation approach to facilitate this laborious process. Brain tumour detection and monitoring take a lot of time, and human error is common when they are done manually [13]. We have to find a way to replace the manual processes with an automated one. The present approaches, which rely on labelling methods to identify diseased regions in the brain and cannot detect internal peripheral pixels, are not suited to brain tumour detection. We favour MRI over computed tomography (CT) because the contrast agent clearly shows the affected region. Therefore, MRI modalities are utilised in a wide variety of approaches to diagnose brain cancer.

Many different approaches have been proposed in recent years for the automatic classification of brain tumours; these can be broadly categorised as either Machine Learning (ML) or Deep Learning (DL) approaches, depending on their focus on feature fusion, feature selection, or the underlying learning mechanism. Feature selection and feature extraction are cornerstones of classification in ML approaches [14,15]. Deep learning techniques, on the other hand, learn characteristics from images automatically rather than relying on hand-crafted features. The latest deep learning techniques, convolutional neural networks (CNNs) in particular, boast impressive precision and are widely used for MRI analysis and other forms of medical image analysis [16,17,18]. Although these drawbacks can be mitigated through the use of transfer learning [19], they still exist when compared to traditional ML approaches: the need for a large training dataset, high time complexity, low accuracy for applications where only small datasets are available, and expensive GPUs that ultimately increase the cost to the user. Furthermore, knowing enough about the various parameters, training techniques, and topologies can make picking the right deep learning model seem like an overwhelming undertaking.

Support Vector Machine (SVM), Random Forest (RF), fuzzy C-means (FCM), Convolutional Neural Network (CNN), Naïve Bayes (NB), K-Nearest Neighbour (KNN), Sequential Minimal Optimization (SMO), and Decision Tree (DT) are just some of the machine learning-based classifiers that have been used for brain tumour classification and detection. With reduced computational and spatial complexity, the CNN-based approach is easier to implement. The small dataset needed for training, the low computing cost, and the simplicity of adoption by untrained persons have attracted a lot of academic interest to these classifiers in general. These are some of the projected improvements brought about by the novel approach to segmenting and classifying brain tumours:
• In the pre-processing stage, a linear contrast stretching technique is applied to enhance the original image's edge features.
• A 17-layer convolutional neural network (CNN) architecture is developed from scratch specifically for brain tumour segmentation.
• Transfer learning from a modified version of MobileNetV2 is used to extract deep features from the required datasets.
• An entropy-controlled technique is used for feature selection, where the best features are chosen according to their entropy values.
• A multiclass SVM (M-SVM) classifier is used to categorise the final features, and a thorough statistical analysis and comparison with state-of-the-art approaches are carried out to validate the stability of the proposed methodology.
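Two of these stages, the linear contrast stretching pre-processing and the entropy-controlled feature selection feeding a multiclass SVM, can be illustrated with a short sketch. This is not the authors' implementation: the entropy scoring, the number of retained features, and the SVM settings are illustrative assumptions, and the feature matrix is a placeholder for the MobileNetV2 deep features described above.

```python
import numpy as np
from sklearn.svm import SVC

def contrast_stretch(img):
    """Linear contrast stretching of a grayscale image to the full 0-255 range."""
    img = img.astype(np.float32)
    return ((img - img.min()) / (img.max() - img.min() + 1e-8) * 255).astype(np.uint8)

def select_by_entropy(features, keep=256, bins=32):
    """Rank feature columns by histogram-estimated entropy and keep the top ones."""
    scores = []
    for col in features.T:
        hist, _ = np.histogram(col, bins=bins)
        p = hist / (hist.sum() + 1e-12)
        scores.append(-np.sum(p * np.log2(p + 1e-12)))
    return np.argsort(scores)[::-1][:keep]          # indices of highest-entropy columns

# Placeholders: deep feature vectors for the training images and their class labels
# (0 = meningioma, 1 = glioma, 2 = pituitary).
X_train = np.random.rand(300, 1280)
y_train = np.random.randint(0, 3, 300)

idx = select_by_entropy(X_train)
svm = SVC(kernel="rbf", decision_function_shape="ovo")   # multiclass one-vs-one SVM
svm.fit(X_train[:, idx], y_train)
```

In practice the placeholder arrays would be replaced by the MobileNetV2 feature vectors and the ground-truth labels of the MRI dataset.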
II. RELATED WORKS

Modern medical practice often uses MR imaging for the diagnosis of brain cancer [8,14]. This section takes a close look at how brain tumours are detected and categorised in the existing literature. Numerous studies on brain tumour detection, segmentation, and classification have been conducted in recent years, and even after many publications emphasising its significance [20,21,22], the medical community continues to stress the topic's relevance. This study presents a methodology for the identification and delineation of brain tumours. Differentiating brain images may be done using either generative or discriminative techniques [17,23] for the diagnosis of brain cancers.

Brain tumour identification using fuzzy logic and the U-NET CNN architecture was demonstrated by Maqsood et al. [4]. It involved contrast enhancement, a fuzzy logic-based edge detection approach, and U-NET CNN classification. A contrast enhancement technique is first applied to the source images for pre-processing, a fuzzy logic-based edge detection technique then finds the edges in the contrast-enhanced images, and lastly a dual-tree complex wavelet transform is applied at different scale levels. To differentiate between meningioma and non-meningioma brain images, features are derived from the decomposed sub-band images and then classified using the U-NET CNN classification approach. The suggested technique outperformed several state-of-the-art algorithms by a wide margin (98.59%).

Sobhaninia et al. [24] trained a CNN model for segmentation on brain MRI using a LinkNet network, combining several views to improve the model's performance; the final result was a Dice score of 0.79. Johnpeter et al. [25] use an adaptive neuro-fuzzy inference classification approach to identify and locate cancers in brain MRI, yet this network seems to be rather sophisticated. Without employing edge detection on brain images, their strategy enhanced tumour regions using histogram equalisation, and the resulting work was accurate 98.80 percent of the time. Using attention modules and the hypercolumn technique, Togacar et al. [26] created a network called BrainMRNet. The original images are first pre-processed and then passed to the attention module, which selects the focal regions of the image before it is processed by the convolutional layers. The BrainMRNet model's convolutional layers make extensive use of the hypercolumn strategy: the characteristics collected from each successive layer are retained in the array structure of the final layer, which increased accuracy to 96.05%.

To classify brain tumours, Kibriya et al. [27] established an approach based on the fusion of many features. To address the data issue, the minimum-maximum normalisation approach is first applied to the original images, followed by extensive data augmentation of the pre-processed images. The final output, with an accuracy of 97.7%, is the result of combining a support vector machine (SVM) and a k-nearest neighbour (KNN) classifier trained on features from the GoogLeNet and ResNet18 deep CNN models. Brain tumours may be detected and classified using the CNN developed by Sajjad et al. [28]; the authors achieved a 94.58% accuracy rate by using a cascade CNN algorithm for segmenting the tumours and a fine-tuned version of VGG19 for classifying them. Shanthakumar [29] employed watershed segmentation on brain MRI scans to localise tumours; the accuracy of tumour segmentation was improved to 94.52% using this approach, which relies on a preset labelling scheme. Prastawa et al. [30] show that separating tumour regions in MR images of the brain is possible; although this approach has an 88.17% success rate, it is only able to identify the outer, abnormal boundaries of the tumour area, not the interior boundary. For the purpose of classifying brain tumours, Gumaei et al. [31] suggested a hybrid feature extraction technique based on a regularised extreme learning machine (RELM): pre-processing is performed using min-max normalisation contrast enhancement, feature extraction uses a hybrid PCA-NGIST method, and brain tumour classification uses the RELM method. The overall precision of this work was 94.23%. When applied to contrast-enhanced magnetic resonance imaging (CE-MRI), a fine-tuned pre-trained VGG19 model improved outcomes for Swati et al.
[32], who reported an average accuracy of 94.82%. After addressing the issue of overfitting with the ResNet50 CNN model and global average pooling, Kumar et al. [33] proposed a brain tumour technique with an average accuracy of 97.48%. These advances have garnered widespread attention in the field of medical image analysis.

Brain image classification through machine learning and an understanding of brain architecture was proposed by Veeramuthu et al. [4]. A multi-level discrete wavelet transform aids in decomposing the image and subsequently extracting its characteristics, and a PNN-RBF training and classification approach is used to classify the severity of the illness in the brain image. A hybrid strategy was created by Sanjeev et al. [5]; this mixed method utilises the discrete wavelet transform (DWT) to filter out irrelevant information, a genetic algorithm to narrow the feature set, and a support vector machine (SVM) to classify different types of brain tumours. To enhance the effectiveness of motor imagery categorisation, Gopal et al. [6] suggested a strategy based on feed-forward backpropagation neural networks (FFBPNN). Artificial neural networks (ANN), fuzzy clustering methods (FCM), support vector machines (SVM), decision trees (DT), K-Nearest Neighbours (KNN), and Bayesian classification are only some of the approaches available for classifying medical images. ANN, SVM, and KNN are examples of supervised learning methods; unsupervised learning methods, such as K-means clustering and the Self-Organizing Map, are another category that may be used to group data into meaningful clusters.

The shift from manually crafted features to machine-learned ones has been slow, however. Several methods for learning features were in use prior to AlexNet's breakthrough; these were thoroughly analysed by Bengio et al. [7], and include principal component analysis, image patch clustering, and dictionary techniques. At the conclusion of their study, in a section titled Global Training of Deep Models, Moosa et al. [8] deploy CNNs that are trained end to end. In this review, we focus on these foundational models and leave out the more surface-level ones that apply traditional feature learning techniques to medical images. Refer to
Ravi et al. (2017) [9] for a more comprehensive review of deep learning's role in health informatics, including a short discussion of its application to the interpretation of medical images. Shen et al. (2017) [10] presented the results of a deep learning-specific study of medical image analysis. While they certainly cover a lot of ground, we believe that certain crucial points have been overlooked. Medical image segmentation is essential for prompt treatment planning, and it is used for the categorisation and identification of brain tumours from MR images.

Methods for MRI Classification

Brain tumours take many forms, and imaging of the brain with MRI (magnetic resonance imaging) is the standard diagnostic procedure for evaluating tumours in the head. Traditional machine learning methods often assign a classification to a brain tumour based on an arbitrary attribute or the discretion of a radiologist. In this study, ensemble modelling with SVM and CNN classifiers [5] is applied to MRI scans of the brain to distinguish between benign and malignant tumours. In addition, when it comes to recognising brain tumours, threshold-based segmentation produces blurred edges and boundaries. Using ResNet-50 and transfer learning (TL), a deep learning model was created for detecting and diagnosing brain tumours; the accuracy rate of those experiments is 95%. Researchers also used block-wise transfer learning with fivefold cross-validation, again reaching 95 percent accuracy; their technique was tested on a benchmark CE-MRI dataset constructed from T1-weighted MR images. Classifying MRI scans of the brain with Google's neural network architecture achieved a classification accuracy of 98%, with a support vector machine used as the final classifier [7]. CNNs serve both classification and feature extraction; the structure used there has two convolutional layers and two fully connected layers. In a study using nine deep learning models, transfer learning was employed to identify brain tumours with an accuracy of 97.39% [7]. Switching to deep learning to delve into MR data, classification of MRI scans succeeded to the tune of 98.71 percent using the suggested approach; despite the small size of the study, the findings were striking. The CNN architectures performed consistently: VGG achieved 96 percent accuracy, ResNet50 achieved 89 percent, and InceptionV3 achieved 75 percent [8]. Modern CNN architectures are designed to work at high speed while maintaining a 98.24 percent accuracy rate. Multi-scale analysis of MRI scans of brain tumours using CNNs is also recommended; the suggested model, tested on an MRI image dataset, had a 97.3 percent accuracy in classifying the images [9]. To classify brain tumours, a CNN model with two convolutional layers and two fully connected layers gathers relevant data for feature extraction, achieving a 97% success rate [10]. Researchers employed a ResNet34 convolutional neural network and a transfer learning approach to categorise MRI scans of the brain into healthy and diseased categories [20], and used data augmentation of brain tumour photographs to increase the number of images, reaching 100% accuracy [11].
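Most of the accuracy figures quoted in this survey come from hold-out testing or k-fold cross-validation. For reference, a minimal sketch of how such a figure could be computed with stratified five-fold cross-validation in scikit-learn; the feature matrix, labels, and classifier choice are placeholders rather than any particular study's setup:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

# Placeholder feature vectors (e.g., flattened or deep features) and class labels.
X = np.random.rand(200, 512)
y = np.random.randint(0, 2, 200)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=cv, scoring="accuracy")
print(f"Mean accuracy over 5 folds: {scores.mean():.4f} +/- {scores.std():.4f}")
```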
An ANN model was combined with the Grey Wolf Optimizer (GWO) optimisation strategy; using GWO-ANN, a classification accuracy of 98.91 percent was achieved. A deep CNN network based on ResNet-50 and trained on brain MR images was also presented [12]; with the help of the recommended data augmentation technique, the model's accuracy increased to 97.48%. Using brain MRI as training data, a CapsNet CNN model was proposed with 90.89 percent accuracy [13]. An ensemble model consisting of three separate convolutional neural network classifiers reached 98% accuracy [14]. Further research found that transfer learning can be utilised to categorise brain tumours: a CNN with one of four possible architectures (DenseNet, VGG-16, VGG-19, or ResNet-50) was chosen for this task, and the Figshare collection of 3064 magnetic resonance imaging (MRI) scans was used to differentiate between three different types of brain tumours. The produced model was refined with the help of a public test bed, and the results demonstrated that the publicly available Figshare dataset promoted knowledge exchange; the ResNet-50 model performed best, with an accuracy of 99.02 percent. The team picked up where they left off in 2020, continuing to use the same dataset in an attempt to improve the diagnostic accuracy for brain cancer. The suggested CNN includes two convolution layers for feature extraction and two fully connected layers for classification [16]; a total of 97.39% of brain cancer cases were correctly diagnosed using this CNN system. Using the data at hand, researchers were also able to identify several subtypes of brain tumours with a variety of classifiers, including KNN, ANN, RF, and LDA; an accuracy of 95.56 percent was attained by combining the KNN model with the NLBP feature extraction method [17].

When working with brain images to identify and classify tumours, the shortcomings of the aforementioned transfer learning methods, namely intrusiveness, complexity, and susceptibility to sampling errors, must be overcome. Systematically, there is a dearth of research on the effectiveness and dependability of such methods. This has led to the
development of transfer learning models for use in the diagnosis and classification of malignant brain tumours. Images were classified using deep learning, a powerful classification approach, and tumour types were classified using a faster region-based CNN (Faster R-CNN).

Khairandish et al. [1] provided an explanation of how brain tumours actually behave and, with the aid of many methodologies and the analysis of research studies against a variety of criteria, offered a clear picture of this stage. The examination is conducted in relation to the dataset, the proposed model, the proposed model's performance, and the type of method used in each paper. The accuracies reported in the publications under study ranged from 79% to 97.7%. The algorithms employed were Convolutional Neural Network, K-Nearest Neighbour, K-Means, and Random Forest, in that order (from highest frequency of use to lowest); the Convolutional Neural Network gave the highest accuracy, around 79-97.7%. Someswararao et al. [2] developed a new method for detecting tumours in MR images using machine learning techniques, particularly the CNN model. That study combined a CNN classification task, determining whether or not a subject has a brain tumour, with a computer vision step to automatically crop the brain from MRI scans. Other techniques used were the Convolutional Neural Network and K-Means clustering, and the highest accuracy was given by the Convolutional Neural Network, around 90%. Choudhury et al. [3] proposed a new CNN-based system that can distinguish between different brain MRI images and label them as tumorous or not. The model's accuracy was 96.08% and its F-score was 97.3. The model uses a CNN with three layers and only a few pre-processing steps and yields results in 35 epochs. The goal of that study was to emphasise the significance of predictive therapy and diagnostic machine learning applications; other techniques used were Support Vector Machines, Convolutional Neural Networks, k-Nearest Neighbour, boosted trees, random forests, and decision trees.

To detect, categorise, and segment brain tumours, the suggested approaches must be very efficient and accurate, and automatic or semi-automatic precision can only be achieved with certain techniques. For this research, a CNN was used to identify and categorise data with the aid of a proposed automated segmentation technique. Convolutional neural networks, conditional random fields, support vector machines, and genetic algorithms are some of the other methods; CNN has the best efficiency and accuracy, at around 91% and 92.7%, respectively.

In [1], GLCM features and a Multilayer Perceptron (MLP) network are used to interpret MRI images. With an MLP, the network is a feed-forward structure with one or more layers between the input and output layers. Segmentation with thresholding, feature vector extraction with the GLCM computed at four angles (with energy, entropy, contrast, and variance as the features), and model learning are all components of the suggested method that make use of this neural network technology. Images are filtered or histogram-equalised prior to thresholding. Extracting features from the data is a form of data reduction, and the extracted characteristics are used as training data for the neural classifier. Twenty magnetic resonance imaging (MRI) scans of the head are utilised for testing the suggested algorithm.
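The GLCM feature extraction described in [1] can be sketched as follows. This is an illustrative reconstruction, not the cited authors' code: scikit-image provides energy and contrast directly, while entropy and variance are computed here from the normalised co-occurrence matrices, and the input image is assumed to be a placeholder 8-bit grayscale MRI slice.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(image_8bit):
    """Texture features from grey-level co-occurrence matrices at four angles."""
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]        # the four angles
    glcm = graycomatrix(image_8bit, distances=[1], angles=angles,
                        levels=256, symmetric=True, normed=True)
    energy = graycoprops(glcm, "energy").mean()
    contrast = graycoprops(glcm, "contrast").mean()
    # Entropy and variance computed directly from the normalised matrices.
    p = glcm.astype(np.float64)
    entropy = (-np.sum(p * np.log2(p + 1e-12), axis=(0, 1))).mean()
    i = np.arange(256).reshape(-1, 1, 1, 1)
    mu = np.sum(i * p, axis=(0, 1))
    variance = np.sum(((i - mu) ** 2) * p, axis=(0, 1)).mean()
    return np.array([energy, contrast, entropy, variance])
```

The resulting feature vectors would then serve as the training data for the MLP classifier mentioned above.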
After the histogram has been equalised, segmentation is used to separate the tumour areas from the whole picture, allowing a more accurate assessment of the tumour's location within the MRI; the retrieved images are also used to make the final decision.

In [2], the Support Vector Machine (SVM) technique is used for detecting brain tumours in MRI data. The Support Vector Machine is a statistically based supervised learning system. The image is represented with the discrete wavelet transform (DWT), and a Simulink model is employed for the SVM classification. Using support vector machines, the article presents a prototype that combines fast performance with good detection accuracy. To classify tumours, it is necessary to first choose the appropriate images for analysis (pre-processing). Once the tumour's size and form have been identified, the next step is feature extraction; following this, the image data are used to train the support vector machine, and the DICOM-format images are then classified with the SVM. Finally, a tumour diagnosis is made. A positive predictive value (PPV) of 81.48 percent and a negative predictive value (NPV) of 82 percent are obtained, with 22 true positives, 5 false positives, 5 false negatives, and 22 true negatives.

In [3], the CNN technique is used to detect tumours in MRI images. Brain tumour identification relies on MRI scans, which are processed to increase precision. Neurons and convolution layers are the basis of a convolutional neural network (CNN). A system's behaviour may be more succinctly represented via clustering, which is used to find naturally occurring groups in large data sets; the goal of cluster analysis is to reveal hidden patterns in such datasets. To locate the areas of malignancy in the images, patch extraction is used. The CNN architecture is tailored to exploit the two-dimensional nature of the input image. Segmentation, detection, and extraction are all components of post-processing for the MRI scans. Among the system's benefits is improved segmentation; the accuracy rate for MRI scans is 88%, and applying a neural network improves accuracy further.

In [4], the MRI images are identified with the help of a Recurrent Neural Network (RNN). To scale the network's nodes up and down, the BP NN activation function was first used: the number of nodes in the hidden layer was set to 270 and then brought back down to 230 using the log-sigmoid function, and the best RNN performance was finally reached by raising the node count to 300. For optimal
efficiency, an Elman network is used. When the number of nodes is increased, the performance error also increases. Elman networks, when used in the recognition process, were shown to be both quick and accurate compared to other ANN systems: compared to the Elman network's 88.14%, the ratio here was 76.47%.

III. PROBLEM STATEMENT

Image segmentation and classification face a number of challenges, such as the lack of a universally applicable standard model, and picking the best approach for any given image is difficult. There is therefore no universally accepted method for image recognition and categorisation, and this remains a major roadblock for AI vision systems. Existing methods do not take into account the classification of images across different clinical disorders, illness types, or disease stages, and because of the high concentration of pure nodes in such systems, they are vulnerable to overfitting. The developers proposed a deep learning concept to identify brain tumours automatically using MRI scans and then analysed the results to determine how well it worked.

The Contribution of the Proposed Work

• By using a novel boosted adaptive anisotropic diffusion filter and histogram equalization, the image is enhanced (a minimal sketch of this enhancement step appears a few paragraphs below).
• Segmentation is done in two phases. The first phase is brain section extraction; in the second, the tumour area is extracted using a hybrid deformable model with a fuzzy approach and superpixel-based adaptive clustering.
• Features are extracted based on texture and the tetrolet transform, and the extracted features are combined using the Harris hawks optimization algorithm.
• The suggested technique is intended to distinguish between normal and abnormal (tumorous) brain MRI images using a CNN classifier.

Clusters of improperly developing brain tissue are called tumours, and they may have devastating effects on the central nervous system. Furthermore, the proliferation of tumour cells may impair normal brain function, and many different kinds of tumours lead to the gradual enlargement and eventual death of brain cells [1]. However, the survival rate and the number of treatment choices for people with brain tumours greatly improve if they are diagnosed at an early stage. While benign tumours develop more slowly and pose less risk, classifying them using a large number of MRI images is a time-consuming and labour-intensive process. High-quality medical pictures may be obtained with the use of magnetic resonance imaging (MRI). Medical professionals often use this imaging method to spot problems in the brain and to reveal the development of malignancies over time. Automatic medical analysis [2] relies heavily on MRI images: they improve the visual representation of the various brain areas by providing anatomical detail. Researchers have developed a number of ways for identifying and classifying brain tumours using MRI images, ranging from traditional medical image processing to cutting-edge machine learning methods.

Deep learning (DL) is a kind of machine learning used to learn from unstructured and unlabelled data without human supervision. In recent years, DL approaches and models have proven effective in solving a wide variety of difficult problems that need a high degree of accuracy and rely on hierarchical feature extraction and data-driven self-learning.
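The image enhancement step listed in the first contribution above could look roughly like the following. This is not the authors' boosted adaptive variant: it is the classical Perona-Malik anisotropic diffusion scheme followed by OpenCV histogram equalisation, with an assumed file name and illustrative parameter values.

```python
import cv2
import numpy as np

def perona_malik(img, niters=15, kappa=30.0, gamma=0.2):
    """Classical Perona-Malik anisotropic diffusion (edge-preserving smoothing)."""
    img = img.astype(np.float32)
    for _ in range(niters):
        # Finite differences to the four neighbours.
        dn = np.roll(img, -1, axis=0) - img
        ds = np.roll(img, 1, axis=0) - img
        de = np.roll(img, -1, axis=1) - img
        dw = np.roll(img, 1, axis=1) - img
        # Conduction coefficients: diffusion is suppressed across strong edges.
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        img = img + gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return img

mri = cv2.imread("slice.png", cv2.IMREAD_GRAYSCALE)      # placeholder file name
denoised = perona_malik(mri)
enhanced = cv2.equalizeHist(np.clip(denoised, 0, 255).astype(np.uint8))
```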
Among its many uses, deep learning has been put to work in areas such as pattern recognition, object detection, voice recognition, and decision-making [3]. The largest obstacle facing DL is the massive quantity of training data required. In the field of healthcare, for example, there is a lack of publicly accessible medical data that may be used to train deep learning models, mainly because of concerns about personal information security. Because of this, the medical industry has relied heavily on transfer learning to make up for the paucity of data. In transfer learning, a deep learning model that was first trained for one task is applied to a new problem; this is often done when there is insufficient training data [4]. In this research, transfer learning is applied to create a deep learning model that can identify and classify brain tumours in MRI scans. All that is needed to construct the proposed model are three pre-trained deep networks:

➢ A CNN-based, fully automatic method for identifying and classifying brain tumours is presented. To extract deep information from brain MR images, pre-trained models are used. On a dataset of brain MRI images with two classes (normal/tumour), three pre-trained models and an ensemble of these pre-trained and CNN models were tested.

➢ The present study uses DenseNet121 and DenseNet169 with transfer learning on Kaggle datasets to distinguish between benign and malignant brain tumours. Tumour detection, classification, and growth-rate estimation are all enhanced by MRI.
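A minimal sketch of this transfer-learning setup, assuming Keras/TensorFlow with a frozen DenseNet121 base and a hypothetical data/train/{normal,tumor} directory; the head layers, image size, and training settings are illustrative choices rather than the configuration used in the cited work.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

# ImageNet-pretrained backbone with the original classification head removed.
base = DenseNet121(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                       # pure feature extraction, no fine-tuning

model = models.Sequential([
    # Simple 0-1 scaling used here instead of densenet.preprocess_input, for brevity.
    layers.Rescaling(1.0 / 255, input_shape=(224, 224, 3)),
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),   # binary output: normal vs tumour
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical directory layout: data/train/{normal,tumor}/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32, label_mode="binary")
model.fit(train_ds, epochs=10)
```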
A. Data Pre-processing

The size of the training dataset is very important for deep convolutional neural networks. The first version of the model was built using the ImageDataGenerator class in Keras/TensorFlow. In order to train the proposed system on a sufficient number of MRI pictures, the image dataset is augmented with random modifications (rotations, height and width shifts, brightness changes, etc.). Because the augmentation parameters were carefully selected, the recommended classifier is never exposed to exactly the same picture twice.

B. Data Augmentation

There is a strong correlation between the amount of data used during training and the final performance of deep convolutional neural networks. To better train the proposed system on MRI pictures, the original dataset is augmented with additional MRI images (rotations, scale adjustments, luminance changes) generated with the ImageDataGenerator function in Keras/TensorFlow. Because the augmented samples differ from one another, the classifier never sees an identical image twice, which improves the model's generalisation ability.

C. Resizing and Cropping

The removal of the brain from the picture background is the initial stage in this process [19]. The approach shown here uses OpenCV to pinpoint the extreme points of a bounding box around the brain. It is worth noting that the MRI images utilised in this research come in a range of sizes depending on their origin; to ensure uniformity, the images are resized to 64×64×1.

D. Data Splitting

In this analysis, the data are partitioned into smaller subsets, and the suggested approach is trained, validated, and tested over the course of three phases. The first subset is used to fit the model and contains about 80% of the whole dataset; the remaining portion is split between validation and testing.

E. Convolutional Neural Network

The CNN is well known for its strong performance in image classification. A CNN automatically extracts features from the data that are given to it. It is a well-known DL architecture with feedforward connections between the layers, and this deep architecture helps such networks learn complex functions that a simple neural network cannot learn [20]. CNNs are the backbone of computer vision and can be used to classify objects, monitor activities, and analyse medical images. Their preprocessing requirements are simple and small compared to other neural classifiers because of the internal filters. A typical CNN architecture includes the following components in the classification process: (i) convolution, (ii) pooling, (iii) activation, and (iv) a dense (fully connected) layer.
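Sections A-E can be tied together in a short sketch. It assumes a hypothetical data/{normal,tumor} directory, and the augmentation ranges, validation split, filter counts, and training settings are illustrative choices, not the values actually used in the study.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation of the kind described in Sections A-B (parameter values are illustrative).
datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    brightness_range=(0.8, 1.2),
    validation_split=0.2,
)

# Hypothetical layout: data/{normal,tumor}/*.png, resized to 64x64 grayscale (Section C).
train_gen = datagen.flow_from_directory(
    "data", target_size=(64, 64), color_mode="grayscale",
    class_mode="binary", batch_size=32, subset="training")
val_gen = datagen.flow_from_directory(
    "data", target_size=(64, 64), color_mode="grayscale",
    class_mode="binary", batch_size=32, subset="validation")

# Two convolutional layers and two fully connected layers, as in Section E.
model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_gen, validation_data=val_gen, epochs=20)
```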
F. Transfer Learning

Transfer learning (TL) is a deep learning strategy in which an existing model trained on a large dataset is reused as the starting point for a new model trained on a different dataset for the same kind of problem. In general, a CNN performs better with more data than with less, so TL is useful in CNN settings where the datasets are small. The field of TL has expanded in recent years, finding applications in object detection, medical imaging, and image categorisation [21]. Large datasets, such as ImageNet, were utilised to train the models so that they can extract useful characteristics, and applications with smaller datasets, such as brain MRI data, are common. Among the benefits of TL are shorter training procedures, reduced overfitting, the ability to train with fewer data, and improved performance. EfficientNetB0, DenseNet121, and DenseNet169 were utilised in our study.
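One way the three backbones named above could be combined, matching the ensemble mentioned earlier, is simple soft voting over their predicted probabilities. The file names are hypothetical and each member is assumed to be a trained binary classifier saved with Keras:

```python
import numpy as np
import tensorflow as tf

# Hypothetical: three binary classifiers built on EfficientNetB0, DenseNet121 and
# DenseNet169 backbones, each already trained and saved to disk.
members = [tf.keras.models.load_model(p) for p in
           ("effnetb0_brain.h5", "densenet121_brain.h5", "densenet169_brain.h5")]

def ensemble_predict(batch):
    """Average the sigmoid outputs of the member models (soft voting)."""
    probs = np.mean([m.predict(batch, verbose=0) for m in members], axis=0)
    return (probs > 0.5).astype(int)   # 1 = tumour, 0 = normal
```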
CONCLUSION

Convolutional neural networks (CNNs) have been extensively explored for classification in both research and industry. In this research paper, the effectiveness of deep neural network models for classification tasks linked to the diagnosis of brain cancer is evaluated. The extension applied to the ResNet model shows that deep learning algorithms used in natural image processing can also provide superior efficiency in the analysis of medical data. The proposed system was designed to identify brain tumours in magnetic resonance imaging (MRI) scans. It has several phases: a novel boosted adaptive anisotropic diffusion filter is used for noise removal, segmentation extracts the tumour portion from the brain image, textural features are extracted, and classification uses a CNN classifier. The proposed system achieved 98.3% accuracy. A future enhancement is to work with perfusion-based MRI images, which are considerably more complex.

REFERENCES

1. Saba, T., Mohamed, A.S., El-Affendi, M., Amin, J., & Sharif, M. (2020). Brain tumor detection using fusion of hand crafted and deep learning features. Cognitive Systems Research, 59, 221-230.
2. Sharif, M., Amin, J., Nisar, M.W., Anjum, M.A., Muhammad, N., & Shad, S.A. (2020). A unified patch based method for brain tumor detection using features fusion. Cognitive Systems Research, 59, 273-286.
3. Toğaçar, M., Ergen, B., & Cömert, Z. (2020). BrainMRNet: Brain tumor detection using magnetic resonance images with a novel convolutional neural network model. Medical Hypotheses, 134, 109531.
4. Veeramuthu, A., Meenakshi, S., & Darsini, V.P. (2015). Brain image classification using learning machine approach and brain structure analysis. Procedia Computer Science, 50, 388-394.
5. Kumar, S., Dabas, C., & Godara, S. (2017). Classification of brain MRI tumor images: A hybrid approach. Procedia Computer Science, 122, 510-517.
6. Jana, G.C., Swetapadma, A., & Pattnaik, P.K. (2018). Enhancing the performance of motor imagery classification to design a robust brain computer interface using feed forward back-propagation neural network. Ain Shams Engineering Journal, 9(4), 2871-2878.
7. Sidhu, H.S., Benigno, S., Ganeshan, B., Dikaios, N., Johnston, E.W., Allen, C., & Taylor, S.A. (2017). Textural analysis of multiparametric MRI detects transition zone prostate cancer. European Radiology, 27(6), 2348-2358.
8. Anaraki, A.K., Ayati, M., & Kazemi, F. (2019). Magnetic resonance imaging-based brain tumor grades classification and grading via convolutional neural networks and genetic algorithms. Biocybernetics and Biomedical Engineering, 39(1), 63-74.
9. Fabelo, H., Ortega, S., Lazcano, R., Madroñal, D., M Callicó, G., Juárez, E., & Piñeiro, J.F. (2018). An intraoperative visualization system using hyperspectral imaging to aid in brain tumor delineation. Sensors, 18(2), 430.
10. Zhang, J., Shen, X., Zhuo, T., & Zhou, H. (2017). Brain tumor segmentation based on refined fully convolutional neural networks with a hierarchical dice loss. arXiv preprint arXiv:1712.09093.
11. Wang, N., Shang, Y., Chen, Y., Yang, M., Zhang, Q., Liu, Y., & Gui, Z. (2018). A hybrid model for image denoising combining modified isotropic diffusion model and modified Perona-Malik model. IEEE Access, 6, 33568-33582.
12. Chang, Y., Jung, C., Ke, P., Song, H., & Hwang, J. (2018). Automatic contrast-limited adaptive histogram equalization with dual gamma correction. IEEE Access, 6, 11782-11792.
13. Haleem, M.S., Han, L., van Hemert, J., Li, B., Fleming, A., Pasquale, L.R., & Song, B.J. (2018). A novel adaptive deformable model for automated optic disc and cup segmentation to aid glaucoma diagnosis. Journal of Medical Systems, 42(1), 20.
14. Wang, Y., Karimi, H.R., Shen, H., Fang, Z., & Liu, M. (2018). Fuzzy-model-based sliding mode control of nonlinear descriptor systems. IEEE Transactions on Cybernetics, 49(9), 3409-3419.
15. Uziel, R., Ronen, M., & Freifeld, O. (2019). Bayesian adaptive superpixel segmentation. In Proceedings of the IEEE International Conference on Computer Vision, 8470-8479.
16. Pradhan, J., Kumar, S., Pal, A.K., & Banka, H. (2018). A hierarchical CBIR framework using adaptive tetrolet transform and novel histograms from color and shape features. Digital Signal Processing, 82, 258-281.
17. Yang, F., Dogan, N., Stoyanova, R., & Ford, J.C. (2018). Evaluation of radiomic texture feature error due to MRI acquisition and reconstruction: a simulation study utilizing ground truth. Physica Medica, 50, 26-36.
18. Sihwail, R., Omar, K., Ariffin, K.A.Z., & Tubishat, M. (2020). Improved Harris hawks optimization using elite opposition-based learning and novel search mechanism for feature selection. IEEE Access, 8, 121127-121145.
19. Chuang, T.Y., Han, J.Y., Jhan, D.J., & Yang, M.D. (2020). Geometric recognition of moving objects in monocular rotating imagery using Faster R-CNN. Remote Sensing, 12(12), 1908.
20. Ćiprijanović, A., Snyder, G.F., Nord, B., & Peek, J.E. (2020). DeepMerge: Classifying high-redshift merging galaxies with deep neural networks. Astronomy and Computing, 32, 100390.
21. Sarhan, A.M. (2020). Detection and classification of brain tumor in MRI images using wavelet transform and convolutional neural network. Journal of Advances in Medicine and Medical Research, 15-26.
22. Singh, S.P., Wang, L., Gupta, S., Goli, H., Padmanabhan, P., & Gulyás, B. (2020). 3D deep learning on medical images: A review. arXiv preprint arXiv:2004.00218.
23. Chaubey, N.K., & Jayanthi, P. (2020). Disease diagnosis and treatment using deep learning algorithms for the healthcare system. In Applications of Deep Learning and Big IoT on Personalized Healthcare Services, IGI Global, 99-114.
24. Joseph, T., Kalaiselvan, S.A., Aswathy, S.U., Radhakrishnan, R., & Shamna, A.R. (2020). A multimodal biometric authentication scheme based on feature fusion for improving security in cloud environment. Journal of Ambient Intelligence and Humanized Computing, 1-9.