SIM UNIVERSITY
      SCHOOL OF SCIENCE AND TECHNOLOGY




      COMPUTER BASED DIAGNOSIS OF
         GLAUCOMA USING PRINCIPAL
       COMPONENT ANALYSIS (PCA): A
                COMPARATIVE STUDY



           STUDENT            : Lee You Tai Danny (B0704498)
           SUPERVISOR         : Dr. Rajendra Acharya Udyavara
           PROJECT CODE : JAN2011/ENG/015


A project report submitted to SIM University in partial fulfillment of the
   requirements for the degree of Bachelor of Electronic Engineering
                            November 2011




                                    ABSTRACT


       Glaucoma is one of many eye diseases that can lead to blindness if it is not
detected and treated in time. It is often associated with an increase in the intraocular
pressure (IOP) of the fluid (known as aqueous humor) in the eye, and it has been
nicknamed the "Silent Thief of Sight". Glaucoma accounts for 40% of blindness in
Singapore and is the second leading cause of blindness in the world.


       The detection of glaucoma through Optical Coherence Tomography (OCT) and
Heidelberg Retinal Tomography (HRT) is very expensive. This paper presents a novel
method to diagnose glaucoma using digital fundus images. Digital image processing
techniques, such as image pre-processing and texture feature extraction, are widely used
for the automatic detection of various features. We have extracted features such as
Homogeneity, Energy, Contrast, Moments, Fractal Dimension, Local Binary Patterns,
Laws' Texture Energy and Fuzzy Gray Level Co-occurrence Matrix of the eye.


       These features are validated by automatically classifying the normal and
glaucoma images using a Probabilistic Neural Network (PNN) classifier. The images
were obtained from the Kasturba Medical College, Manipal, India. The results
presented in this paper indicate that the features are clinically significant for the
diagnosis of glaucoma.


       The overall objective is to apply image processing techniques to the digital
fundus images of the eye for the analysis of glaucoma and normal eyes. In this study,
30 images of each class were analyzed. By extracting pixel intensity information from
the images, it is possible to obtain the values needed for classification.








                             ACKNOWLEDGEMENTS


Firstly, I would like to express my sincere and heartfelt appreciation to my project
supervisor, Dr. Rajendra U. Acharya, for his exceptional guidance, invaluable advice and
wholehearted support in matters of a practical and theoretical nature throughout the
project. His regular check-ins and meetings certainly motivated me to complete the
project.


Throughout my thesis-writing period, he provided encouragement, sound advice, good
teaching, good company, and lots of good ideas. I thank him for his tolerance and
patience with all my queries; whether by email or phone call, he almost always
responded without delay. The completion of the Final Year Project would not have been
possible without his excellent supervision.


I would also like to extend my warm appreciation to my colleagues and friends for
sharing their knowledge, and for their valuable contributions and help with this project.


Last but not least, I would like to thank Kasturba Medical College, Manipal, India for
providing me with the digital fundus images, which play a major role in this Final Year
Project.








                                                LIST OF FIGURES


Figure 1.1: Proposed Design for the detection of Glaucoma ........................................................ 3
Figure 2.1: Simple diagram of the parts of the eye .................................................................... 5
Figure 2.2: Glaucoma eye anatomy ........................................................................................... 6
Figure 2.3: (a) Raw image before histogram equalization ......................................................... 9
Figure 2.3: (b) Pre-processed image and its histogram equalization ....................................... 10
Figure 4.1: (a) Fundus Camera and (b) Fundus Image ............................................................ 19
Figure 4.2: (a) Normal Eye Fundus Image and (b) Glaucomatous Eye Fundus Image ........... 20
Figure 4.3: (a) Normal FD and (b) Glaucoma FD ................................................................... 22
Figure 4.4: Circularly symmetric neighbor sets for different P and R ..................................... 23
Figure 4.5: Square neighborhood and Circular neighborhood ................................................. 23
Figure 4.6: Uniform and Non-uniform patterns ....................................................................... 24
Figure 5.1: Architecture of PNN .............................................................................................. 32
Figure 5.2: Procedure of three-fold stratified cross validation ................................................ 34
Figure 5.3: The distribution plot of the GII for normal and glaucoma subjects ...................... 36








                                                  LIST OF TABLES

Table 3.1: Detail Project Plan .................................................................................................. 18
Table 4.1: LBP features for normal and glaucoma images with p-value ................................. 25
Table 4.2: LTE features of normal and glaucoma images with p-value .................................... 27
Table 4.3: FGLCM features of normal and glaucoma images with p-value.............................. 29
Table 5.1: 12 Features of normal and glaucoma PCA and their p-value ................................. 33
Table 5.2: PNN classification result......................................................................................... 34
Table 5.3: The GII values for normal and glaucoma subjects ................................................. 36








                                            TABLE OF CONTENTS


ABSTRACT ................................................................................................................................i
ACKNOWLEDGEMENTS ...................................................................................................... ii
LIST OF FIGURES ................................................................................................................. iii
LIST OF TABLES ....................................................................................................................iv
PART I .......................................................................................................................................1
   CHAPTER 1: PROJECT INTRODUCTION ........................................................................1
       1.1       Background and Motivation ........................................................................... 1
       1.2       Project Objectives ........................................................................................... 2
       1.3       Project Scope .................................................................................................. 2
   CHAPTER 2: THEORY AND LITERATURE REVIEW ....................................................4
       2.1       Anatomy of an Eye ......................................................................................... 4
       2.2       Overview of Glaucoma ................................................................................... 5
       2.3       Types of Glaucoma ......................................................................................... 6
           2.3.1 Primary open angle glaucoma ...................................................................... 6
           2.3.2 Angle closure glaucoma ............................................................................... 7
           2.3.3 Secondary Glaucoma ................................................................................... 7
       2.4       Detection of Glaucoma ................................................................................... 8
       2.5       Image Processing with MATLAB .................................................................. 9
       2.6       Statistical Application................................................................................... 10
           2.6.1 Run length matrix....................................................................................... 14
   CHAPTER 3: PROJECT MANAGEMENT........................................................................15
       3.1       Project Plan ................................................................................................... 15
           3.1.1 School facility ............................................................................................ 15
           3.1.2 Internet broadband ..................................................................................... 15
       3.2       Project Task and Schedule ............................................................................ 15
   CHAPTER 4: DESIGN AND ALGORITHM .....................................................................19
       4.1       Project Approach and Method ...................................................................... 19
       4.2       Different Texture Features Study ................................................................. 21
           4.2.1 Fractal Dimension ...................................................................................... 21


v|Page
JAN2011/ENG/015                                                                               LEE YOU TAI DANNY
                                                                                                       (B0704498)




           4.2.2 Local Binary Patterns ................................................................................. 22
           4.2.3 Laws' Texture Energy................................................................................ 26
           4.2.4 Fuzzy Gray Level Co-occurrence Matrix .................................................. 28
   CHAPTER 5: CLASSIFICATIONS AND RESULTS........................................................30
       5.1       Principal Component Analysis (PCA) .......................................................... 30
       5.2       Classifier Used .............................................................................................. 31
       5.3       Results........................................................................................................... 32
       5.4       Glaucoma Integrated Index........................................................................... 35
   CHAPTER 6: DISCUSSION, CONCLUSION AND RECOMMENDATION ..................37
       6.1       Discussion ..................................................................................................... 37
       6.2       Conclusion .................................................................................................... 38
       6.3       Recommendation .......................................................................................... 39
PART II ....................................................................................................................................40
   CRITICAL REVIEW AND REFLECTION........................................................................40
   REFERENCES.....................................................................................................................41
   APPENDIXES .....................................................................................................................43








                                      PART I

CHAPTER 1: PROJECT INTRODUCTION



1.1      Background and Motivation


The eyes are among the most complex, sensitive and delicate organs in the human body;
they are the organs through which we view the world, and they are responsible for four
fifths of all the information our brain receives. Blindness affects 3.3 million Americans
aged above 40 years, and this number may reach 5.5 million by the year 2020. The
leading causes of visual impairment and blindness are cataract, glaucoma, macular
degeneration and diabetic retinopathy. Among these, glaucoma affects more than 3
million people living in the United States and is the leading cause of blindness among
African Americans. Worldwide, it is the second leading cause of blindness [Global data
on visual impairment in the year 2002]. It affects one in two hundred people aged fifty
years and younger, and one in ten over the age of eighty years.
[http://www.afb.org/seniorsite.asp?SectionID=63&TopicID=286&DocumentID=3198]


There is no cure for glaucoma yet; hence early detection and prevention is the
only way to treat glaucoma and avoid total loss of vision. Optical Coherence
Tomography (OCT) and Heidelberg Retinal Tomography (HRT) are used to
detect glaucoma, but their cost is very high. This paper presents a novel method
for glaucoma detection using digital fundus images. Digital image processing
techniques such as morphological operations, histogram equalization, feature
extraction and normalization are applied, followed by principal component
analysis (PCA) and statistical analysis using Student's t-test; the selected features
are validated by classifying the normal and glaucoma images using a
Probabilistic Neural Network (PNN).








1.2       Project Objectives


This project requires the following tasks to achieve the main objective:
      - The main objective of this project is to analyze and diagnose glaucoma from
        digital fundus images using image processing techniques.
      - Detection and extraction of texture features and normalization of the data.
      - Application of principal component analysis (PCA) to extract components from
        these normalized features.
      - Study of various data mining techniques, such as K-Nearest Neighbor (K-NN),
        Naïve Bayes Classifier (NBC) and Probabilistic Neural Network (PNN), for
        classification.
      - The academic goal of this project is to develop skills in research, MATLAB
        programming and analysis.




1.3       Project Scope


The project will include the following proposed scheme:
      - Acquiring digital fundus images of normal and glaucoma eyes from subjects in
        the age group of 20 to 70 years. The images were collected from the Kasturba
        Medical College, Manipal, India, where they were photographed and certified
        by the doctors in the ophthalmology department.
      - An image processing system that extracts the relevant features for the automatic
        diagnosis of glaucoma.
      - Detection and extraction of various texture features.
      - Normalization and analysis of the texture features using MATLAB and the
        Principal Component Analysis (PCA) method.
      - Classification of the data using a Probabilistic Neural Network (PNN).








[Figure 1.1 block diagram:]

    NORMAL / GLAUCOMA fundus images
            |
    Texture feature extraction using image pre-processing techniques
            |
    Statistical analysis using the Principal Component Analysis (PCA) method
            |
    Classification using a Probabilistic Neural Network (PNN)
            |
    Result: NORMAL EYE / GLAUCOMA EYE

Figure 1.1: Proposed Design for the detection of Glaucoma








CHAPTER 2: THEORY AND LITERATURE REVIEW



2.1    Anatomy of an Eye


The human eye is an organ which reacts to light for many different purposes and is made
up of three coats enclosing three transparent structures. The outermost layer is composed
of the cornea and sclera; the middle layer consists of the choroid, ciliary body and iris;
the innermost is the retina. Light rays reflected off an object enter through the cornea,
which refracts the rays that pass through the pupil. Surrounding the pupil is the iris, the
colored portion of the eye. The pupil opens and closes to regulate the amount of light
passing through it; hence we are able to see the dilation of the pupils. The light rays then
pass through the lens located behind the pupil. The lens changes the path of the rays by
further bending and focusing them onto the retina located at the back of the eye.


The retina consists of two major types of light-sensitive receptors, also called tiny light-
sensing nerve cells: cone cells and rod cells. The cones enable us to see in bright light,
producing photopic vision. Cones are mostly concentrated in and near the fovea, with
only a few present at the sides of the retina. They give clear and sharp vision when one
looks at an object directly, and they detect colors and fine details. The rods, however,
enable us to see in the dark, producing scotopic vision, and they detect motion in the
dark. Rods are located outside the macula and extend all the way to the outer edge of
the retina; rod density is greater in the peripheral retina than in the central retina, so
they provide peripheral or side vision. Cones and rods are connected through
intermediate cells in the retina to the nerve fibers of the optic nerve. When rods and
cones are stimulated by light, the nerves send impulses to the brain through these fibers.
Figure 2.1 illustrates a simple diagram of the parts of the eye.







Figure 2.1: Simple diagram of the parts of the eye




2.2    Overview of Glaucoma


Glaucoma is an eye disease in which the optic nerve is damaged by elevated intraocular
pressure inside the eye caused by a build-up of excess fluid. This pressure can impair
vision by causing irreversible damage to the optic nerve and to the retina, and it can lead
to blindness if it is not detected and treated in time. Glaucoma results in peripheral vision
loss and is an especially dangerous eye condition because it frequently progresses
without obvious symptoms. This is why it is often referred to as "The Silent Thief of
Sight."


There is no cure for glaucoma yet, although it can be treated. Worldwide, it is the second
leading cause of blindness [Global data on visual impairment in the year 2002]. It
affects one in two hundred people aged fifty years and younger, and one in ten over the
age of eighty years. The damage to the optic nerve from glaucoma cannot be reversed.







However, lowering the pressure in the eye can prevent further damage to the optic nerve
and further peripheral vision loss.


There are various types of glaucoma that can occur and progress without obvious
symptoms or signs. Even though there is no cure for glaucoma, early detection and
prevention can avoid total loss of vision. Glaucoma can be divided into two main types:
(1) primary open angle glaucoma and (2) angle closure glaucoma. There is also a third
form, known as secondary glaucoma, which is explained in a later section.




Figure 2.2: Glaucoma eye anatomy




2.3      Types of Glaucoma



         2.3.1 Primary open angle glaucoma
   This type of glaucoma is the most common (sometimes called chronic glaucoma)
   and its symptoms are slow to develop. As the glaucoma progresses, the side or
   peripheral vision fails, which may cause a person to miss objects out of the side or
   corner of the eye. It happens when the eye's drainage canals become clogged over
   time or the eye over-produces aqueous fluid, which causes the pressure inside the
   eye to build to abnormal levels. The intraocular pressure (IOP) rises because the
   correct amount of fluid cannot drain out of the eye. It affects 70% to 80% of those
   who suffer from the disorder and accounts for 90% of glaucoma cases in the United
   States. It is painless and does not cause acute attacks. It can develop gradually and
   go unnoticed for many years, but this type of glaucoma usually responds well to
   medication, especially if caught early and treated.



     2.3.2 Angle closure glaucoma

   Angle closure glaucoma, also known as acute narrow angle glaucoma, accounts for
   less than 10% of glaucoma cases in the United States. Although it is rare and
   different from open angle glaucoma, it is the most serious form of the disease. The
   problem occurs more commonly in farsighted elderly people, particularly women,
   and often occurs in both eyes. Angle closure glaucoma occurs primarily in patients
   who have a shallow space between the cornea at the front of the eye and the colored
   iris that lies just behind the cornea. As the eye ages, the pupil becomes smaller,
   restricting the flow of fluid to the drainage site. As fluid builds up and blockage
   occurs, a rapid rise in intraocular pressure can follow.


  This kind of glaucoma is normally very painful because of the sudden increase in
  pressure inside the eye. The symptoms of an acute attack are more severe and can be
  totally disabling. They include severe pain, often accompanied by nausea and
  vomiting. Diabetes can be a contributing cause to the development of glaucoma.
  Treatment of angle closure glaucoma is known as peripheral iridectomy and usually
  involves surgery to remove a small portion of the outer edge of the iris to allow
  aqueous fluid to flow easily to the drainage site.



     2.3.3 Secondary Glaucoma
  Both open angle glaucoma and angle closure glaucoma can be primary or secondary
  conditions. Primary conditions are when the cause is unknown, unlike secondary
  conditions which can be traced to a known cause. Secondary glaucoma may be
   caused by a variety of medical conditions, medications, eye abnormalities and
   physical injuries. The treatments of secondary glaucoma are frequently associated
   with eye surgery.


   Symptoms of glaucoma include:
       - Headaches
       - Intense pain
       - Blurred vision
       - Nausea or vomiting
       - Medium dilation of the pupil
       - Bloodshot eyes and increased sensitivity
   In general, the field of vision narrows to the point where one is unable to see
   clearly.


2.4      Detection of Glaucoma


There are three different tests which can detect glaucoma:
     - Ophthalmoscopy
     - Tonometry
     - Perimetry
Two routine eye tests for regular glaucoma check-ups are tonometry and
ophthalmoscopy. Most glaucoma tests are very time consuming and also need special
skills and diagnostic equipment. Therefore, new techniques that can diagnose glaucoma
at early stages with accuracy and speed, even in the hands of less skilled operators, are
urgently needed. In recent years, computer-based systems have made glaucoma
screening easier. Imaging systems, such as the fundus camera, optical coherence
tomography (OCT), Heidelberg retina tomography (HRT) and scanning laser
polarimetry, have been extensively used for eye diagnosis. HRT, confocal laser scanning
tomography and OCT can show retinal nerve fiber damage even before damage to the
visual fields occurs. However, such equipment is very expensive and only some hospitals
are able to afford it. Hence fundus cameras can be used as an alternative by many
ophthalmologists to diagnose glaucoma. Image processing techniques allow the
extraction of features that provide useful information for diagnosing glaucoma.




2.5      Image Processing with MATLAB


MATLAB is a high-level language and interactive environment that enables users to
perform computationally intensive tasks faster than with traditional programming
languages such as C, C++, and FORTRAN. In this project, MATLAB is the main
software used to implement the image processing part as well as texture feature
extraction and classification. After texture feature extraction, the data are normalized
and the selected Principal Component Analysis (PCA) features are fed to the PNN
classifier for classification.


Image pre-processing is used mainly to improve the image contrast through histogram
equalization, which increases the dynamic range of the image histogram and of the pixel
intensity values in the input image. The output image has an approximately uniform
distribution of intensities and increased contrast. Figures 2.3(a) and (b) show the raw
image and the pre-processed fundus image together with its equalized histogram.
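As an illustration, a minimal MATLAB sketch of this pre-processing step is given below. It is an assumed example using the Image Processing Toolbox, not the project's exact code, and the file name fundus.bmp is hypothetical:

    I = imread('fundus.bmp');        % hypothetical fundus image file
    G = rgb2gray(I);                 % work on the grayscale image
    J = histeq(G);                   % spread intensities over the full dynamic range
    figure; imhist(J);               % equalized histogram, cf. Figure 2.3(b)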




Figure 2.3: (a) Raw image before histogram equalization








[Histogram plot of the equalized image: pixel count (0 to 10000) versus gray level (0 to 250)]

    Figure 2.3: (b) Pre-processed image and its equalized histogram



    2.6     Statistical Application


    Statistical methods are used throughout this project. Texture features were extracted
    from each digital fundus image using image processing techniques: co-occurrence
    matrix features (Contrast, Homogeneity, Entropy, Angular Second Moment, Energy,
    Mean) and run length matrix features (Short Run Emphasis, Long Run Emphasis, Run
    Percentage, Gray Level Non-uniformity and Run Length Non-uniformity). The collected
    data were normalized and reduced using Principal Component Analysis (PCA). Finally,
    the selected PCA features were classified using a Probabilistic Neural Network (PNN).


    Co-occurrence matrix
    For an image of size $M \times N$, we can perform a second-order statistical texture
    analysis by constructing the gray level co-occurrence matrix (GLCM) [Tan et al., 2009]:

        $C_d(i,j) = \left|\left\{ \big((p,q),(p+\Delta x,\ q+\Delta y)\big) : I(p,q)=i,\ I(p+\Delta x,\ q+\Delta y)=j \right\}\right|$        (2.1)

    where $(p,q),(p+\Delta x,\ q+\Delta y) \in M \times N$, $d=(\Delta x, \Delta y)$, and $|\cdot|$
    denotes the cardinality of a set. For a pixel in the image having gray level $i$, the
    probability that the gray level of a pixel at a distance $d=(\Delta x, \Delta y)$ away is $j$ is
    defined as:

        $P_d(i,j) = \dfrac{C_d(i,j)}{\sum_i \sum_j C_d(i,j)}$        (2.2)



With Eqs. (2.1) and (2.2), we obtain the following features:



                                 P i, j 
                                                        2
Energy:                 E                         d                             (2.3)
                                    i       j




Energy is the sum of the squared elements in the gray level co-occurrence matrix and is
also called the angular second moment. It is a measure of the denseness or orderliness of
the image.


Contrast:      $Co = \sum_i \sum_j (i-j)^2\, P_d(i,j)$        (2.4)




Contrast measures the local variations in the gray level co-occurrence matrix. It weights
the elements that do not lie on the main diagonal and returns a measure of the intensity
contrast between a pixel and its neighboring pixels over the whole image.


                                  P i, j 
Homogeneity:            H    d                                               (2.5)
                              j 1  i  j 
                                            2
                            i




Homogeneity measures how closely the elements of the GLCM are distributed to its
main diagonal. It is inversely related to contrast. The value '1' in the denominator
prevents division by zero.


Entropy:       $En = -\sum_i \sum_j P_d(i,j)\,\ln P_d(i,j)$        (2.6)








Entropy originates as a thermodynamic property and here represents the degree of
disorder in the image. The entropy value is largest when all the elements of the
co-occurrence matrix are equal and small when the elements are very unequal.
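A minimal MATLAB sketch of these GLCM features (Eqs. (2.3)-(2.6)) is shown below. It is an assumed illustration using the Image Processing Toolbox, not the project's exact code, and the file name fundus.bmp is hypothetical:

    G     = rgb2gray(imread('fundus.bmp'));              % grayscale fundus image
    glcm  = graycomatrix(G, 'Offset', [0 1], 'Symmetric', true);   % GLCM, cf. Eq. (2.1)
    stats = graycoprops(glcm, {'Energy', 'Contrast', 'Homogeneity'});
    Pd    = glcm / sum(glcm(:));                          % normalized GLCM, Eq. (2.2)
    En    = -sum(Pd(Pd > 0) .* log(Pd(Pd > 0)));          % entropy, Eq. (2.6)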


The moments $m_1$, $m_2$, $m_3$ and $m_4$ are defined as:

    $m_g = \sum_i \sum_j (i-j)^g\, P_d(i,j)$        (2.7)




where g is the integer power exponent that defines the moment order. Moments are the
statistical expectations of certain power functions of a random variable and measure the
shape of a set of data points.
The first moment is the mean, which is the average of the pixel values in an image.
The second moment is the variance, $m_2 = E[(x-\mu)^2]$, where $E(x)$ is the expected
value of x; its square root, the standard deviation (denoted by the Greek symbol σ),
determines how much variation there is from the average or mean.
The third moment measures the degree of asymmetry in the distribution. It is also called
skewness, and its value can be positive, negative or even undefined.

The fourth moment, known as kurtosis, measures the relative peakedness or flatness of
the probability distribution of a real-valued random variable. If a distribution has a peak
at the mean and long tails, the fourth moment is high and the kurtosis is positive
(leptokurtic); conversely, bounded, flat distributions tend to have low kurtosis
(platykurtic).
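As a quick illustration (an assumed sketch, not the project's code), these four moments can be computed on the pixel intensities in MATLAB; skewness and kurtosis require the Statistics Toolbox, and fundus.bmp is a hypothetical file name:

    G  = rgb2gray(imread('fundus.bmp'));
    x  = double(G(:));        % pixel intensities as a vector
    m1 = mean(x);             % first moment: mean
    m2 = std(x);              % square root of the second central moment
    m3 = skewness(x);         % third moment: asymmetry
    m4 = kurtosis(x);         % fourth moment: peakedness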


For the difference statistics, which are a subset of the co-occurrence matrix statistics,
the difference matrix can be obtained as [Tomita et al., 1990]:

    $P_\delta(k) = \sum_i \sum_j C_d(i,j), \quad |i-j| = k$        (2.8)








where $|i-j| = k$, $k = 0, 1, \ldots, n-1$, and n is the number of gray scale levels [9].
Each entry in the difference matrix is the sum of the probabilities that the gray-level
difference between two points separated by δ is k. We can derive the following
properties from the difference matrix [Weszka et al., 1976]:


                                           n1
           AngularSecondMoment :  P  ) 2
                                      (k
                                       k 0                                           (2.9)



The angular second moment (also known as uniformity) is large when some $P_\delta(k)$
values are high and others are low. It is a measure of local homogeneity and behaves
oppositely to entropy.


                         n1
           Contrast :  k 2 P(k )
                         k 0                                                       (2.10)
Contrast is the moment of inertia about the origin and also known as the second moment
of Pδ .


                     n 1
Entropy:            P (k ) log P (k )
                       
                     k 0                                                            (2.11)



Entropy is directly proportional to unpredictability. Entropy is smallest when Pδ (k)

values are unequal and largest when Pδ (k) values are equal.


                  n 1
Mean:              kP (k )
                  k 0
                            
                                                                                     (2.12)



Mean is large when they are far from the origin and small when Pδ (k) values are
concentrated near the origin.






        2.6.1 Run length matrix
In the run length matrix $P_\theta(i, j)$, each cell contains the number of runs in which gray
level i appears successively j times in the direction θ; the variable j is termed the run
length. The resulting matrix characterizes gray-level runs by the gray tone, the length
and the direction of the run. This method allows higher-order statistical texture features
to be extracted. As a common practice, run length matrices for θ = 0°, 45°, 90° and 135°
were calculated and the following features obtained [Galloway et al., 1975]:


                                        P i, j 
Short run emphasis:    
                        i       j
                                         

                                           j2
                                                      P i, j 
                                                      i   j
                                                                                                           (2.13)



Long run emphasis:       j P i, j   P i, j 
                            i       j
                                            2
                                                
                                                          i   j
                                                                                                           (2.14)



                                         
                                          2
                             
Gray Level Non-uniformity:   P i, j 
                                                                      P i, j    
                                                                                                            (2.15)
                           i  j                                    i       j




                                           
                                           2
                                       
Run Length Non-uniformity:   P i, j 
                           j  i
                                 
                                         
                                                                        P i, j 
                                                                          i       j
                                                                                                           (2.16)



Run Percentage:         P i, j 
                        i       j
                                                    A                                                      (2.17)



where A is the area of interest in the image. This particular feature is the ratio between
the total number of observed runs in image and the total number of possible runs if all
runs had a length of one.
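A minimal MATLAB sketch of the run length matrix for the 0° direction, with Short Run Emphasis computed as in Eq. (2.13), is given below. This is an assumed re-implementation for illustration, not the project's code; the file name fundus.bmp and the choice of 16 gray levels are hypothetical:

    I = imread('fundus.bmp');
    G = rgb2gray(I);
    nLevels = 16;
    Q = double(floor(double(G) / 256 * nLevels)) + 1;   % quantize to 1..nLevels
    maxRun = size(Q, 2);
    P = zeros(nLevels, maxRun);       % run length matrix P(i, j)
    for r = 1:size(Q, 1)              % scan each row (0-degree runs)
        row = Q(r, :);
        runLen = 1;
        for c = 2:numel(row)
            if row(c) == row(c-1)
                runLen = runLen + 1;
            else
                P(row(c-1), runLen) = P(row(c-1), runLen) + 1;
                runLen = 1;
            end
        end
        P(row(end), runLen) = P(row(end), runLen) + 1;  % close the last run
    end
    j   = 1:maxRun;
    SRE = sum(sum(bsxfun(@rdivide, P, j.^2))) / sum(P(:));   % Eq. (2.13)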








CHAPTER 3: PROJECT MANAGEMENT



3.1     Project Plan


From the start of the project, proper planning is very important as it contributes to the
success of the project. Time management plays a critical role in every project and leads
to a well-graded project. I had to juggle the project, examinations and my full-time job,
so having a good plan was essential; my project supervisor, Dr. Rajendra Acharya U,
also did his part by meeting up regularly and constantly reminding me to keep to the
planned schedule. To make the project possible, a Gantt chart (shown in Appendix Q)
was used to create the project plan and monitor the progress.




        3.1.1 School facility
Access to the UniSIM library or any neighborhood library is a must, as most of my
research and reading on the various applications of image processing and texture feature
extraction came from library shelves and online IEEE journals. MATLAB Central also
helped improve my programming skills for this project.



        3.1.2 Internet broadband
Access to the internet is very important, as much of the research can be done with a
click of the mouse. It is especially essential because many resources are available on the
World Wide Web. A broadband connection helps speed up the downloading of the
information we need.




3.2     Project Task and Schedule







There were slight changes between the planned schedule and the actual dates. Such
changes were unavoidable, as there were many other commitments such as assignments,
examinations, work and unforeseen circumstances. The project plan from the initial
report is presented below, and discrepancies between the schedules are discussed.


Project tasks are divided into nine sections.
1. Project Consideration and Selection Process
2. Literature Search
3. Preparing for Initial Report (TMA01) @ Project Proposal
4. Digital image processing using MATLAB
5. Statistical Analysis
6. Texture features extraction and normalization
7. Classification
8. Preparing for Final Report (Thesis)
9. Preparing for oral Presentation


     - In Task 1, we were allowed to consider the available projects and had to select
       one of interest, which took about 7 days. The project committee then allocated
       the proposed project, which took about 8 days to approve.
     - Since literature research is one of the most important steps for understanding the
       project, 31 days were used for Task 2. It focused mainly on library reference
       books and online IEEE journals.
     - Preparation of the initial report partially depends on Task 2. Tasks 2 and 3 were
       carried out at the same time, and an additional 7 days were used to complete the
       proposal.
     - Task 4 was scheduled for 4 weeks. This task was to study digital image
       processing and how to write the MATLAB code as well as the algorithms.
     - The project work was set to be completed by 1st Oct 2011. The duration for
       Tasks 5, 6 and 7 is 150 days. These are the main tasks of the project and focus on
       texture feature extraction and normalization as well as classification.






     - For Task 8, the preparation of the final report is scheduled for 84 days, as it is a
       portrayal of our whole project work and 40% of the capstone project score is
       carried by this task. It starts from 1st Aug 11, ends at the submission date of
       14th Nov 11, and is carried out concurrently with Task 7.
     - For Task 9, preparation for the oral presentation starts 3 weeks before the writing
       of the report is finished. There are 21 days available for this task.


    Table 3.1 shows the detailed project plan.








Computer Based Diagnosis of Glaucoma Using PCA: A Comparative Study

Tasks Description                                              Start Date   End Date    Duration (Days)
1. Project Consideration and Selection Process                 2-Jan-11     16-Jan-11        15
  1.1 Project consideration                                    2-Jan-11     8-Jan-11          7
  1.2 Selection of the project                                 9-Jan-11     16-Jan-11         8
2. Literature Search                                           17-Jan-11    16-Feb-11        31
  2.1 Research on IEEE online journals, relevant
      reference books and former student thesis reports        17-Jan-11    30-Jan-11        14
  2.2 Analyze and study relevant books and journals            31-Jan-11    16-Feb-11        17
3. Preparation of initial report (TMA01)                       17-Feb-11    7-Mar-11         20
4. Research on project components                              9-Mar-11     8-Apr-11         29
  4.1 An introduction to image processing using MATLAB         9-Mar-11     20-Mar-11        12
  4.2 Extraction and compiling of MATLAB codes                 21-Mar-11    8-Apr-11         17
5. Digital image processing using MATLAB                       9-Apr-11     2-May-11         26
  5.1 Familiarization with MATLAB codes                        7-Apr-11     15-Apr-11         9
  5.2 Extraction and collecting texture features               16-Apr-11    24-Apr-11         9
  5.3 Overview of texture features                             25-Apr-11    2-May-11          8
6. Extraction of features using PCA                            3-May-11     19-Jun-11        48
  6.1 Classification with various data mining techniques       3-May-11     25-May-11        23
  6.2 Statistical analysis                                     26-May-11    12-Jun-11        18
  6.3 Glaucoma integrated index                                13-Jun-11    19-Jun-11         7
7. Classification result and comparison                        20-Jun-11    31-Aug-11        72
8. Preparation for final report                                1-Sep-11     31-Oct-11        61
  8.1 Writing skeleton of final report                         1-Sep-11     7-Sep-11          8
  8.2 Writing literature search                                8-Sep-11     15-Sep-11         8
  8.3 Writing introduction of report                           16-Sep-11    22-Sep-11         8
  8.4 Writing main body of report                              23-Sep-11    14-Oct-11        22
  8.5 Writing conclusion and further study                     15-Oct-11    20-Oct-11         5
  8.6 Finalizing and amendments of report                      21-Oct-11    31-Oct-11        10
9. Preparation for oral presentation                           1-Nov-11     2-Dec-11         32
  9.1 Review the whole project for presentation                1-Nov-11     27-Nov-11        27
  9.2 Prepare poster for presentation                          28-Nov-11    2-Dec-11          5

Resources: library resources (reference books and online IEEE journals); MATLAB
software; web resources (IEEE journals, past theses, NI website, related reference
books); school lab facility and personal computer.

                                          Table 3.1: Detail Project Plan





CHAPTER 4: DESIGN AND ALGORITHM



4.1     Project Approach and Method


In this project, glaucoma is diagnosed using digital fundus images. Texture features
were extracted from the digital fundus images and reduced using the Principal
Component Analysis (PCA) method. Sixty fundus images (30 normal images and 30
glaucoma images, from subjects in the age group of 20 to 70 years) were collected from
the Kasturba Medical College, Manipal, India. The images are stored in bitmap format
with an image size of 560x720 pixels. A fundus camera is designed to take pictures of
the inner surface of the eye; it is one of the most popular devices used for
ophthalmoscopy and is used by doctors to diagnose eye diseases as well as to monitor
their progression. Figure 4.1(a) shows the fundus camera (which consists of a
microscope attached to a camera and light source) and Figure 4.1(b) shows a fundus
image.




Figure 4.1: (a) Fundus Camera and (b) Fundus Image


Figure 4.2(a) shows a normal eye digital fundus image and Figure 4.2(b) shows a
glaucomatous eye digital fundus image, both with a resolution of 560 x 720 pixels.








Figure 4.2: (a) Normal Eye Fundus Image and (b) Glaucomatous Eye Fundus Image


The method we employed for the analysis of the images involves the image processing
procedure described in detail in the following sections. The block diagram of the
proposed system for the detection of glaucoma is shown in Figure 1.1. The first step is
pre-processing of the image data to remove the non-uniform background, which may be
due to non-uniform illumination or variation in the pigment color of the eye. An adaptive
histogram equalization operation was performed to solve this problem. This technique
computes several histograms, each corresponding to a distinct section of the image, and
uses them to redistribute the lightness values of the image. Subsequently, these images
were converted to grayscale.
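A minimal MATLAB sketch of this first step is shown below. It is an assumed illustration using adapthisteq from the Image Processing Toolbox, not the project's exact code, and fundus.bmp is a hypothetical file name:

    I = imread('fundus.bmp');
    J = I;
    for c = 1:3                          % adaptive histogram equalization per colour channel
        J(:,:,c) = adapthisteq(I(:,:,c));
    end
    G = rgb2gray(J);                     % subsequent conversion to grayscale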


As a second step, various groups of texture features were extracted from each digital
fundus image. The two groups of normal and glaucoma features were normalized. A
p-value is calculated from the normalized data using Student's t-test, which assesses
whether the means of two groups are statistically different from each other. The p-value
is calculated for each feature, and p-values below 0.05 are regarded as statistically
significant. A lower p-value indicates that the two groups are statistically different. The
ability to assess the difference between the data groups is important for this study,
because we want to assess the capability of the extracted features to discriminate
between glaucoma and normal data.
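A sketch of this significance test is shown below, assuming fNormal and fGlaucoma are column vectors holding one texture feature for the 30 normal and 30 glaucoma images (these variable names are hypothetical; ttest2 is from the Statistics Toolbox):

    [~, p] = ttest2(fNormal, fGlaucoma);      % two-sample Student's t-test
    if p < 0.05
        disp('Feature is statistically significant at the 5% level.');
    end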







The PCA method is used to extract features from the images; these features are fed to
various data mining techniques, namely K-Nearest Neighbor (K-NN), Naïve Bayes
Classifier (NBC), Decision Tree (DeTr) and Probabilistic Neural Network (PNN), for
comparison.
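The sketch below illustrates this PCA-plus-PNN pipeline under assumptions: features is a 60 x 12 matrix of normalized texture features (one row per image), labels is a 1 x 60 vector of class indices (1 = normal, 2 = glaucoma), and the PNN spread of 0.1 is an arbitrary choice; pca, newpnn, ind2vec and sim come from the Statistics and Neural Network Toolboxes. It is an illustration, not the project's actual code:

    [coeff, score] = pca(zscore(features));     % normalize, then principal components
    X    = score';                              % components as columns for the network
    net  = newpnn(X, ind2vec(labels), 0.1);     % train the Probabilistic Neural Network
    pred = vec2ind(sim(net, X));                % classify (resubstitution example)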




4.2     Different Texture Features Study


Texture features derived from the Gray Level Co-occurrence Matrix (GLCM) (Entropy,
Energy, Homogeneity, Contrast, Symmetry, Correlation, Moments, Mean) and from the
run-length matrix (Short Run Emphasis, Long Run Emphasis, Run Percentage, Gray
Level Non-uniformity and Run Length Non-uniformity) were explained earlier. In this
section, the Fractal Dimension, Local Binary Patterns, Laws' Texture Energy and Fuzzy
Gray Level Co-occurrence Matrix are briefly described.



        4.2.1 Fractal Dimension
The concept of the fractal was first introduced by Benoit Mandelbrot in 1975. In his
view, fractal objects have three important properties: (1) self-similarity, (2) iterative
formation and (3) fractional dimension. There are many instances of fractals in nature
and mathematics, such as the Cantor set, the Sierpinski triangle, the Koch snowflake, the
Julia set and the fern fractal. There are many specific definitions of fractal dimension.
The most important theoretical fractal dimensions are:
      1) Hausdorff dimension
      2) Packing dimension
      3) Rényi dimensions
      4) Box-counting dimension
      5) Correlation dimension

The fractal dimension is estimated as the exponent of a power law. Fractal analysis can
be applied in various areas, and the most common interest is to determine the fractal
dimension of the objects concerned. The fractal dimension is a real number used to
characterize the geometric complexity of a set A. A bounded set A in Euclidean n-space is
self-similar if A is the union of $N_r$ distinct (non-overlapping) copies of itself, each
scaled up or down by a factor of r. The fractal dimension D of A is given by relations
(4.1) and (4.2):

    $1 = N_r\, r^{D}$        (4.1)

    $D = \dfrac{\log(N_r)}{\log(1/r)}$        (4.2)




[Plots of log(N) versus log(1/r) for (a) a normal image and (b) a glaucoma image]

                     Figure 4.3: (a) Normal FD and (b) Glaucoma FD


    The Fractal Dimension (FD) is extracted using the MATLAB program fractal.m, which
    generated the fractal data plotted in the graphs above. The MATLAB code is listed in
    Appendix N.
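    For orientation, a minimal box-counting sketch is given below; it is an assumed
    illustration of the idea behind Eq. (4.2), not the code of fractal.m (fundus.bmp and
    the binarization step are hypothetical choices):

        I  = rgb2gray(imread('fundus.bmp'));
        BW = im2bw(I, graythresh(I));             % binarize the structure of interest
        sizes = 2.^(1:6);                         % box sizes in pixels
        N = zeros(size(sizes));
        for k = 1:numel(sizes)
            s = sizes(k);
            % count boxes of side s containing at least one foreground pixel
            counts = blockproc(BW, [s s], @(b) any(b.data(:)));
            N(k) = sum(counts(:));
        end
        p = polyfit(log(1 ./ sizes), log(N), 1);  % slope of log(N) vs log(1/r)
        D = p(1);                                 % estimated fractal dimension, Eq. (4.2)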



                         4.2.2 Local Binary Patterns
    Local Binary Patterns (LBP) is a very powerful texture feature based on the discrete
    occurrence histogram of the "uniform" patterns computed over an image or an image
    region. It is defined as a gray-scale invariant texture measure that effectively combines
    structural and statistical approaches through this occurrence histogram. The local binary
    pattern detects microstructures (e.g., edges, lines, spots, flat areas) whose underlying
    distribution is estimated by the histogram. The LBP operator can be found in
most real-world applications because of its simple computation, which makes it possible
to analyze images in real time, from texture segmentation to face recognition. Image
texture is characterized by two orthogonal properties: spatial structure (pattern) and
contrast (the 'amount' of local image texture), where the spatial pattern is affected by
rotation and the contrast is affected by the gray scale.


The LBP operator provides a unified approach to the traditionally divergent statistical
and structural models of texture analysis. In this work, the LBP feature vector is
calculated as follows. For each pixel, P points are chosen on the circumference of a
circle of radius R centered on that pixel. On this circular neighborhood, the grayscale
intensity values at points that do not coincide exactly with pixel locations are estimated
by interpolation. Figure 4.4 shows circularly symmetric neighbor sets for different
values of P and R. Figure 4.5 depicts a square neighborhood and a circular
neighborhood.




Figure 4.4: Circularly symmetric neighbor sets for different P and R




Figure 4.5: Square neighborhood and Circular neighborhood





Let gc be the intensity of the center pixel and gp, p = 0, …, P-1, be the intensities of the P neighboring points. These P points are converted to a circular bit-stream of 0s and 1s according to whether each intensity is less than or greater than the intensity of the center pixel. Only the uniform patterns are used for further computation of the texture descriptor, while the non-uniform patterns are assigned to a single bin. These "uniform" fundamental patterns have a uniform circular structure containing very few spatial bitwise 0/1 transitions U. The texture primitives detected by the uniform LBP patterns include bright spots (U=0), flat areas or dark spots (U=8), and edges of varying positive and negative curvature (U=1-7). Figure 4.6 shows uniform and non-uniform patterns.




Figure 4.6: Uniform and Non-uniform patterns


Therefore, a rotation-invariant measure LBP_{P,R} is calculated using the uniformity measure U, which is based on the number of transitions in the neighborhood pattern. Only patterns with U ≤ 2 are assigned an LBP code, as shown in Equation (4.3):




LBP_{P,R}(x) = \begin{cases} \sum_{p=0}^{P-1} s(g_p - g_c), & U(x) \le 2 \\ P + 1, & \text{otherwise} \end{cases} \qquad (4.3)






where

s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}


The center pixel is labeled as uniform if the number of bit transitions in the circular bit-stream is less than or equal to 2. A look-up table is generally used to count the bit transitions and reduce the computational complexity. Multi-scale analysis is performed by combining the information provided by N operators and summing the operator-wise similarity scores into an aggregate similarity score; the LBP operator uses circles of various radii around the center pixels and constructs a separate LBP image for each scale. In this project, the energy and entropy of the LBP images constructed over different scales (R = 1, 2 and 3, with the corresponding pixel counts P being 8, 16 and 24 respectively) were used as feature descriptors. A total of nine LBP based features were extracted from each studied image. Table 4.2 shows the nine LBP based features for normal and glaucoma images with their respective p-values.


        Features                Normal              Glaucoma               P-Value
          LBP1                2.27 ± 0.363         1.93 ± 0.288            < 0.0002
         LBP2                 0.321 ± 0.131     0.183 ± 8.555E-02          < 0.0001
         LBP3               0.401 ± 7.747E-02   0.326 ± 5.124E-02          < 0.0001
         LBP4                 2.86 ± 0.592         2.45 ± 0.390            < 0.0026
         LBP5                 0.539 ± 0.148     0.394 ± 9.297E-02          < 0.0001
         LBP6               0.475 ± 1.087E-02   0.478 ± 8.898E-03           < 0.26
         LBP7                  3.53 ± 2.09         3.01 ± 0.503            < 0.0066
         LBP8                 0.604 ± 0.124     0.481 ± 7.595E-02          < 0.0001
         LBP9               0.473 ± 1.931E-02   0.494 ± 8.663E-03          < 0.0001

Table 4.2: LBP features for normal and glaucoma images with p-values


Local Binary Patterns (LBP) are extracted using the MATLAB program lbpana.m, which calls the MATLAB function lbp.m. These MATLAB codes are listed in appendix O.
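The appendix routines themselves are not shown here; the sketch below (function and variable names are illustrative, not those of lbpana.m or lbp.m) computes the basic P = 8, R = 1 rotation-invariant uniform LBP of Equation (4.3) and derives an energy and an entropy feature from the resulting LBP image. The larger (P, R) pairs used in this work follow the same pattern, with interpolated neighbour intensities.

function [E, H] = lbp81_features(img)
% Illustrative LBP(8,1) sketch: builds the LBP image of Eq. (4.3) and
% returns the energy and entropy of its normalised histogram.
img = double(img);
[r, c] = size(img);
off = [-1 -1; -1 0; -1 1; 0 1; 1 1; 1 0; 1 -1; 0 -1];      % 8 circular neighbours, R = 1
P = size(off, 1);
lbpImg = zeros(r-2, c-2);
for i = 2:r-1
    for j = 2:c-1
        gc = img(i, j);
        s = zeros(1, P);
        for p = 1:P
            s(p) = img(i + off(p,1), j + off(p,2)) >= gc;  % s(gp - gc) of Eq. (4.3)
        end
        U = sum(abs(diff([s s(1)])));                      % number of circular 0/1 transitions
        if U <= 2
            lbpImg(i-1, j-1) = sum(s);                     % uniform pattern: code 0..P
        else
            lbpImg(i-1, j-1) = P + 1;                      % all non-uniform patterns share one bin
        end
    end
end
h = histc(lbpImg(:), 0:P+1);                               % (P+2)-bin occurrence histogram
h = h / sum(h);
E = sum(h.^2);                                             % energy of the LBP distribution
H = -sum(h(h > 0) .* log(h(h > 0)));                       % entropy of the LBP distribution
end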






          4.2.3 Laws’ Texture Energy
Laws' texture energy measure is computed by applying small convolution kernels to a digital image and then performing a non-linear windowing operation. It is another approach to detecting various types of texture using local masks. Laws' masks represent image features without referring to the frequency domain. The texture energy measure is based on texture energy transforms applied to the image to estimate the energy within the pass region of the filters. The texture description uses properties such as:
       1) Average gray level
       2) Edges
       3) Spots


All the masks are derived from one-dimensional (1-D) vectors of length three pixels:
    1) L3 = [1 2 1]   (averaging)
    2) E3 = [-1 0 1]  (first difference, edges)
    3) S3 = [-1 2 -1] (second difference, spots)


Nine 2-D masks of size 3 x 3 can be generated by convolving any vertical 1-D vector
with a horizontal one as shown in Equation (4.4).



1                         1 0 1 
                                  
 2  1
    *             0    1   2 0 2                                            (4.4)
                                       
1                          1 0 1 
               E3


 
  L3
                                    
                                   L3 E 3




To extract texture information from an image I(i,j), the image is first convolved with each 2-D mask. Filtering the image I(i,j) with the mask E3S3, for example, gives the texture image TI_{E3S3} shown in Equation (4.5).

                      TI_{E3S3} = I(i,j) * E3S3                                  (4.5)






Following Laws' suggestion, all the convolution kernels used are zero-mean, with the exception of the L3L3 kernel. The corresponding image (TI_{L3L3}) is used to normalize the contrast of the remaining texture images TI(i,j) obtained with the eight zero-sum masks numbered 1 to 8. Equation (4.6) shows the normalization of a texture image.


\text{Normalized } TI_{\text{mask}} = \frac{TI(i,j)_{\text{mask}}}{TI(i,j)_{L3L3}} \qquad (4.6)


The outputs (TI) of Laws' masks are passed to Texture Energy Measurement (TEM) filters, defined in Equation (4.7). These consist of a moving non-linear window average of absolute values. The images were filtered using the eight masks and their energies were computed; the resulting features quantify the changes in the levels, edges and spots in the studied image. Eight LTE based features were therefore extracted from each image using the above eight masks. Table 4.3 shows all eight LTE features of normal and glaucoma images with their respective p-values.


TEM(i,j) = \sum_{u=-3}^{3} \sum_{v=-3}^{3} \bigl| TI(i+u,\, j+v) \bigr| \qquad (4.7)




    Features            Normal                   Glaucoma     P-Value
       LTE1         0.341 ± 0.220             0.740 ± 0.279   < 0.0001
       LTE2         0.346 ± 0.306             0.692 ± 0.299   < 0.0001
       LTE3         0.298 ± 0.285             0.726 ± 0.228   < 0.0001
       LTE4         0.241 ± 0.269             0.731 ± 0.297   < 0.0001
       LTE5         0.164 ± 0.255             0.682 ± 0.291   < 0.0001
       LTE6         0.294 ± 0.407             0.702 ± 0.277   < 0.0001
       LTE7         0.223 ± 0.244             0.689 ± 0.280   < 0.0001
       LTE8         0.173 ± 0.251             0.675 ± 0.278   < 0.0001

Table 4.3: LTE features of normal and glaucoma images with p-values





Laws' Texture Energy (LTE) features are extracted using the MATLAB program MAIN.m, which calls four other MATLAB functions: lawsanalysis.m, lawsmask.m, lawsfilter.m and lawsimg.m. These MATLAB codes are listed in appendix P.
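A condensed sketch of this pipeline is given below; variable names are illustrative and the input file name is hypothetical, so this is not the appendix code, only an outline of Equations (4.4) to (4.7).

% Illustrative sketch of the Laws' texture energy steps, Eqs. (4.4)-(4.7).
I  = double(rgb2gray(imread('fundus.png')));   % hypothetical input image
L3 = [ 1 2  1];                                % averaging
E3 = [-1 0  1];                                % edges
S3 = [-1 2 -1];                                % spots
vecs  = {L3, E3, S3};
masks = cell(3, 3);
for a = 1:3
    for b = 1:3
        masks{a, b} = vecs{a}' * vecs{b};      % nine 3x3 masks as in Eq. (4.4)
    end
end
TI_L3L3 = conv2(I, masks{1, 1}, 'same');       % only non zero-sum mask, used for normalisation
win = ones(7, 7);                              % moving window of Eq. (4.7)
energies = zeros(3, 3);
for a = 1:3
    for b = 1:3
        if a == 1 && b == 1, continue; end     % skip L3L3 itself
        TI  = conv2(I, masks{a, b}, 'same');   % texture image, Eq. (4.5)
        TIn = TI ./ (TI_L3L3 + eps);           % contrast normalisation, Eq. (4.6)
        TEM = conv2(abs(TIn), win, 'same');    % moving-window sum of absolute values, Eq. (4.7)
        energies(a, b) = mean(TEM(:));         % one LTE-style feature per zero-sum mask
    end
end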



        4.2.4 Fuzzy Gray Level Co-occurrence Matrix
The Gray Level Co-occurrence Matrix (GLCM) is a matrix that represents second-order texture moments; it describes the frequency with which one gray level appears in a specified spatial linear relationship with another gray level within the neighborhood of interest. The Fuzzy Gray Level Co-occurrence Matrix (FGLCM) of an image I of size L x L is given by:


F_d[m, n] = \bigl[\, f_{mn} \,\bigr]_{L \times L} \qquad (4.8)


where f_{mn} corresponds to the frequency of occurrence of a pixel with gray value 'around m' and a different pixel with gray value 'around n', for a pixel pair separated by a relative distance d and measured at a relative orientation θ. It is represented as F = f(I, d, θ). In this work, θ is quantized in four directions (0º, 45º, 90º, 135º) for a distance d = 20, and a rotation-invariant co-occurrence matrix 'F' is used to compute the texture feature values. Equations (4.9) and (4.10) give two of the FGLCM based features.

\text{Energy: } E_{\text{fuzzy}} = \sum_{m} \sum_{n} F_d[m,n]^2 \qquad (4.9)

\text{Entropy: } H_{\text{fuzzy}} = -\sum_{m} \sum_{n} F_d[m,n] \cdot \ln F_d[m,n] \qquad (4.10)




The homogeneity feature measures the similarity between two pixels that are (Δm, Δn) apart, and the contrast feature captures the local variation between those two pixels. The degree of disorder and the denseness in an image are measured by the energy and entropy features. The entropy feature attains its maximum value when all elements of the co-occurrence matrix are equal. The FGLCM features of normal and glaucoma images and their respective p-values are shown in Table 4.4.


       Features                 Normal                   Glaucoma          P-Value
    Homogeneity           0.1965±0.01860             0.2239±0.0336         < 0.0006
        Energy        7.354E+03 ±5.13E+02        8.402E+03±5.29E+02        < 0.0005
       Entropy             3.6614 ±.056417           3.543 ±0.046          < 0.0002
       Contrast               15.2557 ±0.5096            13.8617 ±0.5728   < 0.0014
      Symmetry            1.000 ± 3.608E-04          1.000 ± 3.191E-04      < 0.079
     Correlation       8.704E-03 ± 6.622E-04      8.051E-03 ± 4.727E-04    < 0.0001
      Moment1          -2.621E-02 ±1.170E-02      -1.81E-02 ±2.940E-02      < 0.016
      Moment2         1.2162E+03 ±1.054E+03      4.63842E+02 ±1.156E+02    < 0.0014
      Moment3         -1.209E+03 ± 4.04E+03      -4.2751E+02 ±1.528E+03     < 0.016
      Moment4         7.372E+05 ±6.65E+05     1.253150E+06 ±1.211E+06      < 0.0014
 Angular2ndMoment     1.238E+10 ± 9.827E+08      1.324E+10 ± 2.335E+09      < 0.067
       Contrast       1.943E+06 ± 1.399E+06      9.061E+05 ± 1.001E+06     < 0.0017
         Mean         8.812E+03 ± 5.574E+03      4.713E+03 ± 3.988E+03     < 0.0018
       Entropy           -1.808E+03 ± 98.3          -1.848E+03 ± 228.       < 0.39
 ShortRunEmphasis         0.855 ± 1.931E-02          0.834 ± 2.857E-02     < 0.0012
  LongRunEmphasis             6.21 ± 5.17                10.2 ± 4.38       < 0.0019
        RunPer               2.19 ± 0.210               2.06 ± 0.169       < 0.0091
    GrayLvlNonUni     4.0523E+04 ±5.960E+03      3.6804E+04 ±4.429E+03     < 0.0011
  RunLengthNonUni     6.8813E+02 ±53.99                    849 ±183         < 0.13

Table 4.4: FGLCM features of normal and glaucoma images with p-values


Fuzzy Gray Level Co-occurrence Matrix (FGLCM) features are extracted using the MATLAB program Main.m, which calls six other MATLAB functions: fuzzycoocc.m, jtxtAnalyCCMFea.m, jfDStats.m, jrLengthMat.m, jrRLength.m and jcoMatrix.m. These MATLAB codes are listed in appendix Q.
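The fuzzy membership weighting implemented in the appendix code is not reproduced here; the following crisp GLCM sketch (hypothetical file name, built on MATLAB's graycomatrix with an assumed 64-level quantisation) only illustrates how energy and entropy values of the form of Equations (4.9) and (4.10) are obtained from a rotation-invariant co-occurrence matrix at d = 20. In the fuzzy variant, the hard pixel-pair counts are replaced by membership-weighted contributions of gray values 'around' m and n.

% Crisp GLCM sketch of energy/entropy features in the form of Eqs. (4.9)-(4.10).
I = imread('fundus.png');                          % hypothetical input image
if size(I, 3) == 3, I = rgb2gray(I); end
d = 20;                                            % pixel-pair distance used in this work
offsets = d * [0 1; -1 1; -1 0; -1 -1];            % 0, 45, 90 and 135 degrees
G = graycomatrix(I, 'Offset', offsets, ...
                 'NumLevels', 64, 'Symmetric', true);   % 64 gray levels (assumed quantisation)
F = sum(double(G), 3);                             % rotation-invariant co-occurrence matrix F
F = F / sum(F(:));                                 % normalise to a joint distribution
E_glcm = sum(F(:).^2);                             % energy, as in Eq. (4.9)
H_glcm = -sum(F(F > 0) .* log(F(F > 0)));          % entropy, as in Eq. (4.10)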








CHAPTER 5: CLASSIFICATIONS AND RESULTS



5.1     Principal Component Analysis (PCA)


In this project, the normalized features were reduced using the Principal Component Analysis (PCA) method and the resulting components were fed to the PNN classifier. Principal Component Analysis (PCA) is a mathematical procedure that transforms a number of possibly correlated variables into a smaller number of uncorrelated variables called principal components. PCA is a powerful tool for analyzing data because it is a simple, non-parametric method of extracting the relevant information from confusing data sets. Another main advantage of PCA is that when identifying patterns in the compressed data (i.e., after reducing the number of dimensions), the information loss is very small. The number of principal components is less than or equal to the number of original variables.


In this work, the 33-dimensional data set was reduced to 12 dimensions. For PCA to work properly, the mean value is subtracted from each of the data dimensions, producing a data set whose mean is zero. The eigenvalues and eigenvectors are then calculated from the covariance matrix; these are important because they give useful information about the data. After forming a feature vector from the selected eigenvectors, it is multiplied with the mean-adjusted data set:


FinalData = RowFeatureVector × RowDataAdjust


In order to recover the original data, the equations shown below are used.


RowDataAdjust = RowFeatureVector^{-1} × FinalData

Since the rows of the feature vector are orthonormal eigenvectors, the inverse equals the transpose, so

RowDataAdjust = RowFeatureVector^{T} × FinalData








RowOriginalData = (RowFeatureVector^{T} × FinalData) + OriginalMean


These equations still give the correct transform back to the original space even when not all of the eigenvectors are retained in the feature vector.


The Principal Component Analysis (PCA) features are extracted using the MATLAB program Test1.m, which generates the 12 selected PCA components. The MATLAB code is listed in appendix R.
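The appendix code is not shown; the minimal sketch below (with a placeholder feature matrix, since the actual 60 x 33 feature data is not reproduced here) follows the steps of Section 5.1: mean subtraction, eigen-decomposition of the covariance matrix, projection onto the 12 leading components, and reconstruction.

% Minimal PCA sketch following Section 5.1; X holds one row per image (60 x 33).
X  = randn(60, 33);                          % placeholder for the 33 texture features
mu = mean(X, 1);
Xadj = X - repmat(mu, size(X, 1), 1);        % subtract the mean of each dimension
C = cov(Xadj);                               % 33 x 33 covariance matrix
[V, D] = eig(C);                             % eigenvectors and eigenvalues
[~, order] = sort(diag(D), 'descend');       % rank components by eigenvalue
k = 12;                                      % number of retained principal components
RowFeatureVector = V(:, order(1:k))';        % 12 x 33 feature vector (eigenvectors as rows)
FinalData = RowFeatureVector * Xadj';        % 12 x 60 projected data
% Transform back towards the original space, as in Section 5.1:
RowDataAdjust   = RowFeatureVector' * FinalData;
RowOriginalData = RowDataAdjust + repmat(mu', 1, size(X, 1));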




5.2       Classifier Used


A Probabilistic Neural Network (PNN) is an implementation of a statistical algorithm called kernel discriminant analysis (the kernels are also called "Parzen windows"), in which the operations are organized into a multilayered feed-forward network with four layers:
       - Input layer
       - Pattern layer
       - Summation layer
       - Output layer


Figure 5.1 shows the architecture of a typical PNN. The PNN architecture is composed of several sub-networks, or neurons, organized in successive layers, each of which is a normalized RBF unit, also known as a "kernel". These Parzen windows are usually probability density functions such as the Gaussian function. The input nodes hold the set of measurements; they do not perform any computation but simply distribute the input to the neurons in the pattern layer. The hidden-to-output weights are usually 1 or 0. Each neuron of the pattern layer computes its output after receiving a pattern from the input layer, using a kernel whose width is controlled by the smoothing parameter. The summation layer neurons compute the maximum likelihood of a pattern belonging to each class by summing and averaging the outputs of all pattern neurons that belong to the same class. The prior probabilities for each class are taken to be the same, and the loss figures are applied to the PDF as weights. The decision (output) layer then classifies the pattern in accordance with Bayes's decision rule, based on the outputs of all the summation layer neurons.




Figure 5.1: Architecture of PNN


Data enters at the inputs and passes through the hidden layers, where the actual information is processed, and the result is available at the output layer. Usually a feed-forward architecture is used, in which there is no feedback between the layers. A supervised learning algorithm was used for training the neural network. In this case, the system weights are initially chosen at random and then slowly modified during training in order to obtain the desired outputs. The difference between the actual output and the desired output is calculated for each input at every iteration, and these errors are used to change the weights proportionately. This process continues until a preset mean square error is reached (0.001 in this work). This algorithm of reducing the errors by incrementally updating the weights in order to reach the correct class is known as back-propagation.




5.3     Results


In this study, 60 fundus images, consisting of 30 normal fundus images and 30 glaucoma fundus images from subjects in the age group of 20 to 70 years, were used. From the 33 features retrieved with the processing techniques discussed above, 12 features were computed using Principal Component Analysis (PCA). A Student's t-test was conducted on the two groups to determine whether the mean value of each feature was significantly different between the two classes. It was found that all 12 features tested were clinically significant, with p-values less than 0.05. The mean and standard deviation values of the computed features are shown in Table 5.1. From the table we can see that the p-values of all the PCA features indicate clinical significance (lower p-values imply stronger significance). The p-value is the probability of obtaining a result at least as extreme as the observed one under the assumption that the null hypothesis is true.


   Features            Normal              Glaucoma           P-Value
    PCA1          0.400 ± 7.311E-02    0.320 ± 4.731E-02      < 0.0001
    PCA2          0.388 ± 7.855E-02    0.317 ± 4.760E-02      < 0.0001
    PCA3          0.382 ± 7.566E-02    0.312 ± 4.835E-02      < 0.0001
    PCA4          0.373 ± 7.315E-02    0.306 ± 4.708E-02      < 0.0001
    PCA5          0.357 ± 6.772E-02    0.290 ± 4.864E-02      < 0.0001
    PCA6          0.345 ± 6.820E-02    0.257 ± 5.815E-02      < 0.0001
    PCA7          0.334 ± 6.982E-02    0.237 ± 6.198E-02      < 0.0001
    PCA8          0.323 ± 6.901E-02    0.223 ± 5.513E-02      < 0.0001
    PCA9          0.308 ± 6.192E-02    0.212 ± 5.441E-02      < 0.0001
    PCA10         0.278 ± 4.804E-02    0.193 ± 5.278E-02      < 0.0001
    PCA11         0.265 ± 4.561E-02    0.180 ± 5.825E-02      < 0.0001
    PCA12         0.253 ± 4.543E-02    0.157 ± 6.248E-02      < 0.0001

Table 5.1: The 12 PCA features of normal and glaucoma images and their p-values
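A short sketch of this test (assuming the Statistics Toolbox function ttest2 and placeholder 30 x 12 feature matrices, since the actual projected data is not reproduced here) is given below.

% Two-sample Student's t-test per PCA feature, as used for Table 5.1.
pcaNormal   = randn(30, 12);           % placeholder for the normal-class PCA features
pcaGlaucoma = randn(30, 12);           % placeholder for the glaucoma-class PCA features
p = zeros(1, 12);
for f = 1:12
    [~, p(f)] = ttest2(pcaNormal(:, f), pcaGlaucoma(:, f));  % p-value for feature f
end
significant = p < 0.05;                % features regarded as clinically significant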


In order to test the performance of the classifier, we chose the three-fold stratified cross-validation method. The advantage of this method is that all observations are used for both training and testing (validation), and each observation is used for validation exactly once. Figure 5.2 illustrates the three-fold stratified cross-validation procedure. Two parts of the data set were used for training and the remaining part was used to test the performance (i.e., 21 images were used for training and 9 images were used for testing each time). This process was repeated three times, using a different portion of the data for testing each time.







[Block diagram: TRAIN → MODEL → TEST → RESULT]

Figure 5.2: Procedure of three-fold stratified cross validation
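A sketch of how such a stratified split can be constructed is shown below (a minimal illustration, not the code used in this work); each fold keeps the same proportion of normal and glaucoma images, two folds train the classifier and the remaining fold tests it.

% Sketch of a three-fold stratified split over 30 normal and 30 glaucoma images.
labels = [zeros(30, 1); ones(30, 1)];             % 0 = normal, 1 = glaucoma
folds  = zeros(size(labels));
for cls = [0 1]
    idx = find(labels == cls);
    idx = idx(randperm(numel(idx)));              % shuffle within the class
    folds(idx) = mod((0:numel(idx)-1)', 3) + 1;   % deal the class images into folds 1..3
end
for k = 1:3
    testIdx  = (folds == k);
    trainIdx = ~testIdx;
    % train the classifier on trainIdx, evaluate TP/TN/FP/FN on testIdx,
    % then average the three iterations as in Table 5.2
end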


After the three test runs, the TP (true positive), TN (true negative), FP (false positive) and FN (false negative) counts, together with the accuracy, sensitivity, specificity and positive predictive accuracy, were obtained by taking the average of the values computed in the three iterations. Table 5.2 shows the PNN classification results.


          TN     FN     TP     FP     ACC    SENSI     SPECI    PPV
        9.0000 2.0000 7.0000 0.0000 76.1905 100.0000 81.8182 100.0000
  PNN   9.0000 0.0000 9.0000 0.0000 85.7143 100.0000 100.0000 100.0000
        8.0000 0.0000 9.0000 1.0000 80.9524 90.0000 100.0000 90.0000
AVERAGE                             80.9524 96.6667 93.9394 96.6667
Table 5.2: PNN classification result


True Negative (TN) is the number of normal images classified as normal.
False Negative (FN) is the number of glaucomatous images classified as normal.
True Positive (TP) is the number of glaucoma images classified as glaucoma.
False Positive (FP) is the number of normal images classified as glaucomatous.
In our work, the PNN was able to classify the images with an accuracy of 80.9%, a sensitivity of 96.7%, a specificity of 93.9% and a positive predictive accuracy (PPV) of 96.7%, which is clinically significant.


Sensitivity is the probability that an abnormal case is classified as abnormal.







\text{Sensitivity} = \frac{TP}{TP + FN} \times 100\% \qquad (5.1)

Specificity is defined as the probability that a normal case is identified as normal.


\text{Specificity} = \frac{TN}{TN + FP} \times 100\% \qquad (5.2)
The Positive Predictive Accuracy (PPV) gives the proportion of cases classified as abnormal that are truly abnormal.


\text{PPV} = \frac{TP}{TP + FP} \times 100\% \qquad (5.3)
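As a quick check, applying Equations (5.1) to (5.3) to the second-fold counts of Table 5.2 reproduces the tabulated values:

% Eqs. (5.1)-(5.3) applied to the second-fold counts of Table 5.2.
TP = 9; TN = 9; FP = 0; FN = 0;
sensitivity = TP / (TP + FN) * 100;      % Eq. (5.1) -> 100
specificity = TN / (TN + FP) * 100;      % Eq. (5.2) -> 100
ppv         = TP / (TP + FP) * 100;      % Eq. (5.3) -> 100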



Training a PNN classifier is orders of magnitude faster than back-propagation, and the PNN converges to an optimal classifier as the size of the training set increases. Its most important characteristic is that training samples can be added or removed without extensive retraining.


The Probabilistic Neural Network (PNN) is implemented using the MATLAB program aatrain.m, which trains on the data, while another MATLAB program, aatest.m, is used to test the input data. The MATLAB codes are listed in appendix S.
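The training and testing routines of the appendix are not reproduced here; the following is a minimal Parzen-window sketch of the PNN decision described in Section 5.2, assuming Gaussian kernels with a single smoothing parameter sigma (the function name is illustrative).

function label = pnn_classify(x, trainX, trainY, sigma)
% Minimal PNN sketch: Gaussian Parzen-window density per class, Bayes decision.
% x      : 1 x d feature vector to classify (e.g. the 12 PCA components)
% trainX : n x d training patterns (one pattern-layer neuron per row)
% trainY : n x 1 class labels (e.g. 0 = normal, 1 = glaucoma)
% sigma  : smoothing parameter of the Gaussian kernel (assumed single value)
classes = unique(trainY);
score = zeros(size(classes));
for c = 1:numel(classes)
    Xc = trainX(trainY == classes(c), :);
    d2 = sum((Xc - repmat(x, size(Xc, 1), 1)).^2, 2);   % squared distances to the class patterns
    score(c) = mean(exp(-d2 / (2 * sigma^2)));          % summation layer: average kernel response
end
[~, best] = max(score);                                 % output layer: largest estimated density wins
label = classes(best);
end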




    5.4     Glaucoma Integrated Index


An integrated index provides a better way of tracking how much each of the 12 features varies from its respective normal value when making a diagnosis. It is more useful to combine the features into an integrated index in such a way that the value of this index is significantly different between normal and glaucoma subjects. Integrated indices have been used in biomedical applications such as Ghista 2004, 2009a, 2009b and Acharya et al. 2011a, 2011b. We therefore propose an integrated index that combines the features so that its value is distinctly different for normal and glaucoma subjects. The proposed mathematical formulation of the Glaucoma Integrated Index (GII) is:





GII = \frac{200 \times PCA12^{2} \times PCA1}{100} \qquad (5.4)


Although all the parameters in Table 5.1 were clinically significant, the range of PCA1 and PCA12 values for the normal and glaucoma classes was wide compared to the rest of the features. Hence, PCA1 and PCA12 were selected for the GII formula. Moreover, the combination of PCA1 and PCA12 yielded the best separation of the two classes compared to the other combinations using PCA2 to PCA11. The computed GII values for normal and glaucoma subjects are shown in Table 5.3.


      Index               Normal              Glaucoma               P-Value
        GII             4.79 ± 0.158         4.30 ± 0.117            < 0.0001
Table 5.3: The GII values for normal and glaucoma subjects


Figure 5.3 shows the distribution plot of this integrated index for normal and glaucoma subjects. The distinct difference in this index between the two classes indicates that it can be effectively employed to differentiate between, and diagnose, normal and glaucoma subjects.




      Figure 5.3: The distribution plot of the GII for normal and glaucoma




CHAPTER 6: DISCUSSION, CONCLUSION AND
RECOMMENDATION



6.1      Discussion


In this work, Principal Component Analysis (PCA) is the proposed method for automatic glaucoma identification using texture features extracted from fundus images. The Student's t-test showed that our features are clinically significant for detecting glaucoma. Several methods of treatment are available to impede the progression of glaucoma; for this reason, it is important to diagnose glaucoma as early as possible to minimize the damage to the optic nerve.


In the past, fuzzy sets were used for medical diagnosis, and six fuzzy classification algorithms achieved less than 76% accuracy in identifying the correct class. An ANN based on refined visual field input data achieved a high diagnostic performance, with a sensitivity of 93% at a specificity level of 94% and an area under the receiver operating characteristic (ROC) curve of 0.984. The Glaucoma Hemifield test attained a sensitivity of 92% at 91% specificity. Heidelberg retina tomography (HRT) has been used to differentiate between glaucomatous and non-glaucomatous eyes using neural networks: the areas under the ROC curve for an SVM (support vector machine) and a Gaussian SVM were 0.938 and 0.945 respectively, those for an MLP (multi-layer perceptron) and the current LDF (linear discriminant function) were 0.941 and 0.906 respectively, and that of the best previously proposed LDF was 0.890.


In our work, we have extracted 12 features using image processing techniques and the PCA method. The significant features (those with lower p-values) were identified using the Student's t-test, and the diagnostic capability of the features was studied using the PNN classifier. The low p-values obtained from the t-test and the high accuracy obtained by using these features in the classifier indicate their usefulness. However, for a medical specialist, the meaning of these features might not be apparent.





In addition, interpreting a diagnosis based on these features is complicated by the fact that some features have larger values for glaucoma subjects (than for normal subjects) while other features have smaller values for glaucoma subjects (than for normal subjects). Hence, for more comprehensibility and transparency, we have developed an integrated index using these features. The Glaucoma Integrated Index has demonstrated good discriminative power between normal subjects and glaucoma subjects.


The interpretation of this index is simple: a GII value below 4.417 indicates the presence of glaucoma. The simplicity, transparency and objectivity of the GII make it user-friendly and enable it to become a valuable addition to glaucoma analysis software and hardware. The cost of adding the GII feature to the software of existing diagnosis systems is low, because the calculation of the index value involves only digital signal processing tools, and this type of processing is cheap and readily available.


The results are superior to some of the existing methods owing to the higher percentage of correct classification. The accuracy can be improved further by using more parameters or by increasing the number of training and testing images. The percentage of correct classification also depends on the environmental lighting conditions. This method can be used as an adjunct tool for physicians to cross-check their diagnosis.




6.2     Conclusion


A computer-based system for the detection of glaucomatous eyes from fundus images has been developed using image processing techniques, Principal Component Analysis (PCA) and the Probabilistic Neural Network classifier. The features are computed automatically, which gives a high degree of accuracy. These features were tested using the Student's t-test, which showed that all 12 PCA features are clinically significant. The proposed system can identify the presence of glaucoma with an accuracy of 80.9%. A further contribution of this work is the combination of the features into an integrated index in such a way that its value is distinctly different for normal and glaucoma subjects. Moreover, the results of the system can be improved further by using more diverse images. This system can be used as an adjunct tool by physicians to cross-check their diagnosis; in any case, early detection remains important to prevent the progression of the disease.




6.3     Recommendation


In this project, 33 texture features were extracted from digital fundus images and 12 principal components were computed with the Principal Component Analysis (PCA) method. These features were tested by means of the t-test, which showed that the 12 PCA features are all clinically significant. The 12 features were then fed into the PNN classifier to generate the classification results. The Glaucoma Integrated Index developed here demonstrated good discriminative power between normal subjects and glaucoma subjects. Our system produces encouraging results, but it can be improved further by using more diverse images and evaluating better features.








                                     PART II

                CRITICAL REVIEW AND REFLECTION



Throughout this work, I spent a huge amount of time on literature research and on MATLAB programming, covering digital image processing, texture feature extraction, PNN classification and so on. Among all the tasks, project management and time management skills were the most important parts of the entire project work. From the start of the project, I knew it would not be easy because of the tight schedule and my limited knowledge of MATLAB. However, upon completion of the project, I was able to better understand my strengths and weaknesses.


A lot of research and reading was done using reference books from the library, IEEE publications and other published journals to explore the topic further. Understanding all the texture features would not have been possible without my supervisor's guidance and explanation. Skills such as project management and information research were greatly enhanced over the course of the project. I would say project management was the most important skill in making the project a successful one.


Technical report writing is one of the weak points that I need to overcome. The "Project Report Writing and Poster Design Briefing" provided by UniSIM was really helpful, and my report writing skills gradually improved as the project progressed. Since MATLAB is the backbone of this project, I had to understand how the features were extracted from the images and how their algorithms were implemented in MATLAB code. I gained a lot of experience from completing this project, including problem solving, MATLAB programming, analytical skills, project management and, last but not least, technical report writing.








                                   REFERENCES

    1. http://paper.ijcsns.org/07_book/201006/20100616.pdf
    2. http://paper.ijcsns.org/07_book/201006/20100616.pdf
    3. http://mmlab.ie.cuhk.edu.hk/2000/IP00_Texture.pdf
    4. http://zernike.uwinnipeg.ca/~s_liao/pdf/thesis.pdf
    5. http://www.iro.umontreal.ca/~pift6266/A06/cours/pca.pdf
    6. http://www.ee.oulu.fi/mvg/files/File/ICCV2009_tutorial_Matti_guoying-
        Local%20Texture%20Descriptors%20in%20Computer%20Vision.pdf
    7. http://www.macs.hw.ac.uk/bmvc2006/papers/052.pdf
    8. http://www.szabist.edu.pk/ncet2004/docs/session%20vii%20paper%20no%201
        %20(p%20137-140).pdf
    9. http://www.psi.toronto.edu/~vincent/research/presentations/PNN.pdf
    10. http://www.cs.otago.ac.nz/cosc453/student_tutorials/principal_components.pdf
    11. http://support.sas.com/publishing/pubcat/chaps/55129.pdf
    12. http://courses.cs.tamu.edu/rgutier/cpsc636_s10/specht1990pnn.pdf
    13. http://sci2s.ugr.es/keel/pdf/specific/articulo/dkg00.pdf
    14. http://www.public.asu.edu/~ltang9/papers/ency-cross-validation.pdf
    15. Ooi, E.H, Ng, E. Y. K., Purslow, C., Acharya, R., “Variations in the
        corneal surface temperature with contact lens wear”, Journal of
        Engineering in Medicine 221, 2007, 337–349
    16. Tan, T.G., Acharya, U.R., Ng, E.Y.K., “Automated identification of
        eye diseases using higher order spectra”, Journal of Mechanics and
        Medicine in Biology, 8(1), 2008, 121-136
    17. Acharya, U.R., Ng, E.Y.K., Min L.C., Chee, C., Gupta, M., Suri, J.S.,
        “Automatic identification of anterior segment eye abnormalities in
        optical images”, Chapter 4, Image Modeling of Human Eye (Book),
        Artech House, USA, May, 2008a
    18. Acharya, U.R., Ng, E.Y.K., Suri, J.S., “Imaging Systems of Human
        Eye: A Review”, Journal of Medical Systems, USA, 2008b (In Press).







    19. Galassi, F., Giambene, B., Corvi, A., Falaschi, G.,    “Evaluation of
        ocular surface temperature and retrobulbar haemodynamics by
        infrared thermography and colour Doppler imaging in patients with
        glaucoma”, British Journal of Ophthalmology, 91(7), 2007, 878-881
    20. Morgan, P.B., Soh, M. P.,       Efron, N.,   Tullo, A. B., “Potential
        Applications of Ocular Thermography”, Optometry and Vision
        Science, 70(7), 1993, 568-576
    21. Tan, T.G., Acharya, U.R., Ng, E.Y.K., “Automated identification of
        eye diseases using higher order spectra”, Journal of Mechanics and
        Medicine in Biology, 8(1), 2008, 121-136
    22. http://www.mathworks.com/matlabcentral/fileexchange/17482-gray-level-run-
        length-matrix-toolbox
    23. http://www.singapore-
        glaucoma.org/index.php?option=com_content&view=article&id=112:savh-new-
        referrals-to-lvc-27-of-cases-are-glaucoma&catid=3:newsflash&Itemid=37
    24. http://grass.fbk.eu/gdp/html_grass64/r.texture.html
    25. http://en.wikipedia.org/wiki/Kurtosis
    26. http://en.wikipedia.org/wiki/Fractal_dimension
    27. http://www.comp.hkbu.edu.hk/~icpr06/tutorials/Pietikainen.html
    28. http://www.ccs3.lanl.gov/~kelly/ZTRANSITION/notebook/laws.shtml
    29. http://www.google.com.sg/url?sa=t&rct=j&q=fuzzy%20gray%20level%20co-
        occurrence%20matrix&source=web&cd=1&ved=0CCIQFjAA&url=http%3A%
        2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.150.5
        807%26rep%3Drep1%26type%3Dpdf&ei=FCGuTpShEMiJrAf7i53ZDA&usg=
        AFQjCNGkegRckD_89kQRrJHXl1bknj0CiQ
    30. http://my.fit.edu/~dpetruss/papers%20to%20read/Gray%20Level%20Co-
        Occurrence%20Matrix%20Computation%20Based%20On%20Haar%20Wavele
        t.pdf
    31. http://www.mathworks.com/help/toolbox/images/ref/graycomatrix.html
    32. http://www.argentumsolutions.com/tutorials/neural_tutorialpg8.html
    33. http://herselfsai.com/2007/03/probabilistic-neural-networks.html






                                        APPENDIXES


Appendix A: Moments and FD features with their normalized value for normal images

Moment1               Moment2               Moment3               Moment4                 FD
 39.5686   0.486781   5.45E+03   3.79E-01   1.42E+05   1.35E-01   9.53E+06   7.79E-02   2.1416   0.988051
 40.8667    0.50275   4.12E+03   2.87E-01   1.39E+05   1.32E-01   8.42E+06   6.88E-02   2.1103    0.97361
 39.8039   0.489675   3.97E+03   2.76E-01   1.21E+05   1.15E-01   8.74E+06   7.14E-02    2.103   0.970242
 40.2196   0.494789   3.60E+03   2.51E-01   1.38E+05   1.31E-01   8.33E+06   6.81E-02   2.1215   0.978777
 38.8314   0.477711   4.02E+03   2.80E-01   1.28E+05   1.22E-01   8.40E+06   6.87E-02   2.1377   0.986251
 81.2863       1      5.59E+03   3.89E-01   2.77E+05   2.64E-01   1.64E+07   1.34E-01   2.0891   0.963829
 79.5176   0.978241   1.02E+04   7.12E-01   1.05E+06   1.00E+00   1.22E+08   1.00E+00   2.1005   0.969089
 39.6157    0.48736   3.10E+03   2.16E-01   1.32E+05   1.26E-01   8.20E+06   6.71E-02   2.0926   0.965444
 5.3412    0.065708   1.26E+03   8.75E-02   5.21E+03   4.96E-03   4.83E+05   3.95E-03   2.0694    0.95474
 40.251    0.495176   7.38E+03   5.14E-01   2.58E+05   2.46E-01   2.95E+07   2.41E-01   2.1258   0.980761
 10.749    0.132236   1.49E+03   1.04E-01   8.76E+03   8.33E-03   5.23E+05   4.28E-03    2.075   0.957324
 14.8431   0.182603   2.45E+03   1.70E-01   2.21E+04   2.10E-02   2.54E+06   2.07E-02   2.0848   0.961845
 13.698    0.168515   1.16E+03   8.04E-02    49.9725   4.76E-05   3.95E+05   3.23E-03   2.0866   0.962676
 7.9882    0.098272   1.18E+03   8.20E-02   5.62E+03   5.35E-03   2.57E+05   2.10E-03   2.0973   0.967612
 22.9373   0.282179   3.68E+03   2.56E-01   6.01E+03   5.72E-03   2.03E+06   1.66E-02   2.1123   0.974533
 14.8941    0.18323   2.60E+03   1.81E-01   2.41E+04   2.29E-02   1.24E+06   1.02E-02   2.1054   0.971349
 26.7686   0.329313   4.51E+03   3.14E-01   1.61E+04   1.53E-02   4.59E+06   3.76E-02   2.1258   0.980761
 40.251    0.495176   7.38E+03   5.14E-01   2.58E+05   2.46E-01   2.95E+07   2.41E-01   2.1258   0.980761
 41.8196   0.514473   6.91E+03   4.81E-01   1.27E+05   1.21E-01   1.52E+07   1.24E-01   2.1246   0.980208
 14.3137    0.17609   6.74E+03   4.69E-01   2.11E+03   2.01E-03   2.48E+07   2.03E-01   2.1073   0.972226
 65.8588   0.810208   7.18E+03   5.00E-01   3.05E+05   2.90E-01   2.47E+07   2.02E-01   2.1175   0.976932
 66.3333   0.816045   7.15E+03   4.97E-01   3.06E+05   2.91E-01   2.48E+07   2.02E-01   2.1159   0.976194
 64.698    0.795927   6.71E+03   4.67E-01   2.96E+05   2.82E-01   2.49E+07   2.03E-01   2.1074   0.972272
 67.8078   0.834185   7.01E+03   4.88E-01   3.25E+05   3.09E-01   2.73E+07   2.23E-01   2.0982   0.968028
 51.1216   0.628908   1.04E+04   7.26E-01   1.89E+05   1.80E-01   4.71E+07   3.85E-01   2.1153   0.975917
 42.8824   0.527548   1.08E+04   7.52E-01   7.17E+03   6.83E-03   7.03E+07   5.75E-01   2.1675       1
 64.5137    0.79366   1.44E+04   1.00E+00   4.81E+05   4.58E-01   1.03E+08   8.41E-01   2.1305    0.98293
 67.1804   0.826466   6.98E+03   4.86E-01   2.60E+05   2.47E-01   2.20E+07   1.80E-01   2.1203   0.978224
 63.1176   0.776485   5.77E+03   4.02E-01   3.14E+05   2.98E-01   2.43E+07   1.99E-01   2.0913   0.964844
 69.051    0.849479   7.20E+03   5.01E-01   2.52E+05   2.40E-01   2.22E+07   1.82E-01   2.1211   0.978593





Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfAdmir Softic
 
Interdisciplinary_Insights_Data_Collection_Methods.pptx
Interdisciplinary_Insights_Data_Collection_Methods.pptxInterdisciplinary_Insights_Data_Collection_Methods.pptx
Interdisciplinary_Insights_Data_Collection_Methods.pptxPooja Bhuva
 
Salient Features of India constitution especially power and functions
Salient Features of India constitution especially power and functionsSalient Features of India constitution especially power and functions
Salient Features of India constitution especially power and functionsKarakKing
 
SOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning PresentationSOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning Presentationcamerronhm
 
HMCS Vancouver Pre-Deployment Brief - May 2024 (Web Version).pptx
HMCS Vancouver Pre-Deployment Brief - May 2024 (Web Version).pptxHMCS Vancouver Pre-Deployment Brief - May 2024 (Web Version).pptx
HMCS Vancouver Pre-Deployment Brief - May 2024 (Web Version).pptxmarlenawright1
 
On National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan FellowsOn National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan FellowsMebane Rash
 
How to Create and Manage Wizard in Odoo 17
How to Create and Manage Wizard in Odoo 17How to Create and Manage Wizard in Odoo 17
How to Create and Manage Wizard in Odoo 17Celine George
 
ICT role in 21st century education and it's challenges.
ICT role in 21st century education and it's challenges.ICT role in 21st century education and it's challenges.
ICT role in 21st century education and it's challenges.MaryamAhmad92
 
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptxSKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptxAmanpreet Kaur
 
80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...
80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...
80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...Nguyen Thanh Tu Collection
 
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...ZurliaSoop
 
How to setup Pycharm environment for Odoo 17.pptx
How to setup Pycharm environment for Odoo 17.pptxHow to setup Pycharm environment for Odoo 17.pptx
How to setup Pycharm environment for Odoo 17.pptxCeline George
 
Python Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docxPython Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docxRamakrishna Reddy Bijjam
 

Dernier (20)

2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
 
Basic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptxBasic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptx
 
Understanding Accommodations and Modifications
Understanding  Accommodations and ModificationsUnderstanding  Accommodations and Modifications
Understanding Accommodations and Modifications
 
Application orientated numerical on hev.ppt
Application orientated numerical on hev.pptApplication orientated numerical on hev.ppt
Application orientated numerical on hev.ppt
 
Wellbeing inclusion and digital dystopias.pptx
Wellbeing inclusion and digital dystopias.pptxWellbeing inclusion and digital dystopias.pptx
Wellbeing inclusion and digital dystopias.pptx
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdf
 
Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024
 
Interdisciplinary_Insights_Data_Collection_Methods.pptx
Interdisciplinary_Insights_Data_Collection_Methods.pptxInterdisciplinary_Insights_Data_Collection_Methods.pptx
Interdisciplinary_Insights_Data_Collection_Methods.pptx
 
Salient Features of India constitution especially power and functions
Salient Features of India constitution especially power and functionsSalient Features of India constitution especially power and functions
Salient Features of India constitution especially power and functions
 
SOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning PresentationSOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning Presentation
 
HMCS Vancouver Pre-Deployment Brief - May 2024 (Web Version).pptx
HMCS Vancouver Pre-Deployment Brief - May 2024 (Web Version).pptxHMCS Vancouver Pre-Deployment Brief - May 2024 (Web Version).pptx
HMCS Vancouver Pre-Deployment Brief - May 2024 (Web Version).pptx
 
On National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan FellowsOn National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan Fellows
 
How to Create and Manage Wizard in Odoo 17
How to Create and Manage Wizard in Odoo 17How to Create and Manage Wizard in Odoo 17
How to Create and Manage Wizard in Odoo 17
 
ICT role in 21st century education and it's challenges.
ICT role in 21st century education and it's challenges.ICT role in 21st century education and it's challenges.
ICT role in 21st century education and it's challenges.
 
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptxSKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
 
80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...
80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...
80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...
 
Spatium Project Simulation student brief
Spatium Project Simulation student briefSpatium Project Simulation student brief
Spatium Project Simulation student brief
 
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
 
How to setup Pycharm environment for Odoo 17.pptx
How to setup Pycharm environment for Odoo 17.pptxHow to setup Pycharm environment for Odoo 17.pptx
How to setup Pycharm environment for Odoo 17.pptx
 
Python Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docxPython Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docx
 

Automated Glaucoma Detection

LIST OF FIGURES

Figure 1.1: Proposed design for the detection of glaucoma
Figure 2.1: Simple diagram of the parts of the eye
Figure 2.2: Glaucoma eye anatomy
Figure 2.3: (a) Raw image before histogram equalization
Figure 2.3: (b) Pre-processed image and its histogram equalization
Figure 4.1: (a) Fundus camera and (b) fundus image
Figure 4.2: (a) Normal eye fundus image and (b) glaucomatous eye fundus image
Figure 4.3: (a) Normal FD and (b) Glaucoma FD
Figure 4.4: Circularly symmetric neighbor sets for different P and R
Figure 4.5: Square neighborhood and circular neighborhood
Figure 4.6: Uniform and non-uniform patterns
Figure 5.1: Architecture of PNN
Figure 5.2: Procedure of three-fold stratified cross validation
Figure 5.3: Distribution plot of the GII for normal and glaucoma subjects
LIST OF TABLES

Table 3.1: Detailed project plan
Table 4.1: LBP features for normal and glaucoma images with p-values
Table 4.2: LTE features of normal and glaucoma images with p-values
Table 4.3: FGLCM features of normal and glaucoma images with p-values
Table 5.1: 12 features of normal and glaucoma PCA and their p-values
Table 5.2: PNN classification results
Table 5.3: GII values for normal and glaucoma subjects
TABLE OF CONTENTS

ABSTRACT
ACKNOWLEDGEMENTS
LIST OF FIGURES
LIST OF TABLES
PART I
CHAPTER 1: PROJECT INTRODUCTION
    1.1 Background and Motivation
    1.2 Project Objectives
    1.3 Project Scope
CHAPTER 2: THEORY AND LITERATURE REVIEW
    2.1 Anatomy of an Eye
    2.2 Overview of Glaucoma
    2.3 Types of Glaucoma
        2.3.1 Primary open angle glaucoma
        2.3.2 Angle closure glaucoma
        2.3.3 Secondary glaucoma
    2.4 Detection of Glaucoma
    2.4 Image Processing with MATLAB
    2.5 Statistical Application
        2.5.1 Run length matrix
CHAPTER 3: PROJECT MANAGEMENT
    3.1 Project Plan
        3.1.1 School facility
        3.1.2 Internet broadband
    3.2 Project Task and Schedule
CHAPTER 4: DESIGN AND ALGORITHM
    4.1 Project Approach and Method
    4.2 Different Texture Features Study
        4.2.1 Fractal Dimension
        4.2.2 Local Binary Patterns
        4.2.3 Laws' Texture Energy
        4.2.4 Fuzzy Gray Level Co-occurrence Matrix
CHAPTER 5: CLASSIFICATIONS AND RESULTS
    5.1 Principal Component Analysis (PCA)
    5.2 Classifier Used
    5.3 Results
    5.4 Glaucoma Integrated Index
CHAPTER 6: DISCUSSION, CONCLUSION AND RECOMMENDATION
    6.1 Discussion
    6.2 Conclusion
    6.3 Recommendation
PART II
CRITICAL REVIEW AND REFLECTION
REFERENCES
APPENDIXES
PART I

CHAPTER 1: PROJECT INTRODUCTION

1.1 Background and Motivation

The eyes are among the most complex, sensitive and delicate organs in the human body. They are the organs through which we view the world, and they are responsible for four fifths of all the information our brain receives. Blindness affects 3.3 million Americans aged above 40 years, and this figure may reach 5.5 million by the year 2020. The leading causes of visual impairment and blindness are cataract, glaucoma, macular degeneration and diabetic retinopathy. Among these, glaucoma affects more than 3 million people living in the United States and is the leading cause of blindness among African Americans. Worldwide, it is the second leading cause of blindness [Global data on visual impairment in the year 2002]. It affects one in two hundred people aged fifty years and younger, and one in ten over the age of eighty years [http://www.afb.org/seniorsite.asp?SectionID=63&TopicID=286&DocumentID=3198].

There is no cure for glaucoma yet, hence early detection and prevention is the only way to treat glaucoma and avoid total loss of vision. Optical Coherence Tomography (OCT) and Heidelberg Retinal Tomography (HRT) are used to detect glaucoma, but their cost is very high. This paper presents a novel method for glaucoma detection using digital fundus images. The method combines digital image processing techniques (morphological operations, histogram equalization and feature extraction), normalization, principal component analysis (PCA) and statistical analysis using Student's t-test, and is validated by classifying the normal and glaucoma images using a Probabilistic Neural Network (PNN).
1.2 Project Objectives

The main objective of this project is to analyze and diagnose glaucoma from digital fundus images using image processing techniques. To achieve this, the project requires the following tasks:

- Detection and extraction of texture features and normalization of the data.
- Application of principal component analysis (PCA) to extract significant components from the normalized features.
- Study of various data mining techniques, such as K-Nearest Neighbor (K-NN), Naïve Bayes Classifier (NBC) and Probabilistic Neural Network (PNN), for classification.

The academic goal of this project is to develop skills in research, MATLAB programming and analysis.

1.3 Project Scope

The project includes the following proposed scheme:

- Acquiring digital fundus images of normal and glaucoma eyes from subjects in the age group of 20 to 70 years. The images were collected from the Kasturba Medical College, Manipal, India, where they were photographed and certified by doctors in the ophthalmology department.
- An image processing system that extracts the relevant features for the automatic diagnosis of glaucoma.
- Detection and extraction of various texture features.
- Normalization and analysis of the texture features using MATLAB and the Principal Component Analysis (PCA) method.
- Classification of the data using a Probabilistic Neural Network (PNN).
[Figure: block diagram of the proposed system. Normal and glaucoma fundus images feed into texture feature extraction using image pre-processing techniques, followed by statistical analysis using the Principal Component Analysis (PCA) method and classification using a Probabilistic Neural Network (PNN), giving the result: normal eye or glaucoma eye.]

Figure 1.1: Proposed design for the detection of glaucoma
CHAPTER 2: THEORY AND LITERATURE REVIEW

2.1 Anatomy of an Eye

The human eye is an organ which reacts to light for many different purposes and is made up of three coats enclosing three transparent structures. The outermost layer is composed of the cornea and sclera; the middle layer consists of the choroid, ciliary body and iris; the innermost layer is the retina.

Light rays reflected off an object enter through the cornea, which refracts the rays that pass through the pupil. Surrounding the pupil is the iris, the colored portion of the eye. The pupil opens and closes to regulate the amount of light passing through it; hence we are able to see the dilation of the pupils. The light rays then pass through the lens, located behind the pupil, which further bends and focuses them onto the retina at the back of the eye.

The retina consists of two major types of light-sensitive receptors, also called tiny light-sensing nerve cells: cones and rods. The cones enable us to see in bright light and produce photopic vision. Cones are mostly concentrated in and near the fovea, with only a few present at the sides of the retina. They give clear and sharp vision when one looks at an object directly, and they detect colors and fine details. The rods enable us to see in the dark and produce scotopic vision; they detect motion in dim light. Rods are located outside the macula and extend all the way to the outer edge of the retina. Rod density is greater in the peripheral retina than in the central retina, so rods provide peripheral or side vision. Cones and rods are connected through intermediate cells in the retina to the nerve fibers of the optic nerve. When rods and cones are stimulated by light, the nerves send impulses to the brain through these fibers. Figure 2.1 illustrates a simple diagram of the parts of the eye.
Figure 2.1: Simple diagram of the parts of the eye

2.2 Overview of Glaucoma

Glaucoma is an eye disease in which the optic nerve is damaged by an elevation of the intraocular pressure inside the eye caused by a build-up of excess fluid. This pressure can impair vision by causing irreversible damage to the optic nerve and to the retina, and it can lead to blindness if it is not detected and treated in time. Glaucoma results in peripheral vision loss and is an especially dangerous eye condition because it frequently progresses without obvious symptoms, which is why it is often referred to as "The Silent Thief of Sight." There is no cure for glaucoma yet, although it can be treated. Worldwide, it is the second leading cause of blindness [Global data on visual impairment in the year 2002]. It affects one in two hundred people aged fifty years and younger, and one in ten over the age of eighty years. The damage to the optic nerve from glaucoma cannot be reversed.
However, lowering the pressure in the eye can prevent further damage to the optic nerve and further peripheral vision loss. There are various types of glaucoma that can occur and progress without obvious symptoms or signs. Even though there is no cure for glaucoma, early detection and prevention can avoid total loss of vision. Glaucoma can be divided into two main types: (1) primary open angle glaucoma and (2) angle closure glaucoma. There is also a third form, known as secondary glaucoma, which is explained below.

Figure 2.2: Glaucoma eye anatomy

2.3 Types of Glaucoma

2.3.1 Primary open angle glaucoma

This type of glaucoma is the most common (sometimes called chronic glaucoma), and its symptoms are slow to develop. As the glaucoma progresses, the side or peripheral vision fails, which may cause a person to miss objects out of the side or corner of the eye. It happens when the eye's drainage canals become clogged over time, or when the eye over-produces aqueous fluid, causing the pressure inside the eye to build to abnormal levels. The inner eye pressure (IOP) rises because the correct amount of fluid cannot drain out of the eye. Primary open angle glaucoma affects 70% to 80% of those who suffer from the disorder and accounts for 90% of glaucoma cases in the United States. It is painless and does not have acute attacks. It can develop gradually and go unnoticed for many years, but this type of glaucoma usually responds well to medication, especially if caught early and treated.

2.3.2 Angle closure glaucoma

Also known as acute narrow angle glaucoma, this type accounts for less than 10% of glaucoma cases in the United States. Although it is rare and different from open angle glaucoma, it is the most serious form of the disease. The problem occurs more commonly in farsighted elderly people, particularly in women, and often occurs in both eyes. Angle closure glaucoma occurs primarily in patients who have a shallow space between the cornea at the front of the eye and the colored iris that lies just behind the cornea. As the eye ages, the lens grows larger and the drainage angle narrows, restricting the flow of fluid to the drainage site. As fluid builds up and blockage occurs, a rapid rise in intraocular pressure can follow. This kind of glaucoma is normally very painful because of the sudden increase in pressure inside the eye. The symptoms of an acute attack are more severe and can be totally disabling; they include severe pain, often accompanied by nausea and vomiting. Diabetes can be a contributing cause to the development of glaucoma. The treatment of angle closure glaucoma, known as peripheral iridectomy, usually involves surgery to remove a small portion of the outer edge of the iris so that aqueous fluid can flow easily to the drainage site.

2.3.3 Secondary glaucoma

Both open angle glaucoma and angle closure glaucoma can be primary or secondary conditions: primary when the cause is unknown, secondary when the condition can be traced to a known cause. Secondary glaucoma may be caused by a variety of medical conditions, medications, eye abnormalities and physical injuries. The treatment of secondary glaucoma is frequently associated with eye surgery.

Symptoms of glaucoma include:

- Headaches
- Intense pain
- Blurred vision
- Nausea or vomiting
- Medium dilation of the pupil
- Bloodshot eyes and increased sensitivity

In general, the field of vision narrows to the point where one is unable to see clearly.

2.4 Detection of Glaucoma

Three different tests can detect glaucoma:

- Ophthalmoscopy
- Tonometry
- Perimetry

Two routine eye tests for regular glaucoma check-ups are tonometry and ophthalmoscopy. Most glaucoma tests are very time consuming and need special skills and diagnostic equipment. Therefore, new techniques that can diagnose glaucoma at early stages with accuracy and speed, and with less skilled personnel, are urgently needed. In recent years, computer based systems have made glaucoma screening easier. Imaging systems such as the fundus camera, optical coherence tomography (OCT), Heidelberg retina tomography (HRT) and scanning laser polarimetry have been used extensively for eye diagnosis. HRT, confocal laser scanning tomography and OCT can show retinal nerve fiber damage even before damage to the visual fields appears. However, such equipment is very expensive and only some hospitals can afford it. Hence fundus cameras are used as an alternative by many ophthalmologists to diagnose glaucoma. Image processing techniques allow the extraction of features that can provide useful information for diagnosing glaucoma.
2.4 Image Processing with MATLAB

MATLAB is a high-level language and interactive environment that enables computationally intensive tasks to be performed faster than with traditional programming languages such as C, C++ and FORTRAN. In this project, MATLAB is the main software used to implement the image processing part as well as texture feature extraction and classification. After texture feature extraction, the data are normalized, and the selected Principal Component Analysis (PCA) features are fed to the PNN classifier for classification.

Image pre-processing is mainly used to improve the image contrast through histogram equalization, which increases the dynamic range of the histogram of an image and redistributes the intensity values of its pixels. The output image contains a more uniform distribution of intensities and increased contrast. Figures 2.3(a) and 2.3(b) show the raw image and the pre-processed fundus image together with its histogram.

Figure 2.3: (a) Raw image before histogram equalization

Figure 2.3: (b) Pre-processed image and its histogram equalization
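As a rough illustration of this pre-processing step (a minimal sketch only, not the project's actual code; the file name glaucoma01.bmp and the variable names are hypothetical), the contrast enhancement described above could be written in MATLAB as follows:

```matlab
% Sketch of the pre-processing step: contrast enhancement of a fundus image.
% 'glaucoma01.bmp' is a hypothetical file name used only for illustration.
rgb  = imread('glaucoma01.bmp');     % fundus image stored as a bitmap
gray = rgb2gray(rgb);                % convert the color image to grayscale

eqGlobal = histeq(gray);             % global histogram equalization
eqLocal  = adapthisteq(gray);        % adaptive (tile-wise) histogram equalization

% Compare the histograms before and after equalization (cf. Figure 2.3)
subplot(2,2,1); imshow(gray);     title('Raw image');
subplot(2,2,2); imhist(gray);     title('Raw histogram');
subplot(2,2,3); imshow(eqLocal);  title('Pre-processed image');
subplot(2,2,4); imhist(eqLocal);  title('Equalized histogram');
```

Here histeq equalizes the global histogram, while adapthisteq corresponds to the adaptive, section-by-section variant mentioned in Chapter 4; which of the two the project used in each step is not restated here.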
2.5 Statistical Application

Statistical texture measures are used in this project. The texture features extracted from each digital fundus image by image pre-processing techniques include the co-occurrence matrix features (contrast, homogeneity, entropy, angular second moment, energy, mean) and the run length matrix features (short run emphasis, long run emphasis, run percentage, gray level non-uniformity and run length non-uniformity). The collected data were normalized and selected using Principal Component Analysis (PCA). Finally, the selected PCA features were classified using a Probabilistic Neural Network (PNN).

Co-occurrence matrix. For an image of size $M \times N$, a second-order statistical texture analysis can be performed by constructing the gray level co-occurrence matrix (GLCM) [Tan et al., 2009]:

$$C_d(i,j) = \left|\left\{ \big((p,q),(p+\Delta x,\, q+\Delta y)\big) : I(p,q)=i,\ I(p+\Delta x,\, q+\Delta y)=j \right\}\right| \qquad (2.1)$$

where $(p,q),(p+\Delta x, q+\Delta y) \in M \times N$, $d=(\Delta x, \Delta y)$, and $|\cdot|$ denotes the cardinality of a set. For a pixel in the image having gray level $i$, the probability that the gray level of a pixel at a distance $(\Delta x, \Delta y)$ away is $j$ is defined as:

$$P_d(i,j) = \frac{C_d(i,j)}{\sum_i \sum_j C_d(i,j)} \qquad (2.2)$$

With (2.1) and (2.2), the following features are obtained.

Energy:
$$E = \sum_i \sum_j \left[ P_d(i,j) \right]^2 \qquad (2.3)$$

Energy is the sum of squared elements in the gray level co-occurrence matrix, also called the angular second moment. It is a measure of the denseness or order in the image.

Contrast:
$$Co = \sum_i \sum_j (i-j)^2 P_d(i,j) \qquad (2.4)$$

Contrast measures the differences in the gray level co-occurrence matrix, i.e. the local variations. It weights the elements that do not lie on the main diagonal and returns a measure of the intensity contrast between a pixel and its neighboring pixels over the whole image.

Homogeneity:
$$H = \sum_i \sum_j \frac{P_d(i,j)}{1+(i-j)^2} \qquad (2.5)$$

Homogeneity measures the distribution of elements nearest to the diagonal of the GLCM. It is inversely proportional to the contrast. The value 1 in the denominator prevents division by zero.

Entropy:
$$En = -\sum_i \sum_j P_d(i,j) \ln P_d(i,j) \qquad (2.6)$$

Entropy is a thermodynamic property and represents the degree of disorder in the image. The entropy value is largest when all the elements of the co-occurrence matrix are the same and small when the elements are unequal.

Moments $m_1$, $m_2$, $m_3$ and $m_4$ are defined as:

$$m_g = \sum_i \sum_j (i-j)^g\, P_d(i,j) \qquad (2.7)$$

where $g$ is the integer power exponent that defines the moment order. Moments are the statistical expectations of certain power functions of a random variable and measure the shape of a set of data points. The first moment is the mean, the average of the pixel values in an image. The second moment is related to the standard deviation, $m_2 = E(x-\mu)^2$, where $E(x)$ is the expected value of $x$; the standard deviation, denoted by the Greek symbol $\sigma$, determines how much variation there is from the average or mean. The third moment, also called skewness, measures the degree of asymmetry in the distribution; its value can be positive, negative or even undefined. The fourth moment, known as kurtosis, measures the relative peakedness or flatness of the probability distribution of a real valued random variable. If a distribution has a peak at the mean and long tails, the fourth moment will be high and the kurtosis positive (leptokurtic); conversely, bounded distributions tend to have low kurtosis (platykurtic).

Difference statistics, a subset of the co-occurrence matrix, can be obtained by [Tomita et al., 1990]:

$$P_\delta(k) = \sum_{|i-j|=k} P_d(i,j) \qquad (2.8)$$

where $|i-j| = k$, $k = 0, 1, \ldots, n-1$, and $n$ is the number of gray scale levels [9]. Each entry in the difference matrix is the sum of the probabilities that the gray-level difference between two points separated by $\delta$ is $k$. The following properties can be derived from the difference matrix [Weszka et al., 1976].

Angular second moment:
$$ASM_\delta = \sum_{k=0}^{n-1} \left[ P_\delta(k) \right]^2 \qquad (2.9)$$

Angular second moment (also known as uniformity) is large when certain values are high and others are low. It is a measure of local homogeneity and the opposite of entropy.

Contrast:
$$Con_\delta = \sum_{k=0}^{n-1} k^2 P_\delta(k) \qquad (2.10)$$

Contrast is the moment of inertia about the origin, also known as the second moment of $P_\delta$.

Entropy:
$$Ent_\delta = -\sum_{k=0}^{n-1} P_\delta(k) \log P_\delta(k) \qquad (2.11)$$

Entropy is directly proportional to unpredictability. It is smallest when the $P_\delta(k)$ values are unequal and largest when the $P_\delta(k)$ values are equal.

Mean:
$$Mean_\delta = \sum_{k=0}^{n-1} k\, P_\delta(k) \qquad (2.12)$$

The mean is large when the $P_\delta(k)$ values are far from the origin and small when they are concentrated near the origin.
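A minimal MATLAB sketch of how GLCM-based features of this kind could be computed with the Image Processing Toolbox is shown below. It illustrates the equations above rather than reproducing the project's own feature-extraction code, and the variable name 'gray' and the parameter choices (offset, 64 gray levels) are assumptions.

```matlab
% Sketch: GLCM features (Eqs. 2.1-2.6) for a grayscale fundus image 'gray'.
glcm = graycomatrix(gray, 'Offset', [0 1], ...   % d = (0,1): horizontal neighbor
                    'NumLevels', 64, 'Symmetric', true);
Pd = glcm / sum(glcm(:));                        % normalize counts to probabilities (Eq. 2.2)

stats        = graycoprops(glcm, {'Contrast','Energy','Homogeneity'});
energy       = stats.Energy;                     % Eq. 2.3
contrast     = stats.Contrast;                   % Eq. 2.4
homogeneity  = stats.Homogeneity;                % Eq. 2.5 (the toolbox uses 1+|i-j| in the denominator)
entropy_glcm = -sum(Pd(Pd > 0) .* log(Pd(Pd > 0)));   % Eq. 2.6

% Difference statistics (Eqs. 2.8-2.12): sum the probabilities over |i-j| = k.
n = size(Pd, 1);
[I, J] = meshgrid(1:n, 1:n);
Pk = zeros(n, 1);
for k = 0:n-1
    Pk(k+1) = sum(Pd(abs(I - J) == k));
end
asm_diff  = sum(Pk.^2);                          % Eq. 2.9
con_diff  = sum(((0:n-1)').^2 .* Pk);            % Eq. 2.10
mean_diff = sum((0:n-1)' .* Pk);                 % Eq. 2.12
```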
2.5.1 Run length matrix

In the run length matrix $P_\theta(i,j)$, each cell contains the number of runs in which gray level $i$ appears successively $j$ times in the direction $\theta$; the variable $j$ is termed the run length. The resulting matrix characterizes gray-level runs by the gray tone, the length and the direction of the run. This method allows higher order statistical texture features to be extracted. As is common practice, run length matrices for $\theta$ = 0°, 45°, 90° and 135° were calculated and the following features obtained [Galloway et al., 1975].

Short run emphasis:
$$SRE = \frac{\sum_i \sum_j P_\theta(i,j)/j^2}{\sum_i \sum_j P_\theta(i,j)} \qquad (2.13)$$

Long run emphasis:
$$LRE = \frac{\sum_i \sum_j j^2\, P_\theta(i,j)}{\sum_i \sum_j P_\theta(i,j)} \qquad (2.14)$$

Gray level non-uniformity:
$$GLN = \frac{\sum_i \left( \sum_j P_\theta(i,j) \right)^2}{\sum_i \sum_j P_\theta(i,j)} \qquad (2.15)$$

Run length non-uniformity:
$$RLN = \frac{\sum_j \left( \sum_i P_\theta(i,j) \right)^2}{\sum_i \sum_j P_\theta(i,j)} \qquad (2.16)$$

Run percentage:
$$RP = \frac{\sum_i \sum_j P_\theta(i,j)}{A} \qquad (2.17)$$

where $A$ is the area of interest in the image. This last feature is the ratio between the total number of observed runs in the image and the total number of possible runs if all runs had a length of one.
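There is no single built-in MATLAB routine for these features, so the sketch below hand-rolls the horizontal case (θ = 0°) only. It follows Equations 2.13 to 2.17 under the assumption of an image quantized to a small number of gray levels; the variable names and the choice of 16 levels are illustrative and are not taken from the appendix code referred to in this report.

```matlab
% Sketch: horizontal (theta = 0 deg) gray-level run length matrix and features.
% 'gray' is a uint8 grayscale image; it is quantized to nLevels gray levels.
nLevels = 16;
q = floor(double(gray) / 256 * nLevels) + 1;            % gray levels 1..nLevels
[rows, cols] = size(q);
P = zeros(nLevels, cols);                               % a run can be at most 'cols' long

for r = 1:rows
    runLen = 1;
    for c = 2:cols+1
        if c <= cols && q(r,c) == q(r,c-1)
            runLen = runLen + 1;                        % extend the current run
        else
            P(q(r,c-1), runLen) = P(q(r,c-1), runLen) + 1;   % close the run
            runLen = 1;
        end
    end
end

nRuns = sum(P(:));
j   = 1:cols;
SRE = sum(sum(P, 1) ./ j.^2) / nRuns;                   % Eq. 2.13
LRE = sum(sum(P, 1) .* j.^2) / nRuns;                   % Eq. 2.14
GLN = sum(sum(P, 2).^2)      / nRuns;                   % Eq. 2.15
RLN = sum(sum(P, 1).^2)      / nRuns;                   % Eq. 2.16
RP  = nRuns / numel(q);                                 % Eq. 2.17, A = image area
```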
CHAPTER 3: PROJECT MANAGEMENT

3.1 Project Plan

From the start of the project, proper planning is very important, as it contributes to the success of the project. Time management plays a critical role in every project and leads to a well-graded result. I needed to juggle both my projects, exams and my full-time job, so having a good plan was essential; this was not only my own effort, as my project supervisor, Dr. Rajendra Acharya U, did his part by meeting up with me and constantly reminding me to keep to the planned schedule. To make the project possible, a Gantt chart (shown in appendix Q) was used to create the project plan and monitor the progress.

3.1.1 School facility

Access to the UniSIM library or any neighborhood library is a must. Most of my research and reading on the various types of applications for image processing and texture feature extraction came from the library shelves and online IEEE journals. MATLAB Central also helped improve my programming skills for this project.

3.1.2 Internet broadband

Access to the internet is very important, as most research can be done with a click of the mouse. It is especially essential because many datasheets are available on the World Wide Web, and a broadband connection speeds up the downloading of the information needed.

3.2 Project Task and Schedule
There were slight changes between the planned schedule and the actual dates. Such changes were unavoidable, as there were many other commitments such as assignments, exams, work and unforeseen circumstances. The project plan from the initial report is presented here, and discrepancies in the schedules are discussed. The project tasks are divided into nine sections:

1. Project consideration and selection process
2. Literature search
3. Preparing the initial report (TMA01) / project proposal
4. Digital image processing using MATLAB
5. Statistical analysis
6. Texture feature extraction and normalization
7. Classification
8. Preparing the final report (thesis)
9. Preparing for the oral presentation

- In Task 1, we were allowed to consider the available projects and select the one of interest, which took about 7 days. The project committee then allocated the proposed project, which took about 8 days to approve.
- Since the literature research is one of the most important steps in understanding the project, 31 days were used for Task 2. It focused mainly on library reference books and online IEEE journals.
- Preparation of the initial report partially depends on Task 2. Tasks 2 and 3 were carried out at the same time, and an additional 7 days were used to complete the proposal.
- Task 4 was scheduled for 4 weeks. This task was to study digital image processing and how to write the MATLAB code as well as the algorithms.
- We set out to complete the project work by 1st Oct 2011. The combined duration of Tasks 5, 6 and 7 is 150 days. These are the main tasks of the project and focus on texture feature extraction and normalization as well as classification.
- For Task 8, 84 days were allocated to the preparation of the final report, as it portrays the whole project work and carries 40% of the capstone project score. It runs from 1st Aug 11 to the submission date of 14th Nov 11 and is carried out concurrently with Task 7.
- For Task 9, preparation for the oral presentation starts 3 weeks before the report writing is finished, leaving 21 days for this task.

Table 3.1 shows the detailed project plan.
Computer Based Diagnosis of Glaucoma Using PCA: A Comparative Study

| Task | Description | Start Date | End Date | Duration (Days) |
|------|-------------|------------|----------|-----------------|
| 1 | Project consideration and selection process | 2-Jan-11 | 16-Jan-11 | 15 |
| 1.1 | Project consideration | 2-Jan-11 | 8-Jan-11 | 7 |
| 1.2 | Selection of the project | 9-Jan-11 | 16-Jan-11 | 8 |
| 2 | Literature search | 17-Jan-11 | 16-Feb-11 | 31 |
| 2.1 | Research on IEEE online journals, relevant reference books and former student thesis reports | 17-Jan-11 | 30-Jan-11 | 14 |
| 2.2 | Analyze and study relevant books and journals | 31-Jan-11 | 16-Feb-11 | 17 |
| 3 | Preparation of initial report (TMA01) | 17-Feb-11 | 7-Mar-11 | 20 |
| 4 | Research on project components | 9-Mar-11 | 8-Apr-11 | 29 |
| 4.1 | An introduction to image processing using MATLAB | 9-Mar-11 | 20-Mar-11 | 12 |
| 4.2 | Extraction and compiling of MATLAB codes | 21-Mar-11 | 8-Apr-11 | 17 |
| 5 | Digital image processing using MATLAB | 9-Apr-11 | 2-May-11 | 26 |
| 5.1 | Familiarization with MATLAB codes | 7-Apr-11 | 15-Apr-11 | 9 |
| 5.2 | Extraction and collection of texture features | 16-Apr-11 | 24-Apr-11 | 9 |
| 5.3 | Overview of texture features | 25-Apr-11 | 2-May-11 | 8 |
| 6 | Extraction of features using PCA | 3-May-11 | 19-Jun-11 | 48 |
| 6.1 | Classification with various data mining techniques | 3-May-11 | 25-May-11 | 23 |
| 6.2 | Statistical analysis | 26-May-11 | 12-Jun-11 | 18 |
| 6.3 | Glaucoma integrated index | 13-Jun-11 | 19-Jun-11 | 7 |
| 7 | Classification result and comparison | 20-Jun-11 | 31-Aug-11 | 72 |
| 8 | Preparation for final report | 1-Sep-11 | 31-Oct-11 | 61 |
| 8.1 | Writing skeleton of final report | 1-Sep-11 | 7-Sep-11 | 8 |
| 8.2 | Writing literature search | 8-Sep-11 | 15-Sep-11 | 8 |
| 8.3 | Writing introduction of report | 16-Sep-11 | 22-Sep-11 | 8 |
| 8.4 | Writing main body of report | 23-Sep-11 | 14-Oct-11 | 22 |
| 8.5 | Writing conclusion and further study | 15-Oct-11 | 20-Oct-11 | 5 |
| 8.6 | Finalizing and amendments of report | 21-Oct-11 | 31-Oct-11 | 10 |
| 9 | Preparation for oral presentation | 1-Nov-11 | 2-Dec-11 | 32 |
| 9.1 | Review the whole project for presentation | 1-Nov-11 | 27-Nov-11 | 27 |
| 9.2 | Prepare poster for presentation | 28-Nov-11 | 2-Dec-11 | 5 |

Resources: library resources (reference books and online IEEE journals), MATLAB software, web resources (IEEE journals, past theses, NI website, related reference books), and the school lab facility and a personal computer.

Table 3.1: Detailed project plan
CHAPTER 4: DESIGN AND ALGORITHM

4.1 Project Approach and Method

In this project, glaucoma is diagnosed using digital fundus images. The texture features extracted from the digital fundus images are analyzed using the Principal Component Analysis (PCA) method. Sixty fundus images (30 normal and 30 glaucoma images, from subjects in the age group of 20 to 70 years) were collected from the Kasturba Medical College, Manipal, India. The images are stored in bitmap format with an image size of 560 x 720 pixels.

A fundus camera is designed to take pictures of the inner surface of the eye. It is one of the most popular devices used for ophthalmoscopy and is used by doctors to diagnose eye diseases as well as to monitor their progression. Figure 4.1(a) shows the fundus camera (which consists of a microscope attached to a camera and a light source) and Figure 4.1(b) shows a fundus image.

Figure 4.1: (a) Fundus camera and (b) fundus image

Figure 4.2(a) shows a normal eye digital fundus image and Figure 4.2(b) shows a glaucomatous eye digital fundus image, each with a resolution of 560 x 720.
Figure 4.2: (a) Normal eye fundus image and (b) glaucomatous eye fundus image

The method employed for the analysis of the images involves the image processing procedure described in detail in the following sections. The block diagram of the proposed system for the detection of glaucoma is shown in Figure 1.1. The first step is pre-processing of the image data to remove the non-uniform background, which may be due to non-uniform illumination or variation in the pigment color of the eye. An adaptive histogram equalization operation was performed to solve this problem. This technique computes several histograms, each corresponding to a distinct section of the image, and uses them to redistribute the lightness values of the image. Subsequently, the images were converted to grayscale.

As a second step, various groups of texture features were extracted from each digital fundus image. The two groups of normal and glaucoma features were normalized. The p-value is then calculated from the normalized data using Student's t-test, which assesses whether the means of two groups are statistically different from each other. This p-value is calculated for each feature, and p-values below 0.05 are regarded as statistically significant; a lower p-value indicates that the two groups are more clearly separated. The ability to assess the difference between the data groups is important for this study, because we want to assess the capability of the extracted features to discriminate between glaucoma and normal data.
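As an illustration of this feature-significance check (a sketch under the assumption that the normal and glaucoma values of one feature are stored in two vectors; it is not the project's own script, and the variable names are hypothetical), Student's t-test can be run with MATLAB's Statistics Toolbox as follows:

```matlab
% Sketch: two-sample Student's t-test on a single texture feature.
% 'featNormal' and 'featGlaucoma' are hypothetical 30x1 vectors holding the
% value of one feature (e.g. GLCM energy) for the two image groups.
[h, p] = ttest2(featNormal, featGlaucoma);

if p < 0.05
    fprintf('Feature discriminates the groups (p = %.4f)\n', p);
else
    fprintf('Feature is not statistically significant (p = %.4f)\n', p);
end
```

The normalization mentioned above could, for example, be a z-score of each feature column (MATLAB's zscore), although the report does not spell out the exact scheme at this point.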
The PCA method is used to extract a reduced set of features from the images, and these are fed to various data mining techniques, namely K-Nearest Neighbor (K-NN), Naïve Bayes Classifier (NBC), Decision Tree (DeTr) and Probabilistic Neural Network (PNN), for comparison.

4.2 Different Texture Features Study

Texture features derived from the Gray Level Co-occurrence Matrix (entropy, energy, homogeneity, contrast, symmetry, correlation, moments, mean) and from the run-length matrix (short run emphasis, long run emphasis, run percentage, gray level non-uniformity and run length non-uniformity) were explained earlier. This section gives brief explanations of the Fractal Dimension, Local Binary Patterns, Laws' Texture Energy and the Fuzzy Gray Level Co-occurrence Matrix.

4.2.1 Fractal Dimension

The concept of the fractal was first introduced by Benoit Mandelbrot in 1975. In his view, fractal objects have three important properties: (1) self-similarity, (2) iterative formation and (3) fractional dimension. There are many instances of fractals in nature, such as the Cantor set, the Sierpinski triangle, the Koch snowflake, the Julia set and the fern fractal. There are also many specific definitions of fractal dimension; the most important theoretical ones are:

1) Hausdorff dimension
2) Packing dimension
3) Rényi dimensions
4) Box-counting dimension
5) Correlation dimension

The fractal dimension is estimated as the exponent of a power law. Fractal analysis appears in various application areas, and the most common interest is to determine the fractal dimension of the objects concerned. The fractal dimension is a real number used to characterize the geometric complexity of a set A. A bounded set A in Euclidean n-space is self-similar if A is the union of $N_r$ distinct (non-overlapping) copies of itself scaled up or down by a factor of $r$. The fractal dimension $D$ of A is given by the relations:

$$1 = N_r\, r^{D} \qquad (4.1)$$

$$D = \frac{\log(N_r)}{\log\!\left(\frac{1}{r}\right)} \qquad (4.2)$$

[Figure 4.3 shows the log(N) versus log(1/r) plots for a normal image and a glaucoma image.]

Figure 4.3: (a) Normal FD and (b) Glaucoma FD

The Fractal Dimension (FD) is extracted using the MATLAB program fractal.m, which generates the fractal data associated with the plots shown above. The MATLAB codes are listed in appendix N.
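A minimal box-counting sketch of Equation 4.2 is shown below. It assumes a binarized image and estimates D as the slope of log N(r) against log(1/r), which is one common way to obtain plots like Figure 4.3; it is not necessarily the exact procedure implemented in fractal.m, and the choice of box sizes is arbitrary.

```matlab
% Sketch: box-counting estimate of the fractal dimension (Eq. 4.2).
% 'bw' is a binary image (e.g. bw = imbinarize(gray)); box sizes are powers of 2.
boxSizes = 2.^(1:6);                 % box edge lengths in pixels
counts   = zeros(size(boxSizes));

for k = 1:numel(boxSizes)
    s = boxSizes(k);
    n = 0;
    for i = 1:s:size(bw,1)
        for j = 1:s:size(bw,2)
            block = bw(i:min(i+s-1,end), j:min(j+s-1,end));
            n = n + any(block(:));   % count boxes that contain any structure
        end
    end
    counts(k) = n;
end

% The slope of log(N) against log(1/r) gives the fractal dimension D.
coeffs = polyfit(log(1 ./ boxSizes), log(counts), 1);
D = coeffs(1);
```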
4.2.2 Local Binary Patterns

The Local Binary Pattern (LBP) is a very powerful texture feature based on the discrete occurrence histogram of the "uniform" patterns computed over an image or a region of an image. It is a gray-scale invariant texture measure that effectively combines structural and statistical approaches by computing an occurrence histogram. The local binary pattern detects microstructures (e.g. edges, lines, spots, flat areas) whose underlying distribution is estimated by the histogram. The LBP operator is found in many real-world applications because of its simple computation, which makes it possible to analyze images in real time, from texture segmentation to face recognition.

Image texture is characterized by two orthogonal properties: spatial structure (pattern) and contrast (the "amount" of local image texture), where the spatial pattern is affected by rotation and the contrast is affected by gray scale. The LBP operator provides a unified approach to the traditionally divergent statistical and structural models of texture analysis.

In this paper, the LBP feature vector is calculated as follows. A circular neighborhood of P points is chosen on the circumference of a circle of radius R around the center pixel. On the circular neighborhood, the grayscale intensity values at points that do not coincide exactly with pixel locations are estimated by interpolation. Figure 4.4 shows circularly symmetric neighbor sets for different values of P and R, and Figure 4.5 depicts a square neighborhood and a circular neighborhood.

Figure 4.4: Circularly symmetric neighbor sets for different P and R

Figure 4.5: Square neighborhood and circular neighborhood
Let $g_c$ be the intensity of the center pixel and $g_p$, $p = 0, \ldots, P-1$, the intensities of the P neighboring points. These P points are converted to a circular bit-stream of 0s and 1s according to whether the intensity of each point is less than or greater than the intensity of the center pixel. For each pixel, the uniform patterns are used for the further computation of the texture descriptor, while the non-uniform patterns are assigned to a single bin. The "uniform" fundamental patterns have a uniform circular structure that contains very few spatial transitions U (the number of spatial bitwise 0/1 transitions). The texture primitives (microstructures) detected by the uniform patterns of LBP are, for example, a bright spot (U = 0), a flat area or dark spot (U = 8), and edges of varying positive and negative curvature (U = 1 to 7). Figure 4.6 shows uniform and non-uniform patterns.

Figure 4.6: Uniform and non-uniform patterns

Therefore, a rotation invariant measure called $LBP_{P,R}$ is calculated using the uniformity measure U, based on the number of transitions in the neighborhood pattern. Only patterns with U ≤ 2 are assigned an LBP code:

$$LBP_{P,R} = \begin{cases} \sum_{p=0}^{P-1} s(g_p - g_c), & U(x) \le 2 \\[4pt] P+1, & \text{otherwise} \end{cases} \qquad (4.3)$$

where

$$s(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}$$

The center pixel is labeled as uniform if the number of bit-transitions in the circular bit-stream is less than or equal to 2. A look-up table is generally used to compute the bit-transitions and so reduce the computational complexity. Multi-scale analysis is done by combining the information provided by N operators and summing the operator-wise similarity scores into an aggregate similarity score. The LBP operator uses circles of various radii around the center pixels and constructs a separate LBP image for each scale. In this project, the energy and entropy of the LBP images constructed over different scales (R = 1, 2 and 3, with the corresponding point counts P being 8, 16 and 24 respectively) were used as feature descriptors. A total of nine LBP based features were extracted from the studied images. Table 4.1 shows the nine LBP based features for normal and glaucoma images with their respective p-values.

| Features | Normal | Glaucoma | p-value |
|----------|--------|----------|---------|
| LBP1 | 2.27 ± 0.363 | 1.93 ± 0.288 | < 0.0002 |
| LBP2 | 0.321 ± 0.131 | 0.183 ± 8.555E-02 | < 0.0001 |
| LBP3 | 0.401 ± 7.747E-02 | 0.326 ± 5.124E-02 | < 0.0001 |
| LBP4 | 2.86 ± 0.592 | 2.45 ± 0.390 | < 0.0026 |
| LBP5 | 0.539 ± 0.148 | 0.394 ± 9.297E-02 | < 0.0001 |
| LBP6 | 0.475 ± 1.087E-02 | 0.478 ± 8.898E-03 | < 0.26 |
| LBP7 | 3.53 ± 2.09 | 3.01 ± 0.503 | < 0.0066 |
| LBP8 | 0.604 ± 0.124 | 0.481 ± 7.595E-02 | < 0.0001 |
| LBP9 | 0.473 ± 1.931E-02 | 0.494 ± 8.663E-03 | < 0.0001 |

Table 4.1: LBP features for normal and glaucoma images with p-values

The Local Binary Patterns (LBP) are extracted using the MATLAB program lbpana.m, which calls another MATLAB function, lbp.m. These MATLAB codes are listed in appendix O.
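The sketch below illustrates the basic idea of Equation 4.3 for the simplest case (R = 1, P = 8), using the eight immediate neighbors instead of interpolated circular samples. It is only meant to make the thresholding and uniformity test concrete and is not the lbp.m routine cited above; the variable names and the bin layout are assumptions.

```matlab
% Sketch: rotation-invariant uniform LBP codes for R = 1, P = 8 (no interpolation).
img = double(gray);                               % grayscale image as double
P = 8;
% Offsets of the 8 neighbors, listed in circular order around the center.
dr = [-1 -1 -1  0  1  1  1  0];
dc = [-1  0  1  1  1  0 -1 -1];
[rows, cols] = size(img);
lbpImg = zeros(rows-2, cols-2);

for r = 2:rows-1
    for c = 2:cols-1
        gc   = img(r, c);
        bits = zeros(1, P);
        for p = 1:P
            bits(p) = img(r+dr(p), c+dc(p)) >= gc;   % s(g_p - g_c)
        end
        U = sum(bits ~= circshift(bits, 1));         % number of circular 0/1 transitions
        if U <= 2
            lbpImg(r-1, c-1) = sum(bits);            % uniform pattern: Eq. 4.3
        else
            lbpImg(r-1, c-1) = P + 1;                % non-uniform patterns share one bin
        end
    end
end

% Histogram-derived descriptors, as used in the project (energy and entropy).
h = histcounts(lbpImg(:), -0.5:1:(P+1.5)) / numel(lbpImg);
lbpEnergy  = sum(h.^2);
lbpEntropy = -sum(h(h > 0) .* log(h(h > 0)));
```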
  • 33. JAN2011/ENG/015 LEE YOU TAI DANNY (B0704498) 4.2.3 Laws’ Texture Energy Laws‟ texture energy measure is computed by applying small convolution kernels to a digital image and then performing a non-linear window operation. It‟s another approach of detecting various types of textures by using local masks. Laws marks represented image features without referring to the frequency domain. Texture energy measure is based on texture energy transforms applied to the image to estimate the energy within the pass region of filters. The texture description uses such as: 1) Average gray level 2) Edges 3) Spots All the masks were derived from one dimensional (1-D) vectors of three pixels length: 1) L3 = [1 2 1] averaging 2) E3 = [-1 0 1] first difference-edges 3) S3 = [-1 2 -1] second difference-spots Nine 2-D masks of size 3 x 3 can be generated by convolving any vertical 1-D vector with a horizontal one as shown in Equation (4.4). 1   1 0 1       2  1 * 0 1   2 0 2  (4.4)  1   1 0 1  E3   L3   L3 E 3 To extract texture information from an image I(i,j), the image was first convoluted with each 2-D mask. To filter the image I(i,j),with E3S3, the result is a Texture Image (TIE3E3) as shown in Equation (4.5). TIE3E3 = I(i,j)*E3S3 (4.5) 26 | P a g e
According to Laws' suggestion, all the convolution kernels used are zero-mean, with the exception of the L3L3 kernel. The image filtered with L3L3 (TI_L3L3) was used to normalize the contrast of the remaining texture images TI(i,j) obtained with the eight zero-sum masks (numbered 1 to 8). Equation (4.6) shows the normalization of a texture image:

    NormalizedTI_{mask} = TI(i,j)_{mask} / TI(i,j)_{L3L3}        (4.6)

The outputs (TI) from Laws' masks were then passed to Texture Energy Measurement (TEM) filters, Equation (4.7), which consist of a moving non-linear window average of absolute values:

    TEM(i,j) = \sum_{u=-3}^{3} \sum_{v=-3}^{3} | TI(i+u, j+v) |        (4.7)

The images were filtered using the eight masks and their energies were computed. The computed features quantify the changes in the levels, edges and spots in the studied image. Eight LTE-based features were extracted from each image using the above eight masks. Table 4.2 shows the eight LTE features of normal and glaucoma images with their respective p-values.

Features   Normal            Glaucoma          P-Value
LTE1       0.341 ± 0.220     0.740 ± 0.279     < 0.0001
LTE2       0.346 ± 0.306     0.692 ± 0.299     < 0.0001
LTE3       0.298 ± 0.285     0.726 ± 0.228     < 0.0001
LTE4       0.241 ± 0.269     0.731 ± 0.297     < 0.0001
LTE5       0.164 ± 0.255     0.682 ± 0.291     < 0.0001
LTE6       0.294 ± 0.407     0.702 ± 0.277     < 0.0001
LTE7       0.223 ± 0.244     0.689 ± 0.280     < 0.0001
LTE8       0.173 ± 0.251     0.675 ± 0.278     < 0.0001

Table 4.2: LTE features of normal and glaucoma images with p-values
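The pipeline described above (mask generation, filtering, contrast normalization and texture energy measurement) can be sketched in a few lines of MATLAB. This is a simplified illustration rather than the Appendix P code; the image file name, the 7 x 7 TEM window implied by Equation (4.7) and the use of the mean TEM value as the final feature are assumptions.

    % Simplified sketch of the Laws' texture energy pipeline (not the Appendix P code).
    L3 = [ 1 2  1];                                % averaging
    E3 = [-1 0  1];                                % edges
    S3 = [-1 2 -1];                                % spots
    vecs = {L3, E3, S3};
    img  = double(imread('fundus.png'));           % assumed file name
    if size(img,3) == 3, img = mean(img,3); end
    TI_L3L3 = conv2(img, L3' * L3, 'same');        % contrast-normalization image
    energies = zeros(1, 8);
    k = 0;
    for a = 1:3
        for b = 1:3
            if a == 1 && b == 1, continue; end     % skip the non-zero-sum L3L3 mask
            mask = vecs{a}' * vecs{b};             % 3x3 mask: vertical x horizontal vector
            TI   = conv2(img, mask, 'same') ./ (TI_L3L3 + eps);   % Equation (4.6)
            TEM  = conv2(abs(TI), ones(7), 'same');               % 7x7 moving sum of |TI|, cf. (4.7)
            k = k + 1;
            energies(k) = mean(TEM(:));            % one LTE feature per zero-sum mask (assumed)
        end
    end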
The Laws' Texture Energy (LTE) features are extracted using the MATLAB program MAIN.m, which calls four other MATLAB functions: lawsanalysis.m, lawsmask.m, lawsfilter.m and lawsimg.m. These MATLAB codes are listed in Appendix P.

4.2.4 Fuzzy Gray Level Co-occurrence Matrix

The Gray Level Co-occurrence Matrix (GLCM) is a matrix that represents second-order texture moments; it describes the frequency with which one gray level appears in a specified spatial linear relationship with another gray level within the neighborhood of interest. The Fuzzy Gray Level Co-occurrence Matrix (FGLCM) of an image I of size L x L is given by:

    F_d(m, n) = [ f_{mn} ]_{L × L}        (4.8)

where f_{mn} corresponds to the frequency of occurrence of a pixel with gray value 'around m' and a different pixel with gray value 'around n', the two pixels being separated by a relative distance d at a relative orientation θ. It is represented as F = f(I, d, θ). In this work, θ is quantized in four directions (0°, 45°, 90°, 135°) for a distance d = 20, and a rotation-invariant co-occurrence matrix F is used to compute the texture feature values. Equations (4.9) and (4.10) define the FGLCM-based energy and entropy features:

    Energy:  E_{fuzzy} = \sum_m \sum_n [ F_d(m, n) ]^2        (4.9)

    Entropy: H_{fuzzy} = - \sum_m \sum_n F_d(m, n) \ln F_d(m, n)        (4.10)

The homogeneity feature measures the similarity between two pixels that are (Δm, Δn) apart, and the contrast feature captures the local variation between those two pixels. The degree of disorder and the denseness in an image are measured by the energy and entropy features. The entropy feature attains its maximum value when all elements of the co-occurrence matrix are the same.
The FGLCM and run-length based features of normal and glaucoma images, with their respective p-values, are shown in Table 4.3.

Features             Normal                      Glaucoma                    P-Value
Homogeneity          0.1965 ± 0.01860            0.2239 ± 0.0336             < 0.0006
Energy               7.354E+03 ± 5.13E+02        8.402E+03 ± 5.29E+02        < 0.0005
Entropy              3.6614 ± 0.056417           3.543 ± 0.046               < 0.0002
Contrast             15.2557 ± 0.5096            13.8617 ± 0.5728            < 0.0014
Symmetry             1.000 ± 3.608E-04           1.000 ± 3.191E-04           < 0.079
Correlation          8.704E-03 ± 6.622E-04       8.051E-03 ± 4.727E-04       < 0.0001
Moment1              -2.621E-02 ± 1.170E-02      -1.81E-02 ± 2.940E-02       < 0.016
Moment2              1.2162E+03 ± 1.054E+03      4.63842E+02 ± 1.156E+02     < 0.0014
Moment3              -1.209E+03 ± 4.04E+03       -4.2751E+02 ± 1.528E+03     < 0.016
Moment4              7.372E+05 ± 6.65E+05        1.253150E+06 ± 1.211E+06    < 0.0014
Angular2ndMoment     1.238E+10 ± 9.827E+08       1.324E+10 ± 2.335E+09       < 0.067
Contrast             1.943E+06 ± 1.399E+06       9.061E+05 ± 1.001E+06       < 0.0017
Mean                 8.812E+03 ± 5.574E+03       4.713E+03 ± 3.988E+03       < 0.0018
Entropy              -1.808E+03 ± 98.3           -1.848E+03 ± 228            < 0.39
ShortRunEmphasis     0.855 ± 1.931E-02           0.834 ± 2.857E-02           < 0.0012
LongRunEmphasis      6.21 ± 5.17                 10.2 ± 4.38                 < 0.0019
RunPer               2.19 ± 0.210                2.06 ± 0.169                < 0.0091
GrayLvlNonUni        4.0523E+04 ± 5.960E+03      3.6804E+04 ± 4.429E+03      < 0.0011
RunLengthNonUni      6.8813E+02 ± 53.99          849 ± 183                   < 0.13

Table 4.3: FGLCM features of normal and glaucoma images with p-values

The Fuzzy Gray Level Co-occurrence Matrix (FGLCM) features are extracted using the MATLAB program Main.m, which calls six other MATLAB functions: fuzzycoocc.m, jtxtAnalyCCMFea.m, jfDStats.m, jrLengthMat.m, jrRLength.m and jcoMatrix.m. These MATLAB codes are listed in Appendix Q.
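As a simplified illustration of Equations (4.8)-(4.10), the sketch below builds a crisp (non-fuzzy) co-occurrence matrix for a single direction and then computes its energy and entropy. The fuzzy membership weighting of the Appendix Q code is omitted, and the image file name, the number of gray levels and the 0-degree direction are assumptions.

    % Crisp co-occurrence sketch (fuzzy membership of the FGLCM omitted).
    img = double(imread('fundus.png'));             % assumed file name
    if size(img,3) == 3, img = mean(img,3); end
    L = 64;                                          % assumed number of gray levels
    q = floor(img / 256 * L) + 1;                    % quantize an 8-bit image to 1..L
    q(q > L) = L;
    d = 20;                                          % pixel-pair distance used in the text
    F = zeros(L, L);                                 % co-occurrence matrix at 0 degrees
    [r, c] = size(q);
    for i = 1:r
        for j = 1:c-d
            F(q(i,j), q(i,j+d)) = F(q(i,j), q(i,j+d)) + 1;
        end
    end
    F = F / sum(F(:));                               % normalize to joint frequencies
    energyF  = sum(F(:).^2);                         % Equation (4.9)
    nz       = F(F > 0);
    entropyF = -sum(nz .* log(nz));                  % Equation (4.10)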
CHAPTER 5: CLASSIFICATIONS AND RESULTS

5.1 Principal Component Analysis (PCA)

In this project, the normalized features were processed using the Principal Component Analysis (PCA) method and the resulting components were fed to the PNN classifier. Principal Component Analysis is a mathematical procedure that transforms a number of possibly correlated variables into a smaller number of uncorrelated variables called principal components. PCA is a powerful tool for analyzing data because it is a simple, non-parametric method of extracting relevant information from confusing data sets. Another important advantage of PCA is that very little information is lost when patterns are identified in the compressed data (i.e. after the number of dimensions has been reduced). The number of principal components is less than or equal to the number of original variables; in this work, the 33-dimensional feature set was reduced to 12 dimensions.

For PCA to work properly, the mean value is first subtracted from each data dimension, which produces a data set whose mean is zero. The eigenvalues and eigenvectors are then calculated from the covariance matrix; these are important because they carry the useful information about the data. After forming a feature vector (a matrix whose rows are the selected eigenvectors), it is multiplied with the mean-adjusted data:

    FinalData = RowFeatureVector × RowDataAdjust

In order to get the original data back, the following equations are used (the inverse equals the transpose because the eigenvectors are orthonormal):

    RowDataAdjust = RowFeatureVector^{-1} × FinalData
    RowDataAdjust = RowFeatureVector^{T} × FinalData
    RowOriginalData = (RowFeatureVector^{T} × FinalData) + OriginalMean

These equations still give the correct transform even when not all of the eigenvectors are retained. The PCA components are computed using the MATLAB program Test1.m, which generates the 12 selected PCA components. The MATLAB code is listed in Appendix R.
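The steps above (mean removal, eigen-decomposition of the covariance matrix, projection onto the leading eigenvectors and approximate reconstruction) can be sketched in MATLAB as follows. This is an illustration rather than the Appendix R Test1.m program, and the random placeholder matrix stands in for the actual 60 x 33 feature matrix.

    % Minimal PCA sketch (not the Appendix R code): rows = images, columns = features.
    X  = rand(60, 33);                    % placeholder for the 60 x 33 feature matrix
    mu = mean(X, 1);                      % feature-wise mean
    Xc = X - repmat(mu, size(X,1), 1);    % RowDataAdjust: zero-mean data
    C  = cov(Xc);                         % covariance matrix of the features
    [V, D] = eig(C);                      % eigenvectors (columns of V) and eigenvalues
    [~, order] = sort(diag(D), 'descend');
    V = V(:, order);                      % sort eigenvectors by decreasing eigenvalue
    k = 12;                               % number of principal components retained
    RowFeatureVector = V(:, 1:k)';        % k x 33 matrix of leading eigenvectors
    FinalData = RowFeatureVector * Xc';   % k x 60 matrix of projected (PCA) features
    % Approximate reconstruction of the original data from the k components:
    Xrec = (RowFeatureVector' * FinalData)' + repmat(mu, size(X,1), 1);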
5.2 Classifier Used

A Probabilistic Neural Network (PNN) is an implementation of a statistical algorithm called kernel discriminant analysis (the kernels are also called "Parzen windows") in which the operations are organized into a multilayered feed-forward network with four layers:

• Input layer
• Pattern layer
• Summation layer
• Output layer

Figure 5.1 shows the architecture of a typical PNN. The PNN architecture is composed of several sub-networks, or neurons, organized in successive layers, each of which acts as a normalized RBF unit (a "kernel"). These Parzen windows are usually probability density functions, such as the Gaussian function. The input nodes hold the set of measurements and do not perform any computation; they simply distribute the input to the neurons in the pattern layer. The hidden-to-output weights are usually 1 or 0. Each pattern-layer neuron computes its output after receiving a pattern from the input layer, applying a kernel, whose width is set by a smoothing parameter, to the distance between the input vector and its stored training (neuron) vector. The summation-layer neurons estimate the likelihood of the pattern belonging to each class by summing and averaging the outputs of all pattern neurons that belong to the same class. The prior probabilities for each class are taken to be the same, and the loss figures are added to the probability density estimates as weights.
The decision-layer unit classifies the pattern in accordance with Bayes's decision rule, based on the outputs of all the summation-layer neurons.

Figure 5.1: Architecture of PNN

Data enters at the inputs and passes through the hidden layers, where the actual information is processed, and the result is available at the output layer. A feed-forward architecture is usually used, in which there is no feedback between the layers. A supervised learning algorithm was used for training the neural network: initially the network weights are chosen randomly and are then gradually modified during training in order to obtain the desired outputs. The difference between the actual output and the desired output is calculated for each input at every iteration, and these errors are used to change the weights proportionately. This process continues until the preset mean square error is reached (0.001 in this work). This algorithm, which reduces the errors and drives the network towards the correct class by adjusting the weights, is known as back-propagation.
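To make the pattern/summation/decision structure concrete, a bare-bones Parzen-window (PNN) classifier is sketched below. It is an illustration rather than the Appendix S aatrain.m/aatest.m programs; the Gaussian kernel, equal class priors and the smoothing parameter supplied by the caller are assumptions.

    % Bare-bones PNN sketch (illustration only), saved as pnn_classify.m:
    % one Gaussian Parzen window per training sample (pattern layer),
    % class-wise averaging (summation layer), arg-max decision (output layer).
    function yhat = pnn_classify(Xtrain, ytrain, Xtest, sigma)
        classes = unique(ytrain);
        yhat = zeros(size(Xtest, 1), 1);
        for i = 1:size(Xtest, 1)
            scores = zeros(numel(classes), 1);
            for c = 1:numel(classes)
                Xc = Xtrain(ytrain == classes(c), :);              % patterns of class c
                d2 = sum((Xc - repmat(Xtest(i, :), size(Xc, 1), 1)).^2, 2);
                scores(c) = mean(exp(-d2 ./ (2 * sigma^2)));       % summation-layer output
            end
            [~, idx] = max(scores);                                % Bayes rule, equal priors
            yhat(i) = classes(idx);
        end
    end

It would be called, for example, as yhat = pnn_classify(trainFeat, trainLabel, testFeat, 0.1), where the smoothing value 0.1 is purely illustrative.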
5.3 Results

In this study, 60 fundus images, consisting of 30 normal fundus images and 30 glaucoma fundus images from subjects in the age group of 20 to 70 years, were used. From the 33 features retrieved with the processing techniques discussed above, 12 features were computed using Principal Component Analysis (PCA). A Student's t-test was conducted on the two groups to check whether the mean value of each texture feature was significantly different between the two classes. It was found that all 12 features tested were clinically significant, with p-values less than 0.05. The mean and standard deviation values of the computed features are shown in Table 5.1. From the table we can see that all PCA features are clinically significant, i.e. they have low p-values. The p-value is the probability of obtaining a result at least as extreme as the one observed, assuming that the null hypothesis is true.

Features   Normal               Glaucoma             P-Value
PCA1       0.400 ± 7.311E-02    0.320 ± 4.731E-02    < 0.0001
PCA2       0.388 ± 7.855E-02    0.317 ± 4.760E-02    < 0.0001
PCA3       0.382 ± 7.566E-02    0.312 ± 4.835E-02    < 0.0001
PCA4       0.373 ± 7.315E-02    0.306 ± 4.708E-02    < 0.0001
PCA5       0.357 ± 6.772E-02    0.290 ± 4.864E-02    < 0.0001
PCA6       0.345 ± 6.820E-02    0.257 ± 5.815E-02    < 0.0001
PCA7       0.334 ± 6.982E-02    0.237 ± 6.198E-02    < 0.0001
PCA8       0.323 ± 6.901E-02    0.223 ± 5.513E-02    < 0.0001
PCA9       0.308 ± 6.192E-02    0.212 ± 5.441E-02    < 0.0001
PCA10      0.278 ± 4.804E-02    0.193 ± 5.278E-02    < 0.0001
PCA11      0.265 ± 4.561E-02    0.180 ± 5.825E-02    < 0.0001
PCA12      0.253 ± 4.543E-02    0.157 ± 6.248E-02    < 0.0001

Table 5.1: The 12 PCA features of normal and glaucoma images and their p-values

In order to test the performance of the classifier, the three-fold stratified cross-validation method was chosen. The advantage of this method is that all observations are used for both training and testing (validation), and each observation is used for validation exactly once. Figure 5.2 shows the three-fold stratified cross-validation procedure. Two parts of the data set were used for training and the remaining part was used to test the performance (i.e. 21 images were used for training and 9 images were used for testing each time). This process was repeated three times, using a different portion of the data as the test set each time.

Figure 5.2: Procedure of three-fold stratified cross validation
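A minimal sketch of how such a three-fold stratified split can be generated is shown below; it assumes a labels vector with one entry per image (0 for normal, 1 for glaucoma) and is not the code actually used in this work.

    % Sketch of a three-fold stratified split: each class is divided into three
    % roughly equal parts so every fold keeps the normal/glaucoma balance.
    % Assumption: labels is a 60 x 1 vector containing 30 zeros and 30 ones.
    K = 3;
    folds = zeros(size(labels));
    for c = [0 1]
        idx = find(labels == c);
        idx = idx(randperm(numel(idx)));                 % shuffle within the class
        for k = 1:K
            folds(idx(k:K:end)) = k;                     % assign every K-th sample to fold k
        end
    end
    for k = 1:K
        testIdx  = (folds == k);
        trainIdx = ~testIdx;
        % train the PNN on trainIdx, evaluate on testIdx, then average the metrics
    end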
After processing the three different test folds, TP (true positive), TN (true negative), FP (false positive), FN (false negative), accuracy, sensitivity, specificity and positive predictive accuracy were obtained by averaging the values computed in the three iterations. Table 5.2 shows the PNN classification results.

           TN       FN       TP       FP       ACC       SENSI      SPECI      PPV
           9.0000   2.0000   7.0000   0.0000   76.1905   100.0000   81.8182    100.0000
PNN        9.0000   0.0000   9.0000   0.0000   85.7143   100.0000   100.0000   100.0000
           8.0000   0.0000   9.0000   1.0000   80.9524   90.0000    100.0000   90.0000
AVERAGE                                        80.9524   96.6667    93.9394    96.6667

Table 5.2: PNN classification result

True Negative (TN) is the number of normal images classified as normal. False Negative (FN) is the number of glaucomatous images classified as normal. True Positive (TP) is the number of glaucoma images classified as glaucoma. False Positive (FP) is the number of normal images classified as glaucomatous. In our work, the PNN was able to classify the images with an accuracy of 80.9%, sensitivity of 96.7%, specificity of 93.9% and positive predictive accuracy (PPV) of 96.7%, which is clinically significant. Sensitivity is the probability that an abnormal case is classified as abnormal:
    Sensitivity = TP / (TP + FN) × 100%        (5.1)

Specificity is defined as the probability that a normal case is identified as normal:

    Specificity = TN / (TN + FP) × 100%        (5.2)

The positive predictive accuracy (PPV) reflects how many of the cases predicted as abnormal are truly abnormal:

    PPV = TP / (TP + FP) × 100%        (5.3)

The PNN classifier is orders of magnitude faster to train than back-propagation and is able to converge to an optimal classifier as the size of the training set increases. Its most important characteristic is that training samples can be added or removed without extensive retraining. The Probabilistic Neural Network (PNN) is implemented using the MATLAB program aatrain.m, which trains on the data, and the MATLAB program aatest.m, which is used to test the input data. The MATLAB codes are listed in Appendix S.
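Equations (5.1)-(5.3), together with the overall accuracy (assumed here to follow the usual (TP + TN) / total definition, which the text does not state explicitly), can be evaluated directly from the confusion counts of one test fold:

    % Equations (5.1)-(5.3) and the overall accuracy from one fold's confusion
    % counts; the four counts below are illustrative placeholders only.
    TP = 9; TN = 8; FP = 1; FN = 2;
    sensitivity = TP / (TP + FN) * 100;                   % Equation (5.1)
    specificity = TN / (TN + FP) * 100;                   % Equation (5.2)
    ppv         = TP / (TP + FP) * 100;                   % Equation (5.3)
    accuracy    = (TP + TN) / (TP + TN + FP + FN) * 100;  % assumed standard definition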
5.4 Glaucoma Integrated Index

An integrated index provides a better way of tracking how much each of the 12 features varies from its respective normal value in order to make a diagnosis. It is more useful to combine the features into an integrated index in such a way that the value of this index is significantly different between normal and glaucoma subjects. Integrated indices have been used in biomedical applications, for example by Ghista 2004, 2009a, 2009b and Acharya et al. 2011a, 2011b. We therefore formulated a Glaucoma Integrated Index (GII) by combining the features so that its value is distinctly different for normal and glaucoma subjects. The proposed mathematical formulation of the GII is:

    GII = (200 × PCA12^2 × PCA1) / 100        (5.4)

Although all the parameters in Table 5.1 were clinically significant, the ranges of PCA1 and PCA12 for the normal and glaucoma classes were wide compared with the rest of the features. Hence, we selected PCA1 and PCA12 for the GII formula. In addition, the combination of PCA1 and PCA12 yielded the best separation of the two classes compared with the other combinations using PCA2 to PCA11. The computed GII values for normal and glaucoma subjects are shown in Table 5.3.

Index   Normal         Glaucoma       P-Value
GII     4.79 ± 0.158   4.30 ± 0.117   < 0.0001

Table 5.3: The GII values for normal and glaucoma subjects

Figure 5.3 shows the distribution plot of this integrated index for normal and glaucoma subjects. The distinctive difference of this index between the two classes indicates that it can be effectively employed to differentiate and diagnose normal and glaucoma subjects.

Figure 5.3: The distribution plot of the GII for normal and glaucoma subjects
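Once a subject's GII value has been obtained from Equation (5.4), the diagnosis step reduces to a simple threshold test. The sketch below illustrates this using the 4.417 cut-off quoted in Section 6.1; the variable gii and its placeholder value are assumptions.

    % Threshold-based interpretation of the GII; 4.417 is the cut-off value
    % given in Section 6.1, and gii holds a subject's (placeholder) index value.
    gii = 4.5;
    if gii < 4.417
        fprintf('GII = %.3f -> glaucoma suspected\n', gii);
    else
        fprintf('GII = %.3f -> normal\n', gii);
    end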
CHAPTER 6: DISCUSSION, CONCLUSION AND RECOMMENDATION

6.1 Discussion

This work proposes a Principal Component Analysis (PCA) based method for automatic glaucoma identification using texture features extracted from fundus images. The Student's t-test showed that our features are clinically significant for detecting glaucoma. Several methods of treatment are available to impede the progression of glaucoma; for this reason, it is important to diagnose glaucoma as early as possible in order to minimize the damage to the optic nerve.

In earlier work, fuzzy sets were used for medical diagnosis, and the six fuzzy classification algorithms achieved less than 76% accuracy in identifying the correct class. An ANN based on refined visual-field input data achieved a high diagnostic performance, with a sensitivity of 93% at a specificity level of 94% and an area under the receiver operating characteristic (ROC) curve of 0.984. The Glaucoma Hemifield test attained a sensitivity of 92% at 91% specificity. Heidelberg retina tomography (HRT) has also been used with neural networks to differentiate between glaucomatous and non-glaucomatous eyes: the areas under the ROC curves were 0.938 for the SVM (support vector machine), 0.945 for the Gaussian-kernel SVM, 0.941 for the MLP (multi-layer perceptron) and 0.906 for the current LDF (linear discriminant function), while the best previously proposed LDF achieved 0.890.

In our work, we have extracted 12 texture-based features using image processing techniques and the PCA method. The significant features were selected (for lower p-values) using the Student's t-test. The capability of the features for good diagnosis was studied using the PNN classifier. The low p-values obtained using the t-test and the high accuracy values obtained by using these features in the classifier indicate their usefulness. However, for a medical specialist, the meaning of these features might not be apparent.
In addition, interpretation of a diagnosis based on these features is complicated by the fact that some features have larger values for glaucoma subjects (than for normal subjects) while other features have smaller values for glaucoma subjects. Hence, for more comprehensibility and transparency, we have developed an integrated index using these features. The Glaucoma Integrated Index has demonstrated good discriminative power between normal and glaucoma subjects. The interpretation of this index is simple: a GII value below 4.417 indicates the presence of glaucoma. The simplicity, transparency and objectivity of the GII make it user-friendly and enable it to become a valuable addition to glaucoma analysis software and hardware. The cost of adding the GII feature to the software of existing diagnosis systems is low, because the calculation of the index value involves only digital signal processing tools, and this type of processing is cheap and readily available.

The results are superior to some of the existing methods because of the higher percentage of correct classification. However, the accuracy can be further improved by using more parameters or by increasing the number of training and testing images. The percentage of correct classification also depends on the environmental lighting conditions. This method can be used as an adjunct tool for physicians to cross-check their diagnosis.

6.2 Conclusion

A computer-based system for the detection of glaucomatous eyes from fundus images has been developed using image processing techniques, Principal Component Analysis (PCA) and a Probabilistic Neural Network classifier. The features are computed automatically, which gives a high degree of accuracy. These features were tested using the Student's t-test, which showed that all 12 PCA features are clinically significant. The proposed system can identify the presence of glaucoma with an accuracy of 80.9%. Another contribution of this work is the combination of the features into an integrated
index in such a way that its value is distinctly different for normal and glaucoma subjects. Moreover, the results of the system can be further improved by using a more diverse set of images. This system can be used as an adjunct tool by physicians to cross-check their diagnosis; early detection remains important to prevent the progression of the disease.

6.3 Recommendation

In this project, 33 texture features were extracted from digital fundus images and 12 principal components were computed with the Principal Component Analysis (PCA) method. These features were tested by means of the t-test, which showed that the 12 PCA features are all clinically significant. The 12 features were then fed into the PNN classifier to generate the classification result. The development of the Glaucoma Integrated Index demonstrated good discriminative power between normal and glaucoma subjects. Our system produces encouraging results, but it can be further improved by using more diverse images and evaluating better features.
PART II: CRITICAL REVIEW AND REFLECTION

Throughout this work, I spent a large amount of time on literature research and on MATLAB programming, covering digital image processing, texture feature extraction code, PNN classification and so on. Among all the tasks, project management and time management skills were the most important part of the entire project. From the start of the project, I knew it would not be easy because of the tight schedule and my limited knowledge of MATLAB. However, upon completion of the project, I was able to better understand my strengths and weaknesses.

A lot of research and reading was done using reference books from the library, IEEE publications and other published journals to explore the project topic in depth. Understanding all the texture features would not have been possible without my supervisor's guidance and explanations. Skills such as project management and information research were greatly enhanced over the course of the project; I would say that project management skill was the most important factor in making the project a successful one.

Technical report writing is one of the weak points that I needed to overcome. The "Project Report Writing and Poster Design Briefing" provided by UniSIM was really helpful, and my report writing skills gradually improved as the project progressed. Since MATLAB is the backbone of this project, I had to understand how the features are extracted from the images and how their algorithms are implemented in MATLAB code. I gained a lot of experience from completing this project, including problem-solving, MATLAB programming, analytical, project management and, last but not least, technical report writing skills.
REFERENCES

1. http://paper.ijcsns.org/07_book/201006/20100616.pdf
2. http://paper.ijcsns.org/07_book/201006/20100616.pdf
3. http://mmlab.ie.cuhk.edu.hk/2000/IP00_Texture.pdf
4. http://zernike.uwinnipeg.ca/~s_liao/pdf/thesis.pdf
5. http://www.iro.umontreal.ca/~pift6266/A06/cours/pca.pdf
6. http://www.ee.oulu.fi/mvg/files/File/ICCV2009_tutorial_Matti_guoying-Local%20Texture%20Descriptors%20in%20Computer%20Vision.pdf
7. http://www.macs.hw.ac.uk/bmvc2006/papers/052.pdf
8. http://www.szabist.edu.pk/ncet2004/docs/session%20vii%20paper%20no%201%20(p%20137-140).pdf
9. http://www.psi.toronto.edu/~vincent/research/presentations/PNN.pdf
10. http://www.cs.otago.ac.nz/cosc453/student_tutorials/principal_components.pdf
11. http://support.sas.com/publishing/pubcat/chaps/55129.pdf
12. http://courses.cs.tamu.edu/rgutier/cpsc636_s10/specht1990pnn.pdf
13. http://sci2s.ugr.es/keel/pdf/specific/articulo/dkg00.pdf
14. http://www.public.asu.edu/~ltang9/papers/ency-cross-validation.pdf
15. Ooi, E.H., Ng, E.Y.K., Purslow, C., Acharya, R., "Variations in the corneal surface temperature with contact lens wear", Journal of Engineering in Medicine, 221, 2007, 337-349
16. Tan, T.G., Acharya, U.R., Ng, E.Y.K., "Automated identification of eye diseases using higher order spectra", Journal of Mechanics and Medicine in Biology, 8(1), 2008, 121-136
17. Acharya, U.R., Ng, E.Y.K., Min, L.C., Chee, C., Gupta, M., Suri, J.S., "Automatic identification of anterior segment eye abnormalities in optical images", Chapter 4, Image Modeling of Human Eye (Book), Artech House, USA, May 2008a
18. Acharya, U.R., Ng, E.Y.K., Suri, J.S., "Imaging Systems of Human Eye: A Review", Journal of Medical Systems, USA, 2008b (In Press)
19. Galassi, F., Giambene, B., Corvi, A., Falaschi, G., "Evaluation of ocular surface temperature and retrobulbar haemodynamics by infrared thermography and colour Doppler imaging in patients with glaucoma", British Journal of Ophthalmology, 91(7), 2007, 878-881
20. Morgan, P.B., Soh, M.P., Efron, N., Tullo, A.B., "Potential Applications of Ocular Thermography", Optometry and Vision Science, 70(7), 1993, 568-576
21. Tan, T.G., Acharya, U.R., Ng, E.Y.K., "Automated identification of eye diseases using higher order spectra", Journal of Mechanics and Medicine in Biology, 8(1), 2008, 121-136
22. http://www.mathworks.com/matlabcentral/fileexchange/17482-gray-level-run-length-matrix-toolbox
23. http://www.singapore-glaucoma.org/index.php?option=com_content&view=article&id=112:savh-new-referrals-to-lvc-27-of-cases-are-glaucoma&catid=3:newsflash&Itemid=37
24. http://grass.fbk.eu/gdp/html_grass64/r.texture.html
25. http://en.wikipedia.org/wiki/Kurtosis
26. http://en.wikipedia.org/wiki/Fractal_dimension
27. http://www.comp.hkbu.edu.hk/~icpr06/tutorials/Pietikainen.html
28. http://www.ccs3.lanl.gov/~kelly/ZTRANSITION/notebook/laws.shtml
29. http://www.google.com.sg/url?sa=t&rct=j&q=fuzzy%20gray%20level%20co-occurrence%20matrix&source=web&cd=1&ved=0CCIQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.150.5807%26rep%3Drep1%26type%3Dpdf&ei=FCGuTpShEMiJrAf7i53ZDA&usg=AFQjCNGkegRckD_89kQRrJHXl1bknj0CiQ
30. http://my.fit.edu/~dpetruss/papers%20to%20read/Gray%20Level%20Co-Occurrence%20Matrix%20Computation%20Based%20On%20Haar%20Wavelet.pdf
31. http://www.mathworks.com/help/toolbox/images/ref/graycomatrix.html
32. http://www.argentumsolutions.com/tutorials/neural_tutorialpg8.html
33. http://herselfsai.com/2007/03/probabilistic-neural-networks.html
APPENDIXES

Appendix A: Moments and FD features with their normalized values for normal images

Moment1  (norm.)   Moment2  (norm.)   Moment3  (norm.)   Moment4  (norm.)   FD      (norm.)
39.5686  0.486781  5.45E+03 3.79E-01  1.42E+05 1.35E-01  9.53E+06 7.79E-02  2.1416  0.988051
40.8667  0.50275   4.12E+03 2.87E-01  1.39E+05 1.32E-01  8.42E+06 6.88E-02  2.1103  0.97361
39.8039  0.489675  3.97E+03 2.76E-01  1.21E+05 1.15E-01  8.74E+06 7.14E-02  2.103   0.970242
40.2196  0.494789  3.60E+03 2.51E-01  1.38E+05 1.31E-01  8.33E+06 6.81E-02  2.1215  0.978777
38.8314  0.477711  4.02E+03 2.80E-01  1.28E+05 1.22E-01  8.40E+06 6.87E-02  2.1377  0.986251
81.2863  1         5.59E+03 3.89E-01  2.77E+05 2.64E-01  1.64E+07 1.34E-01  2.0891  0.963829
79.5176  0.978241  1.02E+04 7.12E-01  1.05E+06 1.00E+00  1.22E+08 1.00E+00  2.1005  0.969089
39.6157  0.48736   3.10E+03 2.16E-01  1.32E+05 1.26E-01  8.20E+06 6.71E-02  2.0926  0.965444
5.3412   0.065708  1.26E+03 8.75E-02  5.21E+03 4.96E-03  4.83E+05 3.95E-03  2.0694  0.95474
40.251   0.495176  7.38E+03 5.14E-01  2.58E+05 2.46E-01  2.95E+07 2.41E-01  2.1258  0.980761
10.749   0.132236  1.49E+03 1.04E-01  8.76E+03 8.33E-03  5.23E+05 4.28E-03  2.075   0.957324
14.8431  0.182603  2.45E+03 1.70E-01  2.21E+04 2.10E-02  2.54E+06 2.07E-02  2.0848  0.961845
13.698   0.168515  1.16E+03 8.04E-02  49.9725  4.76E-05  3.95E+05 3.23E-03  2.0866  0.962676
7.9882   0.098272  1.18E+03 8.20E-02  5.62E+03 5.35E-03  2.57E+05 2.10E-03  2.0973  0.967612
22.9373  0.282179  3.68E+03 2.56E-01  6.01E+03 5.72E-03  2.03E+06 1.66E-02  2.1123  0.974533
14.8941  0.18323   2.60E+03 1.81E-01  2.41E+04 2.29E-02  1.24E+06 1.02E-02  2.1054  0.971349
26.7686  0.329313  4.51E+03 3.14E-01  1.61E+04 1.53E-02  4.59E+06 3.76E-02  2.1258  0.980761
40.251   0.495176  7.38E+03 5.14E-01  2.58E+05 2.46E-01  2.95E+07 2.41E-01  2.1258  0.980761
41.8196  0.514473  6.91E+03 4.81E-01  1.27E+05 1.21E-01  1.52E+07 1.24E-01  2.1246  0.980208
14.3137  0.17609   6.74E+03 4.69E-01  2.11E+03 2.01E-03  2.48E+07 2.03E-01  2.1073  0.972226
65.8588  0.810208  7.18E+03 5.00E-01  3.05E+05 2.90E-01  2.47E+07 2.02E-01  2.1175  0.976932
66.3333  0.816045  7.15E+03 4.97E-01  3.06E+05 2.91E-01  2.48E+07 2.02E-01  2.1159  0.976194
64.698   0.795927  6.71E+03 4.67E-01  2.96E+05 2.82E-01  2.49E+07 2.03E-01  2.1074  0.972272
67.8078  0.834185  7.01E+03 4.88E-01  3.25E+05 3.09E-01  2.73E+07 2.23E-01  2.0982  0.968028
51.1216  0.628908  1.04E+04 7.26E-01  1.89E+05 1.80E-01  4.71E+07 3.85E-01  2.1153  0.975917
42.8824  0.527548  1.08E+04 7.52E-01  7.17E+03 6.83E-03  7.03E+07 5.75E-01  2.1675  1
64.5137  0.79366   1.44E+04 1.00E+00  4.81E+05 4.58E-01  1.03E+08 8.41E-01  2.1305  0.98293
67.1804  0.826466  6.98E+03 4.86E-01  2.60E+05 2.47E-01  2.20E+07 1.80E-01  2.1203  0.978224
63.1176  0.776485  5.77E+03 4.02E-01  3.14E+05 2.98E-01  2.43E+07 1.99E-01  2.0913  0.964844
69.051   0.849479  7.20E+03 5.01E-01  2.52E+05 2.40E-01  2.22E+07 1.82E-01  2.1211  0.978593