NEURAL NETWORKS
Harshita Gupta (11508013)
Bioprocess Engineering
   What are ANNs?
   Conventional computers vs ANNs
   How do they work?
   Why them?
   Applications
What Are Artificial Neural Networks?

   An Artificial Neural Network (ANN) is an information
    processing system
   It is composed of a large number of highly
    interconnected processing elements (neurons)
    working in unison to solve specific problems.
     ANNs, like people, learn by example; each network is
     configured for a specific application.
   Learning in biological systems involves adjustments to
    the synaptic connections that exist between the
    neurons. This is true of ANNs as well.
   Modern neural networks are non-linear statistical data
    modeling tools. They are usually used to model
    complex relationships between inputs and outputs or
    to find patterns in data.
ANNs vs Computers
   Neural networks process information in a similar way to
    the human brain: they are composed of a large number of
    highly interconnected processing units. Conventional
    computers instead use an algorithmic approach.
   Neural networks learn by example and cannot be
    programmed to perform a specific task. The examples
    must be selected very carefully, otherwise useful time
    is wasted or, worse, the network may function
    incorrectly. The problem-solving capability of
    conventional computers, by contrast, is restricted to
    problems that we already understand and know how to
    solve.
   Conventional computers use a cognitive approach to
    problem solving: the way the problem is to be solved
    must be known and stated in small, unambiguous
    instructions. These machines are totally predictable;
    if anything goes wrong, it is due to a software or
    hardware fault.
   The disadvantage of neural networks is that, because
    the network finds out how to solve the problem by
    itself, its operation can be unpredictable.
A Human Neuron
As we are all familiar, this is how a biological neuron looks.

[Figure: a biological neuron, showing the dendrites, cell body,
axon, summation, and threshold.]
A Network Neuron
In an artificial neural network, simple artificial nodes, variously called "neurons",
"neurodes", "processing elements" (PEs) or "units", are connected together to form a
network of nodes mimicking the biological neural networks — hence the term "artificial
neural network".
An Improved Neuron

[Figure: an MCP neuron with weighted inputs W1, W2, ..., Wn and
output f(x) = K(Σi wi gi(x)).]

A more sophisticated neuron (shown in the figure) is the
McCulloch and Pitts model (MCP). Here the inputs are
'weighted': the effect each input has on decision making
depends on the weight of that particular input. In
mathematical terms, the neuron fires if and only if
X1W1 + X2W2 + X3W3 + ... > T
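The MCP firing condition above can be sketched in a few lines of Python. This is a minimal illustration; the weights, inputs, and threshold are made-up values, not taken from the slides.

```python
# A minimal sketch of the McCulloch-Pitts (MCP) neuron described above.
# The weights, inputs, and threshold are illustrative assumptions.

def mcp_neuron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted sum of the inputs exceeds the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# X1*W1 + X2*W2 + X3*W3 = 1*0.5 + 0*0.3 + 1*0.4 = 0.9 > 0.8, so the neuron fires
print(mcp_neuron([1, 0, 1], [0.5, 0.3, 0.4], 0.8))  # -> 1
print(mcp_neuron([0, 1, 0], [0.5, 0.3, 0.4], 0.8))  # -> 0
```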
Network Model
   The most common type of artificial neural network
    consists of three groups, or layers, of units: a layer
    of "input" units connected to a layer of "hidden"
    units, which is in turn connected to a layer of
    "output" units.
   The activity of the input units represents the raw
    information that is fed into the network.
   The activity of each hidden unit is determined by the
    activities of the input units and the weights on the
    connections between the input and the hidden units.
   The behavior of the output units depends on the
    activity of the hidden units and the weights between
    the hidden and output units.
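The three-layer flow described above can be sketched as a simple forward pass. The 2-3-1 layer sizes, the weight values, and the sigmoid activation are illustrative assumptions, not something the slides specify.

```python
import math

# Hypothetical weights for a tiny 2-3-1 network (input -> hidden -> output),
# just to illustrate how activity propagates through the three layers.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    """Each unit's activity = sigmoid of its weighted sum over the previous layer."""
    return [sigmoid(sum(i * w for i, w in zip(inputs, ws))) for ws in weights]

w_hidden = [[0.2, -0.5], [0.7, 0.1], [-0.3, 0.8]]  # 3 hidden units, 2 inputs each
w_output = [[0.5, -0.6, 0.9]]                      # 1 output unit, 3 hidden inputs

hidden = layer([1.0, 0.0], w_hidden)   # hidden activity from the input units
output = layer(hidden, w_output)       # output activity from the hidden units
print(output)
```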
Network Architectures

Feed-forward networks
   Feed-forward ANNs allow signals to travel one way only,
    from input to output. The output of any layer does not
    affect that same layer.
   Feed-forward ANNs tend to be straightforward networks
    that associate inputs with outputs. This type of
    organization is also referred to as bottom-up or
    top-down.

[Figure: a feed-forward network, with signals flowing from the
input layer through the hidden layer to the output layer.]

Feedback networks
   Feedback networks can have signals travelling in both
    directions by introducing loops in the network.
   Feedback networks are dynamic; their 'state' changes
    continuously until they reach an equilibrium point.
    They remain at the equilibrium point until the input
    changes and a new equilibrium needs to be found.
   Feedback networks are also known as interactive or
    recurrent networks.

[Figure: a feedback network, with loops between the input,
hidden, and output layers.]
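The "settling to an equilibrium point" behaviour of a feedback network can be sketched with a toy recurrent net whose units repeatedly update from each other's states until nothing changes. The symmetric weight matrix and ±1 states below are illustrative assumptions (a Hopfield-style update, not something the slides prescribe).

```python
# A toy feedback (recurrent) network: units repeatedly update from each
# other's states until the state stops changing, i.e. the network settles
# at an equilibrium point. Weights are made-up illustrative values.

def settle(state, weights, max_sweeps=100):
    """Asynchronously update units until an equilibrium is reached."""
    state = list(state)
    for _ in range(max_sweeps):
        changed = False
        for i, row in enumerate(weights):
            # Each unit looks at every unit's state (loops in the network).
            new = 1 if sum(w * s for w, s in zip(row, state)) >= 0 else -1
            if new != state[i]:
                state[i] = new
                changed = True
        if not changed:          # equilibrium: stays here until the input changes
            return state
    return state

W = [[0, 1, -1],
     [1, 0, -1],
     [-1, -1, 0]]               # symmetric weights, no self-connections

print(settle([1, -1, 1], W))    # -> [-1, -1, 1]
```

Feeding the equilibrium state back in returns it unchanged, which is exactly the "remains at the equilibrium point" behaviour described above.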
Firing rules
   The firing rule is an important concept in
    neural networks and accounts for their high
    flexibility.
    A firing rule determines how one calculates
    whether a neuron should fire for any input
    pattern. It relates to all the input patterns, not
    only the ones on which the node was trained.
   Take a collection of training patterns for a node, some
    of which cause it to fire (the 1-taught set) and some
    of which prevent it from firing (the 0-taught set).
   What about the patterns not in the collection?
      Here the node fires when the pattern has more input
       elements in common with the 'nearest' pattern in the
       1-taught set than with the 'nearest' pattern in the
       0-taught set.
      If there is a tie, the pattern remains in the
       undefined state.
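This nearest-pattern firing rule can be sketched directly with Hamming distance. The taught patterns below are the ones used in the example that follows; the function names are illustrative.

```python
# A sketch of the firing rule described above: an untrained pattern takes the
# output of whichever taught set (1-taught or 0-taught) contains the closer
# pattern; a tie leaves it in the undefined state (None here).

def hamming(a, b):
    """Number of positions where two patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def firing_rule(pattern, taught_1, taught_0):
    d1 = min(hamming(pattern, p) for p in taught_1)  # nearest 1-taught pattern
    d0 = min(hamming(pattern, p) for p in taught_0)  # nearest 0-taught pattern
    if d1 < d0:
        return 1
    if d0 < d1:
        return 0
    return None                                      # tie: undefined (0/1)

taught_1 = [(1, 1, 1), (1, 0, 1)]   # patterns taught to fire
taught_0 = [(0, 0, 0), (0, 0, 1)]   # patterns taught not to fire

print(firing_rule((1, 1, 0), taught_1, taught_0))  # -> 1 (closest to 111)
print(firing_rule((0, 1, 0), taught_1, taught_0))  # -> 0 (closest to 000)
print(firing_rule((0, 1, 1), taught_1, taught_0))  # -> None (tie, stays 0/1)
```

Running it over all eight 3-bit patterns reproduces the "after generalization" table in the next slide.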
For Example

   A 3-input neuron is taught as follows:
   Output 1 when the input (X1, X2, X3) is 111 or 101
   Output 0 when the input is 000 or 001

Before Generalization
X1     0     0   0       0      1     1   1     1
X2     0     0   1       1      0     0   1     1

X3     0     1   0       1      0     1   0     1

Out    0     0   0/1     0/1    0/1   1   0/1   1


After Generalization
 X1    0     0   0        0     1     1   1     1
 X2    0     0   1        1     0     0   1     1

 X3    0     1   0        1     0     1   0     1

 Out   0     0   0        0/1   0/1   1   1     1
Pattern Recognition

[Figure: three neurons, each taking three inputs (X11-X13,
X21-X23, X31-X33) and producing outputs F1, F2, and F3.]
X11   0   0     0   0     1     1   1     1
X12   0   0     1   1     0     0   1     1
                                              Top Neuron
X13   0   1     0   1     0     1   0     1
Out   0   0     1   1     0     0   1     1

X21   0   0     0   0     1     1   1     1
X22   0   0     1   1     0     0   1     1   Middle Neuron
X23   0   1     0   1     0     1   0     1
Out   1   0/1   1   0/1   0/1   0   0/1   0

X31   0   0     0   0     1     1   1     1
X32   0   0     1   1     0     0   1     1
                                              Bottom Neuron
X33   0   1     0   1     0     1   0     1
Out   0   0     1   1     0     0   1     0
Learning methods
Based on the memorization of patterns and the subsequent
response of the network, learning can be classified as:

   associative mapping, in which the network learns to produce a particular
    pattern on the set of output units whenever another particular pattern is
    applied on the set of input units. Associative mapping can generally be
    broken down into two mechanisms:
       auto-association: an input pattern is associated with itself, and the states
        of input and output units coincide. This is used to provide pattern
        completion, i.e. to produce a pattern whenever a portion of it or a
        distorted pattern is presented.
       hetero-association: the network stores pairs of patterns, building an
        association between two sets of patterns. It is related to two recall
        mechanisms:
           nearest-neighbour recall, where the output pattern produced corresponds
            to the stored input pattern closest to the pattern presented, and
           interpolative recall, where the output pattern is a similarity-dependent
            interpolation of the stored patterns corresponding to the pattern
            presented. Yet another paradigm, which is a variant of associative
            mapping, is classification, i.e. when there is a fixed set of
            categories into which the input patterns are to be classified.
   regularity detection in which units learn to respond to particular properties
    of the input patterns. Whereas in associative mapping the network stores
    the relationships among patterns, in regularity detection the response of
    each unit has a particular 'meaning'. This type of learning mechanism is
     essential for feature discovery and knowledge representation.
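Nearest-neighbour hetero-associative recall, as described above, can be sketched by storing (input, output) pairs and returning the output stored with the closest input. The patterns and labels below are made-up illustrative values.

```python
# A minimal sketch of nearest-neighbour hetero-associative recall: stored
# pairs associate one set of patterns with another; a (possibly distorted)
# input recalls the output of the closest stored input pattern.

def hamming(a, b):
    """Number of positions where two patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def recall(pattern, stored_pairs):
    key, out = min(stored_pairs, key=lambda pair: hamming(pattern, pair[0]))
    return out

pairs = [((1, 1, 0, 0), "A"),
         ((0, 0, 1, 1), "B")]

# A distorted version of the first pattern still recalls "A"
print(recall((1, 1, 1, 0), pairs))  # -> A
```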
Conclusion
The computing world has a lot to gain from neural networks. Their
ability to learn by example makes them very flexible and powerful.
Furthermore, there is no need to devise an algorithm (or understand
its inner mechanisms) in order to perform a specific task. They are
also very well suited to real-time systems because of their fast
response and computational times, which are due to their parallel
architecture.
Neural networks also contribute to other areas of research such as
neurology and psychology. They are regularly used to model parts of
living organisms and to investigate the internal mechanisms of the
brain.
Perhaps the most exciting aspect of neural networks is the
possibility that some day 'conscious' networks might be produced. A
number of scientists argue that consciousness is a 'mechanical'
property and that 'conscious' neural networks are a realistic
possibility.
Finally, I would like to state that even though neural networks have
a huge potential, we will only get the best out of them when they
are integrated with computing, AI, fuzzy logic and related subjects.

Designing IA for AI - Information Architecture Conference 2024
 
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdfHyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache Maven
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo DayH2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
H2O.ai CEO/Founder: Sri Ambati Keynote at Wells Fargo Day
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
 
Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clash
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piece
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptx
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptx
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQL
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering Tips
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 

Neural Networks

  • 1. NEURAL NETWORKS Harshita Gupta (11508013) Bioprocess Engineering
  • 2. What are ANNs?  Conventional computers vs ANNs  How do they work?  Why them?  Applications
  • 3. What Are Artificial Neural Networks?  An Artificial Neural Network (ANN) is an information processing system.  It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems.  ANNs, like people, learn by example; each is configured for a specific application.  Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANNs as well.  Modern neural networks are non-linear statistical data modeling tools. They are usually used to model complex relationships between inputs and outputs or to find patterns in data.
  • 4. ANNs vs Computers  Neural networks process information in a similar way to the human brain: they are composed of a large number of highly interconnected processing units.  Conventional computers use an algorithmic approach.  Neural networks learn by example, and the examples must be selected very carefully. They cannot be programmed to perform a specific task.  The problem-solving capability of conventional computers is restricted to problems that we already understand and know how to solve.
  • 5.  They cannot be programmed to perform a specific task. The examples must be selected carefully, otherwise useful time is wasted or, even worse, the network might function incorrectly.  The disadvantage is that because the network finds out how to solve the problem by itself, its operation can be unpredictable.  Conventional computers use an algorithmic approach to problem solving; the way the problem is to be solved must be known and stated in small, unambiguous instructions.  These machines are totally predictable; if anything goes wrong, it is due to a software or hardware fault.
  • 6. A Human Neuron As we are all familiar, this is how a biological neuron looks.
  • 7. A Network Neuron [Figure: dendrites, cell body, summation, threshold, axon.] In an artificial neural network, simple artificial nodes, variously called "neurons", "neurodes", "processing elements" (PEs) or "units", are connected together to form a network of nodes mimicking the biological neural networks — hence the term "artificial neural network".
  • 8. An Improved Neuron [Figure: inputs X1..Xn weighted by W1..Wn, combined as f(x) = K(Σi wi gi(x)).] A more sophisticated neuron is the McCulloch and Pitts model (MCP). Here the inputs are 'weighted': the effect that each input has on decision making depends on the weight of that particular input. In mathematical terms, the neuron fires if and only if X1W1 + X2W2 + X3W3 + ... > T.
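The firing condition above can be sketched in a few lines of Python. This is a minimal illustration, not part of the slides; the function name and values are made up.

```python
# Minimal sketch of a McCulloch-Pitts neuron: fire (output 1) iff the
# weighted sum of the inputs exceeds a preset threshold T.
def mcp_neuron(inputs, weights, threshold):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# With equal weights and threshold 1.5, two inputs implement logical AND.
print(mcp_neuron([1, 1], [1.0, 1.0], 1.5))  # 1 (fires)
print(mcp_neuron([1, 0], [1.0, 1.0], 1.5))  # 0 (does not fire)
```

Changing the weights and/or threshold is exactly how the MCP neuron 'adapts' to a particular situation.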
  • 9. Network Model  The most common type of artificial neural network consists of three groups, or layers, of units:  a layer of "input" units, whose activity represents the raw information that is fed into the network;  a layer of "hidden" units, where the activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and the hidden units;  a layer of "output" units, whose behavior depends on the activity of the hidden units.
  • 10. Network Architectures [Figure: input, hidden and output layers.]  Feed-forward networks: feed-forward ANNs allow signals to travel one way only, from input to output. The output of any layer does not affect that same layer.  Feedback networks: feedback networks can have signals travelling in both directions by introducing loops in the network.
  • 11.  Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs. This type of organization is also referred to as bottom-up or top-down.  Feedback networks are dynamic; their 'state' changes continuously until they reach an equilibrium point. They remain at the equilibrium point until the input changes and a new equilibrium needs to be found. They are also known as interactive or recurrent networks.
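A single forward pass through the three-layer, feed-forward model described above can be sketched as follows. The layer sizes, weights and sigmoid activation are illustrative assumptions, not taken from the slides.

```python
import math

# One layer: each unit outputs the sigmoid of the weighted sum of the
# previous layer's activity plus a bias (illustrative choice of activation).
def layer(inputs, weights, biases):
    return [1 / (1 + math.exp(-(sum(x * w for x, w in zip(inputs, ws)) + b)))
            for ws, b in zip(weights, biases)]

# Signals travel one way only: input -> hidden -> output.
def forward(x, hidden_w, hidden_b, out_w, out_b):
    hidden = layer(x, hidden_w, hidden_b)  # hidden activity from inputs
    return layer(hidden, out_w, out_b)     # output activity from hidden

# 2 inputs -> 2 hidden units -> 1 output, with made-up weights.
y = forward([0.5, -0.2],
            hidden_w=[[0.1, 0.4], [-0.3, 0.2]], hidden_b=[0.0, 0.1],
            out_w=[[0.6, -0.5]], out_b=[0.05])
print(y)
```

Note that no layer's output feeds back into itself; a feedback network would add such loops and iterate until the state settles.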
  • 12. Firing rules  The firing rule is an important concept in neural networks and accounts for their high flexibility.  A firing rule determines how one calculates whether a neuron should fire for any input pattern. It relates to all the input patterns, not only the ones on which the node was trained.
  • 13. Take a collection of training patterns for a node: some inputs  cause it to fire (the 1-taught set);  some prevent it from firing (the 0-taught set).  What about the patterns not in the collection?  Here the node fires when, by comparison, they have more input elements in common with the 'nearest' pattern in the 1-taught set than with the 'nearest' pattern in the 0-taught set.  If there is a tie, the pattern remains in the undefined state.
  • 14. For Example  A 3-input neuron is taught as follows:  output 1 when the input (X1, X2, X3) is 111 or 101;  output 0 when the input is 000 or 001.
  • 15. 111, 101 = 1; 000 or 001 = 0

Before generalization:
X1:  0  0  0    0    1    1  1    1
X2:  0  0  1    1    0    0  1    1
X3:  0  1  0    1    0    1  0    1
Out: 0  0  0/1  0/1  0/1  1  0/1  1

After generalization:
X1:  0  0  0  0    1    1  1  1
X2:  0  0  1  1    0    0  1  1
X3:  0  1  0  1    0    1  0  1
Out: 0  0  0  0/1  0/1  1  1  1
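The generalization step above can be reproduced by comparing each pattern's Hamming distance to the nearest taught pattern in each set. A small sketch (function names are illustrative):

```python
# Number of positions in which two equal-length bit patterns differ.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Firing rule: fire if the pattern is closer to the nearest 1-taught
# pattern than to the nearest 0-taught pattern; ties stay undefined.
def fire(pattern, taught_1, taught_0):
    d1 = min(hamming(pattern, t) for t in taught_1)
    d0 = min(hamming(pattern, t) for t in taught_0)
    if d1 < d0:
        return "1"
    if d0 < d1:
        return "0"
    return "0/1"  # equally distant from both sets

taught_1 = ["111", "101"]  # taught to fire
taught_0 = ["000", "001"]  # taught not to fire
for p in ["000", "001", "010", "011", "100", "101", "110", "111"]:
    print(p, fire(p, taught_1, taught_0))
```

Running this reproduces the "after generalization" row: 010 and the taught 0-patterns give 0, the taught 1-patterns and 110 give 1, and 011 and 100 remain undefined (0/1).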
  • 16. Pattern Recognition [Figure: three trained artificial neurons (TANs), each taking inputs Xn1, Xn2, Xn3 and producing outputs F1, F2, F3.]
  • 17. Top neuron:
X11: 0  0  0  0  1  1  1  1
X12: 0  0  1  1  0  0  1  1
X13: 0  1  0  1  0  1  0  1
Out: 0  0  1  1  0  0  1  1

Middle neuron:
X21: 0  0    0  0    1    1  1    1
X22: 0  0    1  1    0    0  1    1
X23: 0  1    0  1    0    1  0    1
Out: 1  0/1  1  0/1  0/1  0  0/1  0

Bottom neuron:
X31: 0  0  0  0  1  1  1  1
X32: 0  0  1  1  0  0  1  1
X33: 0  1  0  1  0  1  0  1
Out: 0  0  1  1  0  0  1  0
  • 18.
  • 20. Based on the memorization of patterns and the subsequent response of the network, two paradigms can be distinguished:  associative mapping, in which the network learns to produce a particular pattern on the set of output units whenever another particular pattern is applied on the set of input units. Associative mapping can generally be broken down into two mechanisms:  auto-association: an input pattern is associated with itself and the states of the input and output units coincide. This is used to provide pattern completion, i.e. to produce a pattern whenever a portion of it or a distorted pattern is presented.  hetero-association: the network stores pairs of patterns, building an association between two sets of patterns. It is related to two recall mechanisms:  nearest-neighbour recall, where the output pattern produced corresponds to the stored input pattern closest to the pattern presented, and  interpolative recall, where the output pattern is a similarity-dependent interpolation of the stored patterns corresponding to the pattern presented. Yet another paradigm, a variant of associative mapping, is classification, i.e. when there is a fixed set of categories into which the input patterns are to be classified.  regularity detection, in which units learn to respond to particular properties of the input patterns. Whereas in associative mapping the network stores the relationships among patterns, in regularity detection the response of each unit has a particular 'meaning'. This type of learning mechanism is essential for feature discovery and knowledge representation.
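Nearest-neighbour recall, as described above, can be sketched directly: return the stored pattern closest to the (possibly distorted) probe. The stored patterns and names here are made-up examples.

```python
# Two stored input patterns, each associated with a label (made-up data).
stored = {
    (1, 1, 1, 0): "pattern A",
    (0, 0, 1, 1): "pattern B",
}

# Nearest-neighbour recall: pick the stored pattern with the fewest
# mismatching elements relative to the probe.
def recall(probe):
    return min(stored, key=lambda k: sum(a != b for a, b in zip(k, probe)))

# A distorted version of pattern A (one bit flipped) still recalls A.
print(stored[recall((1, 0, 1, 0))])  # pattern A
```

Auto-association works the same way with the pattern itself as the output, which is what provides pattern completion from a partial or distorted input.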
  • 21. Conclusion The computing world has a lot to gain from neural networks. Their ability to learn by example makes them very flexible and powerful. Furthermore, there is no need to devise an algorithm (or to understand a problem's inner mechanism) in order to perform a specific task. They are also very well suited to real-time systems because of their fast response and short computational times, which are due to their parallel architecture. Neural networks also contribute to other areas of research, such as neurology and psychology: they are regularly used to model parts of living organisms and to investigate the internal mechanisms of the brain. Perhaps the most exciting aspect of neural networks is the possibility that some day 'conscious' networks might be produced. A number of scientists argue that consciousness is a 'mechanical' property and that 'conscious' neural networks are a realistic possibility. Finally, I would like to state that even though neural networks have a huge potential, we will only get the best of them when they are integrated with computing, AI, fuzzy logic and related subjects.
  • 22. References  An Introduction to Neural Computing, Aleksander, I. and Morton, H., 2nd edition  http://media.wiley.com/product_data/excerpt/19/04713491/0471349119.pdf  Neural Networks at Pacific Northwest National Laboratory, http://www.emsl.pnl.gov:2080/docs/cie/neural/neural.homepage.html  Industrial Applications of Neural Networks (research reports Esprit, I.F. Croall, J.P. Mason)  A Novel Approach to Modeling and Diagnosing the Cardiovascular System, http://www.emsl.pnl.gov:2080/docs/cie/neural/papers2/keller.wcnn95.abs.html  Artificial Neural Networks in Medicine, http://www.emsl.pnl.gov:2080/docs/cie/techbrief/NN.techbrief.html  Neural Networks, Eric Davalo and Patrick Naim  Learning Internal Representations by Error Propagation, Rumelhart, Hinton and Williams (1986)  Klimasauskas, C.C. (1989), The 1989 Neuron Computing Bibliography  Hammerstrom, D. (1986), A Connectionist/Neural Network Bibliography  DARPA Neural Network Study (October 1987 - February 1989), MIT Lincoln Lab  Asimov, I. (1950), I, Robot, Ballantine, New York  Electronic Noses for Telemedicine, http://www.emsl.pnl.gov:2080/docs/cie/neural/papers2/keller.ccc95.abs.html  Pattern Recognition of Pathology Images

Editor's notes

  1. An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANNs as well.
  2. Conventional computers use an algorithmic approach, i.e. the computer follows a set of instructions in order to solve a problem. Unless the specific steps that the computer needs to follow are known, the computer cannot solve the problem. That restricts the problem-solving capability of conventional computers to problems that we already understand and know how to solve. But computers would be so much more useful if they could do things that we don't exactly know how to do. Neural networks process information in a similar way to the human brain. The network is composed of a large number of highly interconnected processing elements (neurons) working in parallel to solve a specific problem. Neural networks learn by example. They cannot be programmed to perform a specific task. The examples must be selected carefully, otherwise useful time is wasted or, even worse, the network might function incorrectly. The disadvantage is that because the network finds out how to solve the problem by itself, its operation can be unpredictable.
  3. The previous neuron doesn't do anything that conventional computers don't do already. A more sophisticated neuron is the McCulloch and Pitts model (MCP). The difference from the previous model is that the inputs are 'weighted': the effect that each input has on decision making is dependent on the weight of the particular input. The weight of an input is a number which, when multiplied with the input, gives the weighted input. These weighted inputs are then added together and, if they exceed a pre-set threshold value, the neuron fires. In any other case the neuron does not fire. The addition of input weights and of the threshold makes this neuron a very flexible and powerful one. The MCP neuron has the ability to adapt to a particular situation by changing its weights and/or threshold. Various algorithms exist that cause the neuron to 'adapt'; the most used ones are the Delta rule and back error propagation. The former is used in feed-forward networks and the latter in feedback networks. Mathematically, f(x) = K(Σi wi gi(x)), where K (commonly referred to as the activation function) is some predefined function, such as the hyperbolic tangent. It is convenient to refer to the collection of functions gi as simply a vector g.
  4. This simple type of network is interesting because the hidden units are free to construct their own representations of the input. The weights between the input and hidden units determine when each hidden unit is active, and so by modifying these weights, a hidden unit can choose what it represents. We also distinguish single-layer and multi-layer architectures. The single-layer organization, in which all units are connected to one another, constitutes the most general case and is of more potential computational power than hierarchically structured multi-layer organizations. In multi-layer networks, units are often numbered by layer, instead of following a global numbering.
  5. Feed-forward networks: Feed-forward ANNs (figure 1) allow signals to travel one way only, from input to output. The output of any layer does not affect that same layer. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs. They are extensively used in pattern recognition. This type of organization is also referred to as bottom-up or top-down. Feedback networks can have signals travelling in both directions by introducing loops in the network. Feedback networks are very powerful and can get extremely complicated. Feedback networks are dynamic; their 'state' changes continuously until they reach an equilibrium point. They remain at the equilibrium point until the input changes and a new equilibrium needs to be found. Feedback architectures are also referred to as interactive or recurrent, although the latter term is often used to denote feedback connections in single-layer organizations.
  6. As an example of the way the firing rule is applied, take the pattern 010. It differs from 000 in 1 element, from 001 in 2 elements, from 101 in 3 elements and from 111 in 2 elements. Therefore, the 'nearest' pattern is 000, which belongs to the 0-taught set. Thus the firing rule requires that the neuron should not fire when the input is 010. On the other hand, 011 is equally distant from two taught patterns that have different outputs, and thus the output stays undefined (0/1). The difference between the two truth tables is called the generalization of the neuron. Therefore the firing rule gives the neuron a sense of similarity and enables it to respond 'sensibly' to patterns not seen during training.
  7. An important application of neural networks is pattern recognition. Pattern recognition can be implemented by using a feed-forward (figure 1) neural network that has been trained accordingly. During training, the network is trained to associate outputs with input patterns. When the network is used, it identifies the input pattern and tries to output the associated output pattern. The power of neural networks comes to life when a pattern that has no output associated with it, is given as an input. In this case, the network gives the output that corresponds to a taught input pattern that is least different from the given pattern.