History of neural networks
In 1943, neurophysiologist Warren McCulloch and mathematician Walter Pitts wrote a paper
on how neurons might work. To illustrate their theory, they modelled a simple neural
network using electrical circuits.
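Their unit reduces to a simple threshold rule: a neuron fires when the weighted sum of its binary inputs reaches a threshold. A minimal sketch in Python (the AND-gate example and all names here are illustrative, not drawn from the 1943 paper):

def mcculloch_pitts(inputs, weights, threshold):
    # Fire (output 1) when the weighted input sum reaches the threshold.
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With unit weights and threshold 2, the unit behaves like a logical AND gate.
print(mcculloch_pitts([1, 1], [1, 1], threshold=2))  # 1
print(mcculloch_pitts([1, 0], [1, 1], threshold=2))  # 0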
In 1949, Donald Hebb wrote The Organization of Behavior, a work which pointed out that
neural pathways are strengthened each time they are used, a concept fundamentally
essential to the way humans learn. If two nerves fire at the same time, he argued,
the connection between them is enhanced.
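Hebb's postulate is often summarized as a weight update proportional to the correlated activity of the two connected neurons. A one-line sketch of that idea (the learning rate eta and the names are illustrative, not Hebb's own notation):

def hebbian_update(w, pre, post, eta=0.1):
    # Strengthen the connection in proportion to correlated pre/post firing.
    return w + eta * pre * post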
As computers became more advanced in the 1950s, it was finally possible to simulate a
hypothetical neural network. The first step towards this was made by Nathaniel Rochester
of the IBM research laboratories. Unfortunately for him, the first attempt to do so failed.
In 1959, Bernard Widrow and Marcian Hoff of Stanford developed models called
"ADALINE" and "MADALINE." In a typical display of Stanford's love for acronyms, the
names come from their use of ADAptive LINear Elements (MADALINE being built from
Multiple ADALINEs). ADALINE was developed to recognize binary patterns, so that if it
was reading streaming bits from a phone line it could predict the next bit. MADALINE
was the first neural network applied to a real-world problem, using an adaptive filter
that eliminates echoes on phone lines. Although the system is as ancient as air traffic
control systems, like them it is still in commercial use.
In 1962, Widrow and Hoff developed a learning procedure that examines the value on the
line before the weight adjusts it (i.e. 0 or 1) according to the rule: Weight Change =
(Pre-Weight Line Value) * (Error / Number of Inputs). It is based on the idea that while
one active perceptron may have a big error, the weight values can be adjusted to
distribute that error across the network, or at least to adjacent perceptrons. Applying
this rule still leaves an error if the line before the weight is 0, although this will
eventually correct itself. If the error is conserved so that all of it is distributed
to all of the weights, then the error is eliminated.
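A minimal Python sketch of the rule exactly as stated above (the function shape and variable names are illustrative):

def widrow_hoff_update(weights, inputs, target, output):
    # Each weight changes by (pre-weight line value) * (error / number of inputs),
    # spreading one unit's error across all of its incoming lines.
    error = target - output
    n = len(inputs)
    return [w + x * (error / n) for w, x in zip(weights, inputs)]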
Despite the later success of neural networks, traditional von Neumann architecture took
over the computing scene, and neural research was left behind. Ironically, John von Neumann
himself had suggested imitating neural functions by using telegraph relays or vacuum
tubes.
In the same time period, a paper was written suggesting that the single-layered neural
network could not be extended to a multiple-layered one. In addition, many people in the
field were using a learning function that was fundamentally flawed because it was not
differentiable across the entire line. As a result, research and funding dropped
drastically.
This was compounded by the fact that the early successes of some neural networks led to
an exaggeration of their potential, especially given the practical technology of the
time. Promises went unfulfilled, and at times deeper philosophical questions led to fear.
Writers pondered the effect that the so-called "thinking machines" would have on humans,
ideas which are still around today.
The idea of a computer which programs itself is very appealing. If Microsoft's Windows 2000
could reprogram itself, it might be able to repair the thousands of bugs the programming
staff made. Such ideas were appealing but very difficult to implement. In addition, von
Neumann architecture was gaining in popularity. There were a few advances in the field,
but for the most part research was scarce.
In 1972, Kohonen and Anderson independently developed similar networks, which we will
discuss more later. Both used matrix mathematics to describe their ideas, without
realizing that what they were creating was an array of analog ADALINE circuits. The
neurons were designed to activate a set of outputs instead of just one. The first
multilayered network, an unsupervised one, was developed in 1975.
In 1982, interest in the field was renewed. John Hopfield of Caltech presented a paper to
the National Academy of Sciences. His approach was to create more useful machines by using
bidirectional connections; previously, the connections between neurons had run only one way.
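With symmetric, bidirectional weights, the network's dynamics can be described by an energy function that every asynchronous update can only decrease, which is what makes Hopfield networks usable as content-addressable memories. In the conventional textbook notation (states s_i in {-1, +1}, symmetric weights w_ij = w_ji; this form is paraphrased from the literature, not from the paper itself):

E = -\frac{1}{2} \sum_{i \neq j} w_{ij} s_i s_j + \sum_i \theta_i s_i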
That same year, Reilly and Cooper used a "Hybrid network" with multiple layers, each layer
using a different problem-solving strategy.
Also in 1982, there was a joint US-Japan conference on Cooperative/Competitive Neural
Networks. Japan announced a new Fifth Generation effort on neural networks, and US papers
generated worry that the US could be left behind in the field. (Fifth-generation computing
involves artificial intelligence: the first generation used switches and wires, the second
used the transistor, the third used solid-state technology such as integrated circuits and
higher-level programming languages, and the fourth is code generators.) As a result, there
was more funding and thus more research in the field.
In 1986, with multiple-layered neural networks in the news, the problem was how to extend
the Widrow-Hoff rule to multiple layers. Three independent groups of researchers, one of
which included David Rumelhart, a former member of Stanford's psychology department, came
up with similar ideas, now called backpropagation networks because they distribute
pattern-recognition errors backward throughout the network. Where hybrid networks used
just two layers, these backpropagation networks use many. The result is that
backpropagation networks are "slow learners," needing possibly thousands of iterations to learn.
In 1989, Yann LeCun used backpropagation to train a convolutional neural network to
recognize handwritten digits. This was a breakthrough moment, as it laid the foundation of
modern computer vision using deep learning. That same year, George Cybenko published the
earliest version of the Universal Approximation Theorem in his paper "Approximation by
superpositions of a sigmoidal function". He proved that a feed-forward neural network with
a single hidden layer containing a finite number of neurons can approximate any continuous
function, which added further credibility to deep learning.
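In its usual form (standard notation paraphrased from the literature, with \sigma a sigmoidal activation), the theorem says that for any continuous f on the unit cube [0,1]^n and any \varepsilon > 0 there exist N, \alpha_j, w_j, b_j such that the finite sum

G(x) = \sum_{j=1}^{N} \alpha_j \, \sigma(w_j^{\top} x + b_j)

satisfies \sup_{x \in [0,1]^n} |G(x) - f(x)| < \varepsilon.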
In 1997, Sepp Hochreiter and Jürgen Schmidhuber published a milestone paper on "Long
Short-Term Memory" (LSTM), a type of recurrent neural network architecture that would go
on to revolutionize deep learning in the decades to come.
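The key idea is a gated cell state that can carry information across long time spans. In the commonly cited modern formulation (standard notation, including the forget gate introduced in later work, so not the exact 1997 equations):

f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)
i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)
o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)
c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c)
h_t = o_t \odot \tanh(c_t)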
Another milestone came in 2006, when Geoffrey Hinton, Simon Osindero, and Yee-Whye Teh
published the paper "A fast learning algorithm for deep belief nets", in which they
stacked multiple restricted Boltzmann machines (RBMs) in layers and called the result a
Deep Belief Network. The training process is much more efficient for large amounts of data.
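Each RBM layer defines a joint distribution over visible units v and hidden units h through an energy function; in the standard notation (given here for orientation, not quoted from the paper):

E(v, h) = -a^{\top} v - b^{\top} h - v^{\top} W h, \qquad P(v, h) \propto e^{-E(v, h)}

Trained greedily one layer at a time, each RBM learns to model the hidden activities of the layer beneath it.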
In 2008, Andrew Ng's group at Stanford started advocating the use of GPUs for training
deep neural networks, speeding up training many times over. This brought practicality to
deep learning by making training on huge volumes of data efficient.
Finding enough labeled data has always been a challenge for the deep learning community.
In 2009, Fei-Fei Li, a professor at Stanford, launched ImageNet, a database of 14 million
labeled images. It would serve as a benchmark for the deep learning researchers who would
participate in the annual ImageNet competition (ILSVRC).
In 2011, Yoshua Bengio, Antoine Bordes, and Xavier Glorot showed in their paper "Deep
Sparse Rectifier Neural Networks" that the ReLU activation function can avoid the
vanishing gradient problem. This meant that, apart from GPUs, the deep learning community
now had another tool to avoid the long, impractical training times of deep neural networks.
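The intuition is visible in the derivatives: a sigmoid's gradient is at most 0.25 and decays toward zero for large inputs, so products of many such factors vanish across deep stacks, while ReLU passes a gradient of exactly 1 wherever a unit is active. A small illustrative comparison (assumed for exposition, not taken from the paper):

import numpy as np

def sigmoid_grad(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1 - s)            # peaks at 0.25, decays to 0 in the tails

def relu_grad(z):
    return (z > 0).astype(float)  # exactly 1 for any active unit

z = np.array([-4.0, 0.0, 4.0])
print(sigmoid_grad(z))  # approx [0.018, 0.25, 0.018]
print(relu_grad(z))     # [0., 0., 1.]

# Through 20 stacked layers, repeated sigmoid factors shrink the signal:
print(0.25 ** 20)       # ~9.1e-13, whereas 1.0 ** 20 stays 1.0 for active ReLUs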
In 2012, AlexNet, a GPU-implemented CNN designed by Alex Krizhevsky, won ImageNet's image
classification contest with an accuracy of 84%, a huge jump over the 75% accuracy that
earlier models had achieved. This win triggered a new deep learning boom globally.
In 2014, the Generative Adversarial Network (GAN) was created by Ian Goodfellow. GANs
opened a whole new door for deep learning applications in fashion, art, and science
thanks to their ability to synthesize realistic data.
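A GAN pits a generator G against a discriminator D in a two-player minimax game; the value function, in the notation the 2014 paper made standard, is:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

D learns to tell real data from synthesized samples, while G learns to fool D; at the game's equilibrium, G's samples become indistinguishable from the data.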
In 2016, DeepMind's deep reinforcement learning model AlphaGo beat the human champion in
the complex game of Go. The game is far more complex than chess, so this feat captured
everyone's imagination and took the promise of deep learning to a whole new level.
In 2019, Yoshua Bengio, Geoffrey Hinton, and Yann LeCun won the 2018 Turing Award for
their immense contributions to advances in deep learning and artificial intelligence.
This was a defining moment for those who had worked relentlessly on neural networks when
the entire machine learning community had moved away from them in the 1970s.
Now, neural networks are used in several applications, some of which we will describe
later in our presentation. The fundamental idea behind neural networks is that if it
works in nature, it must be able to work in computers. The future of neural networks,
though, lies in the development of hardware. Much like advanced chess-playing machines
such as Deep Blue, fast, efficient neural networks depend on hardware designed for their
eventual use.