
Algorithms that mimic the human brain (1)

A lot of learning algorithms are based on how the human brain works.

  1. How Algorithms Mimic The Human Brain
  2. WHAT IS ARTIFICIAL INTELLIGENCE TODAY?
     Recommend videos • Play Dota 2 • Who to follow on Twitter? • Recognize objects • Play chess • Medical diagnosis
  3. IT’S MOSTLY BECAUSE OF DEEP NEURAL NETS
     • DNNs consist of artificial neurons (i.e., mathematical functions) connected to each other.
     • These neurons are arranged in layers, and signals (the product of the data, or inputs, fed into the DNN) travel from layer to layer; a minimal sketch of that flow follows below.
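
As a rough illustration of that layer-to-layer flow (not part of the original deck), the sketch below wires up two small layers in NumPy; the layer sizes, random weights, and ReLU activation are arbitrary choices for the example.

```python
# Minimal sketch of a two-layer feed-forward pass: each artificial "neuron" is a
# weighted sum of its inputs passed through a nonlinearity, and the outputs of
# one layer become the inputs of the next. Sizes and weights here are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)                               # input signal fed into the network

W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)   # layer 1: 4 inputs -> 3 neurons
W2, b2 = rng.normal(size=(2, 3)), np.zeros(2)   # layer 2: 3 inputs -> 2 neurons

relu = lambda z: np.maximum(z, 0.0)             # common activation ("does the neuron fire?")

h = relu(W1 @ x + b1)                           # signals leave layer 1...
y = W2 @ h + b2                                 # ...and travel into layer 2
print("network output:", y)
```
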
  4. DEEP LEARNING AND THE HUMAN BRAIN
  5. NEURAL NETS ARE LOOSELY MODELED AFTER THE BRAIN’S NEURONS
     • Signals are received through the dendrites and sent down the axon.
     • Once enough signals are received, the outgoing signal becomes an input to other neurons, repeating the process.
     • Some signals are more important than others and can make certain neurons fire more easily.
  6. DEEP NEURAL NETS ARE TRAINED USING BACKPROPAGATION
     • The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector.
     • As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain.
     • Training uses gradient descent to minimize the loss function (the difference between the predicted and the desired output); see the sketch below.
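
To make those bullet points concrete, here is a minimal sketch (my own, not from the deck) of backpropagation with gradient descent on a tiny one-hidden-layer network; the data, architecture, and learning rate are invented for the example.

```python
# Toy backpropagation: a one-hidden-layer network learns y = 2x from made-up data
# by repeatedly nudging its weights against the gradient of the squared loss.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(64, 1))    # made-up inputs
Y = 2.0 * X                             # desired outputs

W1 = rng.normal(scale=0.5, size=(1, 8)) # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1)) # hidden -> output weights
lr = 0.1                                # learning rate

for step in range(500):
    # Forward pass
    H = np.tanh(X @ W1)                 # hidden-unit activations
    P = H @ W2                          # predictions
    loss = np.mean((P - Y) ** 2)        # difference between predicted and desired output

    # Backward pass: chain rule, layer by layer
    dP = 2 * (P - Y) / len(X)           # dLoss/dP
    dW2 = H.T @ dP                      # gradient for hidden -> output weights
    dH = dP @ W2.T * (1 - H ** 2)       # back through the tanh nonlinearity
    dW1 = X.T @ dH                      # gradient for input -> hidden weights

    W1 -= lr * dW1                      # adjust the weights to reduce the loss
    W2 -= lr * dW2

print(f"final loss: {loss:.4f}")        # should be close to zero
```
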
  7. DEEP LEARNING WAS INVENTED IN 1943!!!
     Training these networks was so computationally expensive that people rarely used them for machine learning tasks, until:
     A. Compute got orders of magnitude faster (Moore’s law)
     B. Large amounts of example data became available
  8. HUGE VICTORY FOR ALEXNET IN 2012
     ImageNet Challenge
     • Classify and detect objects in a massive dataset of 14M images spanning over 21K classes
  9. ALEXNET MADE A SIGNIFICANT LEAP...
     AlexNet is the name of a convolutional neural network.
     Alex Krizhevsky (designer) • Ilya Sutskever (publisher) • Geoffrey Hinton (PhD advisor)
     • AlexNet competed in the ImageNet Large Scale Visual Recognition Challenge on September 30, 2012.
     • The network achieved a top-5 error of 15.3%, more than 10.8 percentage points lower than that of the runner-up (the top-5 metric is sketched below).
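
For reference, the top-5 error quoted above counts a prediction as correct when the true class is among the model’s five highest-scoring classes. The snippet below is a hypothetical illustration of that metric; the scores and labels are made up.

```python
# Hypothetical illustration of the "top-5 error" metric used in the ImageNet challenge.
import numpy as np

def top5_error(scores: np.ndarray, labels: np.ndarray) -> float:
    """scores: (n_samples, n_classes) class scores; labels: (n_samples,) true class ids."""
    top5 = np.argsort(scores, axis=1)[:, -5:]       # indices of the 5 best-scoring classes
    hits = (top5 == labels[:, None]).any(axis=1)    # is the true label among the top 5?
    return 1.0 - hits.mean()                        # fraction of misses = top-5 error

# Toy usage with random scores over 1000 ImageNet-style classes
rng = np.random.default_rng(0)
scores = rng.random((8, 1000))
labels = rng.integers(0, 1000, size=8)
print(f"top-5 error: {top5_error(scores, labels):.3f}")
```
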
  10. MAGIC OF DEEP LEARNING!
     • No need to hand-craft features
     • Just give it a bunch of labelled data and minimize the loss function, so that the neural network learns the weights and biases to make predictions
  11. BIG DIFFERENCES BETWEEN THE BRAIN AND DEEP LEARNING
     Size:
     • The brain has 86B neurons and 10T connections.
     Connections:
     • Artificial neurons compute one layer after another, whereas neurons in the brain can fire asynchronously.
     Regeneration:
     • The brain is fault tolerant and self-healing; information is stored redundantly.
  12. BIGGEST DIFFERENCE - LEARNING
     NEURONS THAT FIRE TOGETHER, WIRE TOGETHER
     • Brain fibers grow and reach out to connect to other neurons; neuroplasticity allows new connections to be created or areas to move and change function, and synapses may strengthen or weaken based on their importance.
     • Deep neural network learning is rigid: the network is trained once and then used for inference, and it has to be re-trained whenever there is new data.
  13. DOPAMINE AND REINFORCEMENT LEARNING ALGORITHMS
  14. DOPAMINE IS ONE OF THE BRAIN’S NEUROTRANSMITTERS
     A neurotransmitter is a chemical that carries information back and forth between neurons.
     Glutamate • Dopamine • GABA • Glycine • Acetylcholine • Norepinephrine • Serotonin • Endorphins
  15. DOPAMINE ENABLES US TO TAKE ACTION AND RECEIVE REWARDS
     The dopamine kick you get when someone likes your post happens because dopamine modifies your neuronal synapses and contributes to feelings of pleasure.
  16. DOPAMINE IN THE CONTEXT OF LEARNING
     Reward Prediction Error = Actual Reward − Expected Reward
     • Because the predictions are often not quite accurate, we need a way to calculate our prediction error so we don’t make the same mistakes again (hence Reward Prediction Error) 😉.
     • Generally speaking, learning can be defined as the process of improving predictions of the future.
  17. REINFORCEMENT LEARNING WORKS THE EXACT SAME WAY...
     In simple terms, reinforcement learning algorithms use prediction error to improve the computer’s ability to make better decisions in certain environments (see the sketch below).
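
A minimal sketch of that idea, assuming a temporal-difference-style value update (my choice of algorithm, not specified in the deck): the agent’s expected reward is nudged by the reward prediction error after each transition. The states, rewards, and constants are hypothetical.

```python
# Sketch of a reinforcement-learning value update driven by a reward prediction error.
values = {"s0": 0.0, "s1": 0.0}   # the agent's expected reward for each state
alpha, gamma = 0.1, 0.9           # learning rate and discount factor

def td_update(state, reward, next_state):
    """Move the value estimate toward (reward + discounted future value)."""
    expected = values[state]
    target = reward + gamma * values[next_state]
    prediction_error = target - expected          # the "reward prediction error"
    values[state] = expected + alpha * prediction_error
    return prediction_error

# One imaginary transition: from s0 the agent receives a reward of 1 and lands in s1.
print("prediction error:", td_update("s0", reward=1.0, next_state="s1"))
print("updated value of s0:", values["s0"])
```
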
  18. UNSUPERVISED LEARNING AND NEURONAL LOCALITY
  19. UNSUPERVISED LEARNING
  20. UNSUPERVISED LEARNING
     • Neurons interact with each other in pairs.
     • The same concept of locality can be used to train the hidden layers of a neural network to learn lower-level features (a loose sketch follows below).
     • This results in performance similar to that of a state-of-the-art supervised algorithm.
     Paper: Unsupervised learning by competing hidden units
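
As a loose sketch of learning from only local, pairwise information, in the spirit of "neurons that fire together, wire together" and the competing-hidden-units idea referenced above (this is an illustration, not the paper’s exact rule), the snippet below strengthens whichever hidden unit responds most to each input, with no labels anywhere.

```python
# Hebbian-style, local, unsupervised update: the winning unit's weights move
# toward the input it fired for, using only the activity of the connected pair.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_hidden = 16, 8
W = rng.normal(scale=0.1, size=(n_hidden, n_inputs))   # synaptic weights
lr = 0.01                                              # learning rate

def local_update(x: np.ndarray) -> None:
    """Strengthen the most active hidden unit's connections to the current input."""
    activations = W @ x
    winner = int(np.argmax(activations))        # the unit that "fires" most strongly
    W[winner] += lr * (x - W[winner])           # move its weights toward the input

for _ in range(100):                             # unsupervised: no labels used
    local_update(rng.random(n_inputs))
```
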
  21. FEW-SHOT / ZERO-SHOT LEARNING
  22. CAN MACHINES LEARN FROM A FEW EXAMPLES, LIKE HUMANS DO?
     • It takes a child only a few dozen examples to learn the shapes of letters like ‘a’ and ‘b’.
     • This is because human brains are very good at generalizing from a few examples.
  23. HUMAN CONCEPT LEARNING
     • Humans are good at inferring the concepts conveyed in a pair of images and then applying them in a completely different setting; for example, the concept of stacking red and green objects can be applied to a new scene.
  24. VISUAL COGNITIVE COMPUTER
     • A new computer architecture called the Visual Cognitive Computer (VCC) is proposed; its components are based on the science of human cognition.
     • Human concepts are represented as cognitive programs.
     • VCC is evaluated on how well it can represent and infer the visuospatial concepts that cognitive scientists consider to be fundamental building blocks.
  25. A ROBOT THAT HAS VCC CAN PERFORM A WHOLE ARRAY OF TASKS
  26. THE TEAM THAT INVENTED VCC SOLVED CAPTCHA
     • A CAPTCHA (/kæp.tʃə/, an acronym for "Completely Automated Public Turing test to tell Computers and Humans Apart") is a type of challenge–response test used in computing to determine whether or not the user is human.
  27. BAYESIAN INFERENCE AND BRAIN FUNCTION
  28. BAYESIAN INFERENCE IS A POPULAR FRAMEWORK FOR PREDICTIONS
     • Bayesian inference is a method of statistical inference in which Bayes’ theorem is used to update the probability of a hypothesis as more evidence or information becomes available.
     • The basic idea of Bayesian probability is that you update your beliefs in the light of new evidence (see the worked example below).
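
A small worked example of that update, assuming the standard form of Bayes’ theorem, P(H|E) = P(E|H)·P(H) / P(E); the rain scenario and the numbers are made up for illustration.

```python
# Bayes' theorem in action: update a belief (prior) in the light of new evidence.
def bayes_update(prior: float, likelihood: float, false_alarm: float) -> float:
    """Posterior probability of hypothesis H after observing evidence E."""
    evidence = likelihood * prior + false_alarm * (1.0 - prior)   # P(E)
    return likelihood * prior / evidence                          # P(H|E)

# Belief that it will rain today: prior 30%.
# Dark clouds appear 90% of the time when it rains, 20% of the time otherwise.
posterior = bayes_update(prior=0.30, likelihood=0.90, false_alarm=0.20)
print(f"P(rain | dark clouds) = {posterior:.2f}")   # belief rises from 0.30 to about 0.66
```
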
  29. BAYESIAN BRAIN HYPOTHESIS
     The hypothesis is meant to explain several important brain functions, such as perception, learning and memory.
  30. WHAT IS PREDICTIVE PROCESSING?
     • During every moment of your life, your brain gathers statistics to adapt its model of the world, and this model’s only job is to generate predictions.
     • Your brain is a prediction machine.
     • Just as the heart’s main function is to pump blood through the body, so the brain’s main function is to make predictions about the body.
  31. KEY BRAIN FUNCTIONS EXPLAINED BY THIS HYPOTHESIS
     • Learning is the updating of your internal model based on prediction errors so that your predictions gradually improve. The better your predictions about the causal, probabilistic structure of the world, the more effectively you can engage with it.
     • Memory consists of the learned parameters of your internal model, whereas its non-acquired parameters are the innate knowledge evolution has genetically built into your nervous system. Both parts determine your brain’s predictions.
     • Belief is a hyperprior: a systemic prior with a high degree of abstraction; a high-level prediction that encodes general knowledge about the world (e.g. the physical belief that apples fall down from trees, or the cultural belief that cars slow down when you reach an intersection).
  32. ALGORITHMS AND EMOTIONS
  33. IN THE NEAR FUTURE
     Facial Analysis • Voice Pattern Analysis • Generative Modeling
     AI systems and devices will recognize, interpret, process, and simulate human emotions.
  34. MEASURE AND APPLY EMOTIONS TO DECISIONS
     AI models will measure emotional responses and factor them into decision making:
     • Conversational chatbots that detect emotion and react accordingly.
     • Car software that detects whether a driver is angry and/or not paying attention, and can take control.
     • Security software that alerts security staff when there is fear on a traveler’s face.
     • Chinese schools that monitor children’s attention levels and alert their mothers.
  35. BUT ALGORITHMS STILL CAN’T FEEL EMOTIONS
     In order to ‘FEEL’ emotions, you have to be self-aware… There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI and that play a major role in science fiction and the ethics of artificial intelligence:
     • Consciousness: to have subjective experience and thought.
     • Self-awareness: to be aware of oneself as a separate individual, especially to be aware of one’s own thoughts.
     • Sentience: the ability to "feel" perceptions or emotions subjectively.
     • Sapience: the capacity for wisdom.
