There is a Global Effort to develop an open-source cognitive humanoid robot
IBM (IBM.N) said it will invest more than $1 billion to establish a new business unit for Watson
Reuters - Thu Jan 9, 2014 2:50am EST
"The biggest thing will be Artificial Intelligence," Schmidt (Google CEO) said at Oasis
Bloomberg - Mar 6, 2014 10:07 PM GMT+0100
China's top search engine Baidu Inc. has hired Google Inc's former Artificial Intelligence (AI) chief
Reuters - Fri May 16, 2014 4:58pm EDT
Numenta has developed a cohesive theory, core software technology, and numerous applications all based on principles of the neocortex.
This technology lays the groundwork for the new era of machine intelligence.
Maximum Traction + Sideslip Angle Control
Integration of ATC and SSE: supports the driver in sport driving mode, leaving more control and enjoyment while keeping safety
Can be set to two targets:
+ PERFORMANCE: Maximum Cornering Speed and Acceleration on Corner Exit
We don’t know exactly how Intelligent Machines will evolve
The only thing we know is that it will be Exponential Growth
This means it will happen sooner than we expect
Editor's notes
WE ARE AT THE END OF THE BEGINNING
Here Dr. Kelly is speaking about COGNITIVE COMPUTING
This means we are EXITING the PIONEERING phase and we are ENTERING the INDUSTRIAL Phase
At the beginning of the 20th century humanity numbered around 1.5 billion people.
During the last century we’ve seen exponential growth.
The Earth stored in GEOLOGICAL TIMES the energy coming from the sun in a compact form of FOSSIL FUELS.
The INDUSTRIAL REVOLUTION began when humans overcame the limitations of our muscle power with fossil energy.
In the SIXTIES the GREEN REVOLUTION made available synthetic FERTILIZERS and PESTICIDES to most of the world population.
We’re now in the early stages of doing the SAME thing to our MENTAL CAPACITY.
Some authors call it the SECOND MACHINE AGE
Its IMPACT on society will be characterized by EXPONENTIAL GROWTH
We CANNOT PREDICT exactly “HOW FAST” this process will be because we are still at an EARLY STAGE.
Nevertheless the EFFECTS are ALREADY EVIDENT.
Major Companies are investing Huge Capital in developing COGNITIVE MACHINES
You can find an example in your pocket.
Google Translate can: +) Translate between 90 languages +) Translate WITH THE CAMERA +) Do SPEECH TRANSLATION
Some of these functions are also available in offline mode, running on your mobile processor.
This is important for understanding how UBIQUITOUS this technology can be: from MAINFRAMES TO WEARABLE DEVICES.
IBM Watson competed on JEOPARDY!, a quiz show, in 2011
+) Uses NATURAL LANGUAGE PROCESSING to understand grammar and context in unstructured data +) Understands COMPLEX QUESTIONS. Evaluates all possible meanings and determines what is being asked. +) Presents answers and solutions.
Watson for ONCOLOGY analyzes UNSTRUCTURED CLINICAL NOTES AND REPORTS in plain English and identifies POTENTIAL TREATMENT PLANS. Doctors can consider the treatment options when making decisions.
There was a very interesting phenomenon with deep learning: industry (Facebook, Google, etc.) picked up deep learning faster than academia.
Development is taking place both in industry and academia. This is why companies are open to sharing their code and their knowledge.
If you’re creating a start-up, the window is closing very quickly on deep learning.
The most interesting applications are the vertical ones, because anyone can download free software and create horizontal applications.
AYLIEN - NATURAL LANGUAGE PROCESSING: sentiment analysis, classification
ALCHEMY API - NATURAL LANGUAGE PROCESSING: sentiment analysis
MONKEY LEARN - Semantic Text Analysis
DEXTRO - Analyzes VIDEO; makes video SEARCHABLE and discoverable; SECURITY video MONITORING
METAMIND - IMAGE RECOGNITION and LANGUAGE UNDERSTANDING
KAIROS - Face Recognition, EMOTION ANALYSIS, Crowd Analytics
CLARIFAI - VISUAL RECOGNITION
Cloud services to Train machine Learning Algorithms
Advanced Libraries: TensorFlow was open-sourced two weeks ago. TensorFlow is the internal tool used by Google to design its Deep Learning algorithms.
Addfor s.r.l. The particular recipe we use to design our virtual sensors: Engineering represents VERTICAL DOMAIN KNOWLEDGE; Data Science enables a much BROADER EXPLORATION OF THE DATA
Our applications in the Oil & Gas industry: Wave Nowcasting, Interpretation of Lab Data
Military Prototypes: Visible and Infrared Wavelengths, FPGA and GPU Targets. Systems are based on: Aggregated Channel Features as region-proposal method; fine-tuned AlexNet (CNN) as main detector; SVM as classifier
Performance Traction Control is our control system based on the sideslip estimator (SSE). It can be used to increase the cornering speed of race cars and to add safety to high-performance sports cars. It has been used on the best race and performance passenger cars since 2012.
What’s Next ? A whole new scenario.
History tells us that when a new technology paradigm comes around, the obvious applications are not the killer apps.
The killer apps tend to be surprising. No one anticipates them.
Speaking about robots we imagine something like this.
Tomorrow’s Robots will be employed in completely different environments (our houses).
The SECOND MACHINE AGE will impact our society at every level
OXFORD UNIVERSITY calculated how susceptible to automation each job is, based on 9 KEY SKILLS required to perform it
It turned out that about 35% of current jobs in the UK are at high risk of computerization over the next 20 years.
This will have a DIFFERENTIAL IMPACT:
repetitive and simple jobs will be the first to be replaced by machines
The same thing has already happened in industry
Artificial Intelligence Learns From Data (RULES ARE NOT PRE-CODED)
For this reason it is DATA HUNGRY
Big companies NEED DATA TO FEED THEIR AI
Skybox Imaging +) Provide Real-Time sub-meter resolution imaging and video for the whole earth +) Monitor people movements and infrastructure development +) Monitor high-value assets, pipelines, construction sites, monitor ships in ports +) Identify changes in relevant metrics like number of cars in a retailer’s parking lot or materials in ports to support investment decisions
Tesla Autopilot was automatically installed on 60,000 Tesla vehicles in October 2015
Autopilot allows Model S to steer within a lane, change lanes with the simple tap of a turn signal, and manage speed by using active, traffic-aware cruise control.
The main point differentiating Tesla’s Autopilot from the driving-assist systems of other manufacturers is that all cars are connected to each other and learn from each driver.
The car driver is seen as an “expert trainer” who feeds the collective network intelligence of the fleet simply by driving the vehicle on Autopilot.
The system updates the driving algorithm adding ~1 million miles of new data every day.
Google Data Acquisition +) It learns what temperature you like and builds a schedule around you. +) It lights up when you walk in the room. +) It automatically adapts as your life and the seasons change.
Just use it for a week and it programs itself: turn it off when you go to bed and it takes notes and builds your schedule. After you turn up the heat a few days in a row, Nest learns that you like eating breakfast at 70º.
By monitoring heart rate, respiration rate, body temperature and galvanic skin response, it can tell the difference between REM, light and deep sleep
Tianhe-2 Most Powerful of the TOP 500 LIST: 33.86 petaflops = 33,860 teraflops = 33,860,000 gigaflops
For comparison, my MacBook Pro is 0.32 gigaflops. In other words, Tianhe-2 is equivalent to about 100 million MacBook Pros.
National Supercomputer Center - Guangzhou
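The ratio quoted above can be checked with a quick back-of-the-envelope calculation. The figures are the slide's own (in particular the 0.32-gigaflop MacBook Pro estimate is taken from the slide, not independently verified):

```python
# Sanity check of the Tianhe-2 vs. laptop comparison, using the slide's figures.
tianhe2_gflops = 33_860_000   # 33.86 petaflops expressed in gigaflops
laptop_gflops = 0.32          # MacBook Pro figure quoted in the slide

ratio = tianhe2_gflops / laptop_gflops
print(f"Tianhe-2 ~ {ratio:,.0f} MacBook Pros")   # on the order of 100 million
```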
Google: the total computational capacity of Google is basically unknown. Nevertheless, some analysts try to infer it from power consumption. This is a picture of the central cooling plant in Google’s Douglas County, Georgia datacenter.
D-Wave The D-Wave 2X is a 1000-qubit quantum computer.
It is based on a novel type of superconducting processor that uses quantum mechanics to massively accelerate computation. It is best suited to tackling complex optimization problems
At the beginning there were shallow neural networks. They were made of layers that multiplied inputs by weights and transformed the result with transfer functions (typically sigmoids). Basically, NN layers are HOMEOMORPHISMS - geometrical transformations that do not affect the topology.
It is like having your data (say, olives and capers) spread around your pizza dough. Classifying olives from capers means rolling out and stretching the dough until it is possible to separate the two with a straight line.
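A single shallow layer of the kind described above can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the talk; the 2-to-2 layer size is an arbitrary choice that keeps the transformation a deformation of the plane:

```python
import numpy as np

def sigmoid(z):
    # Classic transfer function: squashes values smoothly into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 2))   # weight matrix: a linear "stretch" of the plane
b = rng.normal(size=2)        # bias: a translation

def layer(x):
    # Affine transform followed by a sigmoid: a continuous deformation of
    # the input space (the "dough stretching" analogy above).
    return sigmoid(W @ x + b)

x = np.array([0.5, -1.0])
print(layer(x))               # a point inside the unit square (0, 1)^2
```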
Neural Networks or Artificial Neural Networks were simple approximations of the brain’s neurons
Backpropagation was first described in 1974 (by Paul Werbos); Geoffrey Hinton and his colleagues later applied it to the training of ANNs (Rumelhart, Hinton and Williams, 1986).
Backpropagation is the key algorithm that makes training deep models computationally tractable. For modern neural networks, it can make training with gradient descent as much as ten million times faster, relative to a naive implementation.
It’s a reverse-mode differentiation that tracks how every node affects one output.
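The forward/backward mechanism can be sketched on a toy problem. This is a minimal illustrative example, not the speaker's code: a one-hidden-layer network trained on XOR (a classic task a single linear unit cannot solve), with the squared-error gradient propagated backwards and constant factors folded into the learning rate:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy dataset: XOR (illustrative choice, not from the talk).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer (4 units, arbitrary)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer

initial_loss = np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2)

lr = 1.0
for step in range(10_000):
    # Forward pass: keep the intermediate activations for reuse.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (reverse-mode differentiation): propagate the error
    # from the output back through every node, reusing h and out.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;  b1 -= lr * d_h.sum(axis=0)

final_loss = np.mean((out - y) ** 2)
print(f"training loss: {initial_loss:.3f} -> {final_loss:.3f}")
```

The key point is that the backward pass reuses the stored forward activations, which is what makes training tractable compared with differentiating each weight independently.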
The problem with shallow NNs is that they don’t adapt to very complex problems like recognizing a real-world image.
Actually they can fit any problem by adding enough neurons in the hidden layer, but this is a trick. It is like a student who prepares for an exam by memorizing every single example: he doesn’t understand the underlying concepts. When he finds a slightly different problem, he can’t generalize what he learned and fails the exam.
Memorizing everything is called overfitting. More shallow neurons give us more memory, but not understanding.
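The memorizing-student effect can be shown with the classic polynomial-fitting example (an illustrative stand-in, not from the talk): with enough parameters a model can hit every training point exactly, yet still miss unseen points.

```python
import numpy as np

rng = np.random.default_rng(1)
# 10 noisy samples of a sine wave to "study" from...
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.1, size=10)
# ...and 50 unseen points for the "exam".
x_test = np.linspace(0.03, 0.97, 50)
y_test = np.sin(2 * np.pi * x_test)

def mse(degree, x, y_true):
    # Fit a polynomial of the given degree to the training data,
    # then measure its mean squared error on (x, y_true).
    coeffs = np.polyfit(x_train, y_train, degree)
    return np.mean((np.polyval(coeffs, x) - y_true) ** 2)

# Degree 9 has as many parameters as training points: it memorizes them
# (near-zero training error) but generalizes worse than it trains.
for degree in (3, 9):
    print(f"degree {degree}: train {mse(degree, x_train, y_train):.5f}, "
          f"test {mse(degree, x_test, y_test):.5f}")
```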
Then, in 2006, Hinton made a giant step forward by finding a way to train Deep Neural Networks.
DNNs can tackle much more complex problems for the following reasons:
They learn the Underlying Explanatory Factors of a complex model. The models are Hierarchical: simple factors are learned at low levels and complex factors are learned on top of the previously learned factors. Since the models are Hierarchical, a semi-supervised approach can be applied.
Hierarchy means that higher, more complex concepts in an image are based on lower hierarchical features.
In this case both images on the right contain the “eye” feature, but obviously one is a real picture while the other is a Picasso…
This complex and hierarchical memory system mimics the internal behavior of the human neocortex, creating internal beliefs, memories and representations.
Looking inside these internal states is like looking inside the “dreams” of the machine.
These are just some of the internal representations of the system's data, and the only ones we can understand. In fact, images, like any other concept, are reduced to multidimensional vectors (TENSORS) in the memory of the machine, but we cannot interpret them due to their intrinsic high dimensionality.
The question is: what is Google “dreaming” about you right now? Or in other words: which are the tensors associated with your Google (or Facebook, or Amazon) profile?
For some people this is the BEGINNING of the END
For sure we are on the verge of an epochal change, a new revolution.