Robotics and Artificial Intelligence – will the scientists of tomorrow still be human?
Robotics and artificial intelligence are my passion. In fact, I am so passionate about this advancing
and expanding field that I decided in the 1990s to get my doctorate in robotics and artificial
intelligence. But did you know that until a man who was born in the USSR and grew up in Brooklyn,
NY, USA coined the term, there was no such word as “robotics”? Indeed, “robotics,” “positronic” and
“psychohistory,” to name a few, did not exist in the English language until a brilliant writer and
scientist created them – a man born on a somewhat obscure date in the early 20th century to Jewish
parents in what is now Smolensk Oblast, Russia. He was born somewhere between October 4, 1919
and January 2, 1920, and later settled on January 2 as his official birthday.
His name was Isaak Ozimov, whom we now know as Isaac Asimov, one of the most prolific writers
of the 20th century in any field. Asimov and his family immigrated to the USA when he was three
years old, and he grew up in Brooklyn, New York. Although he did not like attention drawn to it, he
was a particularly brilliant individual, even teaching himself to read at the age of five. Isaac Asimov
was passionate about science, but also about the possibilities that science not only created, but
suggested might be. This prompted him to write many short stories and books in the area of science
fiction. It was not just science fiction full of strange creatures to be defeated to save the human race,
but stories about what might be discovered through space exploration, who might be there, how it
could be done, how robots might help, and what robots might become in the future – including the
possibility of robots as intelligent as, or even more intelligent than, humans.
If you are a Star Trek fan, you will remember Data, the android on Star Trek: The Next Generation,
a robot so sophisticated that he appeared almost human, but with a superior intellect provided by his
positronic brain. Asimov was a longtime friend of Gene Roddenberry, the creator, screenwriter and
producer of Star Trek, and collaborated with him more than once. Data, although a fictional
character, represented an almost perfect combination of robotics and artificial intelligence that these
longtime friends dreamed would one day be achieved.
Over the years, it has surprised me how many more people have become interested in science fiction,
considering that many of the things that were once fiction in this field are now fact! One need only
read Asimov's Visit to the World's Fair of 2014, written in 1964, to be amazed at how many of his
predictions have now become a reality, or very close to it.
I began reading Isaac Asimov's stories and books when I was young. His books, such as the I, Robot
compilation, helped inspire me to get my doctorate in Artificial Intelligence (AI) and Robotics in the
1990s.
Robotics, the term coined by Asimov in his fictional history, is now a reality, and a massive
industry. The machines that have been developed include such fascinating equipment as the
Canadarm, developed by the Canadian Space Agency for use in zero gravity, which served for many
years on the Space Shuttle and whose successor, Canadarm2, continues in service on the
International Space Station; the new underwater diving robot developed by France; and the
international sensations, the Mars Rovers. In 2015, the first humanoid robot in space received
NASA's Government Invention of the Year award. Robots that can do work that would normally
have to be done by human beings have become increasingly valuable in environments where human
dexterity is essential, yet the environment itself is too harsh for human life, at least without the
encumbrance of incredibly clumsy armor.
Humanoid robots, unlike the Mars Rovers and the Canadarm, are designed to match the movements
of the human body, including walking upright, running, bending, stooping, etc. With modern
technology, including the great strides being made in artificial intelligence, humanoid robots are
becoming more human-like in appearance and abilities every day. TOPIO, for example, is a
demonstration humanoid robot developed to play table tennis. Compare a photo of it from 2009 to
what “he” looked like in 2007!
NAO from France, Honda's ASIMO from Japan and many other humanoid robots are being
developed for interaction, entertainment and assistance in home and office environments. In only a
few short years, advances in range of motion – and therefore in mobility and usefulness – have
opened new possibilities. With the advent of tactile sensors beneath the polyurethane “skin” on each
fingertip, new robotic hands are able to grasp a lightbulb without breaking it. Again, these are
incredibly useful in harsh environments, such as the one the French diving robot was created for,
where the robot will be used for deep-water archaeological discovery and the potential recovery of
very fragile items. Advances in artificial “skin” are even making robots look more human than ever,
like Hanson Robotics' Sophia... but robots that are too human-like might scare people!
That brings us again to the topic of artificial intelligence. Sophia is just one example of robotics
combined with AI. She actually “learns” from interacting with people and her environment. I, Robot
is becoming reality.
What IS artificial intelligence?
In a very simplified nutshell, it is intelligence exhibited by machines. In contrast to natural human
intelligence, AI is an output of human creation and, in most cases, human input… but for how long?
AI will soon feed AI. Traditionally, computers that play games against humans are programmed
with a combination of all possible moves in the game and a knowledge base of pre-defined
combinations which, combined with the speed of their massively parallel processors, can beat even
the most skilled player. This happened when an IBM supercomputer known as Deep Blue beat
reigning chess champion Garry Kasparov in 1997 – the first time a machine beat a world champion
in a classical game. However, was this really a case of a machine thinking on its own? In reality,
this machine did not learn as sentient beings do; it was painstakingly programmed with every
possible move the programmers could think of, and compared each of its opponent's moves against
a knowledge base loaded by humans. Just the same, the defeat of a world champion by a machine in
an intellectual game was a stunning advance in the field – one that has been greatly surpassed since
then. Deep Blue was built specifically to play chess, was capable of evaluating 200 million
positions per second, and was the fastest computer ever to play a world chess champion. Today,
software running on standard chipsets does the job: in 2006, Deep Fritz, a chess program installed
on a PC with two Intel dual-core CPUs, beat world chess champion Vladimir Kramnik.
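The brute-force approach described above – searching the tree of possible moves rather than learning – can be sketched in a few lines. This is only a toy illustration on tic-tac-toe, not Deep Blue's actual chess engine (which added enormous opening books, hand-tuned evaluation functions and custom hardware), but the core idea of exhaustively exploring every move and counter-move is the same:

```python
# Toy minimax search on tic-tac-toe: the machine "plays" by exploring
# every possible continuation, not by learning from experience.
# Board: a list of 9 cells, each 'X', 'O' or None.

def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of the position with `player` to move:
    +1 if X can force a win, -1 if O can, 0 if best play is a draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if all(board):                      # board full: draw
        return 0
    values = []
    for i in range(9):
        if board[i] is None:            # try every legal move...
            board[i] = player
            values.append(minimax(board, 'O' if player == 'X' else 'X'))
            board[i] = None             # ...then undo it
    # X picks the best outcome for X, O the worst for X
    return max(values) if player == 'X' else min(values)

print(minimax([None] * 9, 'X'))  # 0 – perfect play from an empty board is a draw
```

Chess is far too large for this naive search, which is why Deep Blue needed pruning, heuristics and specialized hardware – and why the learning-based approach discussed next was such a departure.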
The goal of AI research is to create machines that can learn on their own, fully autonomous and
unencumbered by the need for human input. Today, this is no longer science fiction, but reality.
AI used for games, for example, no longer requires the painstaking input of all possible moves, but
instead learns from the examples of previous games. After all, the whole premise of artificial
intelligence IS for machines to be able to learn just as we do. One example given by Vivek
Wadhwa of Stanford University is a machine's ability, today, to recognize handwriting. In the past,
it would have required millions of lines of computer code; today, thanks to the machine's ability to
learn from previous examples, it takes only hundreds. As Mr. Wadhwa says, AI today is exceeding
human capabilities in the fields in which it is trained! Rather than continue with the cumbersome
approach of trying to load into the machine's memory everything that is in a human's brain for a
particular area, programmers now create virtual neural networks, modeled after the human brain,
where incoming information is processed in layers and the connections between the layers grow
stronger based on what is learned.
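The idea of connections growing stronger from examples can be shown with the simplest possible case: a single-layer perceptron, the historical ancestor of the multi-layer networks described above. This sketch learns the logical AND function from examples alone – nobody programs the rule in; the weights (the "connections") are nudged after each mistake until the answers come out right:

```python
# A toy perceptron: weights are the "connections", and each training
# mistake strengthens or weakens them. This is a deliberately minimal
# one-layer sketch, not a deep multi-layer network.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]                       # one weight per input connection
    b = 0.0                              # bias term
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
            err = target - pred          # 0 when the guess was right
            w[0] += lr * err * x[0]      # adjust connections toward the
            w[1] += lr * err * x[1]      # correct answer
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0

# Learn logical AND from examples rather than explicit rules:
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

Deep learning stacks many such layers of weighted connections and trains them with more sophisticated update rules, but the principle – connections adjusted by learning from examples – is the same.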
This is called deep learning, a new area of machine learning in which an increasing number of
layers of information can be processed by ever faster computers. Things we take for granted today,
even on devices we carry with us such as our smartphones, use this technology for voice
recognition, image recognition and text recognition. For example, if you are traveling in a foreign
country, you no longer need to painstakingly type in the text of a sign – perhaps an impossible task
given the characters available on your English-language device – but can simply photograph the
sign and have smart software provide you with an English translation!
Google announced its machine learning system called RankBrain, a ranking algorithm that provides
more relevant search results. Google is also using this technology for its self-driving vehicles:
image recognition systems enable them to perceive pedestrians much as we do when driving, and to
recognize automatically what poses a danger on the road.
More and more businesses (automotive, aeronautics, biotechnology, banking, advertising, ...) are
beginning to use AI. By using machines that are able to learn, rather than having to program
everything into them, the costs of research and of producing many of the things we use every day
will go way down, as machines begin to do the work faster, more efficiently, and of course far
cheaper than humans can.
AI is a massive consumer of data: the more data that is analyzed, the more the intelligence increases.
All actions on the Internet (websites and especially social network applications, apps, IoT, etc.) are
logged in huge Big Data systems and feed the AI systems of many industries. Today, without
realizing it, we are building the intelligence of the machines of tomorrow.
Finally, AI will likely begin, in the very near future, to produce new technology itself. Already, as
mentioned in a previous article, software used for analysis is flagging information it was not
specifically “asked” to flag, but which it learned would be relevant to the task it was performing.
Who knew, when 2001: A Space Odyssey – the movie based on Arthur C. Clarke's short story The
Sentinel – was released, that artificial intelligence as represented by the computer HAL 9000 would
ever be possible? HAL was an acronym for Heuristically programmed ALgorithmic computer, a
sentient – in other words, intelligent – computer that interacted in a calm and pleasant voice with
the crew of a spaceship. Yet today we have a perhaps very eerie similarity, at least as far as the
voice and interaction go, in Apple's Siri, described as a spin-out from the SRI International
Artificial Intelligence Center and an offshoot of DARPA's CALO project. Perhaps a bit ironic is
Watson, by IBM. Some claimed that Clarke arrived at the name HAL by shifting each letter of
IBM back by one, something Clarke vigorously denied. Just the same, there is an eerie similarity
between the fictitious HAL and IBM's Watson, an artificial intelligence able to learn by analyzing
Big Data in virtually any field and with virtually no limitations.
As AI and robotics are developed and combined, it is only a matter of time – likely very little,
perhaps less than twenty years – before machines will be developing and producing the technology
of the future, becoming, quite literally, scientists themselves...
Should we be afraid? Of course, there are always risks. But there were risks in putting wheels on
boxes drawn by horses, risks in driving around in self-propelled carriages, risks in flying, risks in
riding a rocket out into space, and of course risks in putting weakened live organisms into our
bodies to prevent the diseases themselves.
Our society has benefited greatly from the risks we have taken in the past. Indeed, we have the
ability to make mistakes that could wipe us out (like nuclear arms), but so far, we are still here and
reaping the benefits of taking those risks. There are always risks, but oh, the possibilities!
Jean-christophe (Jay C) Huc
22 June 2016 - jch@huc.name