8. Artificial Intelligence?
“the science and engineering
of making intelligent machines”
-John McCarthy, 1956
9. “We don’t have better algorithms. We just have more
data.” - Peter Norvig, Chief Scientist, Google
10. AI plus the recorded memory of augmented humans
11. Man-Computer Symbiosis
“The hope is that, in not too many years, human
brains and computing machines will be coupled
together very tightly, and that the resulting
partnership will think as no human brain has ever
thought and process data in a way not approached by
the information-handling machines we know today.”
– J.C.R. Licklider, "Man-Computer Symbiosis," IRE Transactions on
Human Factors in Electronics, vol. HFE-1, pp. 4-11, March 1960.
13. A few key assertions
We are building a network-mediated global mind
It is not the “skynet” of the Terminator movies
It is us, augmented
14. “global consciousness is
that thing responsible for
deciding that pots
containing decaffeinated
coffee should be orange”
– Danny Hillis (via Jeff Bezos)
– http://www.oreillynet.com/pub/a/network/2005/03/16/etech_3.html
19. “Wikipedia is not an
encyclopedia. It is a
virtual city, a city whose
main export to the world
is its encyclopedia
articles, but with an
internal life of its own.”
20. “Half the money I spend
on advertising is wasted;
the trouble is I don't know
which half.”
- John Wanamaker
(1838-1922)
22. “Only 1% of healthcare spend now goes to diagnosis.
We need to shift from the idea that you do diagnosis
at the start, followed by treatment, to a cycle of
diagnosis, treatment, diagnosis...as we explore what
works.”
-Pascale Witz, GE Medical Diagnostics
26. Intelligence Augmentation
“The human mind ... operates by association. With one item in its grasp, it
snaps instantly to the next that is suggested by the association of thoughts,
in accordance with some intricate web of trails carried by the cells of the
brain. It has other characteristics, of course; trails that are not frequently
followed are prone to fade, items are not fully permanent, memory is
transitory. Yet the speed of action, the intricacy of trails, the detail of mental
pictures, is awe-inspiring beyond all else in nature.
Man cannot hope fully to duplicate this mental process artificially, but he
certainly ought to be able to learn from it. ... One cannot hope thus to equal
the speed and flexibility with which the mind follows an associative trail, but
it should be possible to beat the mind decisively in regard to the permanence
and clarity of the items resurrected from storage.
Consider a future device for individual use, which is a sort of mechanized
private file and library. It needs a name, and, to coin one at random, "memex"
will do.”
– Vannevar Bush, As We May Think, 1945
27. A device that knows
where I am better than I
do, a knowing assistant
telling me where to go
and how to get there.
Often, the shape of the emerging world is right in front of our faces, but we can't see it because we aren't framing it in the right way.
What I want to give you now is some context for thinking about the extraordinary convergence of computing and human potential that is leading us toward what we might truly call a global brain.
I'm going to start in what might be an unexpected place, with the Google autonomous vehicle. This car is thought-provoking on a number of levels.
You see, in the first DARPA Grand Challenge, back in 2004, the best autonomous vehicle went only seven miles in seven hours.
Yet only six years later, Google announced a robotic car that had driven over a hundred thousand miles in ordinary traffic.
Was this a triumph of artificial intelligence, like IBM's Watson beating human Jeopardy champions?
It was surely that. But there's another important factor that is easy to overlook. Google's chief scientist, Peter Norvig, says that the algorithms aren't any better; Google just has more data. What kind of data?
It turns out that the autonomous vehicle is made possible by Google Street View. Google had human drivers drive all those streets in cars that were taking pictures and making very precise measurements of the distances to everything. The autonomous vehicle is actually remembering a route that was driven by human drivers at some previous time. That "memory," as recorded by the car's electronic sensors, is stored in the cloud and helps guide the car. As Peter pointed out to me, "Picking a traffic light out of the field of view of a video camera is a hard AI problem. Figuring out if it's red or green when you already know it's there is trivial."
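Norvig's point can be sketched in a few lines of code. This is purely illustrative (the function and the toy image are invented here, not Google's actual pipeline): once a stored map supplies the light's location, the hard detection problem reduces to comparing average color channels inside a known region.

```python
# Sketch of Norvig's point: once a prior map tells you WHERE the traffic
# light is, deciding red vs. green is a trivial color test. All names and
# data here are illustrative, not Google's pipeline.

def classify_light(image, box):
    """Classify a known traffic-light region as 'red' or 'green'.

    image: 2D grid of (r, g, b) pixels; box: (top, left, bottom, right)
    coordinates supplied by the prior map, not discovered by vision.
    """
    top, left, bottom, right = box
    r_sum = g_sum = n = 0
    for row in image[top:bottom]:
        for (r, g, b) in row[left:right]:
            r_sum += r
            g_sum += g
            n += 1
    # Whichever channel dominates on average decides the answer.
    return "red" if r_sum / n >= g_sum / n else "green"

# A toy 2x2 "image" whose known light region is mostly green:
image = [[(10, 200, 10), (10, 180, 20)],
         [(20, 190, 10), (10, 210, 30)]]
print(classify_light(image, (0, 0, 2, 2)))  # -> green
```

Finding the box without the prior map is the genuinely hard part; that is exactly the work the human Street View drivers did in advance.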
The Google autonomous vehicle is thus an example of what J.C.R. Licklider, the legendary DARPA program manager whose funding seeded the development of the Internet, was writing about in his 1960 paper "Man-Computer Symbiosis." He wrote [quote above].
So, in a surprising way, the Google autonomous vehicle is another unexpected example of the trend I've referred to as harnessing collective intelligence. It's another miracle of computer-mediated human cooperation! This is a thread that runs through all the great inventions of the web: Google itself, Wikipedia, and social platforms like Twitter and Facebook are all technologies for harvesting and coordinating the products of thousands or even millions of minds, increasingly in close to real time.
The Google vehicle is only the latest in a long series of developments that show how we are augmenting ourselves and connecting ourselves into something bigger. We are building a network-mediated global mind. It is not the "skynet" of the Terminator movies. It is us, augmented.

This picture is a routing map of the Internet. It's striking how much it looks like a map of the synapses in a human brain. It's nowhere near as dense yet, but the imagery is suggestive. And there's a lot more here than just imagery: the global brain is a human-computer symbiosis.
Let me come at this idea from another angle.

At our Emerging Technology Conference in 2005, Amazon CEO Jeff Bezos recounted a conversation he'd had with computer scientist Danny Hillis, in which Danny said [quote above]. Now, this is nothing new. Speech, the written word, printed books and newspapers, the telephone, radio, and television are all technologies for passing knowledge from mind to mind.
But what's different now is the way that electronic media speed up that process. Using Twitter, we can instantly learn about trending topics around the world and share in the responses of others.
This new real-time mind-sharing capability can be used to organize large numbers of people for political purposes, as we saw in the recent uprisings in the Middle East.
Technology-enabled cooperation can be very simple, as with a wiki. Here, for example, is the initial Wikipedia page for the great earthquake that hit Japan last year.
Within a short time, through thousands of edits by thousands of interested individuals, it turned into a full-featured encyclopedic account of the earthquake and its aftermath. Let's watch that in action.
What's important to realize is the human element in these applications. A community of hundreds of millions of humans linking to documents is what makes Google possible. At its deepest level, the web is social, and not just in overtly social applications like Facebook, Twitter, and Google+.

Michael Nielsen has written a wonderful book about how collective intelligence can be applied to the problems of science. He takes lessons from the consumer Internet and applies them to much more challenging intellectual activities. He emphasizes, in his discussion of Wikipedia, that it is not just a collection of documents but the product of a community. He says [quote above].
It's about harnessing micro-expertise. There was an earlier game, Karpov vs. the World, in which Anatoly Karpov handily defeated "the crowd." But that was a speed game, with no time to organize the community. Kasparov versus the World, by contrast, took over four months. A group of moderators managed the community, much as on Wikipedia: anyone could suggest a move, the moderators argued for the candidates, and a popular vote decided each move. It was thus an exercise in persuasion. A key point in the game occurred on the tenth move. One of the moderators, Irina Krush, an American champion, had studied a particular possibility and had even written a paper about it. So on that one move, she had more expertise than even Kasparov.
Every Wikipedia entry has a talk page. Here's a discussion of why they changed the page to be about the Tohoku earthquake rather than the Sendai earthquake. It turns out that's how it's referred to in Japan.
That leads me to the whole topic of feedback loops. It isn't just that this information is going from mind to mind. We are increasingly taking this information and creating electronic feedback loops, which may include humans in different ways. Increasingly, technology is solving what advertisers call "the Wanamaker problem," after the 19th-century department store magnate John Wanamaker, who said [quote above]. What Google did with pay-per-click advertising was to solve the Wanamaker problem by building a business model that charged advertisers only when consumers clicked on their ads, and by harnessing collective intelligence to predict which ads would be most likely to be clicked on.
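The mechanism can be sketched as a tiny ranking function. The ad names, bids, and click-through rates below are invented for illustration; the point is only that ordering by bid times predicted click-through rate rewards ads people actually click, which is how pay-per-click closes the Wanamaker feedback loop.

```python
# A minimal sketch of the idea behind pay-per-click ranking: charge only
# on clicks, and order ads by expected revenue per impression, i.e.
# bid * predicted click-through rate. All numbers here are hypothetical.

ads = [
    {"ad": "A", "bid": 2.00, "predicted_ctr": 0.01},  # high bid, rarely clicked
    {"ad": "B", "bid": 0.50, "predicted_ctr": 0.08},  # low bid, often clicked
    {"ad": "C", "bid": 1.00, "predicted_ctr": 0.03},
]

def rank_ads(ads):
    # Expected revenue per impression decides the order, so a cheap ad
    # that people actually click (B) beats a high bid nobody clicks (A).
    return sorted(ads, key=lambda a: a["bid"] * a["predicted_ctr"], reverse=True)

for a in rank_ads(ads):
    print(a["ad"], round(a["bid"] * a["predicted_ctr"], 4))
```

Every click then feeds back into the predicted click-through rates, so the system keeps learning from the crowd which ads are worth showing.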
We're now seeing this same idea spread to other areas of the economy. For example, these kinds of feedback loops enabled by data are part of what the US government is trying to do in healthcare with Accountable Care Organizations.
Personalized medicine requires new kinds of diagnostic feedback loops, as Pascale Witz of GE Medical Diagnostics explained [quote above].
In the city of San Francisco, you're seeing something similar: the parking meters are equipped with sensors, and pricing varies by time of day and, ultimately, by demand. I call these systems "algorithmic regulation"; they regulate the way our bodies regulate themselves, autonomically and unconsciously. All of the "smart city" technology initiatives need to be seen as ways of instrumenting not just the physical city but the social life of the humans who live in it.
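Algorithmic regulation of parking can be sketched as a thermostat-like feedback loop. The occupancy targets, step size, and price floor below are invented for illustration, not San Francisco's actual parameters; the point is the autonomic adjustment.

```python
# A toy feedback loop in the spirit of demand-responsive parking pricing:
# nudge the hourly rate up when occupancy runs above a target band and down
# when it runs below, the way a thermostat regulates temperature.
# Thresholds and step size are invented for illustration.

def adjust_price(price, occupancy, lo=0.60, hi=0.80, step=0.25, floor=0.25):
    if occupancy > hi:        # too full: raise the price to free up spaces
        return price + step
    if occupancy < lo:        # too empty: lower the price to attract drivers
        return max(floor, price - step)
    return price              # inside the target band: leave it alone

price = 2.00
for occupancy in [0.95, 0.90, 0.75, 0.55]:   # one sensor reading per period
    price = adjust_price(price, occupancy)
    print(f"occupancy {occupancy:.0%} -> ${price:.2f}/hr")
```

No human sets each price; the sensors and the rule do it continuously, which is what makes the regulation "autonomic."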
This is also a great example of how our practice areas can develop new business models that involve applying what we know, rather than just writing about it or organizing events about it.
The idea of the social life of the city as captured by technology becomes clear in this visualization, done by Wired magazine, of 24 hours of 311 calls in the city of New York, showing which issues citizens are contacting their government about. This same kind of ebb and flow increasingly comes from the data exhaust of our devices.
Projects like MIT's Senseable City Lab are exploring how the data exhaust from millions of network-connected citizens can be used to shape the patterns of cities. We will see more of this in the future. It's an important frontier of man-machine collaboration.
This shift requires new competencies of companies and governments. The field has increasingly come to be called "data science": extracting meaning and services from data. As you can see, the skills that make up this job description are in high demand according to LinkedIn, and that demand is growing steeply.
I want to turn to another aspect of man-machine symbiosis, this time in terms of information retrieval. In 1945, Vannevar Bush wrote a famous and influential article called "As We May Think" that in many ways prefigured the ideas of the World Wide Web.
The current state of the art in this kind of near-miraculous information retrieval can be seen in today's smartphones. For example, a smartphone equipped with mapping software knows exactly where you are. But note that the information being retrieved is generated by human beings, often through collective activity.
Returning to IBM's Watson, we see that it too is a product of man-machine symbiosis. After all, the documents that it "reads" to perform its feats of apparent intelligence are human documents. In the three seconds it has to come up with a Jeopardy question, it can read and process the equivalent of 200 million pages. And now that speed of information retrieval is being used for more than parlor tricks, as IBM works to train Watson to act as an assistant in healthcare. The average physician has perhaps five hours a week to read the latest research in his or her field; Watson's ability to read everything and suggest potentially relevant answers makes it an ideal assistant. "Watson makes suggestions, not decisions," says IBM's Dr. Randy Kohn.
This is the real opportunity for new information retrieval UIs like Google's Project Glass: specialized settings where access to a computer can be seen as a powerful kind of human augmentation. I expect it to be used in professional settings before it becomes popular as a consumer device. (In social settings, it will require even more profound resets of behavior than the "always-on" mobile phone.)
You can see a preview of where this is taking us in the Apple Store. Where most stores (at least in America) have used technology to eliminate salespeople, Apple has used it to augment them. Each store is flooded with smartphone-wielding salespeople who can help customers with everything from technical questions to purchase and checkout. Walgreens is experimenting with a similar approach in the pharmacy, and US CTO Todd Park foresees a future in which health workers are part of a feedback loop: sensors track patient data, and systems alert the workers when a patient needs to be checked on. The augmented home health worker will allow relatively unskilled workers to be empowered by the much deeper knowledge held in the cloud.
This global brain is doing more than letting us spread news and gossip more quickly.
PatientsLikeMe allows patients to share their symptoms and solutions, and is starting to provide a basis for crowdsourced clinical trials of new treatments.
Or consider the point Jen Pahlka made in her recent TED talk about government services. When we use crowdsourced services like xxx, which let neighbors help each other rather than calling a city call center, we not only save money, we make our society stronger.
And of course, there's the latest sensation in cooperative applications, crowdfunding, as exemplified by Kickstarter.
But this technology of collective action and man-machine collaboration can be used for immense social good. For example, after the devastating earthquake in Haiti last year,
a whole set of new tools for cooperation was deployed to coordinate the activities of volunteers. The Ushahidi crowd-reporting platform was used to report people in need of rescue via SMS to a special shortcode; the collaborative OpenStreetMap project was used to quickly map shantytowns so that rescuers could be sent to the right locations; and crowd-work platforms like CrowdFlower, SamaSource, and Amazon Mechanical Turk were used to translate reports from Haitian Creole into English via volunteers in Haitian diaspora communities thousands of miles away. These applications show a man-machine symbiosis in which smartphone-wielding humans serve as sensors, as remote processors (translators), and as augmented actors, guided to the locations where they were needed and told where to dig.
Incidentally, collaborative micro-work platforms like SamaSource and CrowdFlower are increasingly being used by businesses as well.
We also see new kinds of collaboration on sites for peer-to-peer home rentals, like Airbnb,
or RelayRides, for peer-to-peer sharing of automobiles.
Lisa Gansky analyzes this phenomenon in her recent book, The Mesh: Why the Future of Business Is Sharing, and on her site, meshing.it.
Clay Shirky asks whether there's an economy beyond consumption, made up of our "cognitive surplus."
In his prescient first novel, Down and Out in the Magic Kingdom, Cory Doctorow used the name "whuffie" for the currency of a new "economy of attention."
You can see the beginnings of this attention economy on YouTube, where five-year-old kids and their dad can produce videos about toy train crashes that are watched tens of millions of times by other small children, like my three-year-old grandson!
In thinking about what's next, I want to return to a very important idea: many future cooperative applications will be powered by data gathered from sensors. Back in 2004, when we coined the term Web 2.0 to talk about the second coming of the web after the dotcom bust, everyone asked what Web 3.0 might be. I would always reply that it wasn't a version number, but that if it were, Web 3.0 would be when collective intelligence applications were driven by sensors rather than by people typing on keyboards. Remember that the data for the Google autonomous vehicle came from cars driven by humans but augmented by very precise sensors.
On the consumer Internet, we see this in an area referred to as the Quantified Self. There are now hundreds of apps and devices that people are using to track their biology, their moods, their exercise, and their sleep, and they are uploading this data to the Internet.
Here's an example of one such device: a stress monitor in a wristwatch, targeted at veterans with post-traumatic stress disorder. It allows them to map and avoid areas of the city that are high-stress for them.
Projects like UN Global Pulse are trying to gather and coordinate large-scale sensor data from entire populations.
But machine learning, collective intelligence, and man-machine symbiosis can also be put to ill use. Many people think that the world financial crisis that came into focus in 2008 was the product of such a symbiosis gone wrong: amoral companies, thinking only of profit, created synthetic financial instruments out of the livelihoods and homes of millions, and set those instruments up to fail.
These tools can also be used to manipulate financial markets for private gain, to the detriment of society. Our financial markets are a great example of collective intelligence abused for private advantage rather than public good.