Artificial intelligence is purported to improve workplace productivity. But does it really? This presentation takes a hard look at where AI is today and which aspects can truly help today's Digital Workplace improve productivity.
Artificial intelligence, people, work, and productivity – four terms that aren’t often mentioned in the same sentence. What do they have to do with each other? And why do we care? Do you care?
You should, because AI is about to take your job, and soon. At least, that’s what you might think if you pick up any newspaper. You can hardly open one without bumping into an article claiming that AI will either make life infinitely better by offloading menial jobs to machines, or destroy society forever by taking away meaning from millions of workers and splitting society into an elite of technological haves and hordes of unemployed have-nots. While this is not a new discussion, and similar ones have raged since at least the beginning of the industrial age, this time it seems different. More pervasive, more all-encompassing.
And perhaps more importantly, it now impacts not only low-skilled factory workers but highly skilled information workers as well. Will AI replace information workers? Are our jobs at risk? And if so, which ones? Should we be worried or should we be excited? Books like Erik Brynjolfsson and Andrew McAfee’s Race Against the Machine and The Second Machine Age are helping to fuel a discussion about what AI means for the workplace.
This is obviously a topic much broader than a short presentation can cover, but I would like to focus on a few aspects of AI that put the buzz into context, highlighting a few things that are real and a few that are clearly not going to happen any time soon. The presentation wraps up with a real-world demonstration of where AI is helping (not replacing) information workers today, making them more effective by augmenting their strengths with intelligence provided by computers.
So let’s start with a brief look at what AI is… and isn’t.
In the summer of 1956, leading computer scientists gathered at Dartmouth in what is now famously called the Dartmouth Workshop. Over the course of many weeks, scientists came and went in an effort to define the future of computing, and this is where the ideas that had been germinating in the minds of people like Claude Shannon, Marvin Minsky, and Norbert Wiener came together to provide a roadmap for what computers could do.
<Read the quote> - the vision and optimism in those early days were that computers would soon be able to replicate the human mind.
In the early days, popular visions like this one were what people imagined the future of computing would look like: an anthropomorphic view of technology. You can see the humanlike forms of the machines, which have remained popular to this day, although developers have become more realistic about the actual capabilities of these cyborgs as time has gone on.
Any talk about the place of AI today is well served by a brief look at how we got to where we are. The history of AI has followed a series of boom and bust years, with exciting advances followed by disappointing shortfalls, as reality set in and the picture turned out to be more complicated than originally anticipated.
AI boom years have been characterized by periods of federal funding by DARPA; the first generation pursued the vision originally rolled out at the Dartmouth Workshop, followed by a dip in the 1970s when people realized just how much computing power would be needed to actually achieve those goals.
The 1980s saw a renaissance in AI as researchers narrowed their focus to expert systems for specific needs, built on proprietary hardware. Work on limited aspects of intelligence, like optical character recognition and speech recognition, advanced during this period, but by the end of the 1980s disappointment set in as results proved less than stellar. The rise of the PC as a generic platform also undercut the more expensive proprietary hardware vendors, and that market collapsed.
The 2000s, and more recently the 2010s, have seen a second renaissance in AI, driven once again by a narrowing of focus, as well as by the development of machine learning, made possible by rapid advances in hardware processing power, the appearance of the cloud, and advances in big data analytics.
DARPA funding. Representative achievements: reasoning as search, natural language, micro-worlds, the first robot, neural networks.
So where does that leave us today? The rapid advancement of machine learning, and more recently deep learning, has driven AI technology forward. So now we have things like the instantaneous translation tools you see in the picture.
So now we are making progress in understanding how AI will impact our business in the near term. We just need to examine a few more capabilities and limitations of the technology.
These kinds of problems have become the focus of AI research today because new developments in technology have made them solvable. Specifically, faster processors, readily available cloud services, and cheap storage that has led to big data have converged to enable AI to solve heretofore intractable problems. Now it is possible to analyze language in real time, in what is known as natural language processing (NLP).
Furthermore, a specialized computing construct called a neural network, a hierarchical circuit, can evaluate many different options very quickly. So when the rules are clear and the goals are predictable, computers can assess many possible alternatives very fast. When coupled with feedback, such as what was successful in similar situations or what answers other people gave in similar situations, the predictions get even better. This feedback loop is known as machine learning, and it has become practical because of the improvements in technology.
(That’s why these types of solutions are good at chess: they can run many simulations very quickly and make good guesses at what the best move could be.)
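To make that loop concrete, here is a minimal Python sketch, purely illustrative and not from the presentation (the features, data, and the use of scikit-learn are my assumptions): a model trained on past outcomes scores many candidate options quickly, and refitting on new feedback improves the guesses.

    # Minimal, illustrative sketch of "evaluate many options + learn from feedback".
    # All data and feature values are made up for demonstration purposes.
    from sklearn.linear_model import LogisticRegression

    # Features describing past options (toy numbers) and whether each worked out.
    past_options = [[0.2, 0.9], [0.8, 0.1], [0.5, 0.5], [0.9, 0.8], [0.1, 0.2]]
    outcomes     = [1,          0,          0,          1,          0]

    model = LogisticRegression().fit(past_options, outcomes)

    # With clear rules and predictable goals, the machine can score many
    # candidate options very quickly and pick the most promising one.
    candidates = [[0.3, 0.7], [0.85, 0.9], [0.4, 0.1]]
    scores = model.predict_proba(candidates)[:, 1]  # estimated P(success)
    best = max(range(len(candidates)), key=lambda i: scores[i])
    print(f"Best candidate: {candidates[best]} (score {scores[best]:.2f})")

    # The "machine learning" part is simply the feedback loop: append new
    # (option, outcome) pairs as they are observed and refit the model.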
Certainly AI achievements are impressive.
Deep Blue beating a chess grandmaster - 1996
Watson beating Jeopardy! champions - 2011
AlphaGo beating a Go master - October 2015
But as impressive as these are, they all share some common attributes: the computers followed well-defined rules, they learned from a closed set of applicable knowledge, and the number of possibilities, the degrees of freedom, was effectively finite. In the case of Jeopardy!, the knowledge set included Wikipedia; for the others, the ruleset is well defined, even if the number of permutations of moves is quite large.
This example from 2016, Microsoft’s Tay chatbot, clearly demonstrates that we are nowhere close to emulating human intelligence. The debacle showed that solving generic, open-ended problems is still far off.
So what can we do, and where is AI effective today?
Emulating humanity has proven to be a really hairy problem. So researchers have focused on more modest goals, such as limiting the scope of the problem and identifying unique contexts.
By limiting the context and scope of AI projects, technology can be brought to bear to solve real, but specific problems. Limiting the scope makes it possible to overcome some of the biggest impediments to solving problems.
Today, AI is particularly effective in these cases. Why is that?
So AI is quite limited today. When does it work well? When the entire universe of data is finite, the rules are clear, and the outcome is predictable. That is fine for the consumer market, but what does it mean for us business folks? How do these developments impact how information workers do their jobs on a daily basis?
Today, we focus on apps because that is the way information is presented. But the brain doesn’t work that way; we don’t get up in the morning and think about Salesforce, Outlook, or Dynamics. We think about topics, such as customers, projects, and products. But information isn’t presented that way, because it is locked in disconnected silos.
But what if we could aggregate all our information and organize it into meaningful constructs like topics, then use previous experience to figure out which ones are most important or relevant to what I am currently looking at, and present them to me in a context and format I already use today? Wouldn’t that be wonderful?
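As a toy illustration of that aggregation idea (the app names and items below are hypothetical, not how any particular product works), a few lines of Python can regroup siloed items by topic:

    # Toy sketch: regroup items that live in per-app silos by topic instead.
    from collections import defaultdict

    # Information as it is presented today: siloed by app (hypothetical data).
    silos = {
        "Outlook":    [("Acme renewal", "Re: contract draft")],
        "Salesforce": [("Acme renewal", "Opportunity updated"),
                       ("Globex pilot", "New lead created")],
        "Dynamics":   [("Globex pilot", "Support case opened")],
    }

    # Information the way the brain works: by topic, with the app as metadata.
    by_topic = defaultdict(list)
    for app, items in silos.items():
        for topic, item in items:
            by_topic[topic].append((app, item))

    for topic, items in by_topic.items():
        print(topic, "->", items)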
Well, that is exactly what NLP and machine learning can be used to do. How does it work? Let’s take a look at a real-world example.
All these factors determine how important a topic is. A system like this should present the most important topics first.
In a world of decisions made from many disconnected inputs, the practical solution is to organize information the way the brain works, by topics, so people can make the best decisions. Processes like these will not be replaced by robots anytime soon, because the rules aren’t clear and the goals are hard to express. That’s why topic computing is the way of the future. And it’s available today from harmon.ie.
You’ll notice that Collage doesn’t replace a worker; it uses NLP and machine learning to help workers make better decisions. This approach is known as intelligence augmentation, a different take on AI: it doesn’t replace people, it helps people be more effective. This, I believe, is the approach that will succeed going forward, because it puts the human at the center of the too-much-information problem.
AI and Productivity
How AI Can Help People
Be More Productive at Work
“…every aspect of learning or any other feature of intelligence can in principle
be so precisely described that a machine can be made to simulate it.”
Dartmouth Workshop proposal (1955)
"machines will be capable, within
twenty years, of doing any work
a man can do.”
Herbert Simon (1965)
“In from three to eight years we
will have a machine with the
general intelligence of an
average human being.”
Marvin Minsky (1970)
An AI Timeline
1950 1960 1970 1980 1990 2000 2010 2020
Birth of AI
• Information Theory – digital signals
• Cybernetics – thinking machines
• The Turing Test
• Symbolic reasoning
AI Winter I
• Limited computer processing power
• Limited database capacity
• Limited networking capabilities
• Real-world problems are complicated
- Image processing / face recognition
- Combinatorial explosion
Focus on Specific ‘Intelligence’
• Expert Systems (knowledge)
• Neural networks make a comeback
• Optical character recognition
• Speech recognition
AI Winter II
• Disappointing results
• Collapse of dedicated hardware vendors
Focus on Specific Problems
• Machine learning
• Deep learning – pattern analysis / classification
- Big data: large databases
- Fast processors to crunch data
- High-speed networks
A Modern Approach to AI
Artificial Intelligence (AI)
To build human-like capabilities into
autonomous technological systems
such as a computer or robot
Douglas Engelbart, 1962
Intelligence Augmentation (IA)
To increase the capability of a person to
approach a complex problem situation, to
gain comprehension to suit their particular
needs, and to derive solutions to problems
Douglas Engelbart, 1962
Artificial Intelligence vs. Intelligence Augmentation
AI Puts Technology at the Core IA Puts Humanity at the Core
Email-based Topic Computing
Information Delivered the Way the Brain Works
• Singular proper nouns (NNP) become topic candidates, excluding dates, numbers,
locations, and people’s names
• First-word algorithm (matched against a common dictionary)
• Capitalized nouns are included
Natural Language Processing (NLP) Example
Part of Speech
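To show how rules like these look in practice, here is a hedged Python sketch using spaCy. The library choice, the example sentence, and the COMMON_WORDS stand-in for the “common dictionary” are all my assumptions, not harmon.ie’s actual implementation:

    # Illustrative sketch of NNP-based topic-candidate extraction with spaCy.
    # Requires: pip install spacy && python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")

    # Entity labels excluded by the rule above: dates, numbers, locations, people.
    EXCLUDED = {"DATE", "TIME", "CARDINAL", "ORDINAL", "GPE", "LOC", "PERSON"}

    # Hypothetical stand-in for the "common dictionary" of the first-word rule.
    COMMON_WORDS = {"meeting", "project", "report", "update", "thanks"}

    def topic_candidates(text):
        """Collect singular proper nouns (NNP) as topic candidates."""
        candidates = set()
        for tok in nlp(text):
            if tok.tag_ != "NNP":            # keep only singular proper nouns
                continue
            if tok.ent_type_ in EXCLUDED:    # drop dates, numbers, places, people
                continue
            # First-word rule: a sentence-initial capitalized common word
            # is just ordinary capitalization, not a topic.
            if tok.is_sent_start and tok.text.lower() in COMMON_WORDS:
                continue
            candidates.add(tok.text)
        return candidates

    print(topic_candidates("Alice sent the Collage roadmap to Acme on Monday."))
    # e.g. {'Collage', 'Acme'} - 'Alice' (PERSON) and 'Monday' (DATE) are excluded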
• How often does a topic appear in different sources (i.e. apps)?
• How many times does a topic appear in a given app?
• When was the last time the topic appeared?
• How often does the topic appear for my colleagues?
• How much have my colleagues interacted with a specific topic?
Machine Learning Example
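Here is a sketch of how signals like these could be combined into a single importance score. Every weight and field name below is hypothetical; in a real system the weights would be learned from user feedback rather than set by hand.

    # Illustrative topic-importance scoring; all weights/names are hypothetical.
    import math
    import time

    def topic_score(stats):
        """Combine the signals listed above into one importance score."""
        days_ago = (time.time() - stats["last_seen"]) / 86400
        recency = math.exp(-days_ago / 7)  # decays with a one-week time constant
        return (2.0 * stats["source_count"]               # appears in many apps
                + 1.0 * math.log1p(stats["mentions_in_app"])
                + 3.0 * recency                           # seen recently
                + 0.5 * stats["colleague_count"]          # colleagues see it too
                + 1.5 * math.log1p(stats["colleague_actions"]))

    # Rank topics so the most important are presented first (toy data).
    topics = {
        "Acme renewal": {"source_count": 3, "mentions_in_app": 12,
                         "last_seen": time.time() - 2 * 86400,
                         "colleague_count": 4, "colleague_actions": 9},
        "Office move":  {"source_count": 1, "mentions_in_app": 2,
                         "last_seen": time.time() - 30 * 86400,
                         "colleague_count": 1, "colleague_actions": 0},
    }
    for name in sorted(topics, key=lambda n: -topic_score(topics[n])):
        print(f"{topic_score(topics[name]):6.2f}  {name}")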
People and technology work best
when they work together