URL of the original TEDx Talk: https://www.youtube.com/watch?v=PjiZbMhqqTM
Notes from my 2015 TEDx presentation, "We Should Wake Up Before The Machines Do," on the topic of artificial intelligence and consciousness.
Speaker: Daniel Faggella
Location: Southern New Hampshire University
1. Dan Faggella - TEDx Slides 2015 - Artificial Intelligence and Consciousness
2. This presentation is based on TechEmergence
founder Dan Faggella's 2015 TEDx talk titled
"What Will We Do When the Machines Can Feel?"
3. In his presentation, Dan explores the ethical
consequences of developing conscious machines,
which many artificial intelligence researchers
consider possible within our lifetime.
4. To view Dan’s talk for yourself, click the video
screen below:
5. It’s clear that technology matters
... and it matters because it
matters to us.
7. A cell phone, sitting by itself,
doesn’t have any moral worth, but
an animal, we might say, does.
8. A cell phone, we could say, is just
matter, while a deer actually
matters.
9. And although we presume that human beings
have the grandest and richest sentience among
the animals we have found on this planet, we
now attribute moral worth to other animals
as well.
10. What happens when, or if, technology itself
comes to matter in and of itself?
11. It seems very far out, but if we
take a quick jaunt through the
history of computing it might
shed some light on where we
might find ourselves in the
decades ahead.
12. This is what computers looked like
less than 70 years ago...
Pictured is the ENIAC computer, introduced in 1946 at the University of Pennsylvania
13. Around 20 years later IBM
developed computers that helped
Apollo 11 get to the moon.
14. But we’ve made many giant leaps
in computing technologies since
that time that make IBM’s
computers seem paltry in this day
and age.
15. Throughout the history of computing,
performance has increased and price has
decreased at a steady rate.
The image pictured is taken from Ray
Kurzweil's book The Singularity Is Near.
This theme goes along with Ray's general
theory of the law of accelerating returns;
more information is available on
Kurzweil's site.
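As an aside (mine, not the slides'), the "law of accelerating returns" amounts to exponential growth in price-performance. A toy model makes the compounding concrete; the two-year doubling time here is an assumed parameter for illustration, not a figure from the talk or from Kurzweil's book.

```python
# Toy illustration of exponentially improving price-performance,
# in the spirit of Kurzweil's "law of accelerating returns".
# The 2-year doubling time is an assumption for demonstration only.

def price_performance(years_from_now, doubling_time_years=2.0, baseline=1.0):
    """Computations per dollar relative to today's baseline."""
    return baseline * 2 ** (years_from_now / doubling_time_years)

# Under an assumed 2-year doubling time, 20 years compounds to 2**10.
print(price_performance(20))  # 1024.0
```

Under that assumption, two decades yields roughly a thousandfold gain, which is why steady exponential curves so often outrun our intuitions.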
16. Some experts suggest, based on Kurzweil's
model, that we are getting relatively close
to the point where an average laptop will
have as much raw computing power as a lower
mammalian brain.
17. Ray Kurzweil is not the only one who
believes that in the coming decade or so we
may have household computers with the same
'computing power' as the human brain.
18. ... But it’s not just raw computing
power that would make a
technology morally relevant.
20. The Deep Blue computer that beat (then)
world chess champion Garry Kasparov was 1.4
tons of raw computing power. It was the
finest supercomputer of its day, just 15
years ago.
21. An iPhone 5 from 2012 has 7x the
computing power that Deep Blue
did...
22. That’s 15 years, 7x the computing
power, and 1/11,000 of the size...
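As a back-of-the-envelope check (mine, not the slides'), a 7x gain over 15 years implies a doubling time of 15 / log2(7), a bit over five years:

```python
import math

# Back-of-the-envelope: if computing power grew 7x over 15 years,
# what doubling time does that imply?
gain = 7    # iPhone 5 vs. Deep Blue, per the slide
years = 15
doubling_time = years / math.log2(gain)
print(round(doubling_time, 1))  # 5.3
```

That is slower than the classic two-year Moore's Law cadence, but the slide's point stands either way: the hardware shrank by four orders of magnitude while getting faster.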
25. ...And there are other technologies
that are on their way up as well.
Biomimetic technology has
taken off in the past decade.
Click here to see one of MIT's
interesting projects: a robotic
cheetah that 'sees'.
Apple recently opened up
Siri to third-party developers.
Click here to read an article
outlining the announcement
via WIRED.
26. There are a lot of areas where humans are
not just being 'caught up' to; they are being
beaten... handily.
27. It brings to mind some of the fears
posited by folks like Bill Gates, Stephen
Hawking, and Elon Musk within the last year
about the real consequences of creating a
superintelligence... something vastly beyond
ourselves.
28. If it were as far beyond us as we are
beyond the lower animals... wouldn't it
trounce the planet just as humans have?
Wouldn't that be morally consequential?
29. But since Bill Gates isn’t exactly
an AI researcher, the worthwhile
question to ask is this:
What do real folks doing real work
in AI actually think about this?
32. Dr. Bostrom asked 170 AI researchers the
following: "By what year, with 50% confidence,
would you suppose we will have human-level
machine intelligence?"
33. Bostrom (et al.) 2012-13 AI Researcher Poll,
Timelines to Human-Level Machine Intelligence
The full study is available as a PDF on Nick Bostrom's website.

Confidence in Human-Level AI    Median year
50%                             2040
35. But what about consciousness?
A really complicated machine that can do smart
things, but isn’t really aware, doesn’t really
matter that much...
36. We asked 33 AI researchers when they believe
(with 90% confidence) that artificial intelligence
will be capable of self-aware consciousness.
37. TechEmergence 2015 AI Researcher Poll,
Timelines to Machine Consciousness (90% Confidence)

Timeframe       Responses   Share
Before 2021         4       12.12%
2021-2035           5       15.15%
2036-2060           8       24.24%
2061-2100           4       12.12%
2101-2200           2        6.06%
2201-3000           2        6.06%
Likely Never        1        3.03%
Can't Tell          6       18.18%
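The poll percentages are simply each response count divided by the 33 respondents. A quick sketch reproduces them (my reconstruction, not part of the slides: the second bin label is read as 2021-2035 from the garbled original, and the published counts sum to 32 of the 33 respondents):

```python
# Reproduce the TechEmergence poll percentages: count / 33 respondents.
# Bin labels follow the slide; "2021-2035" is a reconstructed label.
responses = {
    "Before 2021": 4, "2021-2035": 5, "2036-2060": 8, "2061-2100": 4,
    "2101-2200": 2, "2201-3000": 2, "Likely Never": 1, "Can't Tell": 6,
}
total = 33  # respondents, per the slide

for bin_label, count in responses.items():
    print(f"{bin_label}: {count} ({count / total:.2%})")
```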
38. We could suppose that in the next two
decades we might have some kind of machine
that replicates not only the intelligence but
also the sentience of a dog, for example.
39. This would be something that would be able
to understand sensory experiences, have a
knowledge of the past, and some kind of a
rough understanding of the future.
We wouldn't just treat it as a machine anymore.
It wouldn't just be empty mass—it would now
have moral worth.
40. It's reasonable to suppose that if we are
able to replicate that much intelligence and
sentience in a machine, and if any part of
Kurzweil's trajectory continues, it might not
be terribly long until Bostrom is proven right.
41. We could then find ourselves at a point
where the sentient and intelligent complexity
of our machinery would at last match our own.
42. If there were AI programmers that never had
to sleep, didn’t have to go to college, and never
made mistakes, we might suppose that one day
we may get here:
43. ... Where there would be something of greater
sentience and moral worth than ourselves.
44. It's reasonable to suppose that there are
in fact sensory experiences, concepts, and
ideas that we can't possibly compute, given
our hardware.
It is supposed by many that if we were to get
to human-level intelligence, there would be an
explosion of intelligence and sentience itself
that would vastly outstrip any words we have
to articulate it.
45. There would be a flexible and ever-evolving
kind of intelligence unlike anything biology
has been able to create.
But we should note that this has not happened
yet...
46. We don’t have human level computers and we
may be coming up on a plateau before we touch
what ‘human’ is.
Maybe we can replicate some kind of
intelligence/sentience, but not much more than
a fish. Maybe there’s something in a human
skull that science will never get its hands fully
around.
47. It’s reasonable to say that it’s more dangerous
than ever in this time of exponentially
improving technologies to hide under the rock
of:
“It hasn’t happened yet, so it never will”
48. What kind of an artificial intelligence
should corporations be able to build without
regulation?
If we are going to be able to construct living
machines that could suffer at our hands, should
we do that at all?
49. Or should we make them capable only of
experiencing pleasure, no matter how we treat
them?
But if that were the case, would they have any
sympathy for our sorrows, and would they feel
anything at all about harming us?
50. If we could set laws bounding AI within the
United States, what would stop another nation
from pursuing those same developments itself?
51. Ask yourself this:
"What makes a dog's life less worthy than a
human being's?"
Is it something about the consciousness of a
human being?
52. If AI can crack open that door... what does
that imply?
When machines not only trounce us in chess,
but in fact supersede us in the very moral
traits and qualities that we suppose make us
unique and make our lives worthwhile...
What do we mean, and how do we matter, then?
53. In order for an idea to trickle into policy
and regulation, it first has to be worthy of
contemplation and dialogue.
Luckily, this isn't the first grand moral
concern that will involve global unity in some
way, shape, or form.
This could be another one of those efforts...
54. The cosmopolitan ideal is more alive now
than ever. Despite our conflicts, education and
exposure are making us more likely to embrace
humans of any skin color, gender, or type...
maybe even all sentient beings.
I don't see the trend of expanding circles of
sentiment slowing down. We will need
well-intended collaboration if we are to survive
the technologies that we will create.
55. Many of the global collaborations (League
of Nations, World Health Organization, etc.)
have first involved tragedy.
56. The way that I see it now, the way that these
technologies are projected, the genuine
perspective of people in this field, and given the
moral consequence of not only destroying
ourselves but maybe creating what is beyond
us... I think that it behooves us to wake up
before the machines do.
58. Click the screen below to view a video of
Dan's TEDx talk for yourself:
59. dan@techemergence.com | www.danfaggella.com
Thanks for viewing the presentation. To join the
conversation on the intersection of
technology and intelligence, visit my personal
webpage and follow me on social media by
clicking the icons below: