6. Plausibility of Artificial Consciousness
A view skeptical of AC is held by type-identity theorists:
“consciousness can only be realized in particular physical
systems, because consciousness has properties that
necessarily depend on physical constitution.”
However, for functionalists,
“any system that can instantiate the same pattern of
causal roles, regardless of physical constitution, will
instantiate the same mental states, including
consciousness”
Along these lines, some theorists have proposed that
consciousness can be realized in properly designed and
programmed computers.
7. Type-Identity Theory
Mental events can be grouped into types and associated
with types of physical events in the brain.
For example, the mental event of pain corresponds to a
physical event in the brain (such as C-fiber firings).
There are two quite different versions of type-identity
theory, depending on what kind of identity is held to relate
mental and physical events.
Ullin Place (1956) – Compositional Identity
Feigl (1957) and Smart (1959) – Referential Identity
9. Referential Type-Identity Theory
For Feigl and Smart, the identity was to be interpreted as
the identity between the referents of two descriptions
which referred to the same thing.
“the morning star” and “the evening star” are identical in
the sense that both descriptions refer to the planet Venus.
Sensations and brain processes do indeed mean different
things, but they refer to the same physical phenomenon.
This is the Fregean distinction between sense and reference.
Conclusion: all of the versions share the central idea that
the mind is identical to something physical.
11. Multiple Realizability
An objection to the type-identity theory, popularized by
Hilary Putnam in the late 1960s.
It states that the same mental property, state, or event can
be implemented by different physical properties, states, or
events.
12. Putnam's Formulation
Do all organisms have the same brain structures? Clearly
not!
Pain corresponds to completely different physical states in
different organisms, and yet they all experience the same
mental state of “being in pain.”
Should robots be considered a priori incapable of
experiencing pain just because they do not possess the
same neurochemistry as humans?
Putnam concluded that type-identity theory makes an
implausible conjecture.
13. Functionalism
The core idea is that mental states are constituted solely by
their functional role, that is, by their causal relations to
other mental states, sensory inputs, and behavioral outputs.
Brains are physical devices with a neural substrate that
perform computations on inputs to produce behaviours.
According to this theory, it is possible to build silicon-based
devices that are functionally isomorphic to humans, as long
as the system performs the appropriate functions.
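The functionalist claim can be made concrete with a minimal Python sketch. All the names here (the states, stimuli, and the two "substrate" classes) are invented for illustration: two systems with entirely different internal representations realize the same pattern of causal roles, and so, on the functionalist view, the same mental states.

```python
# Toy illustration of functional isomorphism (all names invented):
# a causal-role table maps (current state, sensory input) to
# (next state, behavioral output).
ROLE_TABLE = {
    ("calm", "tissue damage"): ("pain", "wince"),
    ("pain", "analgesic"):     ("calm", "relax"),
}

class CarbonSystem:
    """A 'neural' realizer: stores its state as a string attribute."""
    def __init__(self):
        self.state = "calm"
    def step(self, stimulus):
        self.state, output = ROLE_TABLE[(self.state, stimulus)]
        return output

class SiliconSystem:
    """A 'silicon' realizer: stores its state as an index into a list."""
    STATES = ["calm", "pain"]
    def __init__(self):
        self.idx = 0
    def step(self, stimulus):
        next_state, output = ROLE_TABLE[(self.STATES[self.idx], stimulus)]
        self.idx = self.STATES.index(next_state)
        return output

# Despite different physical constitution (string vs. index), both
# systems instantiate the same causal roles:
carbon, silicon = CarbonSystem(), SiliconSystem()
for stimulus in ["tissue damage", "analgesic"]:
    assert carbon.step(stimulus) == silicon.step(stimulus)
```

This is also a picture of multiple realizability: the role table, not the storage scheme, fixes the mental-state type.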
19. Ability Hypothesis
Nemirow claims that "knowing what an experience is like
is the same as knowing how to imagine having the
experience".
He argues that Mary only obtained the ability to do
something, not the knowledge of something new.
Mary gained an ability to "remember, imagine and
recognize."
Knowing what it is like to see red is merely a sort of
practical knowledge, a “knowing how” (to imagine,
remember, or re-identify a certain type of experience).
28. Anticipation
A conscious machine needs flexible, real-time components
that predict how the world will evolve.
It should make coherent predictions and plans for
environments that may change, and execute them only
when appropriate, in order to simulate and control the real
world.
There has been significant research on the role of
consciousness in cognitive models. Examples: CLARION,
OpenCog.
29. Unified Theory of Cognition
Book written by Allen Newell
Newell's goal :
To define the architecture of human cognition, which is the
way that humans process information. This architecture
must explain how we react to stimuli, exhibit goal directed
behavior,acquire rational goals, represent knowledge, and
learn.
30. Newell's Cognitive Model
Newell introduces Soar, an architecture for general
cognition.
Soar is the first problem solver to create its own subgoals
and learn continuously from its own experience.
Soar has the ability to operate within the real-time
constraints of intelligent behavior, such as immediate
response and item-recognition tasks.
34. It's Soar not SOAR !
Historically, Soar stood for State, Operator And Result,
because all problem solving in Soar is regarded as a search
through a problem space in which you apply an operator to
a state to get a result.
Over time, the community no longer regarded Soar as an
acronym, which is why it is no longer written in upper case.
39. Long-term Production Memory
All of Soar's longterm knowledge is stored in a single
production memory.
Each production is a condition-action structure that
performs its actions when its conditions are met.
Memory access consists of the execution of these
productions.
During the execution of a production, variables in its
actions are instantiated with values.
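The condition-action idea can be sketched in a few lines of Python. This is a toy matcher, not Soar's actual (Rete-based) implementation, and the production and working-memory elements are invented; it only shows how variables in a production's actions get instantiated from the conditions that matched.

```python
# Toy production memory (example production and elements invented).
# Working memory holds (attribute, value) pairs; variables start with "?".
working_memory = {("block", "A"), ("block", "B"), ("on-table", "A")}

productions = [
    {"name": "note-clear",
     "conditions": [("block", "?x"), ("on-table", "?x")],
     "action": ("clear", "?x")},
]

def match(conditions, wm):
    """Yield variable bindings that satisfy all condition patterns."""
    def helper(conds, bindings):
        if not conds:
            yield bindings
            return
        attr, var = conds[0]
        for (a, v) in wm:
            if a != attr:
                continue
            if var.startswith("?"):
                if bindings.get(var, v) != v:   # consistent binding only
                    continue
                yield from helper(conds[1:], {**bindings, var: v})
            elif var == v:
                yield from helper(conds[1:], bindings)
    return helper(conditions, {})

# Memory access = executing matched productions; variables in the
# action are instantiated with the matched values.
for prod in productions:
    for bindings in list(match(prod["conditions"], working_memory)):
        attr, var = prod["action"]
        working_memory.add((attr, bindings.get(var, var)))

assert ("clear", "A") in working_memory      # A is on the table: retrieved
assert ("clear", "B") not in working_memory  # B did not match both conditions
```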
40. Working Memory
The result of memory access is the retrieval of information
into a global working memory.
It is the temporary memory that contains all of Soar's
short-term processing context. It has three components:
the context stack, which specifies the hierarchy of active
goals, problem spaces, states, and operators;
objects, such as goals and states (and their sub-objects);
preferences, which encode procedural search-control
knowledge.
42. Preferences
There is one special type of working-memory structure:
the preference.
Preferences encode control knowledge about the
acceptability and desirability of actions.
Acceptability preferences determine which actions should
be considered as candidates.
Desirability preferences define a partial ordering on the
candidate actions.
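The two roles of preferences can be sketched as follows. The operator names and preference tuples are invented, and this is a simplification of Soar's actual preference semantics: acceptability picks out the candidates, and "better" preferences impose a partial order among them.

```python
# Toy preference store (operator names invented).
preferences = [
    ("acceptable", "move-left"),
    ("acceptable", "move-right"),
    ("acceptable", "wait"),
    ("better", "move-right", "wait"),   # move-right is preferred to wait
    ("better", "move-left", "wait"),    # move-left is preferred to wait
]

# Acceptability preferences determine the candidate set.
candidates = {p[1] for p in preferences if p[0] == "acceptable"}

# Desirability preferences define a partial order; the dominated
# candidates are those some other candidate is "better" than.
dominated = {worse for kind, better, worse in
             [p for p in preferences if p[0] == "better"]}

best = candidates - dominated
assert best == {"move-left", "move-right"}
```

Note that the partial order leaves two undominated candidates here; in Soar, that situation would produce a tie impasse (slide 44).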
43. Decision Level
The decision level is based on the memory level plus an
architecturally provided, fixed, decision procedure.
The decision level proceeds in a two-phase elaborate-decide
cycle.
During elaboration, the memory is accessed repeatedly, in
parallel, until quiescence is reached; that is, until no more
productions can execute.
This results in the retrieval into working memory of all of
the accessible knowledge that is relevant to the current
decision.
After quiescence has occurred, the decision procedure
selects one of the retrieved actions based on the preferences
that were retrieved into working memory.
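The elaborate-decide cycle above can be sketched schematically. The productions here are invented toy functions (each maps working memory to new elements), and the decision procedure is reduced to picking the candidate marked "best"; the point is the run-to-quiescence loop followed by a single fixed decision.

```python
# Schematic elaborate-decide cycle (toy productions invented).
def elaborate(wm, productions):
    """Fire all matching productions in parallel waves until quiescence,
    i.e. until no production can add anything new to working memory."""
    while True:
        new = set()
        for prod in productions:
            new |= prod(wm)       # each production: WM -> new elements
        if new <= wm:             # quiescence reached
            return wm
        wm |= new

# Two toy productions: retrieve a candidate operator, then a preference.
prods = [
    lambda wm: {("candidate", "greet")} if ("sees", "person") in wm else set(),
    lambda wm: {("best", "greet")} if ("candidate", "greet") in wm else set(),
]

wm = elaborate({("sees", "person")}, prods)

# Decision phase: the fixed procedure selects from what was retrieved.
chosen = next(op for kind, op in wm if kind == "best")
assert chosen == "greet"
```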
44. Goal Level
A general intelligence must be able to set and work
towards goals. This level is based on the decision level.
Goals are set whenever a decision cannot be made; that is,
when the decision procedure reaches an impasse.
Impasses occur when there are no alternatives that can be
selected (nochange and rejection impasses) or when there
are multiple alternatives that can be selected, but
insufficient discriminating preferences exist to allow a
choice to be made among them (tie and conflict impasses).
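A simplified decision procedure makes the impasse categories concrete. This sketch collapses Soar's four impasse types into three outcomes and uses an invented preference format; it only shows when a choice succeeds and when an impasse (and hence a subgoal) would arise.

```python
# Simplified impasse detection (preference format invented):
# `better` is a set of (x, y) pairs meaning x is preferred to y.
def decide(candidates, better):
    if not candidates:
        return None, "no-change impasse"      # nothing can be selected
    undominated = {c for c in candidates
                   if not any(y == c for _, y in better)}
    if len(undominated) == 1:
        return undominated.pop(), None        # a unique choice exists
    return None, "tie impasse"                # insufficient preferences

assert decide(set(), set()) == (None, "no-change impasse")
assert decide({"a", "b"}, {("a", "b")}) == ("a", None)
assert decide({"a", "b"}, set()) == (None, "tie impasse")
```

Whenever the second element is non-None, the architecture would generate a subgoal to resolve the impasse.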
45. Impasse Resolution
Whenever an impasse occurs, the architecture generates
the goal of resolving it, which becomes a subgoal.
Along with this goal, a new performance context is created.
The creation of a new context allows decisions to continue
to be made in the service of achieving the goal of “resolving
the impasse”.
A stack of impasses is possible; the original goal is resumed
once the entire impasse stack has been resolved.
46. Learning through Chunking
In addition to all the above levels, a general intelligence
requires the ability to learn.
All learning occurs by the acquisition of chunks:
productions that summarize the problem solving that
occurs in subgoals, a mechanism called “chunking”.
The actions of a chunk represent the knowledge generated
during the subgoal; that is, the results of the subgoal.
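Chunking can be illustrated with a toy memoizing problem solver. The "problem" here (summing a set) is invented, and real chunks are productions over working-memory conditions rather than a lookup table; the sketch only shows the key effect, that a learned chunk lets the system skip the subgoal next time.

```python
# Toy chunking sketch (the "problem" is an invented stand-in).
chunks = {}   # frozen condition sets -> results of past subgoals

def solve(wm):
    key = frozenset(wm)
    if key in chunks:                 # a learned chunk fires:
        return chunks[key], "chunk"   # no subgoal needed
    result = sum(wm)                  # stand-in for subgoal problem solving
    chunks[key] = result              # acquire a chunk summarizing it
    return result, "subgoal"

assert solve({1, 2, 3}) == (6, "subgoal")   # first time: solved in a subgoal
assert solve({1, 2, 3}) == (6, "chunk")     # second time: retrieved directly
```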
47. Evolution of Soar
YEAR VERSION IMPLEMENTED IN
1982 Soar 1 Lisp
1983 Soar 2 Lisp/OPS5
1984 Soar 3
1986 Soar 4
1989 Soar 5
1992 Soar 6 C
1996 Soar 7 Tcl/tk
1999 Soar 8 SGIO
49. Appraisal Detector
This theory proposes that an agent continually evaluates a
situation and that evaluation leads to emotion.
The evaluation is hypothesized to take place along multiple
dimensions, such as
goal relevance
goal conduciveness
causality and control
These dimensions are exactly what an intelligent agent
needs to compute as it pursues its goals while interacting
with an environment.
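One way to picture an appraisal detector is as a small pipeline from event to appraisal profile to emotion label. The dimension names follow the slide, but the scoring rules, event format, and emotion labels below are all invented for illustration.

```python
# Toy appraisal detector (scoring rules and labels invented).
def appraise(event, goal):
    """Score a situation along three appraisal dimensions."""
    return {
        "goal_relevance":     event["affects"] == goal,
        "goal_conduciveness": event["effect"] > 0,
        "caused_by_self":     event["agent"] == "self",
    }

def emotion(appraisal):
    """Map an appraisal profile to an emotion label."""
    if not appraisal["goal_relevance"]:
        return "indifference"
    if appraisal["goal_conduciveness"]:
        return "joy"
    return "regret" if appraisal["caused_by_self"] else "anger"

# Another agent thwarts the goal: relevant, not conducive, other-caused.
event = {"affects": "win-game", "effect": -1, "agent": "other"}
assert emotion(appraise(event, "win-game")) == "anger"
```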
50. Conclusion
This field still has far to travel before we understand fully
the space of cognitive architectures and the principles that
underlie their successful design and utilization.
However, we now have over two decades’ experience with
constructing and using a variety of such architectures for a
wide range of problems, along with a number of challenges
that have arisen in this pursuit.
If the scenery revealed by these initial steps is any
indication, the journey ahead promises even more
interesting and intriguing sights and attractions.
52. References
1) SOAR: An Architecture for General Intelligence, John E.
Laird, Allen Newell, Paul S. Rosenbloom, 1986.
2) A preliminary analysis of the Soar architecture as a basis
for general intelligence, John E. Laird, Allen Newell, Paul
S. Rosenbloom, 1989.
3) http://en.wikipedia.org/wiki/Cognitive_architecture
4) http://cs.gmu.edu/~eclab/research.html
5) http://en.wikipedia.org/wiki/Unified_theory_of_cognition
6) http://cll.stanford.edu/research/ongoing/icarus/
7) http://en.wikipedia.org/wiki/Artificial_consciousness
8) http://plato.stanford.edu/entries/functionalism/
53. References
9) A Survey of Cognitive Architectures, David E. Kieras,
University of Michigan.
10) Connectionism and Cognitive Architecture: A Critical
Analysis, Jerry A. Fodor and Zenon W. Pylyshyn, Rutgers
Center for Cognitive Science, Rutgers University, New
Brunswick, NJ.
11) Human Cognitive Architecture, John Sweller, University
of New South Wales, Sydney, Australia.
12) http://cogarch.org/index.php/Soar/Architecture
13) http://code.google.com/p/soar/wiki/Documentation