From the Signal to the Symbol:
Structure and Process in Artificial Intelligence



                  Marko A. Rodriguez
           T-5, Center for Nonlinear Studies
           Los Alamos National Laboratory
              http://markorodriguez.com

                 November 13, 2008

                                     Abstract

There is a divide in the domain of artificial intelligence. On one side of this divide
are the various sub-symbolic, or signal-based, systems that are able to distill stable
representations from a potentially noisy signal. Pattern recognition and classification
are typical uses of such signal-based systems. On the other side of the divide are
the various symbol-based systems. In these systems, the lowest level of representation
is the a priori determined symbol, which can denote something as high-level
as a person, place, or thing. Such symbolic systems are used to model and reason
over some domain of discourse given prescribed rules of inference. An example of
the unification of this divide is the human: the human perceptual system performs
signal processing to yield the rich symbolic models that form the majority of our
interpretation of and reasoning about the world. This presentation will provide an
introduction to different signal and symbol systems and discuss the unification of
this divide.





                       General Introduction

• We receive signals that are noisy, never identical, and yet we have a
  stable representation of “reality”.

• Signals from different modalities can map to the same abstract concept
  (e.g. hearing a dog bark and seeing a dog both map to dog, or, with
  more specificity, to a particular dog you know).

• In higher-level thinking (i.e. at the level of “conscious awareness”), we
  reason in terms of these abstract concepts, not in terms of the signals
  (e.g. “This dog has no owner, it must be a stray.”).

• Both signal and symbol processing occur in the same neural substrate.



                               General Introduction

• A distinction between signal and symbol systems:
     signal: the information processed by the system is very “low-level” (e.g. simple
     geometric patterns) and makes few ontological commitments.1
     symbol: the information processed by the system is very “high-level” (e.g. people)
     and makes many ontological commitments.
• A distinction between the structure and process of systems:
     structure: the types of objects that compose the system.
     process: the types of mappings that evolve the system.


                               structure                              process
            signal       features and relations         feature distance and activation
           symbol         objects and relations                rules of inference
  1
    Ontological commitment refers to the assumptions about the world/environment that the system
takes to be true.



        Introduction to our Experimental Subjects and
                    Notation Conventions
Our example subjects are Marko and Fluffy:2

[Images of Marko and Fluffy.]
All formalisms will be presented in graph notation, using the same variable names
across models as far as possible.

• G a graph, V a set of vertices, E a set of edges, E = {E1, . . . , En} a family of edge sets
• i, j ∈ V , (i, j) ∈ E, (i, n, j) a statement or triple, ⟨w+, w−⟩ an evidence tuple
• x ∈ Rn an input vector, w ∈ Rm a feature vector
  2
    These images were found on the web many moons ago; apologies to the fine people who created
them, who will only get this meager credit.



                                 Outline

• Signal Representation
    The HMAX Model
    Self-Organizing Maps

• Symbol Representation
    Description Logics
    Evidential Logics


• Unifying Signals and Symbols

• A Distributed Graph in an Infinite Space




                          The HMAX Model - Introduction

• Object recognition/classification through low-level feature analysis.

• Can support scale, translation, and rotation invariance.3

• Anatomically realistic with respect to the Hubel and Wiesel visual cortex
  research.

Riesenhuber, M., Poggio, T., “Hierarchical models of object recognition in cortex”, Nature Neuroscience, volume 2, pages 1019-1025, 1999.[6]




   3
     Depends on the learning/training procedure used as well as the choice of the low-level features coded
into the system.



             The HMAX Model - The Structure

• The HMAX network can be defined as G = (V, E) where V is a set of
  vertices (i.e. neurons, feature selectors), E ⊆ (V × V ), and there exist
  no cycles.

• There are two types of vertices: simple and complex, where V = S ∪ C
  and S ∩ C = ∅. Cells are “tuned” to respond to a particular input
  feature.

[Figure: the HMAX hierarchy: an S1 layer of simple cells feeding a C1 layer, an S2 layer, and a top C2 layer.]

                      The HMAX Model - The Process

• Each vertex i ∈ S is tuned to a particular feature wi ∈ Rn and performs
  the function:4

                     si : Rn → [0, 1] : si(x) → exp(−||wi − x||2 / 2σ2)


• Each vertex i ∈ C has the same excitation value as its most excited
  simple child vertex.

                               ci : Rm → [0, 1] : ci(x) → max(x)

 4
     The w features at S1 are the ontological commitments of the model and are usually simple line types.
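To make the two cell types concrete, here is a minimal Python sketch of the equations above; the feature vectors, input patch, and σ are arbitrary choices for illustration.

    import numpy as np

    def simple_cell(w, x, sigma=1.0):
        # Gaussian tuning: response of a simple cell with preferred
        # feature w to an input patch x (both in R^n).
        return np.exp(-np.linalg.norm(w - x) ** 2 / (2.0 * sigma ** 2))

    def complex_cell(child_responses):
        # Max pooling: a complex cell is as excited as its most
        # excited simple child vertex.
        return max(child_responses)

    # Toy forward pass: four S1 cells tuned to different features,
    # pooled by a single C1 cell.
    features = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
                np.array([1.0, 1.0]), np.array([-1.0, 0.0])]
    x = np.array([0.9, 0.1])                    # input patch
    s1 = [simple_cell(w, x) for w in features]  # S1 responses in [0, 1]
    c1 = complex_cell(s1)                       # C1 response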



The HMAX Model - Example

[Figures: an example input processed layer by layer: banks of S1 feature detectors (features 1-4) applied across the image, C1 cells pooling over the S1 detectors, then S2 cells and a final C2 pooling layer.]

                       The HMAX Model - Drawbacks

• What is captured at the highest point in the hierarchy is a large list of
  features, not their positions relative to one another. With high resolution,
  the list of features turns into a unique identifier for an object (hopefully).5

• There is a distinction between learning/training and categorizing/perceiving.




   5
     Complex cells can be seen as “grandmother cells”. The further up the hierarchy, the more agnostic
the cell is to transformations of the object it represents.



                       Self-Organizing Maps - Introduction

• Self-organizing maps (aka. Kohonen networks) can be used to generate a map
  (i.e. model) of an input space (i.e. environment) in an unsupervised manner.
• Each vertex in the map specializes on representing a particular region of the input
  space (i.e. each vertex specializes on particular features of the environment). Denser
  regions of the input space receive more vertices for their representation.
• There is no separation between learning/training and categorizing/perceiving. Every
  input adjusts the feature tunings of the vertices. The more “learned” the system is
  with respect to the environment, the smaller the adjustments.


Kohonen, T., “Self-organized formation of topologically correct feature maps”, Biological Cybernetics, volume 42, pages 59-69, 1982.[5]





                 Self-Organizing Maps - The Structure

• A self-organizing map is defined as G = (V, E, ω), where V is the set
  of vertices, E ⊆ (V × V ) is a set of edges, and ω : E → [0, 1] defines the
  strength of coupling between vertices. If (i, j) ∉ E, then ω(i, j) → 0.
  Finally, (i, i) ∈ E and ω(i, i) → 1.

• Every vertex i ∈ V has an n-dimensional feature vector wi ∈ Rn. Initially
  all vertex features are randomly generated.6

• The environment is defined by an n-dimensional space. A sample from
  that space is denoted x ∈ Rn.

   6
    Coupling strength between vertices (i.e. edge weight) can be determined by their relative distance to
one another in Rn .



             Self-Organizing Maps - The Process

The SOM algorithm proceeds according to the following looping rules:

1. Generate a sample x ∈ Rn from the environment.

2. Determine which vertex in V is closest to x via some distance function
   (e.g. ||x − wi||2). Denote that vertex i.

3. For each vertex j ∈ V ,

                         wj ← wj + η ω(j, i)(x − wj),

   where η ∈ [0, 1] is some learning rate (each vertex moves toward the
   sample x, weighted by its coupling to the winning vertex i).
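The loop above translates almost line for line into Python; the sketch below assumes a 5 × 5 grid of vertices, a Gaussian coupling function ω in grid distance (one option suggested by footnote 6), and arbitrary constants.

    import numpy as np

    rng = np.random.default_rng(0)
    n, eta = 2, 0.1
    grid = [(a, b) for a in range(5) for b in range(5)]  # vertex positions
    w = rng.random((len(grid), n))                       # random initial tunings

    def omega(j, i, radius=1.5):
        # Coupling strength decays with grid distance between vertices.
        d = np.linalg.norm(np.subtract(grid[j], grid[i]))
        return np.exp(-d ** 2 / (2.0 * radius ** 2))

    for _ in range(1000):
        x = rng.random(n)                                  # 1. sample environment
        i = int(np.argmin(np.linalg.norm(w - x, axis=1)))  # 2. closest vertex
        for j in range(len(grid)):                         # 3. move toward sample
            w[j] += eta * omega(j, i) * (x - w[j])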



Self-Organizing Maps - Example

[Figures: scatter plots of the map at iterations 0, 1, 10, 25, 50, and 75 over the unit square; the randomly initialized vertices progressively migrate toward the sampled regions of the input space.]

Self-Organizing Maps - Example

[Figure: the map at iteration 75 plotted over “amount of fur” (x-axis) and “number of legs” (y-axis); the dense cluster of vertices is labeled fluffy, a mid-region vertex is labeled mammal, and a vertex near the origin is labeled marko.]

                                 Outline

• Signal Representation
    The HMAX Model
    Self-Organizing Maps

• Symbol Representation
    Description Logics
    Evidential Logics


• Unifying Signals and Symbols

• A Distributed Graph in an Infinite Space



     From Categorization to Reasoning on Categories

• With signal-based systems, the “grounded” entities are very primitive
  constructs (e.g. simple line types), and from these primitive constructs it
  is possible to generate abstract representations of patterns that are invariant
  to various transformations (e.g. Fluffy regardless of his location in space).

• With symbol-based systems, the “grounded” entities are generally very
  abstract (e.g. Fluffy), and from these concepts it is possible to reason
  about abstract relationships (e.g. Fluffy must be a canine because he has fur).





        Knowledge Representation and Reasoning
• Knowledge representation: a model of a domain of discourse
  represented in some medium – structure.

• Reasoning: the algorithm by which implicit knowledge in the model is
  made explicit – process.

[Figure: a reasoner f(x) reads from and writes to the knowledge representation.]

                          Description Logics - Introduction

• The purpose of description logics is to infer subsumption relationships
  in a knowledge structure.

• Given a set of individuals (i.e. real-world instances), determine which
  concept descriptions subsume the individuals. For example, is marko a
  type of Mammal?

F. Baader, D. Calvanese, D. L. McGuinness, D. Nardi, P. F. Patel-Schneider: The Description Logic Handbook: Theory, Implementation, Applications. Cambridge University Press, Cambridge, UK, 2003.[1]





              Description Logics - The Structure

• A multi-relational network (aka. semantic network, directed labeled
  graph) is defined as G = (V, E), where V is the set of vertices
  (i.e. symbols) and E = {E1, E2, . . . , En} is a family of edge sets, where
  each Ei ⊆ (V × V ). Each edge set has a categorical or nominal meaning
  (e.g. bestFriend, hasFur, numberOfLegs, etc.).

• An edge (i, j) ∈ En is called a “statement” and is usually denoted as a
  triple (i, n, j) (e.g. (marko, bestFriend, fluffy)).




                   marko          bestFriend                    fluffy
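As a small illustrative sketch, the family of edge sets maps naturally onto a dictionary keyed by edge label; the triples here anticipate the A-Box that appears on the following slides.

    # A multi-relational network as a family of named edge sets.
    E = {
        "bestFriend":   {("marko", "fluffy")},
        "numberOfLegs": {("marko", 2), ("fluffy", 4)},
        "hasFur":       {("marko", False), ("fluffy", True)},
    }

    # An edge (i, j) in edge set n is usually written as the triple (i, n, j).
    triples = [(i, n, j) for n, edges in E.items() for (i, j) in edges]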



                    Description Logics - The Structure

• Individual: a unique identifier denoting some “real-world thing” that
  exists. For example: marko.

• Simple Concepts: a unique identifier denoting a “ground” concept. For
  example: Mammal.

• Simple Roles (aka properties): a unique identifier denoting a binary
  relationship. For example: numberOfLegs, hasFur, bestFriend.

• Compound Concept: a concept that is defined in terms of another
  concept. For example: a Canine is a thing that has 4 legs and is furry.7
   7
    There are many description logic languages. Distinctions between these languages are made explicit by
defining their “expressivity” (i.e. the possible forms a compound concept description can take).



              Description Logics - The Structure

• Terminological Box (T-Box): a collection of descriptions. Also known
  as an ontology.
     Human ≡ (= 2 numberOfLegs) ⊓ (= false hasFur) ⊓ ∃bestFriend.Canine
     Canine ≡ (= 4 numberOfLegs) ⊓ (= true hasFur)
     Human ⊑ Mammal
     Canine ⊑ Mammal

• Assertion Box (A-Box): a collection of individuals and their relationships
  to one another.
    numberOfLegs(marko, 2), hasFur(marko, false), bestFriend(marko, fluffy),
    numberOfLegs(fluffy, 4), hasFur(fluffy, true).
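A toy rendering of how A-Box individuals can be checked against a concept description; this sketch treats a description as a set of exact value restrictions, which is far narrower than what description logics actually allow.

    # Hypothetical encoding: a description as role/value restrictions.
    canine = {"numberOfLegs": 4, "hasFur": True}

    a_box = {
        "marko":  {"numberOfLegs": 2, "hasFur": False, "bestFriend": "fluffy"},
        "fluffy": {"numberOfLegs": 4, "hasFur": True},
    }

    def satisfies(individual, description):
        # Does the individual meet every value restriction in the description?
        return all(a_box[individual].get(role) == value
                   for role, value in description.items())

    satisfies("fluffy", canine)  # True
    satisfies("marko", canine)   # False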





                 Description Logics - The Process

• Inference rules (Reasoner): a collection of pattern descriptions used
  to assert new statements:
    (?x, subClassOf, ?y) ∧ (?y, subClassOf, ?z) ⇒ (?x, subClassOf, ?z)

    (?x, subClassOf, ?y) ∧ (?y, subClassOf, ?x) ⇒ (?x, equivalentClass, ?y)

    (?x, subPropertyOf, ?y) ∧ (?y, subPropertyOf, ?z) ⇒ (?x, subPropertyOf, ?z)

    (?x, type, ?y) ∧ (?y, subClassOf, ?z) ⇒ (?x, type, ?z)

    (?x, onProperty, ?y) ∧ (?x, hasValue, ?z) ∧ (?a, subClassOf, ?x) ⇒ (?a, ?y, ?z)

    (?x, onProperty, ?y) ∧ (?x, hasValue, ?z) ∧ (?a, ?y, ?z) ⇒ (?a, type, ?x)

    . . .
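Such rules lend themselves to forward chaining: repeatedly match the rule bodies against the statements and assert the heads until nothing new appears. Below is a minimal sketch covering only two of the rules above, with hypothetical toy data.

    def infer(statements):
        # Apply two inference rules until a fixed point is reached.
        statements = set(statements)
        while True:
            new = set()
            for (x, p, y) in statements:
                for (y2, p2, z) in statements:
                    if y != y2:
                        continue
                    # (?x, subClassOf, ?y) ∧ (?y, subClassOf, ?z) ⇒ (?x, subClassOf, ?z)
                    if p == "subClassOf" and p2 == "subClassOf":
                        new.add((x, "subClassOf", z))
                    # (?x, type, ?y) ∧ (?y, subClassOf, ?z) ⇒ (?x, type, ?z)
                    if p == "type" and p2 == "subClassOf":
                        new.add((x, "type", z))
            if new <= statements:
                return statements
            statements |= new

    kb = {("marko", "type", "Human"), ("Human", "subClassOf", "Mammal")}
    assert ("marko", "type", "Mammal") in infer(kb)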





                                    Description Logics - Example
Human ≡ (= 2 numberOfLegs) ⊓ (= false hasFur) ⊓ ∃bestFriend.Canine        Canine ≡ (= 4 numberOfLegs) ⊓ (= true hasFur)

[Figure: the T-Box rendered as a graph: restriction vertices (Restriction_A through Restriction_E) connected by onProperty, hasValue, and someValuesFrom edges to numberOfLegs, hasFur, and bestFriend, with subClassOf edges relating Human and Canine to the restrictions and to Mammal.]

                   Description Logics - Example

[Figure: the A-Box as a graph: marko is connected to fluffy by a bestFriend edge, with literal-valued edges numberOfLegs(marko, 2), hasFur(marko, false), numberOfLegs(fluffy, 4), and hasFur(fluffy, true).]

                       Description Logics - Example
[Figure: the inferred graph: type edges from marko to Human and from fluffy to Canine, plus subClassOf edges from Human and Canine to Mammal, so a type edge from marko to Mammal is inferred. The T-Box includes other description information that is left out of the diagram for clarity.]
Yes — marko is a type of Mammal.


                Description Logics - Drawbacks

• With “nested” descriptions and complex quantifiers, you can run into
  exponential running times.

• Requires that all assertions in the A-Box are “true”. For example, if
  the T-Box declares that a country can have only one president, and you
  assert both that barack is the president of the United States and that marko
  is the president of the United States, then it is inferred that barack and
  marko are the same person. This can have rippling effects: their mothers
  and fathers must then be the same people, and so on.

• Not very “organic”, as concept descriptions are driven not by the system
  but by a human designer. Where do all the meta-language predicates
  come from? Where do all the inference rules come from?


                            Evidential Logics - Introduction

Evidential logics are multi-valued logics founded on the AIKR (Assumption
of Insufficient Knowledge and Resources) and are:

• non-bivalent: there is no inherent truth in a statement, only differing
  degrees of support or negation.

• non-monotonic: the evaluation of the “truth” of a statement is not
  immutable, but can change as new experiences occur. In other words, as
  new evidence is accumulated.

Wang, P., “Cognitive Logic versus Mathematical Logic”, Proceedings of the Third International Seminar on Logic and Cognition, May 2004.[8]





                      Evidential Logics - The Structure
• An evidence network is defined as G = (V, E, ω), where V is the set
  of vertices (i.e. symbols), E ⊆ (V × V ) is a set of directed edges, and
  ω : E → R+ × R+ maps each edge to its evidence tuple.8

• Edge (i, j) can be thought of as stating “i inherits from j”, “i is a j”,
  “i has properties of j”, etc.

• Every edge has two values: total amount of positive (w+) and negative
  (w−) evidence supporting or negating the inheritance statement. “How
  much positive and negative evidence is there for marko inheriting the
  properties of Human”?
   8
     Every evidence tuple ⟨w+, w−⟩ has a mapping to ⟨f, c⟩ ∈ [0, 1] × [0, 1] that is perhaps more
“natural” to work with: f = w+/(w+ + w−) denotes the frequency of positive evidence and
c = (w+ + w−)/(w+ + w− + k) denotes confidence in the stability of that frequency, where
k ∈ N+ is a user-defined constant.
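In code, the footnote’s mapping from evidence to ⟨frequency, confidence⟩ is a two-liner; k = 1 below is an arbitrary choice.

    def freq_conf(w_pos, w_neg, k=1):
        # Map an evidence tuple <w+, w-> to <f, c>.
        total = w_pos + w_neg
        f = w_pos / total        # frequency of positive evidence
        c = total / (total + k)  # confidence in the stability of f
        return f, c

    freq_conf(1, 0)  # (1.0, 0.5): little evidence, modest confidence
    freq_conf(9, 1)  # ≈ (0.9, 0.91): more evidence, higher confidence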



                          Evidential Logics - The Process
Evidential reasoning is done using various syllogisms:9

• deduction: (?x, ?y) ∧ (?y, ?z) ⇒ (?x, ?z)
  fluffy is a canine, canine is a mammal ⇒ fluffy is a mammal

• induction: (?x, ?y) ∧ (?z, ?y) ⇒ (?x, ?z)
  fluffy is a canine, fifi is a canine ⇒ fluffy is a fifi

• abduction: (?x, ?y) ∧ (?x, ?z) ⇒ (?y, ?z)
  fluffy is a canine, fluffy is a dog ⇒ canine is a dog

• exemplification: (?x, ?y) ∧ (?y, ?z) ⇒ (?z, ?x)10
  fluffy is a canine, canine is a mammal ⇒ mammal is a fluffy
  9
      It is helpful to think of the copula as “inherits the properties of” instead of “is a”.
 10
      Exemplification is a much less used syllogism in evidential reasoning.
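Setting aside the evidence arithmetic that NAL defines for each syllogism, the structural part of three of these rules can be sketched as pattern matching over inheritance edges:

    def syllogisms(edges):
        # Derive new inheritance edges from existing ones; the <w+, w->
        # bookkeeping per syllogism is omitted in this sketch.
        derived = set()
        for (x, y) in edges:
            for (a, b) in edges:
                if y == a and x != b:
                    derived.add((x, b))  # deduction
                if y == b and x != a:
                    derived.add((x, a))  # induction (shared parent)
                if x == a and y != b:
                    derived.add((y, b))  # abduction (shared child)
        return derived - set(edges)

    edges = {("fluffy", "canine"), ("canine", "mammal"), ("fifi", "canine")}
    syllogisms(edges)  # includes (fluffy, mammal), (fluffy, fifi), ...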



                               Evidential Logics - Example

Assume that the past experience of the evidential system has provided
these ⟨w+, w−⟩ evidence tuples for the following relationships.11

[Figure: an evidence network: Human and Canine each inherit from Mammal with ⟨1, 0⟩; Human connects to 2-legs with ⟨1, 0⟩ and to fur with ⟨0, 1⟩; Canine connects to fur with ⟨1, 0⟩ and to 4-legs with ⟨1, 0⟩.]


  11
    The example to follow is not completely faithful to NAL-* (Non-Axiomatic Logic). Please refer to more
expressive NAL constructs for a better representation of the ideas presented in this example.



                   Evidential Logics - Example

[Figure: experienced statements added to the network: marko connects to 2-legs with ⟨1, 0⟩ and to fur with ⟨0, 1⟩; fluffy connects to fur with ⟨1, 0⟩ and to 4-legs with ⟨1, 0⟩.]

                   Evidential Logics - Example

[Figure: first round of inference: an edge from marko to Human with ⟨1, 0⟩ and an edge from fluffy to Canine with ⟨2, 0⟩ are derived (D = deduction, I = induction, A = abduction).]

                   Evidential Logics - Example

[Figure: second round of inference: additional edges among 2-legs, fur, and 4-legs are derived by induction (I) and abduction (A), each with its evidence tuple.]

                         Evidential Logics - Example

[Figure: final round of inference: deduction (D) over the edge from marko to Human and the edge from Human to Mammal yields an edge from marko to Mammal with ⟨1, 0⟩.]
Yes — currently, marko is believed to be a type of Mammal.
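
To make the bookkeeping concrete, below is a minimal Python sketch of this
deduction step. The <w+, w−> to <f, c> mapping follows the definitions given
earlier; the deduce rule is an assumed simplification chosen only to reproduce
this example, not the actual NAL-1 truth function.

  K = 1  # the user-defined constant k in the confidence formula

  def frequency(w_pos, w_neg):
      # f = w+ / (w+ + w-): the fraction of the evidence that is positive.
      return w_pos / (w_pos + w_neg)

  def confidence(w_pos, w_neg, k=K):
      # c = (w+ + w-) / (w+ + w- + k): stability of the frequency.
      total = w_pos + w_neg
      return total / (total + k)

  def deduce(xy, yz):
      # (x, y) and (y, z) => (x, z). Toy rule: a chain is only as well
      # supported as its weakest link.
      return (min(xy[0], yz[0]), max(xy[1], yz[1]))

  edges = {
      ("marko", "Human"): (1, 0),   # inferred in an earlier round
      ("Human", "Mammal"): (1, 0),  # experienced
  }

  w = deduce(edges[("marko", "Human")], edges[("Human", "Mammal")])
  print(w)                              # (1, 0)
  print(frequency(*w), confidence(*w))  # 1.0 0.5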


                                   Evidential Logics - Versions
What was presented was an evidential logic known as NAL-1, or
Non-Axiomatic Logic 1. There exist more expressive forms that build on
the NAL-1 core formalisms:

 •   NAL-0: binary inheritance – (marko, Human)
 •   NAL-1: inference rules – (?x, ?y) ∧ (?y, ?z) ⇒ (?x, ?z)
 •   NAL-2: sets and variants of inheritance – (fluffy, [fur]), ({marko}, Human)
 •   NAL-3: intersections and differences
 •   NAL-4: products, images, and ordinary relations – ((marko × fluffy), bestFriend)
 •   NAL-5: statement reification – ((marko × (fluffy, Canine)), knows)
 •   NAL-6: variables – (?x, Human) ∧ (?y, Canine) ⇒ ((?x×?y), bestFriend)
 •   NAL-7: temporal statements
 •   NAL-8: procedural statements – can model FOPL and thus, utilize an axiomatic
     “subsystem”

Wang, P., “Rigid Flexibility”, Springer, 2006.[9]



                Evidential Logics - Drawbacks

• The model does not provide a mechanism for how evidence is “perceived”.
  All communication with the system is by means of statement-based
  assertions (marko, Human) and queries (marko, ?x).





                                 Outline

• Signal Representation
    The HMAX Model
    Self-Organizing Maps

• Symbol Representation
    Description Logics
    Evidential Logics


• Unifying Signals and Symbols

• A Distributed Graph in an Infinite Space



        Unification of Symbol and Signal - Introduction

• Signal to symbol: receive input signals and map them to
  transformation-invariant symbols.12 – categorization

• Explicit relations between symbols: from similarities in input signals,
  make explicit inheritance relations between symbols. – relations

• Implicit relations between symbols: utilize various rules of inference
  to generate new relations that might not be based on external signal
  alone. – reasoning


  12
    Symbols need not be labeled, just unique. In other words, some vertex must denote Fluffy, yet need not
be labeled fluffy.



                               Signals to Symbols
• Signal-based systems are able to provide a (fuzzy) unique identifier for a concept. For
  example, if ci(x) ≈ 1, then marko was perceived. Another way to think of it is that
  ci denotes markoness. With ci : Rn → [0, 1], ci is a fuzzy classifier of the concept
  “marko” (aka. “grandmother cell”).



[Figure: the symbols marko, Human, and Mammal, each grounded in a
classifier vertex (ci for marko, up through Cm) at the top of an
HMAX-style hierarchy (S1 → C1 → ... → Cm); a low-level feature such as
"arm" feeds into ci. Symbols need not exist; they are provided for
diagram clarity, e.g. marko's c vertex is just some unique identifier
(e.g. abcd1234).]
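
As a concrete illustration, the sketch below implements ci as a
Gaussian-tuned cell, reusing the HMAX simple-cell response
si(x) = exp(−||w − x||2/2σ2); the two feature axes and the tuning vector
are hypothetical.

  import numpy as np

  def make_cell(w, sigma=0.5):
      # A fuzzy "grandmother cell": responds ~1 near its tuned feature
      # vector w and falls off with squared distance from it.
      w = np.asarray(w, dtype=float)
      def c(x):
          x = np.asarray(x, dtype=float)
          return float(np.exp(-np.linalg.norm(w - x) ** 2 / (2.0 * sigma ** 2)))
      return c

  # Hypothetical feature axes: (amount of fur, number of legs).
  c_marko = make_cell([0.1, 2.0])  # tuned to "markoness"

  print(c_marko([0.1, 2.0]))  # ~1.0: marko was perceived
  print(c_marko([0.9, 4.0]))  # ~0.0: something else (fluffy, say)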





                Explicit Relations Between Symbols
• Symbols (i.e. derived abstract concepts) can be related to one another according to
  inheritance relationships. Simply, this can be based on the intersection of their features.
• For example, to what degree are the features that make up marko part of the
  features that make up Human? Likewise, for Human and Mammal?



[Figure: as above, with explicit inheritance edges derived from feature
overlap: marko <1,0> Human <1,0> Mammal, and a <1,0> edge grounding
marko in its classifier vertex ci.]
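
One hedged reading of "intersection of their features" is to count shared
active features as positive evidence and unshared ones as negative
evidence; the feature sets below are invented for illustration.

  def inheritance_evidence(features_a, features_b):
      # <w+, w-> for the statement (a, b): features of a that b shares
      # count as positive evidence; features of a that b lacks, negative.
      w_pos = len(features_a & features_b)
      w_neg = len(features_a - features_b)
      return (w_pos, w_neg)

  # Hypothetical active-feature sets read off the classifier hierarchy.
  marko = {"2-legs", "arm"}
  human = {"2-legs", "arm"}
  mammal = {"2-legs", "arm", "fur", "4-legs"}

  print(inheritance_evidence(marko, human))   # (2, 0): strong inheritance
  print(inheritance_evidence(marko, mammal))  # (2, 0)
  print(inheritance_evidence(mammal, marko))  # (2, 2): weaker the other way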




                Implicit Relations Between Symbols

• Once experience has dictated the relationship between various concepts, utilize rules of
  inference to “predict” or “assume” other relationships in the world.
• Validate these inferences with more experiential data.

[Figure: as above, with an inferred <1,0> edge from marko to Mammal,
derived by deduction over the explicit edges marko → Human → Mammal.]
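
The sketch below pairs such a deduced "prediction" with NAL-style
revision: when new experience supplies independent evidence for the same
statement, the two evidence tuples are pooled. The deduction rule is the
same assumed simplification used earlier; the observed tuple is invented.

  def deduce(xy, yz):
      # (x, y) and (y, z) => (x, z); assumed weakest-link simplification.
      return (min(xy[0], yz[0]), max(xy[1], yz[1]))

  def revise(w1, w2):
      # Revision pools independent evidence: <w1+ + w2+, w1- + w2->.
      return (w1[0] + w2[0], w1[1] + w2[1])

  predicted = deduce((1, 0), (1, 0))  # from (marko, Human), (Human, Mammal)
  print(predicted)                    # (1, 0): predicted (marko, Mammal)

  observed = (3, 1)                   # hypothetical later experience
  print(revise(predicted, observed))  # (4, 1): the belief is strengthened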




                                 Outline

• Signal Representation
    The HMAX Model
    Self-Organizing Maps

• Symbol Representation
    Description Logics
    Evidential Logics


• Unifying Signals and Symbols

• A Distributed Graph in an Infinite Space



 A Distributed Graph in an Infinite Space - Introduction

• The Uniform Resource Identifier (URI) provides an infinite, global
  address space for denoting “resources” (i.e. discrete entities, symbols,
  vertices). An example URI is http://www.lanl.gov#marko.13 14

• The Resource Description Framework (RDF) is a means of graphing
  URIs in a standardized, machine-processable representation.

• The URI and RDF form the foundational standards of the Semantic Web.
  At its most general level, the Semantic Web is a distributed directed
  labeled graph. The Semantic Web is for data what the World Wide Web
  is for documents.
  13
     Namespace prefixes are denoted for brevity, where http://www.lanl.gov#marko is expressed as
lanl:marko.
  14
     Universally Unique Identifiers (UUIDs) are 128-bit identifiers that can be used as globally unique
identifiers (e.g. lanl:fb5d2990-b111-11dd-ad8b-0800200c9a66).
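
For instance, such an identifier is trivially minted with Python's
standard uuid module:

  import uuid

  # Mint a globally unique vertex identifier (a 128-bit, version-4 UUID).
  print(f"lanl:{uuid.uuid4()}")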



   A Distributed Graph in an Infinite Space - Example
[Figure: a distributed RDF graph spanning two servers.
  127.0.0.1: (lanl:marko, lanl:numberOfLegs, "2"^^xsd:integer)
             (lanl:marko, lanl:hasFur, "false"^^xsd:boolean)
             (lanl:marko, lanl:bestFriend, vub:fluffy)
  127.0.0.2: (vub:fluffy, lanl:numberOfLegs, "4"^^xsd:integer)
             (vub:fluffy, lanl:hasFur, "true"^^xsd:boolean) ]




• The concept of lanl:marko and the properties lanl:numberOfLegs, lanl:hasFur,
  and lanl:bestFriend are maintained by LANL.
• The concept of vub:fluffy is maintained by VUB.
• The data types xsd:integer and xsd:boolean are maintained by the XML
  standards organization.
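
As a sketch, the example graph above can be written with the Python
rdflib library; the vub namespace URI below is invented for illustration.

  from rdflib import Graph, Literal, Namespace
  from rdflib.namespace import XSD

  LANL = Namespace("http://www.lanl.gov#")
  VUB = Namespace("http://www.vub.ac.be#")  # hypothetical URI for vub:

  g = Graph()
  g.bind("lanl", LANL)
  g.bind("vub", VUB)

  # The five statements of the distributed example, as one local graph.
  g.add((LANL.marko, LANL.bestFriend, VUB.fluffy))
  g.add((LANL.marko, LANL.numberOfLegs, Literal(2, datatype=XSD.integer)))
  g.add((LANL.marko, LANL.hasFur, Literal(False)))
  g.add((VUB.fluffy, LANL.numberOfLegs, Literal(4, datatype=XSD.integer)))
  g.add((VUB.fluffy, LANL.hasFur, Literal(True)))

  print(g.serialize(format="turtle"))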



A Distributed Graph in an Infinite Space - Example
[Figure: the same RDF graph extended across two further servers.
127.0.0.3 hosts a subsymbolic hierarchy (vertices ad8a ... ad82 connected
by n:super edges, under n:region00) whose top vertex ad82 is declared
owl:sameAs lanl:marko; 127.0.0.4 hosts
http://www.images.com/marko.jpg.]





                        Related Interesting Work

Healy, M.J., Caudell, T.P., “Ontologies and Worlds in Category Theory: Implications for
Neural Systems”, Axiomathes, volume 16, pages 165-214, 2006.[2]

Jackendoff, R., “Languages of the Mind”, MIT Press, September 1992.[4]

Serre, T., Oliva, A., Poggio, T., “A feedforward architecture accounts for rapid
categorization”, Proceedings of the National Academy of Science, volume 104, number
15, pages 6424-6429, April 2007.[7]

Heylighen, F., “Collective Intelligence and its Implementation on the Web: Algorithms to
Develop a Collective Mental Map”, Computational & Mathematical Organization Theory,
volume 5, number 3, pages 253-280, 1999.[3]








References

[1] Franz Baader, Diego Calvanese, Deborah L. McGuinness, Daniele Nardi,
    and Peter F. Patel-Schneider, editors. The Description Logic Handbook:
    Theory, Implementation and Applications. Cambridge University Press,
    January 2003.

[2] Michael John Healy and Thomas Preston Caudell. Ontologies and
    worlds in category theory: Implications for neural systems. Axiomathes,
    16:165–214, 2006.

[3] Francis Heylighen. Collective intelligence and its implementation on the
    web: Algorithms to develop a collective mental map. Computational &
    Mathematical Organization Theory, 5(3):253–280, 1999.


[4] Ray S. Jackendoff. Languages of the Mind. MIT Press, 1992.

[5] Teuvo Kohonen. Self-organized formation of topologically correct
    feature maps. Biological Cybernetics, 43:59–69, 1982.

[6] M. Riesenhuber and T. Poggio. Hierarchical models of object
    recognition in cortex. Nature Neuroscience, 2:1019–1025, 1999.

[7] Thomas Serre, Aude Oliva, and Tomaso Poggio. A feedforward
    architecture accounts for rapid categorization. Proceedings of the
    National Academy of Science, 104(15):6424–6429, April 2007.

[8] Pei Wang. Cognitive logic versus mathematical logic. In Proceedings of
    the Third International Seminar on Logic and Cognition, May 2004.

[9] Pei Wang. Rigid Flexibility. Springer, 2006.
