Build Agent
What are agents?
• Agents are the machines that perform your build steps.
• An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators.
Robotic agent: cameras and infrared range finders for sensors; various motors for actuators.
Agents and environments:
• The agent function maps from percept histories to actions: f: P* → A
• The agent program runs on the physical architecture to produce f
• agent = architecture + program
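To make "agent = architecture + program" concrete, here is a minimal Python sketch (not from the slides; the names Agent, program and step are illustrative): the architecture stores the percept history, and the program plays the role of the agent function f that maps that history to an action.

from typing import Callable, List

Percept = str
Action = str

# The agent function f: P* -> A maps the percept history seen so far to an action.
AgentFunction = Callable[[List[Percept]], Action]

class Agent:
    """agent = architecture + program: the architecture collects percepts,
    runs the program on the history, and carries out the returned action."""

    def __init__(self, program: AgentFunction):
        self.program = program
        self.percepts: List[Percept] = []

    def step(self, percept: Percept) -> Action:
        self.percepts.append(percept)
        return self.program(self.percepts)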
Agent types:
Five basic types, in order of increasing generality:
• Table-driven agents
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
TABLE-DRIVEN-AGENT
function TABLE-DRIVEN-AGENT(percept) returns an action
  static: percepts, a sequence, initially empty
          table, a table, indexed by percept sequences, initially fully specified
  append percept to the end of percepts
  action ← LOOKUP(percepts, table)
  return action
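A rough Python rendering of this pseudocode, assuming a toy vacuum-world lookup table (the table entries and action names are hypothetical):

def make_table_driven_agent(table):
    """Return an agent program backed by a fully specified lookup table."""
    percepts = []  # the percept sequence, initially empty

    def program(percept):
        percepts.append(percept)               # append percept to the end of percepts
        return table.get(tuple(percepts))      # action <- LOOKUP(percepts, table)

    return program

# Hypothetical table for a tiny vacuum world.
table = {
    ("dirty",): "suck",
    ("clean",): "move",
    ("clean", "dirty"): "suck",
}
agent = make_table_driven_agent(table)
print(agent("clean"))   # move
print(agent("dirty"))   # suck (looked up under the key ("clean", "dirty"))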
Simple reflex agent
• Differs from the table-driven agent in that the condition that determines the action is a higher-level interpretation of the percepts. The percepts could be, for example, the pixels from the camera of the automated taxi.
• A simple reflex agent works by finding a rule whose condition matches the current situation (as defined by the percept) and then doing the action associated with that rule (a minimal sketch follows).
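A sketch of such a rule-matching loop in Python, assuming condition-action rules are supplied as (condition, action) pairs (the rules shown are hypothetical):

def make_simple_reflex_agent(rules, interpret_input):
    """Condition-action rules applied to the current percept only (no history)."""
    def program(percept):
        state = interpret_input(percept)   # higher-level interpretation of the percept
        for condition, action in rules:
            if condition(state):           # first rule whose condition matches wins
                return action
        return None                        # no rule matched

    return program

# Hypothetical rules for a toy vacuum world.
rules = [
    (lambda s: s == "dirty", "suck"),
    (lambda s: s == "clean", "move"),
]
agent = make_simple_reflex_agent(rules, interpret_input=lambda p: p)
print(agent("dirty"))   # suck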
Reflex agent with internal state:
• A reflex agent with internal state works by finding a rule whose condition matches the current situation (as defined by the percept and the stored internal state) and then doing the action associated with that rule (a sketch follows).
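A sketch of the same loop extended with internal state, assuming a caller-supplied update_state function that combines the old state, the last action, and the new percept (all names here are illustrative):

def make_model_based_reflex_agent(rules, update_state, initial_state):
    """Match rules against an internal state that tracks unobserved aspects of the world."""
    memory = {"state": initial_state, "last_action": None}

    def program(percept):
        # Revise the internal state from the old state, the last action, and the new percept.
        memory["state"] = update_state(memory["state"], memory["last_action"], percept)
        for condition, action in rules:
            if condition(memory["state"]):
                memory["last_action"] = action
                return action
        memory["last_action"] = None
        return None

    return program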
Agent with explicit goals
[Diagram: goal-based agent architecture. Sensors report "what the world is like now"; the internal state plus models of "how the world evolves" and "what my actions do" predict "what it will be like if I do action A"; this prediction is combined with the goals to decide "what action I should do now", which the effectors carry out in the environment.]
• Choose actions so as to achieve a (given or computed) goal, i.e. a description of desirable situations, e.g. where the taxi wants to go.
• Keeping track of the current state is often not enough – we need goals to decide which situations are good.
• Deliberative instead of reactive.
• May have to consider long sequences of possible actions before deciding whether the goal is achieved – this involves reasoning about the future, "what will happen if I do…?" (search and planning).
• More flexible than a reflex agent (e.g. rain or a new destination): in the reflex agent, the entire database of rules would have to be rewritten. A minimal sketch of goal-directed action selection follows.
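The sketch below does only one-step lookahead; a real goal-based agent would search or plan over longer action sequences, and goal_test, actions and result are assumed, caller-supplied functions:

def goal_based_action(state, goal_test, actions, result):
    """Pick an action whose predicted outcome satisfies the goal (one-step lookahead)."""
    for action in actions:
        predicted = result(state, action)   # "what it will be like if I do action A"
        if goal_test(predicted):            # is that predicted situation a goal state?
            return action
    return None  # no single action reaches the goal; a planner would search deeper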
Utility-based agent
[Diagram: utility-based agent architecture. As in the goal-based agent, the sensors, internal state, and world model predict "what it will be like if I do action A"; a utility function then scores "how happy I will be in such a state", and the best-scoring action is chosen and passed to the effectors acting on the environment.]
• When there are multiple possible alternatives, how do we decide which one is best?
• A goal only gives a crude distinction between an unhappy and a happy state; we often need a more general performance measure that describes a "degree of happiness".
• A utility function U: State → Reals indicates a measure of success or happiness when at a given state.
• Allows decisions comparing:
  - a choice between conflicting goals
  - a choice between likelihood of success and importance of a goal (if achievement is uncertain)
A sketch of expected-utility action selection follows.
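A sketch of expected-utility selection, assuming outcomes(state, action) yields (probability, resulting_state) pairs and utility scores a state; this is one way to weigh likelihood of success against degree of happiness (names are illustrative, not from the slides):

def expected_utility_action(state, actions, outcomes, utility):
    """Choose the action that maximizes expected utility.

    outcomes(state, action) yields (probability, resulting_state) pairs,
    so both likelihood of success and degree of happiness are weighed."""
    def eu(action):
        return sum(p * utility(s) for p, s in outcomes(state, action))
    return max(actions, key=eu, default=None)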
Thanks!