Chapter 2: Intelligent Agents
Intelligent Agents (IA) - Concept
IA Types
Environment types
IA Behaviour
IA Structure
1
Mission
 To design/create computer programs that
have some intelligence, can do some
intelligent tasks or can do tasks that require
some intelligence.
2
What is an (Intelligent) Agent?
 An over-used, over-loaded, and misused term.
 Anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through
its effectors to maximize progress towards its goals.
 PAGE (Percepts, Actions, Goals, Environment)
 Task-specific & specialized: well-defined goals and environment
 Human agent: eyes, ears, and other organs for sensors; hands,
legs, mouth, and other body parts for actuators
 Robotic agent: cameras and infrared rangefinders for sensors; various motors for actuators
3
Intelligent Agents
 Agents interact with environments through
sensors and effectors
4
Intelligent Agents and Artificial
Intelligence
 The human mind can be viewed as a network of thousands or millions of agents
all working in parallel. To produce real artificial intelligence, this
school holds, we should build computer systems that also
contain many agents and systems for arbitrating among the
agents' competing results.
 Distributed decision-making
and control
 Challenges:
 Action selection: What next action
to choose
 Conflict resolution
[Diagram: an agency interacting with its environment through sensors and effectors]
5
Agent Research Areas
We can split agent research into two main strands:
 Distributed Artificial Intelligence (DAI) –
Multi-Agent Systems (MAS) (1980 – 1990)
 Much broader notion of "agent" (1990’s – present)
 interface, reactive, mobile, information
6
A Windscreen Agent
How do we design an agent that can wipe the windscreen when
needed?
 Goals?
 Percepts ?
 Sensors?
 Effectors ?
 Actions ?
 Environment ?
7
A Windscreen Wiper Agent (Cont’d)
 Goals: To keep windscreens clean and maintain
good visibility
 Percepts: Raining, Dirty, Clear
 Sensors: Camera (moisture sensor)
 Effectors: Wipers (left, right, back)
 Actions: Off, Slow, Medium, Fast
 Environment: Nairobi city, pot-holed roads, highways,
etc
8
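A minimal Python sketch (not from the original slides) of how the wiper agent's percept-to-action mapping might look as a rule table. The percept names and wiper speeds come from the slide above; the particular percept-to-speed pairings are illustrative assumptions.

# Hypothetical sketch of the windscreen wiper agent described above.
# Percepts and actions follow the slide; the rule table itself is an assumption.
WIPER_RULES = {
    "Clear": "Off",
    "Dirty": "Slow",      # assumed: wipe slowly when the screen is dirty
    "Raining": "Medium",  # assumed: medium speed for ordinary rain
}

def wiper_agent(percept: str) -> str:
    """Map a single percept (Raining, Dirty, Clear) to a wiper action."""
    return WIPER_RULES.get(percept, "Off")

if __name__ == "__main__":
    for p in ["Clear", "Raining", "Dirty"]:
        print(p, "->", wiper_agent(p))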
Interacting Agents
Collision Avoidance Agent (CAA)
 Goals: Avoid running into obstacles
 Percepts ?
 Sensors?
 Effectors ?
 Actions ?
 Environment: Highway
Lane Keeping Agent (LKA)
• Goals: Stay in current lane
• Percepts ?
• Sensors?
• Effectors ?
• Actions ?
• Environment: Freeway
9
Interacting Agents
Collision Avoidance Agent (CAA)
 Goals: Avoid running into obstacles
 Percepts: Obstacle distance, velocity, trajectory
 Sensors: Vision, proximity sensing
 Effectors: Steering Wheel, Accelerator, Brakes, Horn, Headlights
 Actions: Steer, speed up, brake, blow horn, signal (headlights)
 Environment: Highway
Lane Keeping Agent (LKA)
• Goals: Stay in current lane
• Percepts: Lane center, lane boundaries
• Sensors: Vision
• Effectors: Steering Wheel, Accelerator, Brakes
• Actions: Steer, speed up, brake
• Environment: Freeway
10
The Right Thing = The
Rational Action
 Rational Action: The action that maximizes the
expected value of the performance measure given the
percept sequence to date
 Rational = Best? Yes, to the best of its knowledge
 Rational = Optimal? Yes, to the best of its abilities
(including its constraints)
 Rational ≠ omniscience
11
Rationality
 What is rational at any given time depends on
four things:
 The performance measure that defines the
criterion of success
 The agent’s prior knowledge of the env.
 The actions the agent can perform
 The agent’s percept sequence to date.
12
Rational agents
 An agent should strive to "do the right thing",
based on what it can perceive and the actions it
can perform. The right action is the one that will
cause the agent to be most successful
 Performance measure: An objective criterion for
success of an agent's behavior
 E.g., performance measure of a vacuum-cleaner
agent could be amount of dirt cleaned up, amount
of time taken, amount of electricity consumed,
amount of noise generated, etc.
13
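To make the vacuum-cleaner example concrete, here is one possible way to combine the criteria listed on the slide into a single numeric performance measure. The criteria are from the slide; the weights are purely illustrative assumptions, written in Python.

# Hypothetical performance measure for the vacuum-cleaner agent above.
# The criteria come from the slide; the weights are illustrative assumptions.
def vacuum_performance(dirt_cleaned: float, time_taken: float,
                       electricity_used: float, noise_made: float) -> float:
    """Reward dirt cleaned; penalise time, power consumption and noise."""
    return 10.0 * dirt_cleaned - 1.0 * time_taken - 0.5 * electricity_used - 0.1 * noise_made

print(vacuum_performance(dirt_cleaned=8, time_taken=30, electricity_used=5, noise_made=20))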
Rational agents
 Rational Agent: For each possible percept sequence, a
rational agent should select an action that is expected to
maximize its performance measure, given the evidence
provided by the percept sequence and whatever built-in
knowledge the agent has.
 Rationality is distinct from omniscience (all-knowing with
infinite knowledge)
 Agents can perform actions in order to modify future
percepts so as to obtain useful information (information
gathering, exploration)
 An agent is autonomous if its behavior is determined by
its own experience (with ability to learn and adapt)
14
Behavior and performance of
IAs
 Perception (sequence) to Action Mapping: f : P* → A
 Ideal mapping: specifies which actions an agent ought
to take at any point in time
 Description: e.g., a look-up table
 Performance measure: a subjective measure to
characterize how successful an agent is (e.g., speed, power
usage, accuracy, money, etc.)
 (degree of) Autonomy: to what extent is the agent able to
make decisions and take actions on its own?
15
How is an Agent different
from other software?
 Agents are autonomous, that is they act on behalf of the user
 Agents contain some level of intelligence, from fixed rules to
learning engines that allow them to adapt to changes in the
environment
 Agents don't only act reactively, but sometimes also
proactively
 Agents have social ability, that is they communicate with the
user, the system, and other agents as required
 Agents may also cooperate with other agents to carry out more
complex tasks than they themselves can handle
 Agents may migrate from one system to another to access
remote resources or even to meet other agents
16
Specifying the task environment
 The task environment is specified with the acronym PEAS,
which stands for:
 Performance measure
 Environment
 Actuators
 Sensors
17
PEAS
 Must first specify the setting for intelligent agent design
 Consider, e.g., the task of designing an automated taxi
driver:
 Performance measure: Safe, fast, legal, comfortable
trip, maximize profits
 Environment: Roads, other traffic, pedestrians,
customers
 Actuators: Steering wheel, accelerator, brake, signal,
horn
 Sensors: Cameras, sonar, speedometer, GPS, odometer,
engine sensors, keyboard
18
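A PEAS description is just structured data, so it can be recorded as a small container. Below is a Python sketch, with the field values for the automated taxi taken from the slide above; the dataclass itself is only one possible (assumed) way of writing the description down.

# A small container for PEAS descriptions; the taxi values are from the slide,
# the dataclass is just an illustrative way to record them.
from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    performance_measure: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

automated_taxi = PEAS(
    performance_measure=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer", "engine sensors", "keyboard"],
)
print(automated_taxi)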
Agent: Medical diagnosis system
 Performance measure: Healthy patient,
minimize costs, lawsuits
 Environment: Patient, hospital, staff
 Actuators: Screen display (questions, tests,
diagnoses, treatments, referrals)
 Sensors: Keyboard (entry of symptoms,
findings, patient's answers)
19
Agent: Part-picking robot
 Performance measure: Percentage of parts in
correct bins
 Environment: Conveyor belt with parts, bins
 Actuators: Jointed arm and hand
 Sensors: Camera, joint angle sensors
20
Agent: Interactive English tutor
 Performance measure: Maximize student's
score on test
 Environment: Set of students
 Actuators: Screen display (exercises,
suggestions, corrections)
 Sensors: Keyboard
21
Environment Types
 Characteristics
 Accessible vs. inaccessible
 An environment is accessible if the sensors detect all aspects that are
relevant to the choice of action.
 Deterministic vs. non-deterministic
 If the next state of the environment is completely determined by the
current state and the actions selected by the agents, then we say the
environment is deterministic.
 Episodic vs. non-episodic
 In an episodic environment, the agent’s experience is divided into
“episodes.” Each episode consists of the agent perceiving and then
acting.
 Static vs. dynamic
 If the environment can change while an agent is deliberating, then we
say the environment is dynamic for that agent; otherwise it is static.
22
Environment Types
 Characteristics
 Discrete vs. continuous
If there are a limited number of distinct, clearly defined
percepts and actions, we say that the environment is discrete.
 Hostile vs. friendly
This depends on the agent's perception.
 Single agent (vs. multiagent): An agent operating by itself
in an environment.
23
Environment Types
24
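The characteristics above can be recorded per task as a simple structure. The Python sketch below is an assumed illustration; the classification of taxi driving (inaccessible, non-deterministic, non-episodic, dynamic, continuous, multi-agent) is the usual textbook one and is given only as an example.

# One possible way to record the environment characteristics listed above.
from dataclasses import dataclass

@dataclass
class EnvironmentType:
    accessible: bool
    deterministic: bool
    episodic: bool
    static: bool
    discrete: bool
    single_agent: bool

taxi_driving = EnvironmentType(
    accessible=False, deterministic=False, episodic=False,
    static=False, discrete=False, single_agent=False,
)
print(taxi_driving)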
Structure of Intelligent Agents
 Agent = architecture + program
 Agent program: the implementation of f : P* → A, the
agent’s perception-action mapping
function Skeleton-Agent(Percept) returns Action
memory ← UpdateMemory(memory, Percept)
Action ← ChooseBestAction(memory)
memory ← UpdateMemory(memory, Action)
return Action
 Architecture: a device that can execute the agent program
(e.g., a general-purpose computer, a specialized device, specialized
hardware/software, a Beobot, etc.)
25
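A direct, runnable Python rendering of the Skeleton-Agent pseudocode above. The memory representation and the two helper functions are placeholder assumptions; a real agent program would replace them.

# Runnable rendering of the Skeleton-Agent pseudocode above.
def update_memory(memory, item):
    """Placeholder: just remember everything perceived or done so far."""
    return memory + [item]

def choose_best_action(memory):
    """Placeholder: a real agent would reason over its memory here."""
    return "noop"

def skeleton_agent(percept, memory):
    memory = update_memory(memory, percept)
    action = choose_best_action(memory)
    memory = update_memory(memory, action)
    return action, memory

memory = []
action, memory = skeleton_agent("some percept", memory)
print(action, memory)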
Agent types
 Five basic types, in order of increasing generality:
 Table Driven agents
 Simple reflex agents
 Model-based reflex agents
 Goal-based agents
 Utility-based agents
 Other Agents:
 Learning Agents
 Mobile Agents
 Information Agents
26
function TABLE-DRIVEN-AGENT (percept) returns action
static: percepts, a sequence, initially empty
table, a table, indexed by percept sequences, initially fully specified
append percept to the end of percepts
action ← LOOKUP(percepts, table)
return action
An agent based on a prespecified lookup table. It keeps track of the percept
sequence and simply looks up the best action.
Table-driven agent
• Problems
– Huge number of possible percepts (consider an automated taxi
with a camera as the sensor) => lookup table would be huge
– Takes a long time to build the table
– Not adaptive to changes in the environment; requires entire
table to be updated if changes occur
27
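A runnable Python version of the TABLE-DRIVEN-AGENT pseudocode above. The table is indexed by the whole percept sequence seen so far; its contents here are invented purely to make the example run, which also illustrates why the table becomes huge in practice.

# Runnable sketch of the table-driven agent; the table contents are assumptions.
percepts = []  # the percept sequence, initially empty

table = {
    ("clear",): "drive",
    ("clear", "obstacle"): "brake",
}

def table_driven_agent(percept):
    percepts.append(percept)
    return table.get(tuple(percepts), "noop")  # unknown sequences fall back to "noop"

print(table_driven_agent("clear"))      # drive
print(table_driven_agent("obstacle"))   # brake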
Simple reflex agents
28
function SIMPLE-REFLEX-AGENT(percept) returns action
static: rules, a set of condition-action rules
state ← INTERPRET-INPUT (percept)
rule ← RULE-MATCH (state, rules)
action ← RULE-ACTION [rule]
return action
A simple reflex agent works by finding a rule whose condition
matches the current situation (as defined by the percept) and
then doing the action associated with that rule.
First match.
No further matches sought.
Only one level of deduction.
Simple reflex agents
• Differs from the lookup-table-based agent in that the
condition (which determines the action) is already a
higher-level interpretation of the percepts
– Percepts could be, e.g., the pixels from the camera of the automated taxi
29
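A runnable Python sketch of the SIMPLE-REFLEX-AGENT pseudocode above. The rule set and the INTERPRET-INPUT step are assumptions; note that only the first matching rule fires, mirroring "first match, no further matches sought".

# Runnable sketch of a simple reflex agent; rules and interpretation are assumed.
RULES = [
    (lambda s: s["car_in_front_is_braking"], "initiate-braking"),
    (lambda s: True, "keep-driving"),  # default rule
]

def interpret_input(percept):
    """Assumed: turn a raw percept into a higher-level state description."""
    return {"car_in_front_is_braking": percept == "brake_lights_ahead"}

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    for condition, action in RULES:   # first match wins, no further matches sought
        if condition(state):
            return action

print(simple_reflex_agent("brake_lights_ahead"))  # initiate-braking
print(simple_reflex_agent("clear_road"))          # keep-driving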
Model-based reflex agents (Reflex
agents w/ state)
30
Reflex agent with internal state
…
function REFLEX-AGENT-WITH-STATE (percept) returns action
static: state, a description of the current world state
rules, a set of condition-action rules
state ← UPDATE-STATE (state, percept)
rule ← RULE-MATCH (state, rules)
action ← RULE-ACTION [rule]
state ← UPDATE-STATE (state, action)
return action
A reflex agent with internal state works by finding a rule
whose condition matches the current situation (as defined by
the percept and the stored internal state) and then doing
the action associated with that rule.
31
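A runnable Python rendering of the REFLEX-AGENT-WITH-STATE pseudocode above. The internal state representation, the UPDATE-STATE steps and the rules are placeholder assumptions.

# Runnable sketch of a reflex agent with internal state; details are assumed.
state = {"last_percepts": [], "last_action": None}

RULES = [
    (lambda s: "obstacle" in s["last_percepts"][-1:], "brake"),
    (lambda s: True, "drive"),  # default rule
]

def update_state_with_percept(state, percept):
    state["last_percepts"].append(percept)
    return state

def update_state_with_action(state, action):
    state["last_action"] = action
    return state

def reflex_agent_with_state(percept):
    global state
    state = update_state_with_percept(state, percept)
    action = next(a for cond, a in RULES if cond(state))
    state = update_state_with_action(state, action)
    return action

print(reflex_agent_with_state("clear"))     # drive
print(reflex_agent_with_state("obstacle"))  # brake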
Goal-based agents
32
Agent with explicit goals …
• Choose actions so as to achieve a (given or computed) goal = a
description of desirable situations. e.g. where the taxi wants to go
• Keeping track of the current state is often not enough – need
to add goals to decide which situations are good
• Deliberative instead of reactive
• May have to consider long sequences of possible actions
before deciding if goal is achieved – involves considerations of
the future, “what will happen if I do…?” (search and planning)
• More flexible than a reflex agent (e.g., rain or a new destination); in
the reflex agent, the entire database of rules would have to be
rewritten
33
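A tiny Python illustration of the "what will happen if I do ...?" idea behind goal-based agents: search forward through a model of action effects until a state satisfying the goal is reached. The toy world model, the goal, and the breadth-first strategy are all assumptions made for this sketch.

# Assumed world model: state -> {action: resulting state}
from collections import deque

MODEL = {
    "home":     {"drive_north": "junction"},
    "junction": {"drive_east": "airport", "drive_west": "city_centre"},
}

def plan(start, goal_test):
    """Breadth-first search for an action sequence that reaches a goal state."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal_test(state):
            return actions
        for action, nxt in MODEL.get(state, {}).items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None

print(plan("home", lambda s: s == "airport"))  # ['drive_north', 'drive_east']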
34
Utility-based agent …
• When there are multiple possible alternatives, how do we decide
which one is best?
• A goal specifies only a crude distinction between a happy and an
unhappy state, but we often need a more general performance
measure that describes the “degree of happiness”
• Utility function U: State → Real Number, indicating a
measure of success or happiness when at a given state
• Allows decisions comparing choice between conflicting goals,
and choice between likelihood of success and importance of
goal (if achievement is uncertain)
35
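A short Python sketch of utility-based choice: when several actions could be taken, pick the one with the highest expected utility. The outcomes, probabilities and utility values below are invented for illustration only.

# Sketch of choosing between alternatives by expected utility (values assumed).
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

ACTIONS = {
    "fast_route": [(0.7, 10.0), (0.3, -20.0)],  # quicker, but risk of a jam
    "safe_route": [(1.0, 6.0)],                 # slower but certain
}

best = max(ACTIONS, key=lambda a: expected_utility(ACTIONS[a]))
print(best, expected_utility(ACTIONS[best]))  # safe_route 6.0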
36
Learning Agents
• After an agent is programmed, can it work
immediately?
– No, it still needs teaching
• In AI,
– once an agent is built,
– we teach it by giving it a set of examples,
– and test it using another set of examples
• We then say the agent learns
– it is a learning agent
37
Learning Agents
• Four conceptual components
– Learning element
• Making improvement
– Performance element
• Selecting external actions
– Critic
• Tells the learning element how well the agent is doing
with respect to a fixed performance standard
(feedback from the user or from examples: good or not?)
– Problem generator
• Suggests actions that will lead to new and informative
experiences.
38
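A skeletal Python sketch of how the four components listed above might be wired together. Every implementation detail here (the rules, the critic's standard, the learning rule) is an assumption made only to show the flow between the components.

# Skeletal learning agent: performance element, critic, learning element,
# problem generator. All internals are placeholder assumptions.
class LearningAgent:
    def __init__(self):
        self.rules = {"default": "noop"}   # knowledge used by the performance element

    def performance_element(self, percept):
        """Select an external action using the current rules."""
        return self.rules.get(percept, self.rules["default"])

    def critic(self, percept, action):
        """Score behaviour against a fixed performance standard (assumed here)."""
        return 1.0 if action != "noop" else -1.0

    def learning_element(self, percept, action, feedback):
        """Improve the rules when the critic reports poor performance."""
        if feedback < 0:
            self.rules[percept] = "explore"

    def problem_generator(self):
        """Suggest an action that should yield a new, informative experience."""
        return "try_something_new"

agent = LearningAgent()
for percept in ["wet_road", "wet_road"]:
    action = agent.performance_element(percept)
    agent.learning_element(percept, action, agent.critic(percept, action))
    print(percept, action)  # first pass: noop, second pass: explore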
Mobile agents
 Programs that can migrate from one machine to another.
 Execute in a platform-independent execution environment.
 Require agent execution environment (places).
 Mobility is neither a necessary nor a sufficient condition for agenthood.
 Practical but non-functional advantages:
 Reduced communication cost (e.g., from PDA)
 Asynchronous computing (when you are not connected)
 Two types:
 One-hop mobile agents (migrate to one other place)
 Multi-hop mobile agents (roam the network from place to place)
 Applications:
 Distributed information retrieval.
 Telecommunication network routing.
39
Information agents
 Manage the explosive growth of information.
 Manipulate or collate information from many distributed
sources.
 Information agents can be mobile or static.
 Challenge: ontologies for annotating Web pages.
40
Discussion Questions
1. For each of the following, develop a PEAS
description of the task environment:
1. Robot soccer player
2. Internet book-shopping agent
3. Autonomous Mars rover
4. Mathematician’s theorem-proving assistant
2. For the above Agents characterize their
environment.
41