The document discusses testing methods for software agents and multi-agent systems, noting that they present challenges due to their non-deterministic and open nature, and advocating for continuous testing using techniques like generating test cases randomly, based on ontologies, or through evolutionary methods to adequately test autonomous agents.
Agent testing
1. Testing Software Agents & MAS
Cu D. Nguyen, PhD.
Software Engineering (SE) unit, Fondazione Bruno Kessler (FBK)
http://selab.fbk.eu/dnguyen/
2. Testing is critical
Software agents & multi-agent systems are enabling technologies to build today's complex systems, thanks to:
¨ Adaptivity and autonomy properties
¨ Open and dynamic nature
As they are increasingly applied, testing to build confidence in their operation is crucial!
FACT: NASA's agents are designed to achieve the goals and intentions of the designers, not merely to respond to predefined events, so that they can react to unimagined events and still ensure that the spacecraft does not waste fuel while keeping to its mission.
FACT: NASA satellites use autonomous agents to balance multiple demands, such as staying on course, keeping experiments running, and dealing with the unexpected, thereby avoiding waste.
3. Software agents and MAS
Software agents are programs that are situated and have their own control and goals.
Properties:
¨ Reactivity: perceive the environment, respond to changes
¨ Proactivity: autonomous, goal-oriented, deliberative
¨ Social ability: collaborative, competitive
Multi-agent systems (MAS) are composed of:
• Autonomous agents and their interactions
• Environment where the agents operate
• Rules, norms, constraints that restrict the behaviors of the agents
[Figure: agents (Agent A, Agent B, ..., Agent Z) running on hosts (Host 1 ... Host N), each in its own environment (Environment 1 ... Environment N), connected over a distributed network (Internet) to form a MAS]
4. Challenges in testing agents & MAS
Traditional software:
• deterministic: inputs → outputs
• observable state
Agent:
• non-deterministic outputs, due to self-* properties and the instant changes of the environment
[Figure: an agent perceiving inputs through its sensors and producing outputs, adapting via self-* mechanisms]
MAS:
• distributed, asynchronous
• message passing
• cooperative, emergent behaviours
5. Testing phases
n Acceptance: ensure the system meets the stakeholder goals
n System: test the macroscopic properties and qualities of the system
n Integration: check the collective behaviors and the interactions of agents with the environment
n Agent: check the integration of agent components (goals, plans, beliefs, etc.) and the agent's goal fulfillment
n Unit: test agent units: blocks of code, agent components (plans, goals, etc.)
7. Some facts
n Many BDI agent dev. languages exist:
¨ JADEX, Jason, JACK Intelligent Agents, AgentSpeak(RT)
n No "popular" de facto language yet
n Often built on top of Java
n There are IDEs (integrated development environments)
¨ With testing facilities
n We will use JADEX as a reference language
8. BDI Architecture (recap)
n Beliefs: represent the informational state of the agent
n Desires: represent the motivational state of the agent
¨ Operationalized as Goals + [contextual conditions]
n Intentions: represent the deliberative state of the agent – what the agent has chosen to do
¨ Operationalized as Plans
n Events: internal/external triggers that an agent receives/perceives and will react to.
9. Testing agent beliefs
n Belief state is program state in the traditional testing sense:
n Example Agent: { belief: Bank-Account-Balance; goal: Buy-A-Car }
state 1: Bank-Account-Balance = $1,000,000
state 2: Bank-Account-Balance = $100
n What to test (a test sketch follows below):
¨ Belief update (read/write)
n direct: injection, forcibly change the agent's belief
n indirect: perform the belief update via plan execution
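To make the direct/indirect distinction concrete, here is a minimal JUnit-style sketch. The AgentHarness and BeliefStore types are hypothetical test scaffolding standing in for whatever belief access your platform provides (e.g. a JADEX beliefbase or an external access object); they are not part of any real agent API.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class BankAccountBeliefTest {

    /** Minimal stand-in for an agent's belief store (hypothetical). */
    interface BeliefStore {
        Object getFact(String beliefName);
        void setFact(String beliefName, Object value);   // direct injection
    }

    /** Minimal stand-in for driving the agent under test (hypothetical). */
    interface AgentHarness {
        BeliefStore beliefs();
        void dispatchGoalAndWait(String goalName);        // runs the plan(s) handling the goal
    }

    private AgentHarness agent;  // created by platform-specific setup code (omitted here)

    @Test
    public void directBeliefInjectionIsVisible() {
        // Direct update: forcibly set the belief value and read it back.
        agent.beliefs().setFact("Bank-Account-Balance", 100);
        assertEquals(100, agent.beliefs().getFact("Bank-Account-Balance"));
    }

    @Test
    public void planExecutionUpdatesBelief() {
        // Indirect update: trigger the Buy-A-Car goal and check that the
        // plan handling it debits the account belief.
        agent.beliefs().setFact("Bank-Account-Balance", 1_000_000);
        agent.dispatchGoalAndWait("Buy-A-Car");
        assertTrue((Integer) agent.beliefs().getFact("Bank-Account-Balance") < 1_000_000);
    }
}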
10. Testing agent goals
n Goals are driven by contextual conditions
¨ conditions to activate
¨ conditions to hibernate/drop
¨ target/satisfaction conditions
n What to test:
¨ goal triggering
¨ goal achievement
¨ goal interaction
n one goal might trigger or inhibit other goals
n goal reasoning to solve conflicts or to achieve higher-level goals
11. Testing agent plans
n Plans are triggered by goals: an activated goal will trigger a plan execution
n Plan execution results in:
¨ interacting with the external world
¨ changing the external world
¨ changing agent beliefs
¨ triggering other goals
n What to test:
¨ plan instantiation
¨ plan execution results
12. Testing events
n Events are primarily test inputs in agent testing
n Can be:
¨ Messages
¨ Observations of the state of the environment
n What to test:
¨ Event filtering: which events should the agent receive
¨ Event handling: triggering goals or updating beliefs
13. Example: testing a cleaning agent
n Environment:
¨ wastebins
¨ charging stations
¨ obstacles
¨ waste
n This agent has to keep the floor clean
14. Example (contd.)
n Example of Beliefs:
…………...
<!-- The current cleaner location. -->
<belief name="my_location" class="Location" exported="true">
<fact>new Location(0.5, 0.5)</fact>
</belief>
<!-- Last visited location -->
<belief name="last_location" class="Location">
<fact>new Location(0.5, 0.5)</fact>
</belief>
<!-- target location, moving to this location -->
<belief name="target_location" class="Location">
<fact>new Location(0.5, 0.5)</fact>
</belief>
…………...
n Test concerns (see the sketch below):
n is my_location updated after every move?
n is the next target_location determined? how does it differ from current_location?
n …..
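A hedged sketch for the test concerns above, reusing the hypothetical AgentHarness/BeliefStore scaffolding from the belief-testing sketch on slide 9; the goal name "moveOneStep" is an assumption used only for illustration.

import static org.junit.Assert.assertNotEquals;
import static org.junit.Assert.assertNotNull;
import org.junit.Test;

public class CleanerLocationBeliefTest {

    private AgentHarness agent;  // hypothetical scaffolding, set up elsewhere

    @Test
    public void locationBeliefsAreMaintained() {
        Location before = (Location) agent.beliefs().getFact("my_location");
        agent.dispatchGoalAndWait("moveOneStep");                        // hypothetical goal name
        Location after  = (Location) agent.beliefs().getFact("my_location");
        Location target = (Location) agent.beliefs().getFact("target_location");

        assertNotEquals(before, after);   // my_location is updated after the move
        assertNotNull(target);            // a next target_location has been determined
        assertNotEquals(after, target);   // and it differs from the current location
    }
}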
15. Example (contd.)
n Example of goal
<!-- Observe the battery state. -->
<maintaingoal name="maintainbatteryloaded" retry="true" recur="true" retrydelay="0">
<deliberation cardinality="-1">
<inhibits ref="performlookforwaste" inhibit="when_in_process"/>
<inhibits ref="achievecleanup" inhibit="when_in_process"/>
<inhibits ref="achievepickupwaste" inhibit="when_in_process"/>
<inhibits ref="achievedropwaste" inhibit="when_in_process"/>
</deliberation>
<!-- Engage in actions when the state is below MINIMUM_BATTERY_CHARGE. -->
<maintaincondition>
$beliefbase.my_chargestate > MyConstants.MINIMUM_BATTERY_CHARGE
</maintaincondition>
<!-- The goal is satisfied when the charge state is 1.0. -->
<targetcondition>
$beliefbase.my_chargestate >= 1.0
</targetcondition>
</maintaingoal>
n Test concerns (see the sketch below):
n are the conditions correctly specified?
n is the goal activated when the maintaincondition is violated?
n …...
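A hedged sketch for the goal-triggering concern, again assuming the hypothetical harness from slide 9: the test violates the maintain condition by dropping the charge belief and then checks that the goal's target condition is eventually reached.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class MaintainBatteryLoadedTest {

    private AgentHarness agent;  // hypothetical scaffolding, set up elsewhere

    @Test
    public void lowChargeActivatesMaintainGoal() throws Exception {
        // Violate the maintain condition: charge below MINIMUM_BATTERY_CHARGE.
        agent.beliefs().setFact("my_chargestate", 0.05);

        // Crude wait for maintainbatteryloaded to be adopted and its plans to run;
        // a real test would subscribe to goal events instead of sleeping.
        Thread.sleep(5_000);

        // Target condition of the goal: the charge state is back at 1.0.
        assertEquals(1.0, (Double) agent.beliefs().getFact("my_chargestate"), 0.001);
    }
}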
16. Oracles
n Different agent types demand different types of oracles
¨ Reactive agents: oracles can be pre-determined at test design time
¨ Proactive (autonomous, evolving) agents: "evolving & flexible" oracles are needed
n It is hard to say whether a behavior is correct, because the agent has evolved and learned over time
n Some existing types of oracles:
¨ Constraint/contract based
¨ Ontology based
¨ Stakeholder soft-goal based
17. Constraint-based oracles
n Agents' behaviours can change, but they must respect the designed contracts/constraints (if any)
¨ low-level constraints: pre-/post-conditions, invariants
¨ high-level constraints: norms, regulations
n Constraint violations are faults
n From OCL constraints, monitoring guards (to check the conditions) can be generated automatically, using a tool called OCL4Java; during testing, a local monitoring agent running alongside the agents under test is notified whenever a constraint is violated (it also monitors other events, such as communication exceptions and belief changes)
Example: the following pre-/post-condition requires the order attribute to be not empty and ensures that the proposed price is between 0 and 2000:
public class ExecuteOrderPlan extends Plan {
    ....
    @Constraint("pre: self.order->notEmpty\n" +
                "post: price > 0 and price < 2000")
    public void body() {
        ....
    }
    ....
}
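The guards on this slide are generated automatically by OCL4Java; the hand-written sketch below only illustrates what such a guard does conceptually. GuardedExecuteOrderPlan, getOrder(), getPrice() and MonitoringAgent.report(...) are hypothetical names, not the generated code.

public class GuardedExecuteOrderPlan extends ExecuteOrderPlan {
    @Override
    public void body() {
        // pre: self.order->notEmpty
        if (getOrder() == null) {
            MonitoringAgent.report("pre-condition violated: order is empty");
        }
        super.body();
        // post: price > 0 and price < 2000
        double price = getPrice();
        if (!(price > 0 && price < 2000)) {
            MonitoringAgent.report("post-condition violated: price = " + price);
        }
    }
}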
18. Ontology-based oracles
n The interaction ontology defines the semantics of agent interactions
n Mismatching the ontology specification is a fault
[Figure: the book-trading interaction ontology, specified as a UML class diagram – Propose is an AgentAction with properties book: Book and price: float; Book is a Concept with properties title: String and author: String]
Rule example:
<owl:Restriction>
  <owl:onProperty rdf:resource="#price"/>
  <owl:hasValue ...>min 0 and max 2000</owl:hasValue>
</owl:Restriction>
19. Requirement-based oracles (stakeholder soft-goal based)
n Soft-goals capture quality requirements, e.g. performance, safety
n Soft-goals can be represented as quality functions (metrics)
n In turn, the quality functions are used to assess the agents under test (see the sketch below)
[Figure: example quality plots over time for the soft-goals "Efficient", "Robust" (distance d > ε), and "Good looking"]
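A minimal sketch of turning a soft-goal into a quality function, assuming a safety-style soft-goal "stay at least ε away from obstacles" and a recorded trace of agent-obstacle distances; the class and its names are illustrative only.

public final class RobustnessQuality {

    /** Returns min(d) - epsilon over the trace; a negative margin means the soft-goal is violated. */
    public static double robustnessMargin(double[] distancesOverTime, double epsilon) {
        double min = Double.POSITIVE_INFINITY;
        for (double d : distancesOverTime) {
            min = Math.min(min, d);
        }
        return min - epsilon;
    }

    public static void main(String[] args) {
        double[] trace = {1.2, 0.9, 0.4, 0.7};   // distances sampled over time (example data)
        System.out.println("margin = " + robustnessMargin(trace, 0.5));   // negative: violated
    }
}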
20. Input space in testing agents
n Test inputs for an agent:
¨ Passive:
n Messages from other agents
n Control signals from users or controller agents
¨ Active:
n Information obtained from monitoring/sensing the environment
n Information obtained from querying third-party services
n Agents often operate in an open & dynamic environment
¨ other agents and objects can be intelligent, leading to nondeterministic behaviors
¨ instant changes, e.g. of contextual information
21. Example of a dynamic environment
n Environment:
¨ wastebins
¨ charging stations
¨ obstacles
¨ waste
n Obstacles can move
n The locations of these objects change
n New objects might come in
22. Mock Agents
n Mock Agents are sample implementations of an agent, used for testing
¨ A Mock Agent simulates only a few functionalities of the real agent
n An agent under test can interact with mock agents, instead of real agents, during test execution
n Example: when testing the Sale Agent, we use a Mock Payment Agent instead of the real Payment Agent, to avoid making real payments (a minimal sketch follows below)
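A minimal sketch of the mock-agent idea in plain Java, without any agent platform; PaymentService, MockPaymentAgent and SaleAgent are hypothetical names used only to illustrate replacing the real Payment Agent during testing.

public class MockPaymentExample {

    /** The capability the Sale Agent expects from a payment agent (hypothetical). */
    interface PaymentService {
        boolean pay(String customer, double amount);
    }

    /** Mock implementation: records requests and always "succeeds"; no real payment happens. */
    static class MockPaymentAgent implements PaymentService {
        int requests = 0;
        @Override
        public boolean pay(String customer, double amount) {
            requests++;
            return true;   // simulate a successful payment
        }
    }

    /** Simplified agent under test (hypothetical). */
    static class SaleAgent {
        private final PaymentService payment;
        SaleAgent(PaymentService payment) { this.payment = payment; }
        boolean sell(String customer, double price) { return payment.pay(customer, price); }
    }

    public static void main(String[] args) {
        MockPaymentAgent mock = new MockPaymentAgent();
        SaleAgent sale = new SaleAgent(mock);          // inject the mock instead of the real agent
        boolean ok = sale.sell("alice", 12.5);
        System.out.println("sale ok = " + ok + ", payment requests seen by mock = " + mock.requests);
    }
}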
23. Tester Agent
n A Tester Agent is a special agent that plays the role of a human tester:
¨ interacts with the agent under test, using the same language as the agent under test
¨ manipulates and generates test inputs
¨ monitors the behavior of the agent under test
¨ evaluates the agent under test, according to the human tester's requirements
n The Tester Agent sits on the opposite side from the agent under test!
n Used in continuous testing (next part)
25. Why?
n Autonomous agents evolve over time
n A single test execution is not enough, because the next execution of the same test case can give a different result
¨ because of learning
¨ because of self-programming (e.g. genetic programming)
26. Continuous testing
n Consists of input generation, execution and monitoring, and output evaluation (see the loop skeleton below)
n Test cases are evolved and executed continuously and automatically
[Figure: continuous testing loop – initial test cases (random, or existing) feed Generation & Evolution; the generated inputs drive Test execution & Monitoring of the self-* agent; the monitored outputs go to Evaluation, which produces the final results and guides the next generation]
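A skeleton of the generation / execution & monitoring / evaluation loop sketched in the figure; TestCase, ExecutionTrace and the abstract methods are placeholders for platform-specific pieces, not an existing API.

import java.util.ArrayList;
import java.util.List;

public abstract class ContinuousTestingLoop {

    protected abstract List<TestCase> initialTestCases();              // random, or existing
    protected abstract ExecutionTrace executeAndMonitor(TestCase tc);  // run against the self-* agent
    protected abstract double evaluate(ExecutionTrace trace);          // fitness / quality score
    protected abstract List<TestCase> evolve(List<TestCase> population, List<Double> scores);

    public void run(int generations) {
        List<TestCase> population = initialTestCases();
        for (int g = 0; g < generations; g++) {
            List<Double> scores = new ArrayList<>();
            for (TestCase tc : population) {
                scores.add(evaluate(executeAndMonitor(tc)));
            }
            population = evolve(population, scores);   // better test cases survive
        }
    }

    public static class TestCase { /* encoding of inputs / environment */ }
    public static class ExecutionTrace { /* monitored outputs */ }
}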
27. Test input generation
• Manual
• Random
‣ messages: a randomly-selected interaction protocol + random message content
‣ environment settings: random values of artefacts' attributes
• Ontology
‣ rules and concept definitions can be used to generate messages
• Evolutionary
‣ the quality of a test case is measured by a fitness function f(TC)
‣ use f to guide the meta-heuristic search to generate better test cases
‣ example: quality-function-based fitness
28. Random generation
• Messages:
‣ randomly select a standard interaction protocol and combine it with randomly-generated or domain-specific data
• Environmental settings (see the sketch below):
‣ identify the attributes of the entities in the environment
‣ randomly generate values for these attributes
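A minimal sketch of random generation of environmental settings for the cleaner example, assuming each entity is described by (x, y) coordinates in a unit-square world; the class name and ranges are assumptions.

import java.util.Random;

public class RandomEnvironmentGenerator {

    private final Random rnd = new Random();

    /** Randomly place nObjects entities (wastes, obstacles, ...) in a unit world: x1,y1,x2,y2,... */
    public double[] generate(int nObjects) {
        double[] coords = new double[2 * nObjects];
        for (int i = 0; i < coords.length; i++) {
            coords[i] = rnd.nextDouble();   // random value for each positional attribute
        }
        return coords;
    }
}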
29. Ontology-based generation
• Information available inside the interaction ontology:
‣ concepts, relations, data types of properties. E.g. the action Propose is an AgentAction with two properties: book: Book and price: Double
‣ instances of concepts, user-defined or obtained from ontology alignment
• Use these data to generate messages automatically (see the sketch below)
Example: given the instance Book(title:"A"), the tester agent sends the agent under test messages such as Propose(book:"A", price:10) and Propose(book:"A", price:9)
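A minimal sketch of ontology-based message generation: the Book and Propose records mirror the book-trading ontology of slide 18, and the price range comes from the OWL restriction (min 0, max 2000); everything else is illustrative scaffolding.

import java.util.Random;

public class OntologyBasedGenerator {

    record Book(String title) {}                 // concept from the interaction ontology
    record Propose(Book book, double price) {}   // AgentAction from the interaction ontology

    private final Random rnd = new Random();

    /** Generate a Propose message whose price respects the ontology rule: min 0 and max 2000. */
    public Propose generatePropose(Book instance) {
        double price = rnd.nextDouble() * 2000.0;
        return new Propose(instance, price);
    }

    public static void main(String[] args) {
        OntologyBasedGenerator gen = new OntologyBasedGenerator();
        Book a = new Book("A");                              // user-defined instance of Book
        System.out.println(gen.generatePropose(a));          // e.g. Propose[book=Book[title=A], price=9.3]
    }
}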
30. Quality-function-based evolutionary generation
• Build the fitness of test cases based on quality functions
• Use this fitness measure to guide the evolution, using a genetic algorithm, e.g. GA, NSGA-II, etc.
• For example:
‣ soft-goal: safety
‣ quality function: the closest distance of the agent to obstacles must be greater than 0.5cm
‣ fitness f = d - 0.5; search for test cases that give f < 0
[Figure: distance-over-time plots at generation i and generation i + K, showing the minimum distance dropping below the 0.5 threshold as test cases evolve]
31. Example: evolutionary testing of the cleaner agent
n Test case (environment) encoding:
¨ coordinates (x,y) of wastebins, charging stations, obstacles, wastes
¨ TCi = <x1,y1,x2,y2,….,xN,yN>
n Fitness functions (a compact GA sketch follows below):
¨ fpower = 1 / total power consumption
n search for environments where the agent consumes more power
¨ fobs = 1 / number of obstacles encountered
n search for environments where the agent encounters more obstacles
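A compact sketch of the evolutionary generation described above: a test case is the coordinate vector <x1,y1,…,xN,yN>, and the search rewards environments that make the (simulated) cleaner consume more power, i.e. it minimizes fpower = 1/consumption by maximizing consumption. simulatePowerConsumption is a placeholder for actually running the agent in the encoded environment; population size and mutation step are assumptions.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

public abstract class EvolutionaryEnvironmentSearch {

    private static final int GENES = 20;     // e.g. 10 objects x (x,y)
    private static final int POP = 30;
    private final Random rnd = new Random();

    /** Run the cleaner agent in the encoded environment and return the total power used (placeholder). */
    protected abstract double simulatePowerConsumption(double[] environment);

    /** Higher value = more power-hungry environment, i.e. a more interesting test case. */
    private double fitness(double[] env) {
        return simulatePowerConsumption(env);   // maximizing this is equivalent to minimizing 1/consumption
    }

    public double[] search(int generations) {
        List<double[]> pop = new ArrayList<>();
        for (int i = 0; i < POP; i++) pop.add(randomEnvironment());
        for (int g = 0; g < generations; g++) {
            pop.sort(Comparator.comparingDouble(this::fitness).reversed());   // hardest environments first
            List<double[]> next = new ArrayList<>(pop.subList(0, POP / 2));   // elitist selection
            while (next.size() < POP) next.add(mutate(next.get(rnd.nextInt(POP / 2))));
            pop = next;
        }
        return pop.get(0);   // hardest environment found
    }

    private double[] randomEnvironment() {
        double[] env = new double[GENES];
        for (int i = 0; i < GENES; i++) env[i] = rnd.nextDouble();
        return env;
    }

    private double[] mutate(double[] parent) {
        double[] child = parent.clone();
        int i = rnd.nextInt(GENES);
        child[i] = Math.min(1.0, Math.max(0.0, child[i] + rnd.nextGaussian() * 0.1));
        return child;
    }
}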
32. Example (contd)
n The genetic algorithm, driven by fpower and fobs, searches for test cases in which the agent has a higher chance of running out of battery or hitting obstacles
¨ which violates the user's requirements
n Results: the evolutionary test generation technique finds test cases where:
1) wastes are far away from the wastebins -> more power consumed
2) obstacles are on the way to the wastes -> easy to hit
[Figure: evolved environment – black circles: obstacles, red dots: wastes, squares: charging stations, red circles: wastebins]
33. Example (contd)
n More about the evolution of the environment:
¨ http://www.youtube.com/watch?v=xx3QG5OuBz0
n The search converges to test cases in which the two fitness functions are optimized.
34. Conclusions
n Testing software agents is important, yet still immature.
n Concerns in testing BDI agents:
¨ BDI components: beliefs, goals, plans, events
¨ Their integration
n Oracles:
¨ Reactive agents: can be specified at design time
¨ Proactive agents: new types of oracles are needed, e.g. quality functions derived from soft-goals
n Many approaches to generate test inputs
¨ Evolutionary generation has proved to be effective
35. Additional resources
n C. D. Nguyen (2009). Testing Techniques for Software Agents. PhD thesis, University of Trento, Fondazione Bruno Kessler. http://eprints-phd.biblio.unitn.it/68/
n C. D. Nguyen, A. Perini, P. Tonella, S. Miles, M. Harman, and M. Luck (2009). Evolutionary testing of autonomous software agents. In Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems (AAMAS '09), Vol. 1. International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 521-528.
n C. D. Nguyen, A. Perini, C. Bernon, J. Pavón, and J. Thangarajah. Testing in multi-agent systems. Agent-Oriented Software Engineering X, 180-190.
n Z. Zhang. Automated unit testing of agent systems. PhD thesis, Computer Science and Information Technology, RMIT University.
n R. de Souza Coelho, U. Kulesza, A. von Staa, and C. J. Pereira de Lucena. Unit Testing in Multi-agent Systems using Mock Agents and Aspects. In Proceedings of the 2006 International Workshop on Software Engineering for Large-scale Multi-agent Systems (SELMAS '06). ACM, New York, NY, USA, 83-90. DOI=10.1145/1138063.1138079 http://doi.acm.org/10.1145/1138063.1138079