ARTIFICIAL INTELLIGENCE
B.TECH III YEAR – II SEM (R18) (2022-2023)
Prepared by Asst. Prof. M. Gokilavani
Department of Computer Science and Engineering (AI & ML)
ARTIFICIAL INTELLIGENCE
B.Tech. III Year II Sem. L T P C
3 1 0 4
Prerequisites:
1. A course on “Computer Programming and Data Structures”
2. A course on “Advanced Data Structures”
3. A course on “Design and Analysis of Algorithms”
4. A course on “Mathematical Foundations of Computer Science”
5. Some background in linear algebra, data structures and algorithms, and probability will all be
helpful
Course Objectives:
 To learn the distinction between optimal reasoning vs. human-like reasoning.
 To understand the concepts of state space representation, exhaustive search, heuristic search
together with the time and space complexities.
 To learn different knowledge representation techniques.
 To understand the applications of AI, namely game playing, theorem proving, and machine
learning.
Course Outcomes:
 Ability to formulate an efficient problem space for a problem expressed in natural language.
 Select a search algorithm for a problem and estimate its time and space complexities.
 Possess the skill for representing knowledge using the appropriate technique for a given
problem.
 Possess the ability to apply AI techniques to solve problems of game playing, and machine
learning.
UNIT - I
Problem Solving by Search-I: Introduction to AI, Intelligent Agents
Problem Solving by Search –II: Problem-Solving Agents, Searching for Solutions, Uninformed Search
Strategies: Breadth-first search, Uniform cost search, Depth-first search, Iterative deepening Depth-first
search, Bidirectional search, Informed (Heuristic) Search Strategies: Greedy best-first search, A*
search, Heuristic Functions, Beyond Classical Search: Hill-climbing search, Simulated annealing
search, Local Search in Continuous Spaces, Searching with Non-Deterministic Actions, Searching with
Partial Observations, Online Search Agents and Unknown Environments.
UNIT - II
Problem Solving by Search-II and Propositional Logic
Adversarial Search: Games, Optimal Decisions in Games, Alpha–Beta Pruning, Imperfect Real-Time
Decisions.
Constraint Satisfaction Problems: Defining Constraint Satisfaction Problems, Constraint
Propagation, Backtracking Search for CSPs, Local Search for CSPs, The Structure of Problems.
Propositional Logic: Knowledge-Based Agents, The Wumpus World, Logic, Propositional Logic,
Propositional Theorem Proving: Inference and proofs, Proof by resolution, Horn clauses and definite
clauses, Forward and backward chaining, Effective Propositional Model Checking, Agents Based on
Propositional Logic.
UNIT - III
Logic and Knowledge Representation
First-Order Logic: Representation, Syntax and Semantics of First-Order Logic, Using First-Order
Logic, Knowledge Engineering in First-Order Logic.
Inference in First-Order Logic: Propositional vs. First-Order Inference, Unification and Lifting,
Forward Chaining, Backward Chaining, Resolution.
Knowledge Representation: Ontological Engineering, Categories and Objects, Events. Mental Events
and Mental Objects, Reasoning Systems for Categories, Reasoning with Default Information.
UNIT - IV
Planning
Classical Planning: Definition of Classical Planning, Algorithms for Planning with State-Space Search,
Planning Graphs, other Classical Planning Approaches, Analysis of Planning approaches.
Planning and Acting in the Real World: Time, Schedules, and Resources, Hierarchical Planning,
Planning and Acting in Nondeterministic Domains, Multi agent Planning.
UNIT - V
Uncertain knowledge and Learning
Uncertainty: Acting under Uncertainty, Basic Probability Notation, Inference Using Full Joint
Distributions, Independence, Bayes’ Rule and Its Use,
Probabilistic Reasoning: Representing Knowledge in an Uncertain Domain, The Semantics of
Bayesian Networks, Efficient Representation of Conditional Distributions, Approximate Inference in
Bayesian Networks, Relational and First-Order Probability, Other Approaches to Uncertain Reasoning;
Dempster-Shafer theory.
Learning: Forms of Learning, Supervised Learning, Learning Decision Trees. Knowledge in Learning:
Logical Formulation of Learning, Knowledge in Learning, Explanation-Based Learning, Learning Using
Relevance Information, Inductive Logic Programming.
TEXT BOOK:
1. Artificial Intelligence A Modern Approach, Third Edition, Stuart Russell and Peter Norvig,
Pearson Education.
REFERENCE BOOKS:
1. Artificial Intelligence, 3rd Edn., E. Rich and K. Knight (TMH)
2. Artificial Intelligence, 3rd Edn., Patrick Henry Winston, Pearson Education.
3. Artificial Intelligence, Shivani Goel, Pearson Education.
4. Artificial Intelligence and Expert systems – Patterson, Pearson Education.
UNIT I
Problem Solving by Search-I: Introduction to AI, Intelligent Agents
Problem Solving by Search –II: Problem-Solving Agents, Searching for Solutions, Uninformed Search
Strategies: Breadth-first search, Uniform cost search, Depth-first search, Iterative deepening Depth-first
search, Bidirectional search, Informed (Heuristic) Search Strategies: Greedy best-first search, A*search,
Heuristic Functions, Beyond Classical Search: Hill-climbing search, Simulated annealing search, Local
Search in Continuous Spaces, Searching with Non-Deterministic Actions, Searching with Partial
Observations, Online Search Agents and Unknown Environments.
1. INTRODUCTION TO AI:
 AI is one of the most fascinating and universal fields of computer science, and it has great scope
in the future. AI aims to make a machine work the way a human does.
 Artificial Intelligence is composed of two words, Artificial and Intelligence, where Artificial
means "man-made" and Intelligence means "thinking power"; hence AI means "a man-made
thinking power."
 Artificial Intelligence exists when a machine has human-like skills such as learning,
reasoning, and problem solving.
 With Artificial Intelligence you do not need to preprogram a machine for every task; instead,
you can create a machine with programmed algorithms that can work with its own intelligence,
and that is the power of AI.
 AI is not an entirely new idea; according to Greek myth, there were mechanical men in early days
which could work and behave like humans.
Turing Test in AI:
 In 1950, Alan Turing introduced a test to check whether a machine can think like a human or not;
this test is known as the Turing Test.
 In this test, Turing proposed that a computer can be said to be intelligent if it can mimic human
responses under specific conditions.
 The Turing Test was introduced by Turing in his 1950 paper, "Computing Machinery and Intelligence,"
which considered the question, "Can machines think?".
 The Turing test is based on a party game, the "Imitation Game," with some modifications.
 This game involves three players: one player is a computer, another is a human
responder, and the third is a human interrogator, who is isolated from the other two players and
whose job is to determine which of the two is the machine.
 Consider: Player A is a computer, Player B is a human, and Player C is an interrogator. The
interrogator knows that one of them is a machine, but he needs to identify which one on the basis
of questions and responses.
 The conversation between all players is via keyboard and screen, so the result does not depend on
the machine's ability to render words as speech.
 The test result does not depend on each answer being correct, but only on how closely the
responses resemble human answers. The computer is permitted to do everything possible to force a
wrong identification by the interrogator.
 The questions and answers can be like:
o Interrogator: Are you a computer?
o Player A (Computer): No
o Interrogator: Multiply two large numbers such as (256896489*456725896)
o Player A: (long pause) gives a wrong answer.
 In this game, if the interrogator is not able to identify which player is the machine and which is
the human, then the computer passes the test successfully, and the machine is said to be intelligent
and able to think like a human.
 "In 1991, the New York businessman Hugh Loebner announces the prize competition, offering a
$100,000 prize for the first computer to pass the Turing test. However, no AI program to till date,
come close to passing an undiluted Turing test".
Goals of Artificial Intelligence:
Following are the main goals of Artificial Intelligence:
1. Replicate human intelligence
2. Solve Knowledge-intensive tasks
3. An intelligent connection of perception and action
4. Building a machine which can perform tasks that require human intelligence, such as:
o Proving a theorem
o Playing chess
o Plan some surgical operation
o Driving a car in traffic
5. Creating systems which can exhibit intelligent behavior, learn new things by themselves,
demonstrate, explain, and advise their users.
Application of AI:
 Artificial Intelligence has various applications in today's society. It is becoming essential for our
time because it can solve complex problems in an efficient way in multiple industries, such as
healthcare, entertainment, finance, education, etc. AI is making our daily life more comfortable and
fast.
 Following are some sectors which have the application of Artificial Intelligence:
1. AI in Astronomy
 Artificial Intelligence can be very useful for solving complex universe problems. AI technology can
be helpful for understanding the universe: how it works, its origin, etc.
2. AI in Healthcare
 In the last five to ten years, AI has become more advantageous for the healthcare industry and is
going to have a significant impact on it.
 Healthcare industries are applying AI to make better and faster diagnoses than humans.
 AI can help doctors with diagnoses and can warn when patients are worsening so that medical
help can reach the patient before hospitalization.
3. AI in Gaming
 AI can be used for gaming purposes. AI machines can play strategic games like chess, where the
machine needs to think about a large number of possible positions.
4. AI in Finance
 AI and finance industries are the best matches for each other.
 The finance industry is implementing automation, chatbots, adaptive intelligence, algorithmic
trading, and machine learning into financial processes.
5. AI in Data Security
 The security of data is crucial for every company, and cyber-attacks are growing very rapidly in the
digital world. AI can be used to make your data safer and more secure.
 Some examples, such as the AEG bot and the AI2 platform, are used to detect software bugs and
cyber-attacks more effectively.
6. AI in Social Media
 Social media sites such as Facebook, Twitter, and Snapchat contain billions of user profiles,
which need to be stored and managed in a very efficient way.
 AI can organize and manage massive amounts of data.
 AI can analyze lots of data to identify the latest trends, hashtags, and the requirements of
different users.
7. AI in Travel & Transport
 AI is in high demand in the travel industry.
 AI is capable of performing various travel-related tasks, from making travel arrangements to
suggesting hotels, flights, and the best routes to customers.
 Travel industries are using AI-powered chatbots that can interact with customers in a human-like
way for better and faster responses.
8. AI in Automotive Industry
 Some automotive industries are using AI to provide virtual assistants to their users for better
performance. For example, Tesla has introduced TeslaBot, an intelligent virtual assistant.
 Various industries are currently working on developing self-driving cars which can make your
journey safer and more secure.
9. AI in Robotics:
 Artificial Intelligence has a remarkable role in Robotics.
 Usually, general robots are programmed so that they can perform some repetitive task, but with
the help of AI, we can create intelligent robots which can perform tasks from their own experience
without being pre-programmed.
 Humanoid robots are the best examples of AI in robotics; recently, the intelligent humanoid robots
named Erica and Sophia were developed, and they can talk and behave like humans.
10. AI in Entertainment
 We are currently using some AI based applications in our daily life with some entertainment
services such as Netflix or Amazon.
 With the help of ML/AI algorithms, these services show the recommendations for programs or
shows.
11. AI in Agriculture
 Agriculture is an area which requires various resources, labor, money, and time for the best result.
Nowadays agriculture is becoming digital, and AI is emerging in this field. Agriculture is
applying AI for agricultural robotics, soil and crop monitoring, and predictive analysis. AI in
agriculture can be very helpful for farmers.
12. AI in E-commerce
 AI is providing a competitive edge to the e-commerce industry, and it is becoming more and
more in demand in the e-commerce business.
 AI is helping shoppers to discover associated products with recommended size, color, or even brand.
13. AI in education:
 AI can automate grading so that the tutor can have more time to teach.
 AI chatbots can communicate with students as teaching assistants.
 In the future, AI may work as a personal virtual tutor for students, accessible easily at
any time and any place.
2. INTELLIGENT AGENTS:
Types of AI Agents:
 Agents can be grouped into five classes based on their degree of perceived intelligence and
capability. All these agents can improve their performance and generate better actions over
time. These are given below:
o Simple Reflex Agent
o Model-based reflex agent
o Goal-based agents
o Utility-based agent
o Learning agent
i. Simple Reflex agent:
 Simple reflex agents are the simplest agents. These agents take decisions on the basis of the
current percepts and ignore the rest of the percept history.
 These agents only succeed in a fully observable environment.
 The simple reflex agent does not consider any part of the percept history during its decision and
action process.
 The simple reflex agent works on condition-action rules, which means it maps the current state to
an action; for example, a room-cleaner agent sucks only if there is dirt in the room. A minimal
sketch of such an agent is given below.
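To make the condition-action rule concrete, here is a minimal Python sketch of such a room-cleaner
(vacuum) agent; the location names 'A' and 'B' and the percept format are illustrative assumptions,
not a standard API:

# A minimal sketch of a simple reflex vacuum agent. The location names
# ('A', 'B') and the percept format are assumptions made for illustration.
def simple_reflex_vacuum_agent(percept):
    location, status = percept        # e.g., ('A', 'Dirty')
    if status == 'Dirty':             # condition-action rule 1
        return 'Suck'
    elif location == 'A':             # condition-action rule 2
        return 'Right'
    else:                             # condition-action rule 3
        return 'Left'

print(simple_reflex_vacuum_agent(('A', 'Dirty')))  # -> Suck
print(simple_reflex_vacuum_agent(('B', 'Clean')))  # -> Left

Note how the agent looks only at the current percept; no percept history is stored anywhere.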
 Problems for the simple reflex agent design approach:
o They have very limited intelligence.
o They do not have knowledge of non-perceptual parts of the current state.
o The set of rules is mostly too big to generate and to store.
o They are not adaptive to changes in the environment.
ii. Model-based reflex agent:
 The Model-based agent can work in a partially observable environment, and track the situation.
 A model-based agent has two important factors:
o Model: It is knowledge about "how things happen in the world," so it is called a
Model-based agent.
o Internal State: It is a representation of the current state based on percept history.
 These agents have the model, "which is knowledge of the world" and based on the model they
perform actions.
 Updating the agent state requires information about:
o How the world evolves
o How the agent's action affects the world.
iii. Goal-based agents
 Knowledge of the current state of the environment is not always sufficient for an agent to decide
what to do.
 The agent needs to know its goal which describes desirable situations.
 Goal-based agents expand the capabilities of the model-based agent by having the "goal"
information.
 They choose an action, so that they can achieve the goal.
 These agents may have to consider a long sequence of possible actions before deciding
whether the goal is achieved or not.
 Such consideration of different scenarios is called searching and planning, which makes an
agent proactive.
iv. Utility-based agents
 These agents are similar to the goal-based agent but provide an extra component of utility
measurement which makes them different by providing a measure of success at a given state.
 Utility-based agents act based not only on goals but also on the best way to achieve the goal.
 The Utility-based agent is useful when there are multiple possible alternatives, and an agent has to
choose in order to perform the best action.
 The utility function maps each state to a real number to check how efficiently each action achieves
the goals.
v. Learning Agents
 A learning agent in AI is the type of agent which can learn from its past experiences, or it has
learning capabilities.
 It starts to act with basic knowledge and is then able to act and adapt automatically through learning.
 A learning agent has mainly four conceptual components, which are:
1. Learning element: It is responsible for making improvements by learning from the
environment.
2. Critic: The learning element takes feedback from the critic, which describes how well the
agent is doing with respect to a fixed performance standard.
3. Performance element: It is responsible for selecting external action
4. Problem generator: This component is responsible for suggesting actions that will lead to
new and informative experiences.
 Hence, learning agents are able to learn, analyze performance, and look for new ways to
improve the performance.
AGENTS:
 An AI system can be defined as the study of the rational agent and its environment. The agents
sense the environment through sensors and act on their environment through actuators. An AI agent
can have mental properties such as knowledge, belief, intention, etc.
What is an Agent?
An agent can be anything that perceives its environment through sensors and acts upon that environment
through actuators. An agent runs in a cycle of perceiving, thinking, and acting. An agent can be:
o Human-Agent: A human agent has eyes, ears, and other organs which work for sensors and
hand, legs, vocal tract work for actuators.
o Robotic Agent: A robotic agent can have cameras and infrared range finders as sensors and
various motors as actuators.
o Software Agent: Software agent can have keystrokes, file contents as sensory input and act on
those inputs and display output on the screen.
Hence the world around us is full of agents such as thermostat, cell phone, camera, and even we are
also agents. Before moving forward, we should first know about sensors, effectors, and actuators.
 Sensor: Sensor is a device which detects the change in the environment and sends the
information to other electronic devices. An agent observes its environment through sensors.
 Actuators: Actuators are the component of machines that converts energy into motion. The
actuators are only responsible for moving and controlling a system. An actuator can be an
electric motor, gears, rails, etc.
 Effectors: Effectors are the devices which affect the environment. Effectors can be legs,
wheels, arms, fingers, wings, fins, and display screen.
What is an Intelligent Agent?
 An intelligent agent is an autonomous entity which acts upon an environment using sensors and
actuators to achieve goals. An intelligent agent may learn from the environment to achieve its
goals. A thermostat is an example of an intelligent agent.
Following are the main four rules for an AI agent:
o Rule 1: An AI agent must have the ability to perceive the environment.
o Rule 2: The observation must be used to make decisions.
o Rule 3: Decision should result in an action.
o Rule 4: The action taken by an AI agent must be a rational action.
What is Rational Agent?
 A rational agent is an agent which has clear preferences, models uncertainty, and acts in a way to
maximize its performance measure with all possible actions.
 A rational agent is said to perform the right thing. AI is about creating rational agents, which are
used in game theory and decision theory for various real-world scenarios.
 For an AI agent, rational action is most important because in AI reinforcement learning
algorithms, the agent gets a positive reward for each best possible action and a negative reward
for each wrong action.
Define Rationality.
 The rationality of an agent is measured by its performance measure. Rationality can be judged on
the basis of following points:
o Performance measure which defines the success criterion.
o The agent's prior knowledge of its environment.
o Best possible actions that an agent can perform.
o The sequence of percepts.
Structure of an AI Agent
 The task of AI is to design an agent program which implements the agent function. The structure
of an intelligent agent is a combination of architecture and agent program. It can be viewed as:
Agent = Architecture + Agent program
Following are the main three terms involved in the structure of an AI agent:
 Architecture: Architecture is the machinery that an AI agent executes on.
 Agent Function: The agent function maps a percept sequence to an action:
F: P* → A
 Agent program: An agent program is an implementation of the agent function. The agent program
executes on the physical architecture to produce the function F. A table-driven sketch is given below.
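As an illustration of the mapping F: P* → A, the following is a hedged sketch of a table-driven
agent program in Python; the table entries and the percept format are hypothetical examples:

# Sketch of a table-driven agent program: it implements the agent
# function F: P* -> A by looking up the entire percept sequence in a
# table. The table entries here are hypothetical examples.
table = {
    (('A', 'Dirty'),): 'Suck',
    (('A', 'Clean'),): 'Right',
    (('A', 'Clean'), ('B', 'Dirty')): 'Suck',
}

percepts = []

def table_driven_agent(percept):
    percepts.append(percept)
    return table.get(tuple(percepts), 'NoOp')  # default when unlisted

print(table_driven_agent(('A', 'Clean')))  # -> Right
print(table_driven_agent(('B', 'Dirty')))  # -> Suck

A table-driven design is impractical for real problems, since the table grows with every possible
percept sequence; the agent designs discussed above avoid storing such a table explicitly.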
Define PEAS Representation.
 PEAS is a type of model on which an AI agent works. When we define an AI agent or rational
agent, we can group its properties under the PEAS representation model. It is made up of four
words:
o P: Performance measure
o E: Environment
o A: Actuators
o S: Sensors
Here performance measure is the objective for the success of an agent's behavior.
Example: PEAS for self-driving cars:
Let's suppose a self-driving car then PEAS representation will be:
 Performance: Safety, time, legal drive, comfort
 Environment: Roads, other vehicles, road signs, pedestrian
 Actuators: Steering, accelerator, brake, signal, horn
 Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar.
Properties of Task Environment:
 An environment is everything in the world which surrounds the agent, but it is not a part of the agent
itself. An environment can be described as a situation in which an agent is present.
 The environment is where the agent lives and operates, and it provides the agent with something to
sense and act upon. An environment is mostly said to be non-deterministic.
Features of Environment
 As per Russell and Norvig, an environment can have various features from the point of view of
an agent:
1. Fully observable vs Partially Observable
2. Static vs Dynamic
3. Discrete vs Continuous
4. Deterministic vs Stochastic
5. Single-agent vs Multi-agent
6. Episodic vs sequential
7. Known vs Unknown
8. Accessible vs Inaccessible
3. PROBLEM-SOLVING AGENTS:
 In Artificial Intelligence, search techniques are universal problem-solving methods. Rational
agents or problem-solving agents in AI mostly use these search strategies or algorithms to solve a
specific problem and provide the best result. Problem-solving agents are goal-based agents and use
atomic representation. In this topic, we will learn various problem-solving search algorithms.
Well-defined problems and solutions:
A problem can be defined formally by five components:
• The initial state that the agent starts in.
• A description of the possible actions available to the agent.
• A description of what each action does; the formal name for this is the transition model.
• The goal test, which determines whether a given state is a goal state. Sometimes there is an
explicit set of possible goal states, and the test simply checks whether the given state is one of
them.
• A path cost function that assigns a numeric cost to each path. The problem-solving agent
chooses a cost function that reflects its own performance measure.
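The five components above map naturally onto a small class. The following Python sketch is
illustrative only; the class and method names are assumptions, not a fixed API:

# Illustrative sketch: the five components of a formal problem
# definition collected in one class. Names are not a standard API.
class Problem:
    def __init__(self, initial, goal):
        self.initial = initial        # the initial state
        self.goal = goal              # used by the goal test

    def actions(self, state):
        """Return the actions available in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state reached by doing `action` in `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action, next_state):
        return 1                      # default unit path cost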
Example: 1 Romania
• On holiday in Romania; currently in Arad.
• Flight leaves tomorrow from Bucharest
• Formulate goal:
• be in Bucharest
• Formulate problem:
• states: various cities
• actions: drive between cities
• Find solution:
• sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
Example: 2 Toy problems
• Those intended to illustrate or exercise various problem-solving methods
• E.g., puzzles, chess, etc.
Example: 3 Real-world problems
• Tend to be more difficult and whose solutions people actually care about
• E.g., Design, planning, etc.
Example: 4 Toy problem – possible states of the vacuum cleaner world (see figure).
Example: 5 The 8-puzzle problem and possible moves for the 8-queens problem (see figures).
4. SEARCHING FOR SOLUTIONS:
• Finding a solution is done by searching through the state space.
• All problems are transformed into a search tree generated by the initial state and the
successor function.
Search Tree:
• Initial state: the root of the search tree is a search node.
• Expanding: applying the successor function to the current state, thereby generating a new
set of states.
• Leaf nodes: the states having no successors.
Fringe: the set of search nodes that have not been expanded yet.
Search tree Components:
• A node has five components (see the sketch below):
• STATE: which state it is in the state space
• PARENT-NODE: the node from which it was generated
• ACTION: the action applied to its parent node to generate it
• PATH-COST: the cost, g(n), from the initial state to the node n itself
• DEPTH: the number of steps along the path from the initial state
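A search-tree node with these five components can be sketched in Python, reusing the illustrative
Problem interface from earlier:

# Sketch of a search-tree node with the five components listed above.
class Node:
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state            # STATE in the state space
        self.parent = parent          # PARENT-NODE it was generated from
        self.action = action          # ACTION applied to the parent
        self.path_cost = path_cost    # PATH-COST g(n) from the initial state
        self.depth = 0 if parent is None else parent.depth + 1  # DEPTH

    def child(self, problem, action):
        """Child node reached by `action` (uses the Problem sketch above)."""
        next_state = problem.result(self.state, action)
        cost = self.path_cost + problem.step_cost(self.state, action, next_state)
        return Node(next_state, self, action, cost)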
Search Algorithm Terminologies:
 Search: Searching is a step-by-step procedure to solve a search problem in a given search
space. A search problem can have three main factors:
a. Search Space: The search space represents the set of possible solutions which a system
may have.
b. Start State: The state from which the agent begins the search.
c. Goal test: A function which observes the current state and returns whether the
goal state is achieved or not.
 Search tree: A tree representation of a search problem is called a search tree. The root of the
search tree is the root node, which corresponds to the initial state.
 Actions: It gives the description of all the available actions to the agent.
 Transition model: A description of what each action does; it can be represented as a transition
model.
 Path Cost: It is a function which assigns a numeric cost to each path.
 Solution: It is an action sequence which leads from the start node to the goal node.
 Optimal Solution: A solution that has the lowest cost among all solutions.
i. Properties of Search Algorithms (or) measuring problem Solving performance:
Following are the four essential properties of search algorithms to compare the efficiency of
these algorithms:
 Completeness: A search algorithm is said to be complete if it is guaranteed to return a
solution whenever at least one solution exists for any input.
 Optimality: If the solution found by an algorithm is guaranteed to be the best solution
(lowest path cost) among all other solutions, then such a solution is said to be an
optimal solution.
 Time Complexity: Time complexity is a measure of the time an algorithm takes to complete
its task.
 Space Complexity: The maximum storage space required at any point during the
search, as a function of the complexity of the problem.
5. TYPES OF SEARCH ALGORITHMS:
Based on the search problem, we can classify search algorithms into uninformed (blind)
search and informed (heuristic) search algorithms.
i. Uninformed/Blind Search:
 Uninformed search does not use any domain knowledge, such as the closeness or location
of the goal.
 It operates in a brute-force way, as it only includes information about how to traverse the tree
and how to identify leaf and goal nodes.
 Uninformed search traverses the search tree without any information about the search space,
such as the initial state, operators, and a test for the goal, so it is also called blind
search.
 It examines each node of the tree until it reaches the goal node.
It can be divided into five main types:
o Breadth-first search
o Uniform cost search
o Depth-first search
o Iterative deepening depth-first search
o Bidirectional Search
ii. Informed Search
 Informed search algorithms use domain knowledge.
 In an informed search, problem information is available which can guide the search.
 Informed search strategies can find a solution more efficiently than an uninformed search
strategy. Informed search is also called a Heuristic search.
 A heuristic is a technique which is not always guaranteed to find the best solution but is guaranteed
to find a good solution in reasonable time.
 Informed search can solve much more complex problems than could be solved otherwise.
 An example problem tackled with informed search algorithms is the traveling salesman problem.
1. Greedy Search
2. A* Search
6. BREADTH-FIRST SEARCH (BFS):
 Breadth-first search is the most common search strategy for traversing a tree or graph. This
algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.
 The BFS algorithm starts searching from the root node of the tree and expands all successor
nodes at the current level before moving to nodes of the next level.
 The breadth-first search algorithm is an example of a general-graph search algorithm.
 Breadth-first search is implemented using a FIFO queue data structure.
Algorithm:
• Step 1: SET STATUS = 1 (ready state) for each node in G
• Step 2: Enqueue the starting node A and set its STATUS = 2 (waiting state)
• Step 3: Repeat Steps 4 and 5 until QUEUE is empty
• Step 4: Dequeue a node N. Process it and set its STATUS = 3 (processed state).
• Step 5: Enqueue all the neighbours of N that are in the ready state (whose STATUS = 1) and
set their STATUS = 2 (waiting state)
[END OF LOOP]
• Step 6: EXIT
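The steps above can be sketched as a short Python function; the adjacency-list graph used here is
hypothetical example data, and the visited set plays the role of the STATUS markers:

from collections import deque

# Minimal BFS sketch over an adjacency-list graph (hypothetical data).
def bfs(graph, start, goal):
    frontier = deque([start])         # FIFO queue (waiting state)
    visited = {start}                 # nodes already enqueued
    parent = {start: None}
    while frontier:
        n = frontier.popleft()        # process node (processed state)
        if n == goal:                 # reconstruct path via parents
            path = []
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1]
        for nbr in graph.get(n, []):  # enqueue ready-state neighbours
            if nbr not in visited:
                visited.add(nbr)
                parent[nbr] = n
                frontier.append(nbr)
    return None                       # no solution exists

graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': [], 'D': ['G'], 'G': []}
print(bfs(graph, 'S', 'G'))           # -> ['S', 'B', 'D', 'G']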
Example: 1
In the below tree structure, we have shown the traversal of the tree using the BFS algorithm
from the root node S to the goal node K. The BFS algorithm traverses in layers, so it will follow
the path shown by the dotted arrow, and the traversed path will be:
Solution: S---> A--->B---->C--->D---->G--->H--->E---->F---->I---->K
Example: 2
Queue Structure:
Solution: 40,10,20,30,60,50,70
Example 3:
Practice problem:
Advantages:
 BFS will provide a solution if any solution exists.
 If there is more than one solution for a given problem, then BFS will provide the minimal
solution, i.e., the one requiring the least number of steps.
Disadvantages:
 It requires lots of memory since each level of the tree must be saved into memory to
expand the next level.
 BFS needs lots of time if the solution is far away from the root node.
7. DEPTH-FIRST SEARCH (DFS):
 Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
 It is called the depth-first search because it starts from the root node and follows each path
to its greatest depth node before moving to the next path.
 DFS uses a stack data structure for its implementation.
 The process of the DFS algorithm is similar to the BFS algorithm.
Implementation steps for DFS:
• First, create a stack with the total number of vertices in the graph.
• Now, choose any vertex as the starting point of traversal, and push that vertex into the stack.
• After that, push a non-visited vertex (adjacent to the vertex on the top of the stack) to the top of
the stack.
• Now, repeat steps 3 and 4 until no vertices are left to visit from the vertex on the stack's top.
• If no vertex is left, go back and pop a vertex from the stack.
• Repeat steps 2, 3, and 4 until the stack is empty.
Algorithm:
• Step 1: SET STATUS = 1 (ready state) for each node in G
• Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)
• Step 3: Repeat Steps 4 and 5 until STACK is empty
• Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)
• Step 5: Push on the stack all the neighbors of N that are in the ready state (whose STATUS =
1) and set their STATUS = 2 (waiting state)
[END OF LOOP]
• Step 6: EXIT
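A minimal iterative DFS sketch in Python mirroring the stack-based steps above (the graph is
hypothetical example data):

# Minimal iterative DFS sketch using an explicit stack (hypothetical graph).
def dfs(graph, start, goal):
    stack = [start]                   # push the starting node
    visited = set()
    while stack:
        n = stack.pop()               # pop the top node and process it
        if n == goal:
            return True
        if n in visited:
            continue
        visited.add(n)
        # push ready-state neighbours; reversed() keeps left-to-right order
        for nbr in reversed(graph.get(n, [])):
            if nbr not in visited:
                stack.append(nbr)
    return False                      # stack empty: goal not reachable

graph = {'S': ['A', 'C'], 'A': ['B'], 'B': [], 'C': ['G'], 'G': []}
print(dfs(graph, 'S', 'G'))           # -> True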
Example 1:
In the below search tree, we have shown the flow of depth-first search, and it will follow the order as:
Root node--->Left node ----> right node.
It will start searching from root node S and traverse A, then B, then D and E; after traversing E, it
will backtrack the tree, as E has no other successor and the goal node has still not been found. After
backtracking, it will traverse node C and then G, and it will terminate there, having found the goal node.
Solution: S,A,B,D,E,C,G,H,I,K,J.
Example 2:
Example 3:
Advantage:
 DFS requires very little memory, as it only needs to store a stack of the nodes on the path
from the root node to the current node.
 It takes less time to reach the goal node than the BFS algorithm (if it traverses along the right
path).
Disadvantage:
 There is the possibility that many states keep re-occurring, and there is no guarantee of
finding the solution.
 The DFS algorithm goes for deep-down searching, and sometimes it may enter an infinite loop.
Difference between BFS and DFS:
8. DEPTH-LIMITED SEARCH ALGORITHM
 A depth-limited search algorithm is similar to depth-first search with a predetermined limit.
 Depth-limited search can solve the drawback of the infinite path in depth-first search.
 In this algorithm, the node at the depth limit is treated as if it has no further successor nodes. A
recursive sketch is given after the failure conditions below.
Depth-limited search can be terminated with two conditions of failure:
o Standard failure value: It indicates that the problem does not have any solution.
o Cutoff failure value: It indicates no solution for the problem within the given depth limit.
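A recursive Python sketch distinguishing the two failure values above; the Problem interface is the
earlier illustrative one:

# Recursive depth-limited search sketch distinguishing the two failure
# values: 'cutoff' (depth limit hit) vs. None (standard failure).
def depth_limited_search(problem, state, limit):
    if problem.goal_test(state):
        return [state]                 # found: return the path of states
    if limit == 0:
        return 'cutoff'                # node at the depth limit: no successors
    cutoff_occurred = False
    for action in problem.actions(state):
        result = depth_limited_search(problem, problem.result(state, action),
                                      limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return [state] + result
    return 'cutoff' if cutoff_occurred else None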
Advantages:
Depth-limited search is memory efficient.
Disadvantages:
o Depth-limited search also has a disadvantage of incompleteness.
o It may not be optimal if the problem has more than one solution.
9. UNIFORM-COST SEARCH ALGORITHM:
 Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph.
This algorithm comes into play when a different cost is available for each edge.
 The primary goal of uniform-cost search is to find a path to the goal node which has the
lowest cumulative cost.
 Uniform-cost search expands nodes according to their path cost from the root node. It can
be used to solve any graph/tree where the optimal cost is in demand.
 The uniform-cost search algorithm is implemented using a priority queue.
 It gives maximum priority to the lowest cumulative cost. Uniform-cost search is equivalent
to the BFS algorithm if the path cost of all edges is the same.
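A minimal uniform-cost search sketch in Python using a priority queue keyed on cumulative path
cost; the weighted graph is hypothetical example data:

import heapq

# Uniform-cost search sketch: a priority queue ordered by cumulative
# path cost g(n). The weighted graph is hypothetical example data.
def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]          # (g, state, path)
    best_g = {start: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)   # lowest cumulative cost first
        if state == goal:
            return g, path
        for nbr, cost in graph.get(state, []):
            new_g = g + cost
            if new_g < best_g.get(nbr, float('inf')):
                best_g[nbr] = new_g
                heapq.heappush(frontier, (new_g, nbr, path + [nbr]))
    return None

graph = {'S': [('A', 1), ('B', 5)], 'A': [('B', 2)], 'B': [('G', 1)], 'G': []}
print(uniform_cost_search(graph, 'S', 'G'))    # -> (4, ['S', 'A', 'B', 'G'])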
Advantages:
Uniform cost search is optimal because at every state the path with the least cost is chosen.
Disadvantages:
It does not care about the number of steps involved in searching and is concerned only with
path cost, due to which this algorithm may get stuck in an infinite loop.
10. ITERATIVE DEEPENING DEPTH-FIRST SEARCH:
 The iterative deepening algorithm is a combination of DFS and BFS algorithms. This search
algorithm finds out the best depth limit and does it by gradually increasing the limit until a goal
is found.
 This algorithm performs depth-first search up to a certain "depth limit", and it keeps increasing
the depth limit after each iteration until the goal node is found.
 This Search algorithm combines the benefits of Breadth-first search's fast search and depth-
first search's memory efficiency.
 The iterative deepening search algorithm is a useful uninformed search when the search space is
large and the depth of the goal node is unknown.
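A compact IDDFS sketch in Python: a depth-limited DFS called repeatedly with an increasing limit
(the tree used here is hypothetical example data):

# Iterative deepening DFS sketch: repeated depth-limited DFS with an
# increasing limit (hypothetical tree as an adjacency list).
def iddfs(tree, root, goal, max_depth=10):
    def dls(node, limit):
        if node == goal:
            return [node]
        if limit == 0:
            return None
        for child in tree.get(node, []):
            found = dls(child, limit - 1)
            if found:
                return [node] + found
        return None

    for limit in range(max_depth + 1):     # gradually increase the depth limit
        result = dls(root, limit)
        if result:
            return result
    return None

tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G'], 'D': ['H', 'I']}
print(iddfs(tree, 'A', 'I'))               # -> ['A', 'B', 'D', 'I']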
Example: The following tree structure shows iterative deepening depth-first search. The IDDFS
algorithm performs iterations until it finds the goal node. The iterations performed by the
algorithm are given as:
Solution:
1'st Iteration-----> A
2'nd Iteration----> A, B, C
3'rd Iteration------>A, B, D, E, C, F, G
4'th Iteration------>A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm will find the goal node.
Advantages:
o It combines the benefits of BFS and DFS search algorithm in terms of fast search and memory
efficiency.
Disadvantages:
o The main drawback of IDDFS is that it repeats all the work of the previous phase.
11. BIDIRECTIONAL SEARCH ALGORITHM:
 Before moving into bidirectional search, let's first understand a few terms.
 Forward Search: searching forward from the start state toward the goal.
 Backward Search: searching backwards from the goal state toward the start.
 Bidirectional search replaces one single search graph with two small subgraphs, in which one
starts the search from the initial vertex and the other starts from the goal vertex.
 The search stops when these two graphs intersect each other.
 Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
Example: In the below search tree, bidirectional search algorithm is applied. This algorithm divides
one graph/tree into two sub-graphs. It starts traversing from node 1 in the forward direction and starts
from goal node 16 in the backward direction. The algorithm terminates at node 9 where two searches
meet.
Advantages:
o Bidirectional search is fast.
o Bidirectional search requires less memory
Disadvantages:
o Implementation of the bidirectional search tree is difficult.
o In bidirectional search, one should know the goal state in advance.
12.INFORMED SEARCH ALGORITHM:
• An informed search algorithm uses knowledge such as how far we are from the
goal, the path cost, how to reach the goal node, etc. This knowledge helps agents explore less of
the search space and find the goal node more efficiently.
Example Tree: node with information (weight)
Heuristics function:
• The informed search algorithm is more useful for large search spaces. An informed search
algorithm uses the idea of a heuristic, so it is also called heuristic search.
• Heuristic function: A heuristic is a function which is used in informed search to find the
most promising path.
• It takes the current state of the agent as its input and produces an estimate of how close the
agent is to the goal.
• The heuristic method might not always give the best solution, but it is guaranteed to
find a good solution in reasonable time. The heuristic function estimates how close a state is to
the goal.
• It is represented by h(n), and it estimates the cost of an optimal path from a state to the goal.
The value of the heuristic function is always positive.
For an admissible heuristic,
h(n) <= h*(n)
Here,
h(n) is the heuristic (estimated) cost, and
h*(n) is the actual cost of the optimal path from n to the goal.
Hence the heuristic cost should be less than or equal to the actual cost.
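For example, in grid path-finding with 4-way moves of unit cost, the Manhattan distance is an
admissible heuristic, since it never overestimates the true cost. A tiny sketch (the positions are
hypothetical (x, y) tuples):

# Manhattan distance: an admissible heuristic h(n) <= h*(n) for a grid
# with 4-way unit-cost moves (positions are hypothetical (x, y) tuples).
def manhattan(state, goal):
    return abs(state[0] - goal[0]) + abs(state[1] - goal[1])

print(manhattan((1, 1), (4, 5)))   # -> 7; the true cost is at least 7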
Pure Heuristic Search:
 Pure heuristic search is the simplest form of heuristic search algorithms. It expands nodes
based on their heuristic value h(n).
 It maintains two lists, OPEN and CLOSED list.
 In the CLOSED list, it places those nodes which have already been expanded, and in the OPEN list,
it places nodes which have not yet been expanded.
 On each iteration, the node n with the lowest heuristic value is expanded, generating all its
successors, and n is placed in the closed list. The algorithm continues until a goal state is found.
In the informed search we will discuss two main algorithms which are given below:
o Best First Search Algorithm(Greedy search)
o A* Search Algorithm
13.BEST FIRST SEARCH ALGORITHM (GREEDY SEARCH):
• Best-first search uses the concept of a priority queue and heuristic search. To search the graph
space, the best-first search method uses two lists for tracking the traversal:
• an OPEN list that keeps track of the current 'immediate' nodes available for traversal, and a
• CLOSED list that keeps track of the nodes already traversed.
• In the best-first search algorithm, we expand the node which is closest to the goal node, where
the closeness is estimated by a heuristic function, i.e., the evaluation function is
f(n) = h(n)
where h(n) is the estimated cost from node n to the goal.
Algorithm:
1. Create 2 empty lists: OPEN and CLOSED
2. Start from the initial node (say N) and put it in the ‘ordered’ OPEN list
3. Repeat the next steps until the GOAL node is reached
 If the OPEN list is empty, then EXIT the loop returning ‘False’
 Select the first/top node (say N) in the OPEN list and move it to the CLOSED list.
Also, capture the information of the parent node
 If N is a GOAL node, then move the node to the closed list and exit the loop returning
‘True’. The solution can be found by backtracking the path
 If N is not the GOAL node, expand node N to generate the ‘immediate’ next nodes
linked to node N and add all those to the OPEN list
 Reorder the nodes in the OPEN list in ascending order according to an evaluation
function f (n).
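The algorithm above can be sketched in Python with a priority queue ordered by h(n) alone; the
graph and heuristic values below are hypothetical, chosen to reproduce the S→B→F→G path of
Example 1 that follows:

import heapq

# Greedy best-first search sketch: the OPEN list is a priority queue
# ordered by h(n) alone. Graph and heuristic values are hypothetical.
def greedy_best_first(graph, h, start, goal):
    open_list = [(h[start], start, [start])]
    closed = set()
    while open_list:
        _, node, path = heapq.heappop(open_list)   # node with lowest h(n)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)                            # move N to CLOSED
        for nbr in graph.get(node, []):
            if nbr not in closed:
                heapq.heappush(open_list, (h[nbr], nbr, path + [nbr]))
    return None                                     # OPEN empty: failure

graph = {'S': ['A', 'B'], 'B': ['E', 'F'], 'F': ['I', 'G'],
         'A': [], 'E': [], 'I': [], 'G': []}
h = {'S': 13, 'A': 12, 'B': 4, 'E': 8, 'F': 2, 'I': 9, 'G': 0}
print(greedy_best_first(graph, h, 'S', 'G'))        # -> ['S', 'B', 'F', 'G']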
Example:1
Solutions:
Expand the nodes of S and put in the CLOSED list
• Initialization: Open [A, B], Closed [S]
• Iteration 1: Open [A], Closed [S, B]
• Iteration 2: Open [E, F, A], Closed [S, B]
: Open [E, A], Closed [S, B, F]
• Iteration 3: Open [I, G, E, A], Closed [S, B, F]
: Open [I, E, A], Closed [S, B, F, G]
Hence the final solution path will be:
S----> B----->F----> G
Example 2:
Example 3:
Advantages:
– Best first search can switch between BFS and DFS by gaining the advantages of both
the algorithms.
– This algorithm is more efficient than BFS and DFS algorithms.
Disadvantages:
– It can behave as an unguided depth-first search in the worst-case scenario.
– It can get stuck in a loop, like DFS.
– This algorithm is not optimal.
14.A* SEARCHING ALGORITHM:
• The A* algorithm is one of the best and most popular techniques used for path finding and graph
traversals.
• A lot of games and web-based maps use this algorithm for finding the shortest path efficiently.
• It is essentially a best-first search algorithm.
• It is an informed search technique, also called heuristic search; the algorithm works
using heuristic values.
Working of A* Search algorithm:
A* Algorithm works as-
• It maintains a tree of paths originating at the start node.
• It extends those paths one edge at a time.
• It continues until its termination criterion is satisfied.
• The A* algorithm extends the path that minimizes the following evaluation function:
f(n) = g(n) + h(n)
Here,
• ‘n’ is the last node on the path
• g(n) is the cost of the path from start node to node ‘n’
• h(n) is a heuristic function that estimates cost of the cheapest path from node ‘n’
to the goal node
Algorithm:
• The implementation of A* Algorithm involves maintaining two lists- OPEN and CLOSED.
• OPEN contains those nodes that have been evaluated by the heuristic function but have not
been expanded into successors yet.
• CLOSED contains those nodes that have already been visited.
The algorithm is as follows-
Step-01:
• Define a list OPEN.
• Initially, OPEN consists solely of a single node, the start node S.
Step-02: If the OPEN list is empty, return failure and exit.
Step-03: Remove node n with the smallest value of f(n) from OPEN and move it to list
CLOSED. If node n is a goal state, return success and exit.
Step-04: Expand node n.
Step-05: If any successor to n is the goal node, return success and the solution by tracing the
path from goal node to S. Otherwise, go to Step-06.
Step-06: For each successor node, apply the evaluation function f to the node. If the node has
not been in either list, add it to OPEN.
Step-07: Go back to Step-02.
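A compact Python sketch of the algorithm, with OPEN as a priority queue ordered by
f(n) = g(n) + h(n); the edge costs and heuristic values are read off the worked Example 1 that
follows, so treat them as illustrative data:

import heapq

# A* search sketch: OPEN is a priority queue ordered by f(n) = g(n) + h(n).
# The weighted graph and heuristic values mirror the worked example below.
def a_star(graph, h, start, goal):
    open_list = [(h[start], 0, start, [start])]     # (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list) # smallest f(n) first
        if node == goal:
            return g, path
        for nbr, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(nbr, float('inf')):
                best_g[nbr] = new_g
                heapq.heappush(open_list,
                               (new_g + h[nbr], new_g, nbr, path + [nbr]))
    return None                                     # OPEN empty: failure

graph = {'A': [('B', 6), ('F', 3)], 'F': [('G', 1), ('H', 7)], 'G': [('I', 3)],
         'I': [('E', 5), ('H', 2), ('J', 3)], 'B': [], 'E': [], 'H': [], 'J': []}
h = {'A': 10, 'B': 8, 'F': 6, 'G': 5, 'H': 3, 'E': 3, 'I': 1, 'J': 0}
print(a_star(graph, h, 'A', 'J'))                   # -> (10, ['A', 'F', 'G', 'I', 'J'])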
Example: 1
Consider the following graph,
• The numbers written on edges represent the distance between the nodes.
• The numbers written on nodes represent the heuristic value.
• Find the most cost-effective path to reach from start state A to final state J using A* Algorithm.
Step-01:
• We start with node A.
• Node B and Node F can be reached from node A.
A* Algorithm calculates f (B) and f (F).
• f(B) = 6 + 8 = 14
• f(F) = 3 + 6 = 9
Since f (F) < f (B), so it decides to go to node F.
Path- A → F
Step-02:
• Node G and Node H can be reached from node F.
A* Algorithm calculates f (G) and f (H).
• f(G) = (3+1) + 5 = 9
• f(H) = (3+7) + 3 = 13
Since f (G) < f (H), so it decides to go to node G.
Path- A → F → G
Step-03:
• Node I can be reached from node G.
• A* Algorithm calculates f (I).
f (I) = (3+1+3) + 1 = 8
• It decides to go to node I.
Path- A → F → G → I
Step-04:
• Node E, Node H and Node J can be reached from node I.
• A* Algorithm calculates f (E), f (H) and f (J).
• f(E) = (3+1+3+5) + 3 = 15
• f(H) = (3+1+3+2) + 3 = 12
• f(J) = (3+1+3+3) + 0 = 10
• Since f (J) is least, so it decides to go to node J.
Path- A → F → G → I → J
• This is the required shortest path from node A to node J.
Solution:
Example: 2
Example 3:
Example: 4: 8 Puzzle problem using A* searching algorithm
Given an initial state of an 8-puzzle problem and final state to be reached-
Find the most cost-effective path to reach the final state from initial state using A* Algorithm.
Consider,
G (n) = Depth of node
H (n) = Number of misplaced tiles.
Solution:
 A* Algorithm maintains a tree of paths originating at the initial state.
 It extends those paths one edge at a time.
 It continues until final state is reached.
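A sketch of the misplaced-tiles heuristic H(n) in Python; the start and goal boards are typical
example data, since the original figure with the actual states is not reproduced here:

# h(n) for the 8-puzzle: the number of misplaced tiles (the blank, 0,
# is not counted). States are flat tuples; the boards are example data.
def misplaced_tiles(state, goal):
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

start = (2, 8, 3,
         1, 6, 4,
         7, 0, 5)
goal  = (1, 2, 3,
         8, 0, 4,
         7, 6, 5)
print(misplaced_tiles(start, goal))   # -> 4 misplaced tiles, so h(start) = 4

With G(n) taken as the depth of the node, A* expands the node with the smallest
f(n) = G(n) + H(n) at each step.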
Advantages of A* Searching
• A* Algorithm is one of the best path finding algorithms.
• It is Complete & Optimal
• Used to solve complex problems.
Disadvantages of A* searching
• Requires more memory
15. BEYOND CLASSICAL SEARCH:
• We have seen methods that systematically explore the search space, possibly using principled
pruning (e.g., A*)
What if we have much larger search spaces?
• Search spaces for some real-world problems may be much larger, e.g., 10^30,000 states, as in
certain reasoning and planning tasks.
• Some of these problems can be solved by Iterative Improvement Methods.
Local search algorithm and optimization problem:
• In many optimization problems the goal state itself is the solution.
• The state space is a set of complete configurations.
• Search is about finding the optimal configuration (as in TSP) or just a feasible configuration (as
in scheduling problems).
• In such cases, one can use iterative improvement, or local search, methods.
• An evaluation, or objective, function h must be available that measures the quality of each state.
• Main Idea: Start with a random initial configuration and make small, local changes to it that
improve its quality.
Hill Climbing Algorithm:
• In Hill-Climbing technique, starting at the base of a hill, we walk upwards until we reach the
top of the hill.
• In other words, we start with initial state and we keep improving the solution until it’s optimal.
• It's a variation of a generate-and-test algorithm which discards all states which do not look
promising or seem unlikely to lead us to the goal state.
• To take such decisions, it uses heuristics (an evaluation function) which indicates how close
the current state is to the goal state.
Hill-Climbing = generate-and-test + heuristics
Features of the Hill Climbing Algorithm:
Following are some main features of the hill climbing algorithm:
• Generate and Test variant: Hill climbing is a variant of the generate-and-test method. The
generate-and-test method produces feedback which helps to decide which direction to move in
the search space.
• Greedy approach: Hill-climbing algorithm search moves in the direction which optimizes the
cost.
• No backtracking: It does not backtrack the search space, as it does not remember the previous
states.
State-space Diagram for Hill Climbing:
 The state-space landscape is a graphical representation of the hill-climbing algorithm, showing
a graph between the various states of the algorithm and the objective function/cost.
 On the Y-axis we take the function, which can be an objective function or a cost function, and
the state space is on the X-axis.
 If the function on the Y-axis is cost, then the goal of the search is to find the global minimum
or a local minimum.
 If the function on the Y-axis is an objective function, then the goal of the search is to find the
global maximum or a local maximum.
• Local Maximum: Local maximum is a state which is better than its neighbor states, but there is also
another state which is higher than it.
• Global Maximum: Global maximum is the best possible state of state space landscape. It has the
highest value of objective function.
• Current state: It is a state in a landscape diagram where an agent is currently present.
• Flat local maximum: It is a flat space in the landscape where all the neighbor states of current states
have the same value.
• Shoulder: It is a plateau region which has an uphill edge.
Types of Hill Climbing Algorithm:
o Simple hill Climbing
o Steepest-Ascent hill-climbing
o Stochastic hill Climbing
1. Simple Hill Climbing:
 Simple hill climbing is the simplest way to implement a hill-climbing algorithm.
 It evaluates one neighbor node state at a time, selects the first one that improves the
current cost, and sets it as the current state.
 It checks only one successor state at a time; if that state is better than the current state, it
moves there, else it stays in the same state.
 This algorithm has the following features:
o Less time consuming
o Less optimal solution and the solution is not guaranteed
Algorithm for Simple Hill Climbing:
o Step 1: Evaluate the initial state; if it is the goal state, then return success and stop.
o Step 2: Loop until a solution is found or there is no new operator left to apply.
o Step 3: Select and apply an operator to the current state.
o Step 4: Check the new state:
a. If it is the goal state, then return success and quit.
b. Else if it is better than the current state, then assign the new state as the current state.
c. Else if it is not better than the current state, then return to step 2.
o Step 5: Exit.
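A runnable sketch of this loop on a one-dimensional objective; the objective function, step size,
and neighbor moves are illustrative assumptions:

# Simple hill climbing sketch on a 1-D objective function. The
# objective and neighbour moves are illustrative assumptions.
def objective(x):
    return -(x - 3) ** 2 + 9          # global maximum at x = 3

def simple_hill_climbing(x, step=0.5, max_iters=1000):
    for _ in range(max_iters):
        # evaluate one neighbour at a time; take the first improvement
        for neighbour in (x + step, x - step):
            if objective(neighbour) > objective(x):
                x = neighbour          # move to the better state
                break
        else:
            return x                   # no operator improves: local maximum
    return x

print(simple_hill_climbing(0.0))       # -> 3.0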
Example:
• Key point while solving any hill-climbing problem is to choose an
appropriate heuristic function.
• Let's define such function h:
• h (x) = +1 for all the blocks in the support structure if the block is correctly
positioned otherwise -1 for all the blocks in the support structure.
Solution:
2. Steepest-Ascent hill climbing:
 The steepest-Ascent algorithm is a variation of simple hill climbing algorithm.
 This algorithm examines all the neighboring nodes of the current state and selects one neighbor
node which is closest to the goal state.
 This algorithm consumes more time as it searches for multiple neighbors
Algorithm for Steepest-Ascent hill climbing:
o Step 1: Evaluate the initial state, if it is goal state then return success and stop, else make
current state as initial state.
o Step 2: Loop until a solution is found or the current state does not change.
a. Let SUCC be a state such that any successor of the current state will be better than it.
b. For each operator that applies to the current state:
a. Apply the new operator and generate a new state.
b. Evaluate the new state.
c. If it is goal state, then return it and quit, else compare it to the SUCC.
d. If it is better than SUCC, then set new state as SUCC.
e. If the SUCC is better than the current state, then set current state to SUCC.
o Step 5: Exit.
3. Stochastic hill climbing:
 Stochastic hill climbing does not examine all of its neighbors before moving.
 Rather, this search algorithm selects one neighbor node at random and decides whether to choose
it as the current state or examine another state.
16. SIMULATED ANNEALING:
 A hill-climbing algorithm which never makes a move towards a lower value is guaranteed to be
incomplete, because it can get stuck on a local maximum.
 If the algorithm instead performs a random walk, moving to a random successor, it may be
complete but is not efficient.
 Simulated annealing is an algorithm which yields both efficiency and completeness.
 In mechanical terms, annealing is the process of heating a metal or glass to a high temperature
and then cooling it gradually, which allows the material to reach a low-energy crystalline state.
 The same idea is used in simulated annealing, in which the algorithm picks a random move
instead of picking the best move.
 If the random move improves the state, then it is always accepted. Otherwise, the algorithm
accepts the (downhill) move with some probability less than 1; that probability decreases as
the move gets worse and as the temperature is lowered.
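A minimal simulated-annealing sketch in Python; the acceptance rule e^(delta/T), the cooling
schedule, and the objective function are illustrative choices, not a fixed recipe:

import math
import random

# Simulated annealing sketch: a random move is always accepted if it
# improves the value; otherwise it is accepted with probability
# e^(delta / T) < 1, which falls as the temperature T cools.
def simulated_annealing(objective, x, temp=1.0, cooling=0.995, steps=5000):
    best = x
    for _ in range(steps):
        candidate = x + random.uniform(-1, 1)      # pick a random move
        delta = objective(candidate) - objective(x)
        if delta > 0 or random.random() < math.exp(delta / temp):
            x = candidate                          # accept (possibly downhill)
        if objective(x) > objective(best):
            best = x
        temp = max(temp * cooling, 1e-6)           # cool gradually
    return best

f = lambda x: -(x - 3) ** 2 + 9                    # illustrative objective
print(round(simulated_annealing(f, 0.0), 1))       # -> approximately 3.0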
Genetic Algorithm:
17. LOCAL SEARCH IN CONTINUOUS SPACE:
 The distinction between discrete and continuous environments pointing out that
most real-world environments are continuous.
 A discrete variable or categorical variable is a type of statistical variable that can
assume only fixed number of distinct values.
 Continuous variable, as the name suggest is a random variable that assumes all the
possible values in a continuum.
 Which leads to a solution state required to reach the goal node.
 But beyond these “classical search algorithms," we have some “local search
algorithms” where the path cost does not matters, and only focus on
solution-state needed to reach the goal node.
o Example: Greedy BFS* Algorithm.
 A local search algorithm completes its task by traversing on a single current node
rather than multiple paths and following the neighbors of that node generally.
o Example: Hill climbing and simulated annealing can handle continuous
state and action spaces, because they have infinite branching factors.
Solution for Continuous Space:
 One way to avoid continuous problems is simply to discretize the neighborhood
of each state.
 Many methods attempt to use the gradient of the landscape to find a maximum.
The gradient of the objective function is a vector ∇f that gives the magnitude and
direction of the steepest slope.
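A one-variable gradient-ascent sketch based on this idea, x ← x + α∇f(x); the objective and the
step size α are illustrative:

# Gradient ascent sketch: follow the gradient of the objective uphill,
# x <- x + alpha * grad_f(x). Function and step size are illustrative.
def grad_f(x):
    return -2 * (x - 3)               # derivative of f(x) = -(x - 3)^2

x, alpha = 0.0, 0.1
for _ in range(100):
    x = x + alpha * grad_f(x)         # small step in the steepest direction
print(round(x, 3))                    # -> 3.0, the maximum of f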
Local search in continuous space:
Does the local search algorithm work for a pure optimization problem?
• Yes, the local search algorithm works for pure optimization problems.
• A pure optimization problem is one where every node can give a solution, but the
target is to find the best state of all according to the objective function.
• However, a pure optimization formulation by itself does not say how to find
high-quality solutions to reach the goal state from the current state.
• Note: An objective function is a function whose value is either minimized or
maximized in different contexts of the optimization problem.
• In the case of search algorithms, an objective function can be the path cost for
reaching the goal node, etc.
Working of a Local search algorithm:
Problems in Hill Climbing Algorithm:
1. Local Maximum: A local maximum is a peak state in the landscape which is better
than each of its neighboring states, but there is another state also present which is higher
than the local maximum.
Solution: The backtracking technique can be a solution to the local maximum in the state-space
landscape. Create a list of promising paths so that the algorithm can backtrack the
search space and explore other paths as well.
2. Plateau: A plateau is a flat area of the search space in which all the neighbor states
of the current state contain the same value; because of this, the algorithm cannot find the
best direction in which to move. A hill-climbing search might get lost in the plateau area.
Solution: The solution to the plateau is to take big steps, or very little steps, while
searching. Randomly select a state which is far away from the
current state, so it is possible that the algorithm will find a non-plateau region.
3. Ridges: A ridge is a special form of the local maximum. It has an area which is
higher than its surrounding areas, but itself has a slope, and cannot be reached in a
single move.
Solution: With the use of bidirectional search, or by moving in different directions, we
can improve this problem.
Conclusion:
• Local search often works well on large problems:
– it does not guarantee optimality, but
– it always has some answer available (the best found so far).
18. SEARCHING WITH NON-DETERMINISTIC ACTIONS:
 In a deterministic, fully observable environment, the agent can calculate exactly which state
results from any sequence of actions and always knows which state it is in.
 When this does not hold, we need:
o searching with non-deterministic actions, and
o searching with partial observations.
 When the environment is nondeterministic, percepts tell the agent which of the
possible outcomes of its actions has actually occurred.
 In a partially observable environment, every percept helps narrow down the set of
possible states the agent might be in, thus making it easier for the agent to achieve its
goals.
Example: Vacuum world, v2.0
• In the erratic vacuum world, the Suck action works as follows:
• When applied to a dirty square the action cleans the square and sometimes
cleans up dirt in an adjacent square, too.
• When applied to a clean square the action sometimes deposits dirt on the
carpet.
• Solutions for nondeterministic problems can contain nested if–then–else statements;
this means that they are trees rather than sequences.
The eight possible states of the vacuum world; states 7 and 8 are goal states.
• Suck(p1, dirty) → (p1, clean), and sometimes (p2, clean) as well
• Suck(p1, clean) → sometimes (p1, dirty)
Solution: contingency plan
• [Suck, if State = 5 then [Right, Suck] else [ ]].
• nested if–then–else statements
AND–OR search trees:
• Non-deterministic action = there may be several possible outcomes.
• The search space is an AND-OR tree, with alternating OR and AND layers.
• Finding a solution = searching this tree using the same methods.
• A solution in a non-deterministic search space is not a simple action sequence.
• Solution = a subtree within the search tree with:
• a goal node at each leaf (the plan covers all contingencies),
• one action at each OR node,
• a branch at each AND node, representing all possible outcomes.
• Execution of a solution essentially follows, at each AND node, the branch that matches
the outcome that actually occurs.
• The first two levels of the search tree for the erratic vacuum world.
• State nodes are OR nodes where some action must be chosen.
• At the AND nodes, shown as circles, every outcome must be handled, as indicated
by the arc linking the outgoing branches.
• The solution found is shown in bold lines.
i. Non-deterministic search trees:
 Start state = 1
 One solution:
o Suck,
o if (state = 5) then [Right, Suck]
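The search just described can be written down directly. The sketch below follows the
AND-OR-GRAPH-SEARCH scheme from Russell and Norvig; the problem object with
actions(s), results(s, a) (returning the set of possible outcome states), and
goal_test(s) is an assumed interface, and the returned plan uses a dict at AND nodes
as the nested if-then-else.

```python
def and_or_search(problem):
    """Return a conditional plan, or None if no plan exists."""
    return or_search(problem.initial, problem, path=[])

def or_search(state, problem, path):
    if problem.goal_test(state):
        return []                        # empty plan: already at a goal
    if state in path:
        return None                      # cycle on this path: fail this branch
    for action in problem.actions(state):
        plan = and_search(problem.results(state, action), problem, [state] + path)
        if plan is not None:
            return [action, plan]        # one action chosen at an OR node
    return None

def and_search(states, problem, path):
    """AND node: every possible outcome state must be handled."""
    subplans = {}
    for s in states:
        plan = or_search(s, problem, path)
        if plan is None:
            return None
        subplans[s] = plan               # reads as "if state == s then subplans[s]"
    return subplans

# For the erratic vacuum world starting in state 1, this returns a plan shaped
# like ['Suck', {5: ['Right', {6: ['Suck', {8: []}]}], 7: []}].
```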
ii. Non-determinism: actions that fail (try, try again):
• Action failure is often a non-deterministic outcome; it creates a cycle in the
search tree.
• If there is no successful solution (plan) without a cycle, the search may return a
solution that contains a cycle, representing retrying the action.
• Could plan execution loop forever? That depends on the environment: is the action
guaranteed to succeed eventually?
• In practice we can limit the number of retries, but then the plan is no longer
complete (it could fail).
• Part of the search graph for the slippery vacuum world, where we have shown (some)
cycles explicitly.
• All solutions for this problem are cyclic plans because there is no way to move reliably.
19. SEARCHING WITH PARTIAL OBSERVATIONS:
 In a partially observable environment, every percept helps narrow down the set of
possible states the agent might be in, thus making it easier for the agent to achieve its
goals.
 The key concept required for solving partially observable problems is the belief state.
o Belief state -representing the agent’s current belief about the possible
physical states.
 Searching with no observations
 Searching with observations
Conformant (sensorless) search: example space:
• Belief-state space for the simple vacuum world.
Observations:
– Only 12 belief states are reachable, versus 2^8 = 256 possible belief states.
– The belief-state space still gets huge very fast → seldom feasible in practice.
– We need sensors! → They reduce the belief-state space greatly.
i. Searching with no observations:
(a) Predicting the next belief state for the sensorless vacuum world with a
deterministic action, Right.
(b) Prediction for the same belief state and action in the slippery version of the
sensorless vacuum world.
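In code, prediction for a sensorless agent is just the image of the belief state under
the action. A minimal sketch, assuming results(s, a) returns the set of physical
states a (possibly nondeterministic) action can produce:

```python
def predict(belief_state, action, results):
    """Next belief state: the union of possible outcomes over all current states."""
    return frozenset(s2 for s in belief_state for s2 in results(s, action))
```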
ii. Searching with observations:
(a) In the deterministic world, Right is applied in the initial belief state, resulting in a
new belief state with two possible physical states; the possible percepts there are
[B, Dirty] and [B, Clean].
(b) In the slippery world, Right is applied in the initial belief state, giving a new belief
state with four physical states; the possible percepts there are [A, Dirty], [B, Dirty],
and [B, Clean].
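With sensing, the predicted belief state is then split according to the percept actually
received. A sketch under the same assumptions, with percept_fn(s) giving the
(deterministic) percept observed in physical state s:

```python
def possible_percepts(predicted, percept_fn):
    """Percepts the agent might receive in the predicted belief state."""
    return {percept_fn(s) for s in predicted}

def update(predicted, percept_fn, observed):
    """Keep only the physical states consistent with the observed percept."""
    return frozenset(s for s in predicted if percept_fn(s) == observed)
```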
20.ONLINE SEARCH AGENTS AND UNKNOWN ENVIRONMENTS:
 An online search problem must be solved by an agent executing actions, rather than by pure
computation.
 We assume a deterministic and fully observable environment but we stipulate that the agent knows
only the following:
o ACTIONS(s), which returns a list of actions allowed in state s;
o The step-cost function c(s, a, s’)—note that this cannot be used until the agent knows
that s’ is the outcome; and
o GOAL-TEST(s).
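These three pieces of knowledge map naturally onto a small interface. A hedged
sketch (class and method names are illustrative):

```python
class OnlineSearchProblem:
    """What an online agent is allowed to know in advance: legal actions and a
    goal test. Step costs become known only after a transition is experienced."""

    def actions(self, s):
        """Return the list of actions allowed in state s."""
        raise NotImplementedError

    def step_cost(self, s, a, s2):
        """Cost c(s, a, s2); usable only once the outcome s2 has been observed."""
        raise NotImplementedError

    def goal_test(self, s):
        """Return True if s is a goal state."""
        raise NotImplementedError
```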
Classical search, by contrast, is an "offline" search problem:
 It works "offline" → it searches to compute a whole plan before ever acting.
 Even with percepts, the search space gets huge fast in the real world:
 lots of possible actions and lots of possible percepts, plus non-determinism.
Online search
 Idea: search as you go, interleaving search and action.
 Benefit: actual percepts prune a huge subtree of the search space at each move.
 Cost: planning ahead less means problems may not be foreseen.
 Best case: wasted effort; reverse actions and re-plan.
 Worst case: actions are not reversible, and the agent is stuck!
Online search is the only possible method in some worlds:
 The agent doesn't know what states exist (an exploration problem).
 The agent doesn't know what effect its actions have (discovery learning).
 Possibly: do online search for a while,
o until the agent learns enough to do more predictive search.
The nature of active online search:
Executing an online search is an algorithm for planning and acting at the same time,
 very different from an offline search algorithm!
 Offline: search virtually for a plan in a constructed search space.
 Any search algorithm can be used, e.g., A* with a strong h(n).
 A* can expand any node it wants on the frontier (it can jump around).
 Online: the agent literally is in some place!
 The agent is at one node (state) on the frontier of the search tree.
 It can't just jump around to other states; it must plan from its current state.
 (Modified) depth-first algorithms are therefore ideal candidates!
 Heuristic functions remain critical!
 h(n) tells depth-first search which of the successors to explore first.
 Admissibility remains relevant too: we want to explore likely optimal paths first.
 A real agent gets real results: at some point it finds the goal.
 We can then compare the actual path cost to the cost of the optimal path (what the
agent would have incurred had it known the search space in advance).
 Competitive ratio: actual path cost / optimal path cost. Lower is better.
 This comparison could also be the basis for developing (learning!) an improved
h(n) over time.
Online local search for agents:
• Hill climbing is already an online search algorithm, but it stops at a local optimum.
How about randomization?
• We cannot do a random restart (you can't teleport a robot).
• How about a random walk instead of hill climbing?
• That can be very bad (in the example, two ways back for every way forward).
• Let's augment hill climbing with memory instead:
• Learning real-time A* (LRTA*)
• updates the cost-to-goal estimates, h(s), for the state it leaves;
• prefers unexplored states:
• f(s) = h(s), not g(s) + h(s), for unexplored states (optimism under uncertainty).
LRTA* example: (figure omitted) the agent is in the shaded state and updates the
h-estimate of the state it leaves at each move.
LRTA* algorithm:
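The algorithm appears as a figure in the original slides. The following is a hedged
Python sketch of the LRTA*-AGENT scheme from Russell and Norvig; the problem
interface (as sketched above) and the callable-agent style are assumptions, not code
from the source.

```python
class LRTAStarAgent:
    """LRTA* agent sketch: acts in the world, updating the h-estimate of each
    state it leaves. Called once per percept; returns the next action."""

    def __init__(self, problem, h):
        self.problem = problem
        self.h = h           # initial heuristic estimate of cost to the goal
        self.H = {}          # learned cost-to-goal estimates, per state
        self.result = {}     # result[(s, a)] -> successor actually observed
        self.s = None        # previous state
        self.a = None        # previous action

    def lrta_cost(self, s, a, s2):
        """Optimism under uncertainty: an untried action is assumed to lead
        straight to the goal with cost h(s)."""
        if s2 is None:
            return self.h(s)
        return self.problem.step_cost(s, a, s2) + self.H.get(s2, self.h(s2))

    def __call__(self, s_prime):  # s_prime: the state the agent perceives now
        if self.problem.goal_test(s_prime):
            return None           # stop: goal reached
        self.H.setdefault(s_prime, self.h(s_prime))
        if self.s is not None:
            self.result[(self.s, self.a)] = s_prime
            # Update the estimate for the state we just left.
            self.H[self.s] = min(
                self.lrta_cost(self.s, a, self.result.get((self.s, a)))
                for a in self.problem.actions(self.s)
            )
        # Choose the apparently best action from the current state.
        self.a = min(
            self.problem.actions(s_prime),
            key=lambda a: self.lrta_cost(s_prime, a, self.result.get((s_prime, a))),
        )
        self.s = s_prime
        return self.a
```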
Contenu connexe

Tendances

Stuart russell and peter norvig artificial intelligence - a modern approach...
Stuart russell and peter norvig   artificial intelligence - a modern approach...Stuart russell and peter norvig   artificial intelligence - a modern approach...
Stuart russell and peter norvig artificial intelligence - a modern approach...Lê Anh Đạt
 
I.BEST FIRST SEARCH IN AI
I.BEST FIRST SEARCH IN AII.BEST FIRST SEARCH IN AI
I.BEST FIRST SEARCH IN AIvikas dhakane
 
Knowledge Representation in Artificial intelligence
Knowledge Representation in Artificial intelligence Knowledge Representation in Artificial intelligence
Knowledge Representation in Artificial intelligence Yasir Khan
 
Propositional logic
Propositional logicPropositional logic
Propositional logicRushdi Shams
 
Heuristic Search Techniques Unit -II.ppt
Heuristic Search Techniques Unit -II.pptHeuristic Search Techniques Unit -II.ppt
Heuristic Search Techniques Unit -II.pptkarthikaparthasarath
 
Heuristic search-in-artificial-intelligence
Heuristic search-in-artificial-intelligenceHeuristic search-in-artificial-intelligence
Heuristic search-in-artificial-intelligencegrinu
 
Adversarial search
Adversarial searchAdversarial search
Adversarial searchNilu Desai
 
Introduction Artificial Intelligence a modern approach by Russel and Norvig 1
Introduction Artificial Intelligence a modern approach by Russel and Norvig 1Introduction Artificial Intelligence a modern approach by Russel and Norvig 1
Introduction Artificial Intelligence a modern approach by Russel and Norvig 1Garry D. Lasaga
 
Informed and Uninformed search Strategies
Informed and Uninformed search StrategiesInformed and Uninformed search Strategies
Informed and Uninformed search StrategiesAmey Kerkar
 
Lecture 14 Heuristic Search-A star algorithm
Lecture 14 Heuristic Search-A star algorithmLecture 14 Heuristic Search-A star algorithm
Lecture 14 Heuristic Search-A star algorithmHema Kashyap
 
Ai 03 solving_problems_by_searching
Ai 03 solving_problems_by_searchingAi 03 solving_problems_by_searching
Ai 03 solving_problems_by_searchingMohammed Romi
 
Hill climbing algorithm in artificial intelligence
Hill climbing algorithm in artificial intelligenceHill climbing algorithm in artificial intelligence
Hill climbing algorithm in artificial intelligencesandeep54552
 
Artificial intelligence- Logic Agents
Artificial intelligence- Logic AgentsArtificial intelligence- Logic Agents
Artificial intelligence- Logic AgentsNuruzzaman Milon
 
AI_Session 9 Hill climbing algorithm.pptx
AI_Session 9 Hill climbing algorithm.pptxAI_Session 9 Hill climbing algorithm.pptx
AI_Session 9 Hill climbing algorithm.pptxAsst.prof M.Gokilavani
 
Forward and Backward chaining in AI
Forward and Backward chaining in AIForward and Backward chaining in AI
Forward and Backward chaining in AIMegha Sharma
 

Tendances (20)

Stuart russell and peter norvig artificial intelligence - a modern approach...
Stuart russell and peter norvig   artificial intelligence - a modern approach...Stuart russell and peter norvig   artificial intelligence - a modern approach...
Stuart russell and peter norvig artificial intelligence - a modern approach...
 
AI Lecture 4 (informed search and exploration)
AI Lecture 4 (informed search and exploration)AI Lecture 4 (informed search and exploration)
AI Lecture 4 (informed search and exploration)
 
I.BEST FIRST SEARCH IN AI
I.BEST FIRST SEARCH IN AII.BEST FIRST SEARCH IN AI
I.BEST FIRST SEARCH IN AI
 
Knowledge Representation in Artificial intelligence
Knowledge Representation in Artificial intelligence Knowledge Representation in Artificial intelligence
Knowledge Representation in Artificial intelligence
 
Propositional logic
Propositional logicPropositional logic
Propositional logic
 
Heuristic Search Techniques Unit -II.ppt
Heuristic Search Techniques Unit -II.pptHeuristic Search Techniques Unit -II.ppt
Heuristic Search Techniques Unit -II.ppt
 
Heuristic search-in-artificial-intelligence
Heuristic search-in-artificial-intelligenceHeuristic search-in-artificial-intelligence
Heuristic search-in-artificial-intelligence
 
Adversarial search
Adversarial searchAdversarial search
Adversarial search
 
Introduction Artificial Intelligence a modern approach by Russel and Norvig 1
Introduction Artificial Intelligence a modern approach by Russel and Norvig 1Introduction Artificial Intelligence a modern approach by Russel and Norvig 1
Introduction Artificial Intelligence a modern approach by Russel and Norvig 1
 
Informed and Uninformed search Strategies
Informed and Uninformed search StrategiesInformed and Uninformed search Strategies
Informed and Uninformed search Strategies
 
Informed search
Informed searchInformed search
Informed search
 
Lecture 14 Heuristic Search-A star algorithm
Lecture 14 Heuristic Search-A star algorithmLecture 14 Heuristic Search-A star algorithm
Lecture 14 Heuristic Search-A star algorithm
 
AI: AI & Problem Solving
AI: AI & Problem SolvingAI: AI & Problem Solving
AI: AI & Problem Solving
 
Ai 03 solving_problems_by_searching
Ai 03 solving_problems_by_searchingAi 03 solving_problems_by_searching
Ai 03 solving_problems_by_searching
 
Hill climbing algorithm in artificial intelligence
Hill climbing algorithm in artificial intelligenceHill climbing algorithm in artificial intelligence
Hill climbing algorithm in artificial intelligence
 
Artificial intelligence- Logic Agents
Artificial intelligence- Logic AgentsArtificial intelligence- Logic Agents
Artificial intelligence- Logic Agents
 
AI 3 | Uninformed Search
AI 3 | Uninformed SearchAI 3 | Uninformed Search
AI 3 | Uninformed Search
 
A* Search Algorithm
A* Search AlgorithmA* Search Algorithm
A* Search Algorithm
 
AI_Session 9 Hill climbing algorithm.pptx
AI_Session 9 Hill climbing algorithm.pptxAI_Session 9 Hill climbing algorithm.pptx
AI_Session 9 Hill climbing algorithm.pptx
 
Forward and Backward chaining in AI
Forward and Backward chaining in AIForward and Backward chaining in AI
Forward and Backward chaining in AI
 

Similaire à AI_Unit I notes .pdf

Similaire à AI_Unit I notes .pdf (20)

1.introduction to ai
1.introduction to ai1.introduction to ai
1.introduction to ai
 
EELU AI lecture 1- fall 2022-2023 - Chapter 01- Introduction.ppt
EELU AI  lecture 1- fall 2022-2023 - Chapter 01- Introduction.pptEELU AI  lecture 1- fall 2022-2023 - Chapter 01- Introduction.ppt
EELU AI lecture 1- fall 2022-2023 - Chapter 01- Introduction.ppt
 
LEC_2_AI_INTRODUCTION - Copy.pptx
LEC_2_AI_INTRODUCTION - Copy.pptxLEC_2_AI_INTRODUCTION - Copy.pptx
LEC_2_AI_INTRODUCTION - Copy.pptx
 
lecture1423723637.pdf
lecture1423723637.pdflecture1423723637.pdf
lecture1423723637.pdf
 
AI Mod1@AzDOCUMENTS.in.pdf
AI Mod1@AzDOCUMENTS.in.pdfAI Mod1@AzDOCUMENTS.in.pdf
AI Mod1@AzDOCUMENTS.in.pdf
 
AI Lesson 01
AI Lesson 01AI Lesson 01
AI Lesson 01
 
1.Introduction.ppt
1.Introduction.ppt1.Introduction.ppt
1.Introduction.ppt
 
AI_01_introduction.pptx
AI_01_introduction.pptxAI_01_introduction.pptx
AI_01_introduction.pptx
 
Artificial intelligence
Artificial intelligenceArtificial intelligence
Artificial intelligence
 
Understanding ai
Understanding aiUnderstanding ai
Understanding ai
 
Understanding ai
Understanding aiUnderstanding ai
Understanding ai
 
Ai introduction
Ai  introductionAi  introduction
Ai introduction
 
AI3391 ARTIFICIAL INTELLIGENCE Unit I notes.pdf
AI3391 ARTIFICIAL INTELLIGENCE Unit I notes.pdfAI3391 ARTIFICIAL INTELLIGENCE Unit I notes.pdf
AI3391 ARTIFICIAL INTELLIGENCE Unit I notes.pdf
 
Lecture 1
Lecture 1Lecture 1
Lecture 1
 
Ai notes
Ai notesAi notes
Ai notes
 
AI CHAPTER 1.pdf
AI CHAPTER 1.pdfAI CHAPTER 1.pdf
AI CHAPTER 1.pdf
 
Cosc 208 lecture note-1
Cosc 208 lecture note-1Cosc 208 lecture note-1
Cosc 208 lecture note-1
 
Unit 1
Unit 1Unit 1
Unit 1
 
01 introduction
01 introduction01 introduction
01 introduction
 
Ai
AiAi
Ai
 

Plus de Asst.prof M.Gokilavani

CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordCCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordAsst.prof M.Gokilavani
 
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdfCCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdfAsst.prof M.Gokilavani
 
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfCCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfAsst.prof M.Gokilavani
 
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdfCCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdfAsst.prof M.Gokilavani
 
IT8073_Information Security_UNIT I _.pdf
IT8073_Information Security_UNIT I _.pdfIT8073_Information Security_UNIT I _.pdf
IT8073_Information Security_UNIT I _.pdfAsst.prof M.Gokilavani
 
IT8073 _Information Security _UNIT I Full notes
IT8073 _Information Security _UNIT I Full notesIT8073 _Information Security _UNIT I Full notes
IT8073 _Information Security _UNIT I Full notesAsst.prof M.Gokilavani
 
GE3151 PSPP UNIT IV QUESTION BANK.docx.pdf
GE3151 PSPP UNIT IV QUESTION BANK.docx.pdfGE3151 PSPP UNIT IV QUESTION BANK.docx.pdf
GE3151 PSPP UNIT IV QUESTION BANK.docx.pdfAsst.prof M.Gokilavani
 
GE3151 PSPP UNIT III QUESTION BANK.docx.pdf
GE3151 PSPP UNIT III QUESTION BANK.docx.pdfGE3151 PSPP UNIT III QUESTION BANK.docx.pdf
GE3151 PSPP UNIT III QUESTION BANK.docx.pdfAsst.prof M.Gokilavani
 
GE3151 PSPP All unit question bank.pdf
GE3151 PSPP All unit question bank.pdfGE3151 PSPP All unit question bank.pdf
GE3151 PSPP All unit question bank.pdfAsst.prof M.Gokilavani
 
AI3391 Artificial intelligence Unit IV Notes _ merged.pdf
AI3391 Artificial intelligence Unit IV Notes _ merged.pdfAI3391 Artificial intelligence Unit IV Notes _ merged.pdf
AI3391 Artificial intelligence Unit IV Notes _ merged.pdfAsst.prof M.Gokilavani
 
AI3391 Artificial intelligence Session 29 Forward and backward chaining.pdf
AI3391 Artificial intelligence Session 29 Forward and backward chaining.pdfAI3391 Artificial intelligence Session 29 Forward and backward chaining.pdf
AI3391 Artificial intelligence Session 29 Forward and backward chaining.pdfAsst.prof M.Gokilavani
 
AI3391 Artificial intelligence Session 28 Resolution.pptx
AI3391 Artificial intelligence Session 28 Resolution.pptxAI3391 Artificial intelligence Session 28 Resolution.pptx
AI3391 Artificial intelligence Session 28 Resolution.pptxAsst.prof M.Gokilavani
 
AI3391 Artificial intelligence session 27 inference and unification.pptx
AI3391 Artificial intelligence session 27 inference and unification.pptxAI3391 Artificial intelligence session 27 inference and unification.pptx
AI3391 Artificial intelligence session 27 inference and unification.pptxAsst.prof M.Gokilavani
 
AI3391 Artificial Intelligence Session 26 First order logic.pptx
AI3391 Artificial Intelligence Session 26 First order logic.pptxAI3391 Artificial Intelligence Session 26 First order logic.pptx
AI3391 Artificial Intelligence Session 26 First order logic.pptxAsst.prof M.Gokilavani
 

Plus de Asst.prof M.Gokilavani (20)

CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordCCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
 
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdfCCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
 
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdfCCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf
 
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdfCCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
 
IT8073_Information Security_UNIT I _.pdf
IT8073_Information Security_UNIT I _.pdfIT8073_Information Security_UNIT I _.pdf
IT8073_Information Security_UNIT I _.pdf
 
IT8073 _Information Security _UNIT I Full notes
IT8073 _Information Security _UNIT I Full notesIT8073 _Information Security _UNIT I Full notes
IT8073 _Information Security _UNIT I Full notes
 
GE3151 PSPP UNIT IV QUESTION BANK.docx.pdf
GE3151 PSPP UNIT IV QUESTION BANK.docx.pdfGE3151 PSPP UNIT IV QUESTION BANK.docx.pdf
GE3151 PSPP UNIT IV QUESTION BANK.docx.pdf
 
GE3151 PSPP UNIT III QUESTION BANK.docx.pdf
GE3151 PSPP UNIT III QUESTION BANK.docx.pdfGE3151 PSPP UNIT III QUESTION BANK.docx.pdf
GE3151 PSPP UNIT III QUESTION BANK.docx.pdf
 
GE3151 UNIT II Study material .pdf
GE3151 UNIT II Study material .pdfGE3151 UNIT II Study material .pdf
GE3151 UNIT II Study material .pdf
 
GE3151 PSPP All unit question bank.pdf
GE3151 PSPP All unit question bank.pdfGE3151 PSPP All unit question bank.pdf
GE3151 PSPP All unit question bank.pdf
 
GE3151_PSPP_All unit _Notes
GE3151_PSPP_All unit _NotesGE3151_PSPP_All unit _Notes
GE3151_PSPP_All unit _Notes
 
GE3151_PSPP_UNIT_5_Notes
GE3151_PSPP_UNIT_5_NotesGE3151_PSPP_UNIT_5_Notes
GE3151_PSPP_UNIT_5_Notes
 
GE3151_PSPP_UNIT_4_Notes
GE3151_PSPP_UNIT_4_NotesGE3151_PSPP_UNIT_4_Notes
GE3151_PSPP_UNIT_4_Notes
 
GE3151_PSPP_UNIT_3_Notes
GE3151_PSPP_UNIT_3_NotesGE3151_PSPP_UNIT_3_Notes
GE3151_PSPP_UNIT_3_Notes
 
GE3151_PSPP_UNIT_2_Notes
GE3151_PSPP_UNIT_2_NotesGE3151_PSPP_UNIT_2_Notes
GE3151_PSPP_UNIT_2_Notes
 
AI3391 Artificial intelligence Unit IV Notes _ merged.pdf
AI3391 Artificial intelligence Unit IV Notes _ merged.pdfAI3391 Artificial intelligence Unit IV Notes _ merged.pdf
AI3391 Artificial intelligence Unit IV Notes _ merged.pdf
 
AI3391 Artificial intelligence Session 29 Forward and backward chaining.pdf
AI3391 Artificial intelligence Session 29 Forward and backward chaining.pdfAI3391 Artificial intelligence Session 29 Forward and backward chaining.pdf
AI3391 Artificial intelligence Session 29 Forward and backward chaining.pdf
 
AI3391 Artificial intelligence Session 28 Resolution.pptx
AI3391 Artificial intelligence Session 28 Resolution.pptxAI3391 Artificial intelligence Session 28 Resolution.pptx
AI3391 Artificial intelligence Session 28 Resolution.pptx
 
AI3391 Artificial intelligence session 27 inference and unification.pptx
AI3391 Artificial intelligence session 27 inference and unification.pptxAI3391 Artificial intelligence session 27 inference and unification.pptx
AI3391 Artificial intelligence session 27 inference and unification.pptx
 
AI3391 Artificial Intelligence Session 26 First order logic.pptx
AI3391 Artificial Intelligence Session 26 First order logic.pptxAI3391 Artificial Intelligence Session 26 First order logic.pptx
AI3391 Artificial Intelligence Session 26 First order logic.pptx
 

Dernier

VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Bookingdharasingh5698
 
Intro To Electric Vehicles PDF Notes.pdf
Intro To Electric Vehicles PDF Notes.pdfIntro To Electric Vehicles PDF Notes.pdf
Intro To Electric Vehicles PDF Notes.pdfrs7054576148
 
Call Girls Wakad Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Wakad Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Wakad Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Wakad Call Me 7737669865 Budget Friendly No Advance Bookingroncy bisnoi
 
Unit 2- Effective stress & Permeability.pdf
Unit 2- Effective stress & Permeability.pdfUnit 2- Effective stress & Permeability.pdf
Unit 2- Effective stress & Permeability.pdfRagavanV2
 
Online banking management system project.pdf
Online banking management system project.pdfOnline banking management system project.pdf
Online banking management system project.pdfKamal Acharya
 
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdfONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdfKamal Acharya
 
KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlysanyuktamishra911
 
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...Arindam Chakraborty, Ph.D., P.E. (CA, TX)
 
Call Girls In Bangalore ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Bangalore ☎ 7737669865 🥵 Book Your One night StandCall Girls In Bangalore ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Bangalore ☎ 7737669865 🥵 Book Your One night Standamitlee9823
 
chapter 5.pptx: drainage and irrigation engineering
chapter 5.pptx: drainage and irrigation engineeringchapter 5.pptx: drainage and irrigation engineering
chapter 5.pptx: drainage and irrigation engineeringmulugeta48
 
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...SUHANI PANDEY
 
Booking open Available Pune Call Girls Pargaon 6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Pargaon  6297143586 Call Hot Indian Gi...Booking open Available Pune Call Girls Pargaon  6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Pargaon 6297143586 Call Hot Indian Gi...Call Girls in Nagpur High Profile
 
University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdfKamal Acharya
 
Work-Permit-Receiver-in-Saudi-Aramco.pptx
Work-Permit-Receiver-in-Saudi-Aramco.pptxWork-Permit-Receiver-in-Saudi-Aramco.pptx
Work-Permit-Receiver-in-Saudi-Aramco.pptxJuliansyahHarahap1
 
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXssuser89054b
 

Dernier (20)

NFPA 5000 2024 standard .
NFPA 5000 2024 standard                                  .NFPA 5000 2024 standard                                  .
NFPA 5000 2024 standard .
 
Water Industry Process Automation & Control Monthly - April 2024
Water Industry Process Automation & Control Monthly - April 2024Water Industry Process Automation & Control Monthly - April 2024
Water Industry Process Automation & Control Monthly - April 2024
 
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
 
Intro To Electric Vehicles PDF Notes.pdf
Intro To Electric Vehicles PDF Notes.pdfIntro To Electric Vehicles PDF Notes.pdf
Intro To Electric Vehicles PDF Notes.pdf
 
Call Girls Wakad Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Wakad Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Wakad Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Wakad Call Me 7737669865 Budget Friendly No Advance Booking
 
Call Girls in Netaji Nagar, Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
Call Girls in Netaji Nagar, Delhi 💯 Call Us 🔝9953056974 🔝 Escort ServiceCall Girls in Netaji Nagar, Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
Call Girls in Netaji Nagar, Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
 
Unit 2- Effective stress & Permeability.pdf
Unit 2- Effective stress & Permeability.pdfUnit 2- Effective stress & Permeability.pdf
Unit 2- Effective stress & Permeability.pdf
 
Online banking management system project.pdf
Online banking management system project.pdfOnline banking management system project.pdf
Online banking management system project.pdf
 
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdfONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
 
KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghly
 
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...
Navigating Complexity: The Role of Trusted Partners and VIAS3D in Dassault Sy...
 
Call Girls In Bangalore ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Bangalore ☎ 7737669865 🥵 Book Your One night StandCall Girls In Bangalore ☎ 7737669865 🥵 Book Your One night Stand
Call Girls In Bangalore ☎ 7737669865 🥵 Book Your One night Stand
 
chapter 5.pptx: drainage and irrigation engineering
chapter 5.pptx: drainage and irrigation engineeringchapter 5.pptx: drainage and irrigation engineering
chapter 5.pptx: drainage and irrigation engineering
 
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
 
Booking open Available Pune Call Girls Pargaon 6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Pargaon  6297143586 Call Hot Indian Gi...Booking open Available Pune Call Girls Pargaon  6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Pargaon 6297143586 Call Hot Indian Gi...
 
University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdf
 
Work-Permit-Receiver-in-Saudi-Aramco.pptx
Work-Permit-Receiver-in-Saudi-Aramco.pptxWork-Permit-Receiver-in-Saudi-Aramco.pptx
Work-Permit-Receiver-in-Saudi-Aramco.pptx
 
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
 
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar ≼🔝 Delhi door step de...
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar  ≼🔝 Delhi door step de...Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar  ≼🔝 Delhi door step de...
Call Now ≽ 9953056974 ≼🔝 Call Girls In New Ashok Nagar ≼🔝 Delhi door step de...
 
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
 

AI_Unit I notes .pdf

  • 1. ARTIFICAL INTELLIGENE B.TECH III YEAR – II SEM (R18) (2022- 2023) Prepared By Asst.Prof.M.Gokilavani Department of Computer Science and Engineering (AI & ML)
  • 2. R18 B.Tech. CSE (AIML) III & IV Year JNTU Hyderabad ARTIFICIAL INTELLIGENCE B.Tech. III Year II Sem. L T P C 3 1 0 4 Prerequisites: 1. A course on “Computer Programming and Data Structures” 2. A course on “Advanced Data Structures” 3. A course on “Design and Analysis of Algorithms” 4. A course on “Mathematical Foundations of Computer Science” 5. Some background in linear algebra, data structures and algorithms, and probability will all be helpful Course Objectives:  To learn the distinction between optimal reasoning Vs. human like reasoning  To understand the concepts of state space representation, exhaustive search, heuristic search together with the time and space complexities.  To learn different knowledge representation techniques.  To understand the applications of AI, namely game playing, theorem proving, and machine learning. Course Outcomes:  Ability to formulate an efficient problem space for a problem expressed in natural language.  Select a search algorithm for a problem and estimate its time and space complexities.  Possess the skill for representing knowledge using the appropriate technique for a given problem.  Possess the ability to apply AI techniques to solve problems of game playing, and machine learning. UNIT - I Problem Solving by Search-I: Introduction to AI, Intelligent Agents Problem Solving by Search –II: Problem-Solving Agents, Searching for Solutions, Uninformed Search Strategies: Breadth-first search, Uniform cost search, Depth-first search, Iterative deepening Depth-first search, Bidirectional search, Informed (Heuristic) Search Strategies: Greedy best-first search, A* search, Heuristic Functions, Beyond Classical Search: Hill-climbing search, Simulated annealing search, Local Search in Continuous Spaces, Searching with Non-Deterministic Actions, Searching wih Partial Observations, Online Search Agents and Unknown Environment . UNIT - II Problem Solving by Search-II and Propositional Logic Adversarial Search: Games, Optimal Decisions in Games, Alpha–Beta Pruning, Imperfect Real-Time Decisions. Constraint Satisfaction Problems: Defining Constraint Satisfaction Problems, Constraint Propagation, Backtracking Search for CSPs, Local Search for CSPs, The Structure of Problems. Propositional Logic: Knowledge-Based Agents, The Wumpus World, Logic, Propositional Logic, Propositional Theorem Proving: Inference and proofs, Proof by resolution, Horn clauses and definite clauses, Forward and backward chaining, Effective Propositional Model Checking, Agents Based on Propositional Logic. UNIT - III Logic and Knowledge Representation First-Order Logic: Representation, Syntax and Semantics of First-Order Logic, Using First-Order Logic, Knowledge Engineering in First-Order Logic.
  • 3. R18 B.Tech. CSE (AIML) III & IV Year JNTU Hyderabad Inference in First-Order Logic: Propositional vs. First-Order Inference, Unification and Lifting, Forward Chaining, Backward Chaining, Resolution. Knowledge Representation: Ontological Engineering, Categories and Objects, Events. Mental Events and Mental Objects, Reasoning Systems for Categories, Reasoning with Default Information. UNIT - IV Planning Classical Planning: Definition of Classical Planning, Algorithms for Planning with State-Space Search, Planning Graphs, other Classical Planning Approaches, Analysis of Planning approaches. Planning and Acting in the Real World: Time, Schedules, and Resources, Hierarchical Planning, Planning and Acting in Nondeterministic Domains, Multi agent Planning. UNIT - V Uncertain knowledge and Learning Uncertainty: Acting under Uncertainty, Basic Probability Notation, Inference Using Full Joint Distributions, Independence, Bayes’ Rule and Its Use, Probabilistic Reasoning: Representing Knowledge in an Uncertain Domain, The Semantics of Bayesian Networks, Efficient Representation of Conditional Distributions, Approximate Inference in Bayesian Networks, Relational and First-Order Probability, Other Approaches to Uncertain Reasoning; Dempster-Shafer theory. Learning: Forms of Learning, Supervised Learning, Learning Decision Trees. Knowledge in Learning: Logical Formulation of Learning, Knowledge in Learning, Explanation-Based Learning, Learning Using Relevance Information, Inductive Logic Programming. TEXT BOOK: 1. Artificial Intelligence A Modern Approach, Third Edition, Stuart Russell and Peter Norvig, Pearson Education. REFERENCE BOOKS: 1. Artificial Intelligence, 3rd Edn, E. Rich and K.Knight (TMH) 2. Artificial Intelligence, 3rd Edn., Patrick Henny Winston, Pearson Education. 3. Artificial Intelligence, Shivani Goel, Pearson Education. 4. Artificial Intelligence and Expert systems – Patterson, Pearson Education.
  • 4. UNIT I Problem Solving by Search-I: Introduction to AI, Intelligent Agents Problem Solving by Search –II: Problem-Solving Agents, Searching for Solutions, Uninformed Search Strategies: Breadth-first search, Uniform cost search, Depth-first search, Iterative deepening Depth-first search, Bidirectional search, Informed (Heuristic) Search Strategies: Greedy best-first search, A*search, Heuristic Functions, Beyond Classical Search: Hill-climbing search, Simulated annealing search, Local Search in Continuous Spaces, Searching with Non-Deterministic Actions, Searching with Partial Observations, Online Search Agents and Unknown Environment. 1. INTRODUCTION TO AI:  AI is one of the fascinating and universal fields of Computer science which has a great scope in future. AI holds a tendency to cause a machine to work as a human.  Artificial Intelligence is composed of two words Artificial and Intelligence, where Artificial defines "man-made," and intelligence defines "thinking power", hence AI means "a man-made thinking power."  Artificial Intelligence exists when a machine can have human based skills such as learning, reasoning, and solving problems.  With Artificial Intelligence you do not need to preprogram a machine to do some work, despite that you can create a machine with programmed algorithms which can work with own intelligence, and that is the awesomeness of AI.  It is believed that AI is not a new technology, and some people says that as per Greek myth, there were Mechanical men in early days which can work and behave like humans.
  • 5. Turing Test in AI:  In 1950, Alan Turing introduced a test to check whether a machine can think like a human or not, this test is known as the Turing Test.  In this test, Turing proposed that the computer can be said to be an intelligent if it can mimic human response under specific conditions.  Turing Test was introduced by Turing in his 1950 paper, "Computing Machinery and Intelligence," which considered the question, "Can Machine think?".  The Turing test is based on a party game "Imitation game," with some modifications.  This game involves three players in which one player is Computer, another player is human responder, and the third player is a human Interrogator, who is isolated from other two players and his job is to find that which player is machine among two of them.  Consider, Player A is a computer, Player B is human, and Player C is an interrogator. Interrogator is aware that one of them is machine, but he needs to identify this on the basis of questions and their responses.  The conversation between all players is via keyboard and screen so the result would not depend on the machine's ability to convert words as speech.  The test result does not depend on each correct answer, but only how closely its responses like a human answer. The computer is permitted to do everything possible to force a wrong identification by the interrogator.  The questions and answers can be like: o Interrogator: Are you a computer? o Player A (Computer): No o Interrogator: Multiply two large numbers such as (256896489*456725896) o Player A: Long pause and give the wrong answer.  In this game, if an interrogator would not be able to identify which is a machine and which is human, then the computer passes the test successfully, and the machine is said to be intelligent and can think like a human.  "In 1991, the New York businessman Hugh Loebner announces the prize competition, offering a $100,000 prize for the first computer to pass the Turing test. However, no AI program to till date, come close to passing an undiluted Turing test". Goals of Artificial Intelligence: Following are the main goals of Artificial Intelligence:
  • 6. 1. Replicate human intelligence 2. Solve Knowledge-intensive tasks 3. An intelligent connection of perception and action 4. Building a machine which can perform tasks that requires human intelligence such as: o Proving a theorem o Playing chess o Plan some surgical operation o Driving a car in traffic 5. Creating some system which can exhibit intelligent behavior, learn new things by itself, demonstrate, explain, and can advise to its user. Application of AI:  Artificial Intelligence has various applications in today's society. It is becoming essential for today's time because it can solve complex problems with an efficient way in multiple industries, such as Healthcare, entertainment, finance, education, etc. AI is making our daily life more comfortable and fast.  Following are some sectors which have the application of Artificial Intelligence: 1. AI in Astronomy  Artificial Intelligence can be very useful to solve complex universe problems. AI technology can be helpful for understanding the universe such as how it works, origin, etc. 2. AI in Healthcare  In the last, five to ten years, AI becoming more advantageous for the healthcare industry and going to have a significant impact on this industry.  Healthcare Industries are applying AI to make a better and faster diagnosis than humans.
  • 7.  AI can help doctors with diagnoses and can inform when patients are worsening so that medical help can reach to the patient before hospitalization. 3. AI in Gaming  AI can be used for gaming purpose. The AI machines can play strategic games like chess, where the machine needs to think of a large number of possible places. 4. AI in Finance  AI and finance industries are the best matches for each other.  The finance industry is implementing automation, Chabot, adaptive intelligence, algorithm trading, and machine learning into financial processes. 5. AI in Data Security  The security of data is crucial for every company and cyber-attacks are growing very rapidly in the digital world. AI can be used to make your data more safe and secure.  Some examples such as AEG bot, AI2 Platform, are used to determine software bug and cyber- attacks in a better way. 6. AI in Social Media  Social Media sites such as Face book, Twitter, and Snap chat contain billions of user profiles, which need to be stored and managed in a very efficient way.  AI can organize and manage massive amounts of data.  AI can analyze lots of data to identify the latest trends, hash tag, and requirement of different users. 7. AI in Travel & Transport  AI is becoming highly demanding for travel industries.  AI is capable of doing various travel related works such as from making travel arrangement to suggesting the hotels, flights, and best routes to the customers.  Travel industries are using AI-powered chat bots which can make human-like interaction with customers for better and fast response. 8. AI in Automotive Industry  Some Automotive industries are using AI to provide virtual assistant to their user for better performance. Such as Tesla has introduced TeslaBot, an intelligent virtual assistant.  Various Industries are currently working for developing self-driven cars which can make your journey more safe and secure. 9. AI in Robotics:  Artificial Intelligence has a remarkable role in Robotics.  Usually, general robots are programmed such that they can perform some repetitive task, but with the help of AI, we can create intelligent robots which can perform tasks with their own experiences without pre-programmed.  Humanoid Robots are best examples for AI in robotics, recently the intelligent Humanoid robot named as Erica and Sophia has been developed which can talk and behave like humans. 10. AI in Entertainment  We are currently using some AI based applications in our daily life with some entertainment services such as Netflix or Amazon.
  • 8.  With the help of ML/AI algorithms, these services show the recommendations for programs or shows. 11. AI in Agriculture  Agriculture is an area which requires various resources, labor, money, and time for best result. Now a day's agriculture is becoming digital, and AI is emerging in this field. Agriculture is applying AI as agriculture robotics, solid and crop monitoring, predictive analysis. AI in agriculture can be very helpful for farmers. 12. AI in E-commerce  AI is providing a competitive edge to the e-commerce industry, and it is becoming more demanding in the e-commerce business.  AI is helping shoppers to discover associated products with recommended size, color, or even brand. 13. AI in education:  AI can automate grading so that the tutor can have more time to teach.  AI Chabot can communicate with students as a teaching assistant.  AI in the future can be work as a personal virtual tutor for students, which will be accessible easily at any time and any place. 2. INTELLIGENT AGENTS: Types of AI Agents:  Agents can be grouped into five classes based on their degree of perceived intelligence and capability. All these agents can improve their performance and generate better action over the time. These are given below: o Simple Reflex Agent o Model-based reflex agent o Goal-based agents o Utility-based agent o Learning agent i. Simple Reflex agent:  The Simple reflex agents are the simplest agents. These agents take decisions on the basis of the current percepts and ignore the rest of the percept history.  These agents only succeed in the fully observable environment.  The Simple reflex agent does not consider any part of percepts history during their decision and action process.  The Simple reflex agent works on Condition-action rule, which means it maps the current state to action. Such as a Room Cleaner agent, it works only if there is dirt in the room.  Problems for the simple reflex agent design approach: o They have very limited intelligence o They do not have knowledge of non-perceptual parts of the current state o Mostly too big to generate and to store. o Not adaptive to changes in the environment.
  • 9. ii. Model-based reflex agent:  The Model-based agent can work in a partially observable environment, and track the situation.  A model-based agent has two important factors: o Model: It is knowledge about "how things happen in the world," so it is called a Model-based agent. o Internal State: It is a representation of the current state based on percept history.  These agents have the model, "which is knowledge of the world" and based on the model they perform actions.  Updating the agent state requires information about: o How the world evolves o How the agent's action affects the world. iii. Goal-based agents  The knowledge of the current state environment is not always sufficient to decide for an agent to what to do.  The agent needs to know its goal which describes desirable situations.  Goal-based agents expand the capabilities of the model-based agent by having the "goal" information.  They choose an action, so that they can achieve the goal.  These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved or not.  Such considerations of different scenario are called searching and planning, which makes an agent proactive.
  • 10. iv. Utility-based agents  These agents are similar to the goal-based agent but provide an extra component of utility measurement which makes them different by providing a measure of success at a given state.  Utility-based agent act based not only goals but also the best way to achieve the goal.  The Utility-based agent is useful when there are multiple possible alternatives, and an agent has to choose in order to perform the best action.  The utility function maps each state to a real number to check how efficiently each action achieves the goals. v. Learning Agents  A learning agent in AI is the type of agent which can learn from its past experiences, or it has learning capabilities.  It starts to act with basic knowledge and then able to act and adapt automatically through learning.  A learning agent has mainly four conceptual components, which are: 1. Learning element: It is responsible for making improvements by learning from environment 2. Critic: Learning element takes feedback from critic which describes that how well the agent is doing with respect to a fixed performance standard.
  • 11. 3. Performance element: It is responsible for selecting external action 4. Problem generator: This component is responsible for suggesting actions that will lead to new and informative experiences.  Hence, learning agents are able to learn, analyze performance, and look for new ways to improve the performance. AGENTS:  An AI system can be defined as the study of the rational agent and its environment. The agents sense the environment through sensors and act on their environment through actuators. An AI agent can have mental properties such as knowledge, belief, intention, etc. What is an Agent? An agent can be anything that perceive its environment through sensors and act upon that environment through actuators. An Agent runs in the cycle of perceiving, thinking, and acting. An agent can be: o Human-Agent: A human agent has eyes, ears, and other organs which work for sensors and hand, legs, vocal tract work for actuators. o Robotic Agent: A robotic agent can have cameras, infrared range finder, NLP for sensors and various motors for actuators. o Software Agent: Software agent can have keystrokes, file contents as sensory input and act on those inputs and display output on the screen. Hence the world around us is full of agents such as thermostat, cell phone, camera, and even we are also agents. Before moving forward, we should first know about sensors, effectors, and actuators.  Sensor: Sensor is a device which detects the change in the environment and sends the information to other electronic devices. An agent observes its environment through sensors.  Actuators: Actuators are the component of machines that converts energy into motion. The actuators are only responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.  Effectors: Effectors are the devices which affect the environment. Effectors can be legs, wheels, arms, fingers, wings, fins, and display screen.
  • 12. What is Intelligent Agents?  An intelligent agent is an autonomous entity which acts upon an environment using sensors and actuators for achieving goals. An intelligent agent may learn from the environment to achieve their goals. A thermostat is an example of an intelligent agent. Following are the main four rules for an AI agent: o Rule 1: An AI agent must have the ability to perceive the environment. o Rule 2: The observation must be used to make decisions. o Rule 3: Decision should result in an action. o Rule 4: The action taken by an AI agent must be a rational action. What is Rational Agent?  A rational agent is an agent which has clear preference, models uncertainty, and acts in a way to maximize its performance measure with all possible actions.  A rational agent is said to perform the right things. AI is about creating rational agents to use for game theory and decision theory for various real-world scenarios.  For an AI agent, the rational action is most important because in AI reinforcement learning algorithm, for each best possible action, agent gets the positive reward and for each wrong action, an agent gets a negative reward. Define Rationality.  The rationality of an agent is measured by its performance measure. Rationality can be judged on the basis of following points: o Performance measure which defines the success criterion. o Agent prior knowledge of its environment. o Best possible actions that an agent can perform. o The sequence of percepts. Structure of an AI Agent  The task of AI is to design an agent program which implements the agent function. The structure of an intelligent agent is a combination of architecture and agent program. It can be viewed as:
  • 13. Agent = Architecture + Agent program Following are the main three terms involved in the structure of an AI agent:  Architecture: Architecture is machinery that an AI agent executes on.  Agent Function: Agent function is used to map a percept to an action. F: P* → A  Agent program: Agent program is an implementation of agent function. An agent program executes on the physical architecture to produce function f. Define PEAS Representation.  PEAS is a type of model on which an AI agent works upon. When we define an AI agent or rational agent, then we can group its properties under PEAS representation model. It is made up of four words: o P: Performance measure o E: Environment o A: Actuators o S: Sensors Here performance measure is the objective for the success of an agent's behavior. Example: PEAS for self-driving cars: Let's suppose a self-driving car then PEAS representation will be:  Performance: Safety, time, legal drive, comfort  Environment: Roads, other vehicles, road signs, pedestrian  Actuators: Steering, accelerator, brake, signal, horn  Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar. Properties of Task Environment:  An environment is everything in the world which surrounds the agent, but it is not a part of an agent itself. An environment can be described as a situation in which an agent is present.  The environment is where agent lives, operate and provide the agent with something to sense and act upon it. An environment is mostly said to be non-feministic. Features of Environment  As per Russell and Norvig, an environment can have various features from the point of view of an agent: 1. Fully observable vs Partially Observable 2. Static vs Dynamic 3. Discrete vs Continuous 4. Deterministic vs Stochastic 5. Single-agent vs Multi-agent 6. Episodic vs sequential 7. Known vs Unknown 8. Accessible vs Inaccessible
  • 14. 3. PROBLEM-SOLVING AGENTS:  In Artificial Intelligence, Search techniques are universal problem-solving methods. Rational agents or Problem-solving agents in AI mostly used these search strategies or algorithms to solve a specific problem and provide the best result. Problem-solving agents are the goal-based agents and use atomic representation. In this topic, we will learn various problem-solving search algorithms. Well define problem and Solution: A problem can be defined formally by five components: • The initial state that the agent starts in. • A description of the possible actions available to the agent. • A description of what each action does; the formal name for this is the transition model. • The goal test, which determines whether a given state is a goal state. Sometimes there is an explicit set of possible goal states, and the test simply checks whether the given state is one of them. • A path cost function that assigns a numeric cost to each path. The problem-solving agent chooses a cost function that reflects its own performance measure. Example: 1 Romania • On holiday in Romania; currently in Arad. • Flight leaves tomorrow from Bucharest • Formulate goal: • be in Bucharest • Formulate problem: • states: various cities • actions: drive between cities • Find solution: • sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest Example: 2 Toy problems • Those intended to illustrate or exercise various problem-solving methods • E.g., puzzle chess, etc.
  • 15. Example: 3 Real-world problems • Tend to be more difficult and whose solutions people actually care about • E.g., Design, planning, etc. Example: 4 Toy Problem Possible states of Vacuum Cleaner (Toy Problem):
  • 16. Example: 5 8* Puzzle Problem Possible Moves for 8 Queen Problems:
  • 17. 4. SEARCHING FOR SOLUTIONS: • Finding out a solution is done by • searching through the state space • All problems are transformed • as a search tree • generated by the initial state and successor function Search Tree: • Initial state • The root of the search tree is a search node • Expanding • applying successor function to the current state • thereby generating a new set of states • leaf nodes • the states having no successors Fringe: Set of search nodes that have not been expanded yet. Search tree Components: • A node is having five components: • STATE: which state it is in the state space • PARENT-NODE: from which node it is generated • ACTION: which action applied to its parent-node to generate it • PATH-COST: the cost, g(n), from initial state to the node n itself • DEPTH: number of steps along the path from the initial state Search Algorithm Terminologies:  Search: Searching is a step by step procedure to solve a search-problem in a given search space. A search problem can have three main factors: a. Search Space: Search space represents a set of possible solutions, which a system may have. b. Start State: It is a state from where agent begins the search. c. Goal test: It is a function which observe the current state and returns whether the goal state is achieved or not.  Search tree: A tree representation of search problem is called Search tree. The root of the search tree is the root node which is corresponding to the initial state.  Actions: It gives the description of all the available actions to the agent.  Transition model: A description of what each action do, can be represented as a transition model.  Path Cost: It is a function which assigns a numeric cost to each path.  Solution: It is an action sequence which leads from the start node to the goal node.  Optimal Solution: If a solution has the lowest cost among all solutions. i. Properties of Search Algorithms (or) measuring problem Solving performance: Following are the four essential properties of search algorithms to compare the efficiency of these algorithms:
  • 18.  Completeness: A search algorithm is said to be complete if it guarantees to return a solution if at least any solution exists for any random input.  Optimality: If a solution found for an algorithm is guaranteed to be the best solution (lowest path cost) among all other solutions, then such a solution for is said to be an optimal solution.  Time Complexity: Time complexity is a measure of time for an algorithm to complete its task.  Space Complexity: It is the maximum storage space required at any point during the search, as the complexity of the problem. 5. TYPES OF SEARCH ALGORITHMS: Based on the search problems we can classify the search algorithms into uninformed (Blind search) search and informed search (Heuristic search) algorithms. i. Uninformed/Blind Search:  The uninformed search does not contain any domain knowledge such as closeness, the location of the goal.  It operates in a brute-force way as it only includes information about how to traverse the tree and how to identify leaf and goal nodes.  Uninformed search applies a way in which search tree is searched without any information about the search space like initial state operators and test for the goal, so it is also called blind search.  It examines each node of the tree until it achieves the goal node. It can be divided into five main types: o Breadth-first search o Uniform cost search o Depth-first search o Iterative deepening depth-first search o Bidirectional Search ii. Informed Search  Informed search algorithms use domain knowledge.
• In an informed search, problem information is available which can guide the search.
• Informed search strategies can find a solution more efficiently than uninformed strategies. Informed search is also called heuristic search.
• A heuristic is a technique which is not guaranteed to find the best solution, but is designed to find a good solution in reasonable time.
• Informed search can solve complex problems that could not be solved efficiently in any other way; the travelling salesman problem is a classic example.
Two informed search algorithms are discussed later:
1. Greedy search
2. A* search
6. BREADTH-FIRST SEARCH (BFS):
• Breadth-first search is the most common search strategy for traversing a tree or graph. The algorithm searches breadthwise in a tree or graph, hence the name.
• BFS starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.
• The breadth-first search algorithm is an example of a general graph-search algorithm.
• Breadth-first search is implemented using a FIFO queue data structure.
Algorithm:
• Step 1: SET STATUS = 1 (ready state) for each node in G.
• Step 2: Enqueue the starting node A and set its STATUS = 2 (waiting state).
• Step 3: Repeat Steps 4 and 5 until QUEUE is empty.
• Step 4: Dequeue a node N. Process it and set its STATUS = 3 (processed state).
• Step 5: Enqueue all the neighbours of N that are in the ready state (STATUS = 1) and set their STATUS = 2 (waiting state). [END OF LOOP]
• Step 6: EXIT.
Example 1: In the tree below, we show the traversal of the tree using the BFS algorithm from the root node S to the goal node K. BFS traverses in layers, so it follows the path shown by the dotted arrow, and the traversed path will be:
Solution: S ---> A ---> B ---> C ---> D ---> G ---> H ---> E ---> F ---> I ---> K
Example 2: (queue structure)
Solution: 40, 10, 20, 30, 60, 50, 70
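The algorithm above translates directly into code. Below is a minimal Python sketch of BFS with a FIFO queue; the adjacency-dict interface for `graph` is an illustrative assumption:

    from collections import deque

    def breadth_first_search(graph, start, goal):
        queue = deque([start])          # FIFO queue: enqueue the start node
        visited = {start}               # plays the role of the STATUS flags
        parent = {start: None}
        while queue:                    # repeat until the queue is empty
            n = queue.popleft()         # dequeue a node N and process it
            if n == goal:               # goal found: rebuild the path
                path = []
                while n is not None:
                    path.append(n)
                    n = parent[n]
                return list(reversed(path))
            for nbr in graph.get(n, []):    # enqueue all "ready" neighbours
                if nbr not in visited:
                    visited.add(nbr)
                    parent[nbr] = n
                    queue.append(nbr)
        return None                     # no solution exists

    # Illustrative call; the graph layout is hypothetical:
    # breadth_first_search({'S': ['A', 'B'], 'A': ['C'], 'B': ['D']}, 'S', 'D')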
Example 3: (practice problem)
Advantages:
• BFS will provide a solution if any solution exists.
• If there is more than one solution for a given problem, BFS will provide the minimal solution, i.e., the one requiring the fewest steps.
Disadvantages:
• It requires a lot of memory, since each level of the tree must be saved in memory in order to expand the next level.
• BFS needs a lot of time if the solution is far away from the root node.
7. DEPTH-FIRST SEARCH (DFS):
• Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
• It is called depth-first search because it starts from the root node and follows each path to its greatest depth before moving to the next path.
• DFS uses a stack data structure for its implementation.
• The process of the DFS algorithm is similar to the BFS algorithm.
Implementation steps for DFS:
• First, create a stack with the total number of vertices in the graph.
• Now, choose any vertex as the starting point of the traversal, and push that vertex onto the stack.
• After that, push a non-visited vertex (adjacent to the vertex on the top of the stack) onto the top of the stack.
• Now, repeat steps 3 and 4 until no vertices are left to visit from the vertex on the stack's top.
• If no vertex is left, go back and pop a vertex from the stack.
• Repeat steps 2, 3, and 4 until the stack is empty.
Algorithm:
• Step 1: SET STATUS = 1 (ready state) for each node in G.
• Step 2: Push the starting node A onto the stack and set its STATUS = 2 (waiting state).
• Step 3: Repeat Steps 4 and 5 until STACK is empty.
• Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state).
• Step 5: Push onto the stack all the neighbours of N that are in the ready state (STATUS = 1) and set their STATUS = 2 (waiting state). [END OF LOOP]
• Step 6: EXIT.
Example 1: In the search tree below, we show the flow of depth-first search; it follows the order root node ---> left node ---> right node. It starts searching from root node S and traverses A, then B, then D and E. After traversing E it backtracks, since E has no other successor and the goal node has not yet been found. After backtracking, it traverses node C and then G, where it terminates, having found the goal node.
Solution: S, A, B, D, E, C, G, H, I, K, J.
Example 2:
Example 3:
Advantages:
• DFS requires very little memory, as it only needs to store the stack of nodes on the path from the root node to the current node.
• It takes less time to reach the goal node than the BFS algorithm (if it traverses the right path).
Disadvantages:
• There is a possibility that many states keep re-occurring, and there is no guarantee of finding a solution.
• The DFS algorithm goes deep down the search tree and may sometimes go into an infinite loop. A code sketch is given below.
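For comparison with BFS, here is a minimal iterative Python sketch of DFS with an explicit stack; the adjacency-dict interface is again an illustrative assumption:

    def depth_first_search(graph, start, goal):
        stack = [(start, [start])]      # each entry: (node, path from the start)
        visited = set()
        while stack:                    # repeat until the stack is empty
            n, path = stack.pop()       # pop the top node N and process it
            if n == goal:
                return path
            if n in visited:
                continue
            visited.add(n)
            # Push neighbours in reverse so the leftmost child is expanded first,
            # matching the root ---> left ---> right order described above.
            for nbr in reversed(graph.get(n, [])):
                if nbr not in visited:
                    stack.append((nbr, path + [nbr]))
        return None                     # no solution found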
Difference between BFS and DFS:
8. DEPTH-LIMITED SEARCH ALGORITHM:
• A depth-limited search algorithm is similar to depth-first search with a predetermined limit.
• Depth-limited search overcomes the drawback of the infinite path in depth-first search.
• In this algorithm, a node at the depth limit is treated as if it has no successor nodes.
Depth-limited search can terminate with two conditions of failure (a code sketch follows this list):
o Standard failure value: indicates that the problem does not have any solution.
o Cutoff failure value: indicates that there is no solution for the problem within the given depth limit.
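A minimal recursive Python sketch of depth-limited search that distinguishes the two failure values; the adjacency-dict interface is an assumption:

    CUTOFF, FAILURE = "cutoff", "failure"   # the two failure values above

    def depth_limited_search(graph, node, goal, limit):
        # Returns a path, FAILURE (no solution at all), or CUTOFF
        # (no solution within the given depth limit).
        if node == goal:
            return [node]
        if limit == 0:
            return CUTOFF               # node at the limit: treated as a leaf
        cutoff_occurred = False
        for child in graph.get(node, []):
            result = depth_limited_search(graph, child, goal, limit - 1)
            if result == CUTOFF:
                cutoff_occurred = True
            elif result != FAILURE:
                return [node] + result
        return CUTOFF if cutoff_occurred else FAILURE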
Advantages: depth-limited search is memory efficient.
Disadvantages:
o Depth-limited search also has the disadvantage of incompleteness.
o It may not be optimal if the problem has more than one solution.
9. UNIFORM-COST SEARCH ALGORITHM:
• Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph. This algorithm comes into play when a different cost is available for each edge.
• The primary goal of uniform-cost search is to find a path to the goal node which has the lowest cumulative cost.
• Uniform-cost search expands nodes according to their path cost from the root node. It can be used to solve any graph/tree where the optimal cost is in demand.
• The uniform-cost search algorithm is implemented using a priority queue, which gives maximum priority to the lowest cumulative cost (see the sketch below). Uniform-cost search is equivalent to the BFS algorithm when the path cost of all edges is the same.
Advantages: uniform-cost search is optimal, because at every state the path with the least cost is chosen.
Disadvantages: it does not care about the number of steps involved in the search and is only concerned with path cost, so the algorithm may get stuck in an infinite loop.
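A minimal Python sketch of uniform-cost search with a priority queue ordered by cumulative path cost g(n); the `(neighbour, edge_cost)` adjacency interface is an illustrative assumption:

    import heapq

    def uniform_cost_search(graph, start, goal):
        frontier = [(0, start, [start])]        # (g(n), node, path)
        best_cost = {start: 0}
        while frontier:
            g, n, path = heapq.heappop(frontier)    # lowest cumulative cost first
            if n == goal:
                return g, path
            for nbr, step in graph.get(n, []):
                new_g = g + step
                if new_g < best_cost.get(nbr, float("inf")):
                    best_cost[nbr] = new_g
                    heapq.heappush(frontier, (new_g, nbr, path + [nbr]))
        return None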
10. ITERATIVE DEEPENING DEPTH-FIRST SEARCH:
• The iterative deepening algorithm is a combination of the DFS and BFS algorithms. This search algorithm finds the best depth limit by gradually increasing the limit until a goal is found.
• The algorithm performs depth-first search up to a certain "depth limit" and keeps increasing the depth limit after each iteration until the goal node is found.
• This search algorithm combines the benefits of breadth-first search's fast search and depth-first search's memory efficiency.
• The iterative deepening search algorithm is a useful uninformed search when the search space is large and the depth of the goal node is unknown.
Example: the following tree structure shows iterative deepening depth-first search. The IDDFS algorithm performs iterations until it finds the goal node. The iterations performed by the algorithm are:
Solution:
1st iteration -----> A
2nd iteration ----> A, B, C
3rd iteration ------> A, B, D, E, C, F, G
4th iteration ------> A, B, D, H, I, E, C, F, K, G
In the fourth iteration, the algorithm finds the goal node.
Advantages:
o It combines the benefits of the BFS and DFS algorithms in terms of fast search and memory efficiency (see the sketch below).
Disadvantages:
o The main drawback of IDDFS is that it repeats all the work of the previous phase.
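IDDFS is a thin loop around depth-limited search. The sketch below reuses `depth_limited_search`, `CUTOFF`, and `FAILURE` from the depth-limited sketch earlier; `max_depth` is an assumed safety bound:

    def iterative_deepening_search(graph, start, goal, max_depth=50):
        for limit in range(max_depth + 1):      # gradually increase the limit
            result = depth_limited_search(graph, start, goal, limit)
            if result == FAILURE:
                return None                     # the whole tree was searched
            if result != CUTOFF:
                return result                   # a path to the goal was found
        return None                             # gave up at the safety bound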
11. BIDIRECTIONAL SEARCH ALGORITHM:
• Before moving to bidirectional search, let's first understand a few terms.
• Forward search: searching from the start towards the goal.
• Backward search: searching backwards from the goal towards the start.
• Bidirectional search replaces one single search graph with two smaller sub-graphs: one starts the search from the initial vertex and the other starts from the goal vertex.
• The search stops when the two graphs intersect each other.
• Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
Example: in the search tree below, the bidirectional search algorithm is applied. The algorithm divides one graph/tree into two sub-graphs. It starts traversing from node 1 in the forward direction and from goal node 16 in the backward direction. The algorithm terminates at node 9, where the two searches meet.
Advantages:
o Bidirectional search is fast.
o Bidirectional search requires less memory.
Disadvantages:
o Implementing the bidirectional search tree is difficult.
o In bidirectional search, one should know the goal state in advance.
12. INFORMED SEARCH ALGORITHMS:
• An informed search algorithm uses knowledge such as how far we are from the goal, the path cost, and how to reach the goal node. This knowledge helps the agent explore less of the search space and find the goal node more efficiently.
Example tree: nodes annotated with information (weights).
Heuristic function:
• Informed search algorithms are most useful for large search spaces. Because they use the idea of a heuristic, they are also called heuristic search.
• Heuristic function: a heuristic is a function used in informed search to find the most promising path.
• It takes the current state of the agent as input and produces an estimate of how close the agent is to the goal.
• The heuristic method does not always give the best solution, but it is designed to find a good solution in reasonable time. The heuristic function estimates how close a state is to the goal.
• It is represented by h(n), and it estimates the cost of an optimal path between a pair of states. The value of the heuristic function is always positive.
Admissibility condition: h(n) <= h*(n)
Here, h(n) is the heuristic (estimated) cost and h*(n) is the actual optimal cost. Hence the heuristic cost should be less than or equal to the actual cost.
Pure heuristic search:
• Pure heuristic search is the simplest form of heuristic search algorithm. It expands nodes based on their heuristic value h(n).
• It maintains two lists, an OPEN list and a CLOSED list.
• The CLOSED list holds the nodes that have already been expanded, and the OPEN list holds the nodes that have not yet been expanded.
• On each iteration, the node n with the lowest heuristic value is expanded; all its successors are generated and n is placed on the CLOSED list. The algorithm continues until a goal state is found.
In informed search we will discuss two main algorithms, given below:
o Best-first search algorithm (greedy search)
o A* search algorithm
13. BEST-FIRST SEARCH ALGORITHM (GREEDY SEARCH):
• Greedy best-first search uses the concept of a priority queue and heuristic search. To search the graph space, the method uses two lists for tracking the traversal:
• an OPEN list that keeps track of the current 'immediate' nodes available for traversal, and a CLOSED list that keeps track of the nodes already traversed.
• In the best-first search algorithm, we expand the node which is closest to the goal node, where closeness is estimated by the heuristic function:
  f(n) = h(n), where h(n) is the estimated distance from node n to the goal.
Algorithm:
1. Create two empty lists: OPEN and CLOSED.
2. Start from the initial node (say N) and put it in the 'ordered' OPEN list.
3. Repeat the next steps until the GOAL node is reached:
• If the OPEN list is empty, then EXIT the loop returning 'False'.
• Select the first/top node (say N) in the OPEN list and move it to the CLOSED list. Also capture the information of the parent node.
• If N is a GOAL node, then move the node to the CLOSED list and exit the loop returning 'True'. The solution can be found by backtracking the path.
• If N is not the GOAL node, expand node N to generate the 'immediate' next nodes linked to node N and add all of them to the OPEN list.
• Reorder the nodes in the OPEN list in ascending order according to the evaluation function f(n).
Example 1:
Solution: expand the nodes of S and put them in the CLOSED list.
• Initialization: Open [A, B], Closed [S]
• Iteration 1: Open [A], Closed [S, B]
• Iteration 2: Open [E, F, A], Closed [S, B]; then Open [E, A], Closed [S, B, F]
• Iteration 3: Open [I, G, E, A], Closed [S, B, F]; then Open [I, E, A], Closed [S, B, F, G]
Hence the final solution path will be: S ----> B -----> F ----> G
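A minimal Python sketch of greedy best-first search, where the OPEN list is a priority queue ordered by h(n) alone; the `graph` and `h` dictionary interfaces are illustrative assumptions:

    import heapq

    def greedy_best_first_search(graph, h, start, goal):
        open_list = [(h[start], start, [start])]    # ordered OPEN list
        closed = set()                              # CLOSED list
        while open_list:
            _, n, path = heapq.heappop(open_list)   # node with smallest h(n)
            if n == goal:
                return path
            if n in closed:
                continue
            closed.add(n)
            for nbr in graph.get(n, []):            # expand N's immediate nodes
                if nbr not in closed:
                    heapq.heappush(open_list, (h[nbr], nbr, path + [nbr]))
        return None                                 # OPEN list emptied: failure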
Advantages:
– Best-first search can switch between BFS and DFS, gaining the advantages of both algorithms.
– This algorithm is more efficient than the BFS and DFS algorithms.
Disadvantages:
– It can behave as an unguided depth-first search in the worst-case scenario.
– It can get stuck in a loop, like DFS.
– This algorithm is not optimal.
14. A* SEARCH ALGORITHM:
• The A* algorithm is one of the best and most popular techniques used for path finding and graph traversal.
• A lot of games and web-based maps use this algorithm for finding the shortest path efficiently.
• It is essentially a best-first search algorithm.
• It is an informed search technique, also called heuristic search, and it works using heuristic values.
Working of the A* search algorithm:
• It maintains a tree of paths originating at the start node.
• It extends those paths one edge at a time.
• It continues until its termination criterion is satisfied.
• A* extends the path that minimizes the evaluation function f(n) = g(n) + h(n), where:
  o n is the last node on the path,
  o g(n) is the cost of the path from the start node to node n, and
  o h(n) is a heuristic function that estimates the cost of the cheapest path from node n to the goal node.
Algorithm:
• The implementation of the A* algorithm involves maintaining two lists: OPEN and CLOSED.
• OPEN contains the nodes that have been evaluated by the heuristic function but have not yet been expanded into successors.
• CLOSED contains the nodes that have already been visited. The algorithm is as follows:
Step 01: Define a list OPEN. Initially, OPEN consists solely of a single node, the start node S.
Step 02: If the list is empty, return failure and exit.
Step 03: Remove the node n with the smallest value of f(n) from OPEN and move it to the list CLOSED. If node n is a goal state, return success and exit.
Step 04: Expand node n.
Step 05: If any successor of n is the goal node, return success and report the solution by tracing the path from the goal node back to S. Otherwise, go to Step 06.
Step 06: For each successor node, apply the evaluation function f to the node. If the node is not in either list, add it to OPEN.
Step 07: Go back to Step 02.
Example 1: Consider the following graph.
• The numbers written on edges represent the distances between the nodes.
• The numbers written on nodes represent the heuristic values.
• Find the most cost-effective path from start state A to final state J using the A* algorithm.
Step 01:
• We start with node A. Node B and node F can be reached from node A. A* calculates f(B) and f(F):
• f(B) = 6 + 8 = 14
• f(F) = 3 + 6 = 9
Since f(F) < f(B), the algorithm decides to go to node F. Path: A → F
Step 02:
• Node G and node H can be reached from node F. A* calculates f(G) and f(H):
• f(G) = (3 + 1) + 5 = 9
• f(H) = (3 + 7) + 3 = 13
Since f(G) < f(H), the algorithm decides to go to node G. Path: A → F → G
Step 03:
• Node I can be reached from node G. A* calculates f(I):
• f(I) = (3 + 1 + 3) + 1 = 8
• It decides to go to node I.
Path: A → F → G → I
Step 04:
• Node E, node H and node J can be reached from node I. A* calculates f(E), f(H) and f(J):
• f(E) = (3 + 1 + 3 + 5) + 3 = 15
• f(H) = (3 + 1 + 3 + 2) + 3 = 12
• f(J) = (3 + 1 + 3 + 3) + 0 = 10
• Since f(J) is the least, the algorithm decides to go to node J. Path: A → F → G → I → J
• This is the required shortest path from node A to node J.
Solution:
Example 2:
Example 3:
Example 4: the 8-puzzle problem using the A* search algorithm.
Given an initial state of an 8-puzzle problem and the final state to be reached, find the most cost-effective path to reach the final state from the initial state using the A* algorithm.
Consider g(n) = depth of the node and h(n) = number of misplaced tiles.
Solution:
• A* maintains a tree of paths originating at the initial state.
• It extends those paths one edge at a time.
• It continues until the final state is reached.
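Returning to Example 1 above, here is a minimal Python sketch of A* using a priority queue ordered by f(n) = g(n) + h(n). The edge costs and heuristic values are transcribed from that example; h(A) is not given on the slide and is an assumed value (it is never compared against anything, so any value gives the same result):

    import heapq

    def a_star_search(graph, h, start, goal):
        # OPEN is a priority queue ordered by f(n) = g(n) + h(n).
        open_list = [(h[start], 0, start, [start])]     # (f, g, node, path)
        best_g = {start: 0}                             # cheapest g(n) found so far
        while open_list:
            f, g, n, path = heapq.heappop(open_list)    # smallest f(n) first
            if n == goal:
                return g, path
            for nbr, step in graph.get(n, []):
                new_g = g + step
                if new_g < best_g.get(nbr, float("inf")):
                    best_g[nbr] = new_g
                    heapq.heappush(open_list,
                                   (new_g + h[nbr], new_g, nbr, path + [nbr]))
        return None

    graph = {'A': [('B', 6), ('F', 3)], 'F': [('G', 1), ('H', 7)],
             'G': [('I', 3)], 'I': [('E', 5), ('H', 2), ('J', 3)]}
    h = {'A': 10, 'B': 8, 'F': 6, 'G': 5, 'H': 3, 'I': 1, 'E': 3, 'J': 0}
    print(a_star_search(graph, h, 'A', 'J'))   # -> (10, ['A', 'F', 'G', 'I', 'J'])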
Advantages of A* search:
• The A* algorithm is one of the best path-finding algorithms.
• It is complete and optimal (given an admissible heuristic).
• It can be used to solve complex problems.
Disadvantages of A* search:
• It requires more memory.
15. BEYOND CLASSICAL SEARCH:
• We have seen methods that systematically explore the search space, possibly using principled pruning (e.g., A*). What if we have much larger search spaces?
• Search spaces for some real-world problems may be much larger, e.g., 10^30,000 states, as in certain reasoning and planning tasks.
• Some of these problems can be solved by iterative improvement methods.
Local search algorithms and optimization problems:
• In many optimization problems the goal state itself is the solution.
• The state space is a set of complete configurations.
• Search is about finding the optimal configuration (as in TSP) or just a feasible configuration (as in scheduling problems).
• In such cases, one can use iterative improvement, or local search, methods.
• An evaluation, or objective, function h must be available that measures the quality of each state.
• Main idea: start with a random initial configuration and make small, local changes to it that improve its quality.
Hill climbing algorithm:
• In the hill-climbing technique, starting at the base of a hill, we walk upwards until we reach the top of the hill.
• In other words, we start with an initial state and keep improving the solution until it is optimal.
• It is a variant of the generate-and-test algorithm which discards all states that do not look promising or seem unlikely to lead us to the goal state.
• To take such decisions, it uses heuristics (an evaluation function) which indicate how close the current state is to the goal state.
Hill climbing = generate-and-test + heuristics
Features of the hill climbing algorithm:
• Generate-and-test variant: hill climbing is a variant of the generate-and-test method, which produces feedback that helps to decide which direction to move in the search space.
• Greedy approach: the hill-climbing search moves in the direction which optimizes the cost.
• No backtracking: it does not backtrack the search space, as it does not remember previous states.
State-space diagram for hill climbing:
• The state-space landscape is a graphical representation of the hill-climbing algorithm, showing a graph between the various states of the algorithm and the objective function/cost.
• On the y-axis we take the function, which can be an objective function or a cost function, and on the x-axis the state space.
• If the function on the y-axis is cost, then the goal of the search is to find the global minimum (or a local minimum).
• If the function on the y-axis is an objective function, then the goal of the search is to find the global maximum (or a local maximum).
• Local maximum: a state which is better than its neighbouring states, but for which there is also another state higher than it.
• Global maximum: the best possible state in the state-space landscape. It has the highest value of the objective function.
• Current state: the state in the landscape diagram where the agent is currently present.
• Flat local maximum: a flat region of the landscape where all the neighbouring states of the current state have the same value.
• Shoulder: a plateau region which has an uphill edge.
Types of hill climbing algorithms:
o Simple hill climbing
o Steepest-ascent hill climbing
o Stochastic hill climbing
1. Simple hill climbing:
• Simple hill climbing is the simplest way to implement a hill-climbing algorithm.
• It evaluates only one neighbour node state at a time and selects the first one which improves the current cost, setting it as the current state.
• It checks only one successor state, and if that successor is better than the current state, it moves there; otherwise it stays in the same state.
• This algorithm has the following features:
  o Less time consuming
  o Less optimal solutions, and the solution is not guaranteed
Algorithm for simple hill climbing:
o Step 1: Evaluate the initial state. If it is the goal state, return success and stop.
o Step 2: Loop until a solution is found or there are no new operators left to apply.
o Step 3: Select and apply an operator to the current state.
o Step 4: Check the new state:
  a. If it is the goal state, then return success and quit.
  b. Else, if it is better than the current state, then assign the new state as the current state.
  c. Else, if it is not better than the current state, then return to Step 2.
o Step 5: Exit.
Example:
• The key point when solving any hill-climbing problem is to choose an appropriate heuristic function.
• Let's define such a function h:
• h(x) = +1 for each block in the support structure that is correctly positioned, otherwise -1 for each block in the support structure.
Solution:
2. Steepest-ascent hill climbing:
• The steepest-ascent algorithm is a variation of the simple hill-climbing algorithm.
• This algorithm examines all the neighbouring nodes of the current state and selects the neighbour node which is closest to the goal state.
• The algorithm consumes more time, as it searches multiple neighbours.
Algorithm for steepest-ascent hill climbing (a code sketch follows):
o Step 1: Evaluate the initial state. If it is the goal state, return success and stop; otherwise make the initial state the current state.
o Step 2: Loop until a solution is found or the current state does not change.
  a. Let SUCC be a state such that any successor of the current state will be better than it.
  b. For each operator that applies to the current state:
    a. Apply the operator and generate a new state.
    b. Evaluate the new state.
    c. If it is the goal state, then return it and quit; else compare it to SUCC.
    d. If it is better than SUCC, then set the new state as SUCC.
    e. If SUCC is better than the current state, then set the current state to SUCC.
o Step 3: Exit.
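A minimal Python sketch of steepest-ascent hill climbing; the `neighbours` and `value` callables are illustrative assumptions (e.g., states could be block configurations, `neighbours` their one-move variants, and `value` the heuristic h described above):

    def steepest_ascent_hill_climbing(initial, neighbours, value):
        current = initial
        while True:
            # Examine ALL neighbours and pick the best one (SUCC).
            succ = max(neighbours(current), key=value, default=None)
            if succ is None or value(succ) <= value(current):
                return current      # no neighbour improves: local maximum reached
            current = succ          # move to the best neighbour and repeat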
3. Stochastic hill climbing:
• Stochastic hill climbing does not examine all of its neighbours before moving.
• Rather, this search algorithm selects one neighbour node at random and decides whether to move to it or to examine another state.
16. SIMULATED ANNEALING:
• A hill-climbing algorithm which never makes a move towards a lower value is guaranteed to be incomplete, because it can get stuck on a local maximum.
• If the algorithm instead performs a random walk, moving to a random successor, it may be complete but is not efficient.
• Simulated annealing is an algorithm which yields both efficiency and completeness.
• In mechanical terms, annealing is the process of heating a metal or glass to a high temperature and then cooling it gradually, which allows the material to reach a low-energy crystalline state.
• The same idea is used in simulated annealing, in which the algorithm picks a random move instead of the best move.
• If the random move improves the state, it is always accepted. Otherwise, the algorithm accepts the downhill move with a probability of less than 1 (which shrinks as the "temperature" falls) and then tries another path. A code sketch is given below.
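A minimal Python sketch of simulated annealing; the geometric cooling schedule and its parameters are illustrative assumptions, and `neighbours(s)` is assumed to return a non-empty sequence of successor states:

    import math
    import random

    def simulated_annealing(initial, neighbours, value, t0=1.0,
                            cooling=0.995, steps=10000):
        current, t = initial, t0
        for _ in range(steps):
            t *= cooling                        # gradual "cooling"
            if t <= 1e-9:
                break
            nxt = random.choice(neighbours(current))    # pick a RANDOM move
            delta = value(nxt) - value(current)
            # Uphill moves are always accepted; downhill moves are accepted
            # with probability e^(delta/T) < 1, which shrinks as T falls.
            if delta > 0 or random.random() < math.exp(delta / t):
                current = nxt
        return current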
Genetic Algorithm: (covered only by figures in the original slides)
17. LOCAL SEARCH IN CONTINUOUS SPACES:
• Recall the distinction between discrete and continuous environments: most real-world environments are continuous.
• A discrete (categorical) variable is a statistical variable that can assume only a fixed number of distinct values.
• A continuous variable, as the name suggests, is a random variable that can assume all possible values in a continuum.
• Beyond the "classical search algorithms", we have "local search algorithms", where the path cost does not matter and the focus is only on the solution state needed to reach the goal.
  o Example: greedy best-first search.
• A local search algorithm completes its task by working on a single current node, rather than multiple paths, and generally moving only to the neighbours of that node.
  o Example: hill climbing and simulated annealing can handle continuous state and action spaces, even though continuous spaces have infinite branching factors.
Solutions for continuous spaces:
• One way to avoid continuous problems is simply to discretize the neighbourhood of each state.
• Many methods attempt to use the gradient of the landscape to find a maximum. The gradient of the objective function is a vector ∇f that gives the magnitude and direction of the steepest slope (see the sketch below).
Local search in continuous space:
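A minimal Python sketch of gradient ascent, the simplest gradient-based local search in a continuous space; the step size and iteration count are assumed values:

    def gradient_ascent(x, grad_f, alpha=0.01, iterations=1000):
        # Repeatedly move "uphill": x <- x + alpha * grad_f(x).
        for _ in range(iterations):
            g = grad_f(x)
            x = [xi + alpha * gi for xi, gi in zip(x, g)]
        return x

    # Example: maximize f(x, y) = -(x - 1)^2 - (y + 2)^2, whose gradient is
    # (-2(x - 1), -2(y + 2)); the iterates converge towards the maximum (1, -2).
    print(gradient_ascent([0.0, 0.0],
                          lambda p: [-2 * (p[0] - 1), -2 * (p[1] + 2)]))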
Does the local search algorithm work for pure optimization problems?
• Yes, local search algorithms work for pure optimization problems.
• A pure optimization problem is one in which every node can be a solution, but the target is to find the best state of all according to the objective function.
• Unfortunately, local search may fail to find the high-quality solutions needed to reach the goal state from the current state.
• Note: an objective function is a function whose value is either minimized or maximized in the different contexts of optimization problems.
• In the case of search algorithms, an objective function can be the path cost for reaching the goal node, etc.
Working of a local search algorithm:
Problems in the hill climbing algorithm:
1. Local maximum: a local maximum is a peak state in the landscape which is better than each of its neighbouring states, but there is another state in the landscape which is higher still.
Solution: backtracking can be a solution to the local-maximum problem in the state-space landscape. Maintain a list of promising paths so that the algorithm can backtrack through the search space and explore other paths as well.
2. Plateau: a plateau is a flat area of the search space in which all the neighbouring states of the current state have the same value; because of this, the algorithm cannot find the best direction to move. A hill-climbing search might get lost in a plateau area.
Solution: take big steps (or very small steps) while searching. Randomly select a state far away from the current state, so that the algorithm may find a non-plateau region.
3. Ridges: a ridge is a special form of local maximum. It is an area which is higher than its surrounding areas but which itself has a slope, and it cannot be climbed by single moves.
Solution: by using bidirectional search, or by moving in several directions at once, we can mitigate this problem.
Conclusion:
• Local search often works well on large problems:
  – optimality is not guaranteed, but
  – it always has some answer available (the best found so far).
18. SEARCHING WITH NON-DETERMINISTIC ACTIONS:
• In a deterministic, fully observable environment, the agent can calculate exactly which state results from any sequence of actions and always knows which state it is in. We now relax these assumptions and consider:
  o searching with non-deterministic actions, and
  o searching with partial observations.
• When the environment is nondeterministic, percepts tell the agent which of the possible outcomes of its actions has actually occurred.
• In a partially observable environment, every percept helps narrow down the set of possible states the agent might be in, thus making it easier for the agent to achieve its goals.
Example: vacuum world, v2.0
• In the erratic vacuum world, the Suck action works as follows:
  o When applied to a dirty square, the action cleans the square and sometimes cleans up dirt in an adjacent square, too.
  o When applied to a clean square, the action sometimes deposits dirt on the carpet.
• Solutions to nondeterministic problems can contain nested if-then-else statements; this means that they are trees rather than sequences.
• The figure shows the eight possible states of the vacuum world; states 7 and 8 are goal states.
• Suck(p1, dirty) = (p1, clean) and sometimes (p2, clean)
• Suck(p1, clean) = sometimes (p1, dirty)
Solution: a contingency plan with nested if-then-else statements:
• [Suck, if State = 5 then [Right, Suck] else [ ]]
AND–OR search trees:
• Non-deterministic action = there may be several possible outcomes.
• The search space is an AND-OR tree, with alternating OR and AND layers.
• Finding a solution = searching this tree using the same methods as before.
• A solution in a non-deterministic search space is:
  o not a simple action sequence, but
  o a subtree within the search tree with:
    - a goal node at each leaf (the plan covers all contingencies),
    - one action at each OR node, and
    - a branch at each AND node, representing all possible outcomes.
• Executing a solution essentially means following this subtree: perform the action chosen at each OR node, and branch on the outcome that actually occurs at each AND node.
• The figure shows the first two levels of the search tree for the erratic vacuum world.
• State nodes are OR nodes, where some action must be chosen.
• At the AND nodes, shown as circles, every outcome must be handled, as indicated by the arc linking the outgoing branches.
• The solution found is shown in bold lines.
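A minimal recursive Python sketch of AND-OR depth-first search; the `problem` interface (`actions`, `results` returning the set of possible outcome states, and `goal_test`) is an illustrative assumption:

    def and_or_search(state, problem, path=()):
        # Returns a nested contingency plan, or None if no plan exists.
        if problem.goal_test(state):
            return []                           # empty plan at a goal leaf
        if state in path:
            return None                         # avoid cycles along this path
        for action in problem.actions(state):   # OR node: choose ONE action
            outcome_plans = {}
            for s2 in problem.results(state, action):   # AND node: cover ALL outcomes
                plan = and_or_search(s2, problem, path + (state,))
                if plan is None:
                    break                       # this action cannot handle outcome s2
                outcome_plans[s2] = plan
            else:
                # Every possible outcome has a sub-plan: action + contingencies,
                # i.e., the nested if-then-else structure described above.
                return [action, outcome_plans]
        return None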
i. Non-deterministic search trees:
• Start state = 1
• One solution:
  o Suck,
  o if (state = 5) then [Right, Suck]
ii. Non-determinism: actions that fail (try, try again):
• Action failure is often a non-deterministic outcome, and it creates a cycle in the search tree.
• If there is no successful solution (plan) without a cycle, the search may return a solution that contains a cycle, representing retrying the action.
• Does this mean an infinite loop in plan execution? It depends on the environment: is the action guaranteed to succeed eventually?
• In practice, one can limit the number of loop iterations, but then the plan is no longer complete (it could fail).
• The figure shows part of the search graph for the slippery vacuum world, where we have shown (some) cycles explicitly.
• All solutions for this problem are cyclic plans, because there is no way to move reliably.
19. SEARCHING WITH PARTIAL OBSERVATIONS:
• In a partially observable environment, every percept helps narrow down the set of possible states the agent might be in, thus making it easier for the agent to achieve its goals.
• The key concept required for solving partially observable problems is the belief state.
  o Belief state: a representation of the agent's current belief about the possible physical states it might be in.
• Two cases are considered:
  o searching with no observations, and
  o searching with observations.
Conformant (sensorless) search:
Example space: the belief-state space for the super-simple vacuum world.
Observations:
– Only 12 belief states are reachable, versus 2^8 = 256 possible belief states.
– The belief-state space still gets huge very fast, so sensorless search is seldom feasible in practice.
– We need sensors! They reduce the state space greatly.
i. Searching with no observations:
(a) Predicting the next belief state for the sensorless vacuum world with a deterministic action, Right.
(b) Prediction for the same belief state and action in the slippery version of the sensorless vacuum world.
ii. Searching with observations:
(a) In the deterministic world, Right is applied in the initial belief state, resulting in a new belief state with two possible physical states: [B, Dirty] and [B, Clean].
(b) In the slippery world, Right is applied in the initial belief state, giving a new belief state with four possible physical states, including [A, Dirty], [B, Dirty], and [B, Clean].
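A minimal Python sketch of the two belief-state operations illustrated above: prediction (after acting) and update (after observing). The `results` and `percept_of` callables are illustrative assumptions, and states are assumed hashable:

    def predict(belief_state, action, results):
        # The predicted belief state is the union of all states reachable
        # from any state in the current belief state; `results(s, a)` returns
        # the set of possible outcomes (a singleton if deterministic).
        return {s2 for s in belief_state for s2 in results(s, action)}

    def update(belief_state, percept, percept_of):
        # An observation filters the belief state down to the states that
        # are consistent with the percept just received.
        return {s for s in belief_state if percept_of(s) == percept}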
20. ONLINE SEARCH AGENTS AND UNKNOWN ENVIRONMENTS:
• An online search problem must be solved by an agent executing actions, rather than by pure computation.
• We assume a deterministic and fully observable environment, but we stipulate that the agent knows only the following:
  o ACTIONS(s), which returns the list of actions allowed in state s;
  o the step-cost function c(s, a, s'); note that this cannot be used until the agent knows that s' is the outcome; and
  o GOAL-TEST(s).
Offline search, by contrast:
• works "offline": it searches to compute a whole plan before ever acting;
• even with percepts, the search space gets HUGE fast in the real world: lots of possible actions and lots of possible percepts, plus non-determinism.
Online search:
• Idea: search as you go, interleaving search and action.
• Benefit: the actual percepts prune a huge subtree of the search space at each move.
• Trade-off: planning ahead less means not foreseeing problems.
  o Best case: wasted effort; reverse actions and re-plan.
  o Worst case: actions are not reversible; the agent is stuck!
Online search is the only possible method in some worlds:
• the agent doesn't know what states exist (an exploration problem);
• the agent doesn't know what effect actions have (discovery learning);
• possibly: do online search for a while, until the agent learns enough to do more predictive search.
Example:
The nature of active online search: executing an online search is an algorithm for planning and acting.
• It is very different from an offline search algorithm!
• Offline: search virtually for a plan in a constructed search space.
  o Any search algorithm can be used, e.g., A* with a strong h(n).
  o A* can expand any node it wants on the frontier (it can jump around).
• Online agent: the agent literally is in some place!
  o The agent is at one node (state) on the frontier of the search tree.
  o It can't just jump around to other states; it must plan from the current state.
  o (Modified) depth-first algorithms are ideal candidates!
• Heuristic functions remain critical!
  o h(n) tells depth-first search which of the successors to explore.
  o Admissibility remains relevant too: we want to explore likely optimal paths first.
• A real agent gets real results: at some point it finds the goal.
  o We can compare the actual path cost to that predicted at each state by h(n).
  o Competitive ratio: actual path cost / predicted cost; lower is better.
  o This could also be the basis for developing (learning!) an improved h(n) over time.
Online local search for agents:
• Hill climbing is already an online search algorithm, but it stops at a local optimum. How about randomization?
• We cannot do random restarts (you can't teleport a robot).
• How about a random walk instead of hill climbing? It can be very bad (e.g., two ways back for every way forward).
• Let's augment hill climbing with memory: learning real-time A* (LRTA*).
  o It updates the cost estimates, H(s), for the state it leaves.
  o It "likes" unexplored states: f(s) = h(s), not g(s) + h(s), for unexplored states.
LRTA* example: we are in the shaded state.
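A minimal Python sketch of one LRTA*-style decision step, following the description above: the agent scores each move by c(s, a, s') + H(s'), falls back to the plain heuristic h0 for unexplored states (f = h, not g + h), updates the cost estimate H(s) of the state it is leaving, and returns the chosen move. The function signature is an illustrative assumption:

    def lrta_star_step(s, H, h0, actions, result, cost):
        # H: dict of learned cost-to-go estimates; h0: the initial heuristic.
        def score(a):
            s2 = result(s, a)
            # Unexplored states are scored optimistically by h0 alone.
            return cost(s, a, s2) + H.get(s2, h0(s2))
        best = min(actions(s), key=score)   # move that currently looks cheapest
        H[s] = score(best)                  # learn an improved estimate for s
        return best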