Nadar Saraswathi College of Arts &
Science, Theni.
Department of CS & IT
ARTIFICIAL INTELLIGENCE
PRESENTED BY
G.KAVIYA
M.SC(IT)
TOPIC: LEARNING FROM
OBSERVATION
LEARNING
LEARNING FROM OBSERVATION
 Forms of learning
 Ensemble learning
 Computational learning theory
LEARNING:
Learning means that an agent's percepts should be used
not only for acting, but also for improving the agent's
ability to act in the future.
Learning takes place as the agent observes its
interactions with the world and its own decision-making
processes.
FORMS OF LEARNING:
A learning agent can be thought of as containing
a Performance Element, which decides what actions to take,
and a Learning Element, which modifies the performance
element so that it makes better decisions.
Three major issues in learning element design:
 Which components of the performance element are to be
learned.
 What feedback is available to learn these components.
 What representation is used for the components.
Components of agents are:
 A direct mapping from conditions on
the current state to actions.
 A means to infer relevant properties of
the world from the percept sequence.
 Information about the way the world
evolves and about the results of the
possible actions the agent can take.
 Utility information indicating the
desirability of world states.
 Action-value information indicating the
desirability of actions.
 Goals that describe classes of states
whose achievement maximizes the
agent's utility.
Learning is classified into three categories:
Supervised Learning.
Unsupervised Learning.
Reinforcement Learning.
Supervised Learning:
Learning here is performed with the
help of a teacher. Take the example of the learning
process of a small child.
The child doesn't know how to read or write.
He/she is taught by the parents at home and by the
teacher in school.
The child learns to recognize the alphabet,
numerals, etc., and each and every action is supervised by a
teacher.
Continued:
Actually, a child works on
the basis of the output that
he/she has to produce. All
these real-time events
involve supervised learning
methodology.
Similarly, in ANNs
following supervised
learning, each input vector
requires a corresponding
target vector, which
represents the desired
output.
The input vector
together with the target vector
is called a training pair.
Continued:
 In this type of training, a
supervisor or teacher is
required for error
minimization. Hence, a
network trained by this
method is said to be using
supervised training
methodology.
 In supervised learning, it is
assumed that the correct
"target" output values are
known for each input pattern.
 The input vector is
presented to the network,
which results in an output
vector, called the actual
output vector. The actual
output vector is then
compared with the desired
(target) output vector.
 If there is a difference
between the two output
vectors, the network generates
an error signal. This
error signal is used to
adjust the weights until the
actual output matches the
desired output.
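The error-signal loop described above can be sketched as a minimal single-neuron (perceptron-style) trainer. This is an illustrative sketch, not code from the slides; the function names and the AND-function example are assumptions.

```python
def predict(w, b, x):
    # Step activation: fire if the weighted sum exceeds the threshold.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train(pairs, lr=0.1, epochs=50):
    """pairs: list of (input vector, target) training pairs."""
    w = [0.0] * len(pairs[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, target in pairs:
            actual = predict(w, b, x)   # present the input, get the actual output
            error = target - actual     # error signal = desired output - actual output
            # Use the error signal to adjust the weights until the
            # actual output matches the desired output.
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Training pairs (input vector, target) for the logical AND function.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
```

After training, the network reproduces the desired outputs for all four training pairs.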
Unsupervised Learning:
 Learning here is
performed without the help of a
teacher. Consider the learning
process of a tadpole: it learns
to swim by itself; it is not
taught by its mother.
 Thus, its learning process
is independent and is not
supervised by a teacher.
 In ANNs following
unsupervised learning, input
vectors of a similar type are
grouped together without the use of
a teacher or target outputs.
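Grouping similar input vectors without any targets can be sketched with a tiny k-means-style routine (k = 2). The function name and the toy data are illustrative assumptions, not from the slides.

```python
def group(vectors, iters=10):
    """Cluster vectors into two groups by similarity (k-means-style, k=2)."""
    # Initialize the two group centres from the first two vectors.
    c = [list(vectors[0]), list(vectors[1])]
    for _ in range(iters):
        groups = [[], []]
        for v in vectors:
            # Assign each vector to the nearest centre (squared distance).
            d = [sum((vi - ci) ** 2 for vi, ci in zip(v, centre)) for centre in c]
            groups[d.index(min(d))].append(v)
        # Move each centre to the mean of its group.
        for j in (0, 1):
            if groups[j]:
                c[j] = [sum(col) / len(groups[j]) for col in zip(*groups[j])]
    return groups

# Two obvious clusters; note that no labels are given.
data = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
g = group(data)
```

The similar vectors end up grouped together purely from their mutual distances.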
Reinforcement Learning:
 The learning process is
similar to supervised learning.
In the case of supervised
learning, the correct target
output values are known for
each input pattern.
 But in some cases, less
information might be available.
For example, the network
might be told only that its
output is "50% correct" or
so. Thus, only critic
information is available, not
exact target information.
 Learning based on this
critic information is called
reinforcement learning,
and the feedback sent is called
the reinforcement signal.
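Learning from a critic signal can be sketched with a simple epsilon-greedy action-value learner (a multi-armed bandit): the agent is told only how good its action was (a noisy scalar reward), never what the correct action would have been. The reward function and all names here are illustrative assumptions.

```python
import random

random.seed(0)                      # reproducible run
values = [0.0, 0.0, 0.0]            # estimated value of each action
counts = [0, 0, 0]

def reward(action):
    # Hidden environment: action 2 is really the best. The agent never
    # sees this rule, only the noisy scalar reinforcement signal below.
    return [0.2, 0.5, 0.8][action] + random.gauss(0, 0.1)

for step in range(2000):
    # Epsilon-greedy: mostly exploit the best current estimate, sometimes explore.
    if random.random() < 0.1:
        a = random.randrange(3)
    else:
        a = values.index(max(values))
    r = reward(a)                   # critic feedback: "how good", not "what was right"
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]   # incremental average of rewards
```

After enough steps the agent's value estimates single out the best action, despite never receiving an exact target.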
ENSEMBLE LEARNING:
 Learn multiple
alternative definitions of
a concept using different
training data or different
learning algorithms.
 Combine the decisions
of the multiple definitions,
e.g. using weighted
voting.
VALUE OF ENSEMBLES
When combining multiple independent and diverse
decisions, each of which is at least more accurate than random
guessing, random errors cancel each other out and correct decisions
are reinforced.
Generate a group of base learners which, when
combined, has higher accuracy.
Different learners use different:
Algorithms.
Hyperparameters.
Representations/Modalities/Views.
Training sets.
Subproblems.
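The error-cancellation claim can be checked numerically. Assuming 15 independent voters that are each correct 70% of the time (an illustrative choice), the probability that a majority vote is correct is a tail of a binomial distribution:

```python
from math import comb

def majority_accuracy(n, p):
    # Probability that more than half of n independent voters are correct:
    # upper tail of a Binomial(n, p) distribution.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

single = 0.70                      # accuracy of each individual classifier
ensemble = majority_accuracy(15, single)
```

The combined accuracy is well above the individual accuracy, because the independent random errors rarely line up on the same example.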
Ensembles:
BOOSTING:
Boosting also uses voting/averaging, but models are
weighted according to their performance.
It is an iterative procedure: new models are influenced
by the performance of previously built ones.
* The new model is encouraged to become an
expert on instances classified incorrectly by earlier
models.
* Intuitive justification: models should be
experts that complement each other.
There are several variants of this algorithm.
Continued:
 STRONG LEARNER:
The objective of machine learning.
o Takes labeled data for training.
o Produces a classifier which can be
arbitrarily accurate.
o Strong learners are very difficult to
construct.
 WEAK LEARNER:
o Takes labeled data for training.
o Produces a classifier which is just more
accurate than random guessing.
o Constructing weak learners is
relatively easy.
ADAPTIVE BOOSTING:
Each rectangle corresponds to an example, with
weight proportional to its height.
Crosses correspond to misclassified examples.
The size of a decision tree indicates the weight of that
classifier in the final ensemble.
Using a different data distribution:
* Start with uniform weighting.
* During each step of learning:
Increase the weights of the examples which are not
correctly learned by the weak learner.
Decrease the weights of the examples which are
correctly learned by the weak learner.
Continued:
IDEA:
Focus on the difficult examples which
were not correctly classified in the
previous steps.
WEIGHTED VOTING:
Construct a strong classifier by
weighted voting of the weak classifiers.
IDEA:
A better weak classifier gets a
larger weight.
Iteratively add weak classifiers,
increasing the accuracy of the
combined classifier through
minimization of a cost function.
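The loop described above (uniform initial weighting, reweighting misclassified examples, performance-weighted voting) can be sketched as a compact AdaBoost using one-feature threshold rules ("decision stumps") as weak learners. The dataset and all names are illustrative assumptions, not from the slides.

```python
import math

def adaboost(X, y, rounds=10):
    """X: list of feature vectors; y: labels in {-1, +1}."""
    n = len(X)
    w = [1.0 / n] * n                        # start with uniform weighting
    ensemble = []                            # (alpha, feature, threshold, sign)
    for _ in range(rounds):
        best = None
        # Weak learner: the decision stump with the lowest weighted error.
        for f in range(len(X[0])):
            for t in sorted({x[f] for x in X}):
                for s in (1, -1):
                    pred = [s if x[f] > t else -s for x in X]
                    err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
                    if best is None or err < best[0]:
                        best = (err, f, t, s, pred)
        err, f, t, s, pred = best
        err = max(err, 1e-10)                # avoid log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)  # better stump -> larger weight
        ensemble.append((alpha, f, t, s))
        # Increase the weights of misclassified examples, decrease the rest.
        w = [wi * math.exp(-alpha * p * yi) for wi, p, yi in zip(w, pred, y)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    # Weighted vote of the weak classifiers.
    vote = sum(a * (s if x[f] > t else -s) for a, f, t, s in ensemble)
    return 1 if vote >= 0 else -1

# Toy 1-D dataset: the positives lie in a middle interval, which no single
# stump can separate, but a weighted vote of stumps can.
X = [[0.1], [0.2], [0.4], [0.5], [0.6], [0.8], [0.9]]
y = [-1, -1, 1, 1, 1, -1, -1]
ens = adaboost(X, y)
```

No individual stump classifies this set perfectly, yet the boosted ensemble does: each round re-focuses the weights on the examples the previous stumps got wrong.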
COMPUTATIONAL LEARNING
THEORY
 Computational learning theory characterizes the difficulty of
several types of machine learning problems.
 It also characterizes the capabilities of several types of ML algorithms.
 CLT seeks to answer questions such as:
a) "Under what conditions is successful learning possible and
impossible?"
b) "Under what conditions is a particular learning algorithm
assured of learning successfully?" That is, what kinds of
tasks are learnable, and what kind of data is required for
learnability.
Various issues are:
Sample complexity:
How many training examples are needed for a
learner to converge (with high probability) to a successful
hypothesis?
Computational complexity:
How much computational effort is needed for a
learner to converge to a successful hypothesis?
Mistake bound:
How many training examples will the learner
misclassify before converging to a successful hypothesis?
Probably Learning an Approximately
Correct (PAC) hypothesis:-
A particular setting for the learning problem is
called the probably approximately correct (PAC) learning
model.
This model of learning is based on the following
points:
1. Specifying the problem setting that defines the PAC
model.
2. How many training examples are required.
3. How much computation is required in order
to learn various classes of target functions within the PAC
model.
Problem setting:-
X : set of all instances (e.g., a set of people),
each described by attributes <age, height>.
C : target concept the learner needs to learn,
c : X → {0, 1}.
L : the learner, e.g., one that has to learn "people who are
skiers".
c(x) = 1 : positive training example.
c(x) = 0 : negative training example.
Error of a hypothesis: the true error, denoted errorD(h),
of hypothesis h w.r.t. target concept c and
distribution D is the probability that h will misclassify an
instance drawn at random according to D:
errorD(h) = Pr x∈D [c(x) ≠ h(x)].
PAC Learnability:
Ideally we would bound the number of training
examples needed to learn a hypothesis h for which
errorD(h) = 0.
This raises two difficulties, so the following weaker demands
are made:-
1. A zero-error hypothesis is not required of learner L. Instead,
the error is bounded by a constant ε, which can be made arbitrarily small.
2. The learner need not succeed for every sequence of
randomly drawn training examples. Instead, the learner must only
probably learn a hypothesis that is approximately correct, with the
failure probability bounded by a constant δ, which can also be made small.
Definition:
Consider a concept class C defined over a set of instances
X of length n, and a learner L using hypothesis space H.
C is PAC-learnable by L using H if, for all c ∈ C,
all distributions D over X,
all ε such that 0 < ε < 1/2, and
all δ such that 0 < δ < 1/2,
learner L will, with probability at least (1 − δ),
output a hypothesis h ∈ H such that
errorD(h) ≤ ε.
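Under this definition, for a finite hypothesis space H and a learner that outputs a hypothesis consistent with the training examples, the standard sample-complexity bound gives the number m of training examples that suffices for PAC learning:

```latex
m \;\ge\; \frac{1}{\epsilon}\left(\ln\lvert H\rvert + \ln\frac{1}{\delta}\right)
```

So the required number of examples grows only logarithmically in the size of the hypothesis space and in 1/δ, and linearly in 1/ε.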