A Theory of the Learnable
Leslie Valiant
Dhruv Gairola
Computational Complexity, Michael Soltys
gairold@mcmaster.ca ; dhruvgairola.blogspot.ca

November 13, 2013

Overview

1 Learning
2 Contribution
3 PAC learning
    Sample complexity
    Boolean functions
    k-decision lists
4 Conclusion

Learning

Humans can learn.
Machine learning (ML) : learning from data; knowledge acquisition
w/o explicit programming.
Explore computational models for learning.
Use models to get insights about learning.
Use models to develop new learning algorithms.

Modelling supervised learning

Given a training set of labelled examples, a learning algorithm generates a
hypothesis (candidate function). Run the hypothesis on a test set to check
how good it is.
But how good really? The training and test data may consist of
unrepresentative examples, so the hypothesis may not generalize well.
Insight : introduce probabilities to measure the degree of certainty and
correctness.

Contribution

With high probability, an (efficient) learning algorithm will find a
hypothesis that is approximately identical to the hidden target
function.
Intuition : a hypothesis consistent with a large amount of training data
is unlikely to be badly wrong, i.e., it is probably approximately correct (PAC).

PAC learning

Goal : show that, with high probability, any hypothesis consistent with
sufficiently many training examples will be approximately correct.
Notation :
X : set of all possible examples
D : distribution from which examples are drawn
H : set of all possible hypotheses
N : |X_training|, the number of training examples
f : target function
ε : error tolerance ; δ : failure probability

PAC learning (2)

Hypothesis h_g ∈ H is approximately correct if :
error(h_g) ≤ ε, where
error(h) = P(h(x) ≠ f(x) | x drawn from D)

Bad hypothesis :
error(h_b) > ε
P(h_b disagrees with 1 example) > ε
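
To make error(h) concrete, here is a minimal Python sketch (not from the slides) that estimates it by sampling; f, h, and draw below are hypothetical stand-ins for the target function, a hypothesis, and a sampler for D.

    import random

    def estimate_error(h, f, draw_example, trials=10_000):
        # Monte Carlo estimate of error(h) = P(h(x) != f(x)), x drawn from D.
        disagreements = 0
        for _ in range(trials):
            x = draw_example()              # one example drawn from D
            if h(x) != f(x):
                disagreements += 1
        return disagreements / trials

    # Toy usage : D uniform over {0,1}^3, f a conjunction, h ignores one bit.
    f = lambda x: x[0] and x[1]
    h = lambda x: x[0]
    draw = lambda: tuple(random.randint(0, 1) for _ in range(3))
    print(estimate_error(h, f, draw))       # ≈ 0.25 : h errs exactly when x0=1, x1=0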

PAC learning (3)

P(h_b agrees with 1 example) ≤ (1 − ε).
P(h_b agrees with all N examples) ≤ (1 − ε)^N.
P(H_b contains a hypothesis consistent with all N examples) ≤ |H_b|(1 − ε)^N ≤ |H|(1 − ε)^N.
Let's require |H|(1 − ε)^N ≤ δ.
Since (1 − ε) ≤ e^(−ε), solving for N gives :
N ≥ (1/ε)(ln(1/δ) + ln|H|)
This expresses the sample complexity.

Sample complexity

N ≥ (1/ε)(ln(1/δ) + ln|H|)
If you train the learning algorithm on a training set X_training of size N,
then a returned consistent hypothesis is PAC : with probability at least
(1 − δ), its error is at most ε.
e.g., if you want a smaller δ (or a smaller ε), you need a larger N (more
examples); a quick calculator sketch follows below.
Let's look at an example of H : boolean functions.
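
As a quick illustration (a sketch, not part of the original deck), the bound can be evaluated directly; sample_complexity below is a hypothetical helper name.

    import math

    def sample_complexity(epsilon, delta, hypothesis_count):
        # Smallest integer N with N >= (1/epsilon) * (ln(1/delta) + ln|H|).
        return math.ceil((1 / epsilon)
                         * (math.log(1 / delta) + math.log(hypothesis_count)))

    # e.g., |H| = 1000, error at most 0.1 with probability at least 0.95 :
    print(sample_complexity(0.1, 0.05, 1000))   # 100 examples suffice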

Why boolean functions?

Because boolean functions can represent concepts, which is what we
commonly want machines to learn.
Concepts are predicates, e.g., isMaleOrFemale(height).

Boolean functions

Boolean functions are of the form f : {0, 1}^n → {0, 1}, where n is the
number of literals.
Let H = {all boolean functions on n literals} ∴ |H| = 2^(2^n)
Substituting this H into the sample complexity expression gives a bound of
order 2^n (since ln|H| = 2^n ln 2), i.e., the class of all boolean functions
is not PAC-learnable; a numeric check follows below.
Can we restrict the size of H?
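
A quick numeric check of the blow-up (same hypothetical helper style as before) : plugging |H| = 2^(2^n) into the bound means ln|H| = 2^n ln 2, so N grows exponentially with n.

    import math

    def bound_all_boolean_functions(n, epsilon=0.1, delta=0.05):
        # Sample bound with ln|H| = 2^n * ln 2, i.e., |H| = 2^(2^n).
        return math.ceil((1 / epsilon)
                         * (math.log(1 / delta) + (2 ** n) * math.log(2)))

    for n in (5, 10, 20):
        print(n, bound_all_boolean_functions(n))
    # n=5 -> 252 ; n=10 -> 7129 ; n=20 -> about 7.3 million : exponential in n.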

k-decision lists

A single decision list (DL) represents a single boolean
function. The class of unrestricted DLs is not PAC-learnable either.
A single DL consists of a series of tests,
e.g., if f_1 then return b_1 ; elseif f_2 then return b_2 ; ... elseif f_n then return b_n ;
A single DL corresponds to a single hypothesis; a small evaluation sketch follows below.
Apply a restriction : a k-decision list (k-DL) is a decision list where each test is
a conjunction of at most k literals.
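
A decision list is easy to write down directly; here is a small, hypothetical Python encoding (not from the paper) where each test is a conjunction of (index, value) literal pairs.

    def eval_decision_list(dl, x, default=0):
        # dl : list of (test, output) pairs; a test is a tuple of
        # (index, value) literals, true iff every listed bit of x matches.
        for test, output in dl:
            if all(x[i] == v for (i, v) in test):
                return output
        return default

    # A 2-DL over x = (x0, x1, x2) :
    # if x0 ∧ ¬x1 then return 1 ; elseif x2 then return 0 ; else return 1
    dl = [(((0, 1), (1, 0)), 1), (((2, 1),), 0)]
    print(eval_decision_list(dl, (1, 0, 1), default=1))   # 1 : first test fires
    print(eval_decision_list(dl, (0, 0, 1), default=1))   # 0 : second test fires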

k-decision lists (2)

What is |H| for k-DL, i.e., what is |k-DL(n)|, where n is the number of
literals?
After some calculation, |k-DL(n)| = 2^(O(n^k log(n^k)))
Substitute |k-DL(n)| into the sample complexity expression :
N ≥ (1/ε)(ln(1/δ) + O(n^k log(n^k)))
Sample complexity is polynomial in n! What about the complexity of learning?
There are efficient algorithms for learning k-decision lists (e.g., the
greedy algorithm sketched below).
We have polynomial sample complexity and efficient k-DL learning algorithms
∴ k-DL is PAC-learnable!
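
A sketch of that greedy approach (simplified, in the spirit of Rivest's decision-list algorithm; the function names and encoding are hypothetical) : repeatedly find a short conjunction whose covered examples all share one label, emit it as a rule, and discard those examples.

    from itertools import combinations, product

    def find_consistent_test(remaining, n, k):
        # Search conjunctions of at most k literals, shortest first, for one
        # whose covered examples all carry the same label.
        for size in range(k + 1):
            for idxs in combinations(range(n), size):
                for vals in product((0, 1), repeat=size):
                    test = tuple(zip(idxs, vals))
                    covered = [(x, y) for (x, y) in remaining
                               if all(x[i] == v for (i, v) in test)]
                    labels = {y for (_, y) in covered}
                    if covered and len(labels) == 1:
                        return test, labels.pop(), covered
        return None

    def learn_k_dl(examples, n, k):
        # Greedy sketch : emit a consistent rule, drop the examples it covers.
        remaining, dl = list(examples), []
        while remaining:
            found = find_consistent_test(remaining, n, k)
            if found is None:
                raise ValueError("no consistent k-DL exists for this sample")
            test, output, covered = found
            dl.append((test, output))
            remaining = [e for e in remaining if e not in covered]
        return dl

    # Learn f(x) = x0 ∧ ¬x1 from all 8 labelled examples over 3 bits :
    f = lambda x: int(x[0] == 1 and x[1] == 0)
    sample = [(x, f(x)) for x in product((0, 1), repeat=3)]
    print(learn_k_dl(sample, n=3, k=2))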

Conclusion

PAC learning : with high probability, an (efficient) learning algorithm
will find a hypothesis that is approximately identical to the hidden
target function.
k-DL is PAC-learnable.
Computational learning theory : concerned with the analysis of ML
algorithms; it draws on many fields.

References

Carla Gomes, Cornell, Foundations of AI notes

