22. "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E."
-- Tom M. Mitchell
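Mitchell's definition can be made concrete with a toy sketch. Here T is classifying integers as "high" or "low", P is accuracy on a fixed test set, and E is the labeled training sample; the 1-nearest-neighbor learner and the synthetic data are illustrative assumptions, not from the slides (Python rather than R, for brevity):

```python
# Toy illustration of Mitchell's E/T/P definition (hypothetical data).
# T: classify integers as "high" (x >= 50, label 1) or "low" (label 0).
# P: accuracy on a fixed test set.
# E: a labeled training sample; more experience -> better performance.

def label(x):
    return 1 if x >= 50 else 0

def predict_1nn(train, x):
    # 1-nearest-neighbor: copy the label of the closest training point.
    nearest = min(train, key=lambda pt: abs(pt[0] - x))
    return nearest[1]

def accuracy(train, test_xs):
    correct = sum(predict_1nn(train, x) == label(x) for x in test_xs)
    return correct / len(test_xs)

test_xs = range(100)
small_E = [(x, label(x)) for x in (0, 10)]            # little experience
large_E = [(x, label(x)) for x in range(0, 100, 10)]  # more experience

acc_small = accuracy(small_E, test_xs)
acc_large = accuracy(large_E, test_xs)
assert acc_small < acc_large  # performance at T, by P, improved with E
```

With only two (identically labeled) training points the learner predicts one class everywhere; with ten points its decision boundary lands near the true threshold, so accuracy rises with experience.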
48. Predictive modeling workflow (flowchart; stages listed in order):
Raw Data Collection
Pre-processing: Missing Data, Feature Extraction, Feature Scaling, Feature Selection, Dimensionality Reduction, Sampling
Data Split: Training Dataset / Test Dataset
Algorithm Training: Optimization, Cross-Validation, Performance Metrics, Model Selection
Post-processing
Final Model: Evaluation / Classification
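The stages of the workflow diagram can be walked through end to end in a minimal sketch. The two-class 1-D dataset and the nearest-centroid classifier are illustrative assumptions, standing in for whatever data and algorithm a real project would use:

```python
import random

# Minimal sketch of the workflow above: collect -> pre-process (scale)
# -> split -> train -> evaluate. Dataset and classifier are hypothetical.
random.seed(0)

# Raw data collection: two noisy 1-D classes.
raw = [(random.gauss(0, 1), 0) for _ in range(100)] + \
      [(random.gauss(4, 1), 1) for _ in range(100)]

# Pre-processing: feature scaling (standardization).
xs = [x for x, _ in raw]
mean = sum(xs) / len(xs)
std = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
data = [((x - mean) / std, y) for x, y in raw]

# Data split: training dataset vs. test dataset.
random.shuffle(data)
train, test = data[:150], data[150:]

# Algorithm training: nearest-centroid classifier.
centroids = {}
for cls in (0, 1):
    pts = [x for x, y in train if y == cls]
    centroids[cls] = sum(pts) / len(pts)

def predict(x):
    return min(centroids, key=lambda c: abs(centroids[c] - x))

# Evaluation: accuracy as the performance metric on held-out data.
acc = sum(predict(x) == y for x, y in test) / len(test)
print(f"test accuracy = {acc:.2f}")
```

Each comment maps one code step onto a box in the diagram; in practice every stage (sampling, cross-validation, model selection) would be considerably more involved.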
50. Classification algorithms
Linear Classification
  Logistic Regression
  Linear Discriminant Analysis
  PLS Discriminant Analysis
Non-Linear Classification
  Mixture Discriminant Analysis
  Quadratic Discriminant Analysis
  Regularized Discriminant Analysis
  Flexible Discriminant Analysis
  Neural Networks
  Support Vector Machines
  k-Nearest Neighbors
  Naive Bayes
Decision Trees for Classification
  Classification and Regression Trees (CART)
  C4.5
  PART
  Bagging CART
  Random Forest
  Gradient Boosting Machines
  Boosted C5.0
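As a concrete instance of one entry in the list, logistic regression can be trained by gradient descent on the average log-loss. The tiny 1-D dataset, learning rate, and iteration count below are illustrative assumptions:

```python
import math

# Sketch of logistic regression (from the list above) trained by
# gradient descent on a tiny, hypothetical 1-D dataset.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0, 0, 1, 1]

w, b = 0.0, 0.0
lr = 0.5
for _ in range(2000):
    # Gradients of the average log-loss over the training set.
    grad_w = grad_b = 0.0
    for x, y in zip(xs, ys):
        p = 1 / (1 + math.exp(-(w * x + b)))  # predicted probability
        grad_w += (p - y) * x
        grad_b += (p - y)
    w -= lr * grad_w / len(xs)
    b -= lr * grad_b / len(xs)

preds = [int(1 / (1 + math.exp(-(w * x + b))) > 0.5) for x in xs]
print(preds)  # the learned boundary separates the two classes
```

Because the data are linearly separable and symmetric about x = 1.5, the decision boundary converges to that midpoint and the training points are classified correctly.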
51. Regression algorithms
Linear Regression
  Ordinary Least Squares Regression
  Stepwise Linear Regression
  Principal Component Regression
  Partial Least Squares Regression
Penalized Regression
  Ridge Regression
  Least Absolute Shrinkage and Selection Operator (LASSO)
  ElasticNet
Non-Linear Regression
  Multivariate Adaptive Regression Splines (MARS)
  Support Vector Machines
  k-Nearest Neighbors
  Neural Networks
Decision Trees for Regression
  Classification and Regression Trees (CART)
  Conditional Decision Trees
Rule Systems
  Bagging CART
  Random Forest
  Gradient Boosted Machines
  Cubist
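The penalized-regression entries above differ from OLS only in an added penalty on the coefficients. For a single feature with no intercept, ridge regression has a closed form that makes the shrinkage visible; the data and penalty strength below are illustrative assumptions:

```python
# Sketch of penalized regression from the list above: ridge shrinks the
# OLS slope toward zero. Hypothetical data; no intercept, for brevity.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]

sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))

w_ols = sxy / sxx            # ordinary least squares slope
lam = 5.0                    # ridge penalty strength (assumed)
w_ridge = sxy / (sxx + lam)  # ridge closed-form slope

print(f"OLS slope   = {w_ols:.3f}")
print(f"ridge slope = {w_ridge:.3f}")
assert abs(w_ridge) < abs(w_ols)  # shrinkage toward zero
```

LASSO and ElasticNet follow the same idea with an L1 (or mixed L1/L2) penalty, which additionally drives some coefficients exactly to zero and so performs feature selection.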
57. > summary(mod2)
------------------------------------------------
Gaussian finite mixture model for classification
------------------------------------------------

EDDA model summary:

 log.likelihood   n df       BIC
      -187.7097 150 36 -555.8024

Classes       n Model G
  setosa     50   VEV 1
  versicolor 50   VEV 1
  virginica  50   VEV 1

Training classification summary:

            Predicted
Class        setosa versicolor virginica
  setosa         50          0         0
  versicolor      0         47         3
  virginica       0          0        50

Training error = 0.02

> plot(mod2, what = "scatterplot")
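The training error reported by summary(mod2) follows directly from the confusion matrix it prints: misclassified cases divided by the total. A small check in Python, using the matrix values from the output above:

```python
# Training error from the confusion matrix printed by summary(mod2):
# rows = true class, columns = predicted class (values from the slide).
confusion = {
    "setosa":     {"setosa": 50, "versicolor":  0, "virginica":  0},
    "versicolor": {"setosa":  0, "versicolor": 47, "virginica":  3},
    "virginica":  {"setosa":  0, "versicolor":  0, "virginica": 50},
}

total = sum(sum(row.values()) for row in confusion.values())
correct = sum(confusion[c][c] for c in confusion)  # diagonal entries
error = (total - correct) / total
print(f"training error = {error}")
```

Only the three versicolor flowers predicted as virginica are off the diagonal, giving 3/150 = 0.02, matching the reported value.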
60. > head(titanic.raw)
Class Sex Age Survived
1 3rd Male Child No
2 3rd Male Child No
3 3rd Male Child No
4 3rd Male Child No
5 3rd Male Child No
6 3rd Male Child No
> tail(titanic.raw)
Class Sex Age Survived
2196 Crew Female Adult Yes
2197 Crew Female Adult Yes
2198 Crew Female Adult Yes
2199 Crew Female Adult Yes
2200 Crew Female Adult Yes
2201 Crew Female Adult Yes
> summary(titanic.raw)
Class Sex Age Survived
1st :325 Female: 470 Adult:2092 No :1490
2nd :285 Male :1731 Child: 109 Yes: 711
3rd :706
Crew:885
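The marginal counts printed by summary(titanic.raw) can be cross-checked for consistency and turned into an overall survival rate; the values below are copied from the output above:

```python
# Marginal counts from summary(titanic.raw) on the slide.
survived = {"No": 1490, "Yes": 711}
classes = {"1st": 325, "2nd": 285, "3rd": 706, "Crew": 885}

total = sum(survived.values())
# Every factor's counts must sum to the same number of rows.
assert total == sum(classes.values()) == 2201

rate = survived["Yes"] / total
print(f"overall survival rate = {rate:.3f}")
```

Both marginals sum to the 2201 rows shown by tail(titanic.raw), and 711/2201 gives a survival rate of roughly 32%.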