2. About me
• Education
• NCU (MIS), NCCU (CS)
• Work Experience
• Telecom big data Innovation
• AI projects
• Retail marketing technology
• User Group
• TW Spark User Group
• TW Hadoop User Group
• Taiwan Data Engineer Association Director
• Research
• Big Data / ML / AIoT / AI columnist
6. Data Preprocessing (1)
• Data preprocessing is the process of transforming raw data into an
understandable format.
• It is also an important step in data mining as we cannot work with raw
data.
• The quality of the data should be checked before applying machine
learning or data mining algorithms.
7. Data Preprocessing (2)
• Preprocessing of data starts with checking data quality. Quality can be
assessed along the following dimensions:
• Accuracy: To check whether the data entered is correct or not.
• Completeness: To check whether all required data is recorded, with nothing missing.
• Consistency: To check whether copies of the same data kept in different places match.
• Timeliness: The data should be up to date.
• Believability: The data should be trustworthy.
• Interpretability: The data should be understandable.
8. Data Preprocessing (3)
• Major Tasks in Data Preprocessing
  • Feature Transformation
    • Polynomial feature
    • Categorical feature
    • Numerical feature
    • Custom feature
  • Standardization and Normalization
  • Data cleaning
    • Missing value
    • Cut bins
  • Data integration
  • Data reduction
  • Data transformation
9. Data Cleaning (1)
• Data cleaning is the process of removing incorrect, incomplete, and
inaccurate data from a dataset, and of replacing the missing values.
10. Data Cleaning (2)
• Handling missing values:
• A standard value such as "Not Available" or NA can be used to mark missing
values.
• Missing values can also be filled in manually, but this is not practical when
the dataset is big.
• The attribute's mean can be used to replace a missing value when the data is
normally distributed; in the case of a non-normal distribution, the attribute's
median can be used instead (see the pandas sketch below).
• With regression or decision tree algorithms, a missing value can be replaced
by the most probable predicted value.
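A minimal pandas sketch of these imputation options (the DataFrame and column name are illustrative):

import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [25, 30, np.nan, 40, np.nan, 35]})  # toy data with gaps

df["age_mean"] = df["age"].fillna(df["age"].mean())      # mean: for roughly normal data
df["age_median"] = df["age"].fillna(df["age"].median())  # median: for skewed data
df["age_marked"] = df["age"].fillna("NA")                # mark gaps with a standard value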
11. Data Cleaning (3)
• Noisy data
Noisy data generally means random errors or unnecessary data points.
• Binning: This method smooths noisy data. There are three ways to smooth the
values in a bin, sketched below.
• Smoothing by bin mean: the values in the bin are replaced by the mean value
of the bin.
• Smoothing by bin median: the values in the bin are replaced by the median
value of the bin.
• Smoothing by bin boundary: the minimum and maximum values of the bin are
taken as boundaries, and each value is replaced by the closest boundary value.
• Clustering: This is used for finding outliers and also for grouping the data.
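A minimal pandas sketch of the three binning smoothers, assuming a toy numeric series and equal-depth bins:

import pandas as pd

s = pd.Series([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])
bins = pd.qcut(s, q=3)                           # equal-depth (equal-frequency) bins

by_mean = s.groupby(bins).transform("mean")      # smoothing by bin mean
by_median = s.groupby(bins).transform("median")  # smoothing by bin median

lo = s.groupby(bins).transform("min")            # lower bin boundary
hi = s.groupby(bins).transform("max")            # upper bin boundary
by_boundary = lo.where((s - lo) <= (hi - s), hi) # snap each value to the closer boundary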
12. Data Cleaning (4)
• Data Integration
The process of combining data from multiple sources into a single dataset,
for example via:
• Database SQL join
• Python pandas join/merge
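A minimal sketch of a pandas merge, the Python analogue of a SQL join (table and column names are made up):

import pandas as pd

customers = pd.DataFrame({"cust_id": [1, 2, 3], "name": ["Ann", "Bob", "Cho"]})
orders = pd.DataFrame({"cust_id": [1, 1, 3], "amount": [250, 90, 120]})

# inner join on the shared key, like a SQL JOIN ... USING (cust_id)
merged = customers.merge(orders, on="cust_id", how="inner")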
13. Data Cleaning (5)
• Data Reduction:
Reducing the volume of the data makes the analysis easier.
• Dimensionality reduction: In this process, the number of random variables or
attributes is reduced so that the dimensionality of the data set shrinks, by
combining and merging attributes without losing the original characteristics
of the data. This also reduces storage space and computation time.
Homework: Try to explain what Principal Component Analysis (PCA) and Singular
Value Decomposition (SVD) are, and why they perform dimensionality reduction.
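As a starting point for the homework, a minimal PCA sketch with scikit-learn on toy data (the shapes and component count are arbitrary):

import numpy as np
from sklearn.decomposition import PCA

X = np.random.RandomState(0).rand(100, 10)  # 100 samples, 10 features

pca = PCA(n_components=3)                   # keep the 3 highest-variance directions
X_reduced = pca.fit_transform(X)            # shape becomes (100, 3)
print(pca.explained_variance_ratio_)        # variance retained by each component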
14. Data Cleaning (6)
• Data Transformation
A change made to the format or the structure of the data.
• Smoothing: Smoothing removes noise from the data so that even simple
patterns that help prediction become visible.
• Discretization: Continuous data is split into intervals. Discretization
reduces the data size (see the sketch below).
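A minimal sketch of discretization with pandas, assuming toy ages and hand-picked interval edges:

import pandas as pd

ages = pd.Series([5, 17, 23, 38, 47, 61, 80])
groups = pd.cut(ages, bins=[0, 18, 40, 65, 120],
                labels=["child", "young adult", "middle-aged", "senior"])
print(groups)  # each continuous value is mapped to its interval label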
16. Feature Transformation (1)
• Interaction Features
Create new features from existing features, using common knowledge of the
data's domain (see the pandas sketch below).
• Feature addition
• Feature subtraction
• Feature product
• Feature division
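A minimal pandas sketch of interaction features (the columns and the BMI example are illustrative domain knowledge):

import pandas as pd

df = pd.DataFrame({"height_m": [1.6, 1.7, 1.8], "weight_kg": [55, 70, 90]})

df["sum"] = df["height_m"] + df["weight_kg"]       # feature addition
df["diff"] = df["weight_kg"] - df["height_m"]      # feature subtraction
df["prod"] = df["height_m"] * df["weight_kg"]      # feature product
df["bmi"] = df["weight_kg"] / df["height_m"] ** 2  # feature division with domain knowledge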
17. Feature Transformation (2)
• Polynomial features
• Creating polynomial features is a simple and common way of feature
engineering that adds complexity to numeric input data by combining
features.
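A minimal sketch with scikit-learn's PolynomialFeatures (degree and data are arbitrary):

import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = np.array([[2, 3], [4, 5]])
poly = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly.fit_transform(X)       # columns: x1, x2, x1^2, x1*x2, x2^2
print(poly.get_feature_names_out())  # needs scikit-learn >= 1.0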
18. Feature Transformation (3)
• Categorical features
• Once you know what type of categorical data you're working with, you can
pick a suitable transformation tool.
• In sklearn that will be an OrdinalEncoder for ordinal data and a
OneHotEncoder for nominal data.
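A minimal sketch of both encoders (the categories are made up; sparse_output needs scikit-learn >= 1.2, older versions use sparse=False):

from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

sizes = [["S"], ["M"], ["L"], ["M"]]     # ordinal: S < M < L
colors = [["red"], ["green"], ["blue"]]  # nominal: no natural order

ordinal = OrdinalEncoder(categories=[["S", "M", "L"]])
print(ordinal.fit_transform(sizes))      # [[0.], [1.], [2.], [1.]]

onehot = OneHotEncoder(sparse_output=False)
print(onehot.fit_transform(colors))      # one 0/1 column per color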
19. Feature Transformation (4)
• Numerical features
• Numerical features can be converted into categorical features.
• The two most common ways to do this are discretization and binarization.
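A minimal sketch of both conversions with scikit-learn (threshold and bin count are arbitrary):

import numpy as np
from sklearn.preprocessing import Binarizer, KBinsDiscretizer

X = np.array([[1.0], [3.0], [7.0], [9.0]])

print(Binarizer(threshold=5.0).fit_transform(X))  # binarization: 0/1 around a threshold

disc = KBinsDiscretizer(n_bins=3, encode="ordinal", strategy="uniform")
print(disc.fit_transform(X))                      # discretization: a bin index per value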
20. Feature Transformation (5)
• Custom transformers
• If you want to convert an existing function into a transformer to assist in data
cleaning or processing, you can implement a transformer from an arbitrary
function with FunctionTransformer.
• Or you can use a lambda function to transform values.
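A minimal FunctionTransformer sketch (the log transform is just an example choice):

import numpy as np
from sklearn.preprocessing import FunctionTransformer

X = np.array([[0.0], [9.0], [99.0]])

log_tf = FunctionTransformer(np.log1p)             # wrap an existing function
print(log_tf.fit_transform(X))                     # log(1 + x) per element

square_tf = FunctionTransformer(lambda x: x ** 2)  # a lambda works too, but cannot be pickled
print(square_tf.fit_transform(X))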
22. Standardization and Normalization (1)
• Before applying any scaling transformation, it is very important to split
your data into a train set and a test set, and to fit the scaler on the train
set only (see the sketch below).
• Standard Scaler
• MinMax Scaler
• MaxAbs Scaler
• Robust Scaler
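A minimal sketch of the split-then-scale pattern, shown here with StandardScaler on toy data:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.random.RandomState(0).rand(100, 3)
X_train, X_test = train_test_split(X, test_size=0.2, random_state=0)

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # learn mean/std on the train set only
X_test_scaled = scaler.transform(X_test)        # reuse the train statistics: no leakage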
23. Standardization and Normalization (2)
• Standard Scaler
• It centers and scales the data using the formula z = (x - u) / s, where u is
the mean and s is the standard deviation of the feature.
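A minimal sketch checking the formula by hand against StandardScaler (toy values):

import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0], [2.0], [3.0], [4.0]])

z_manual = (X - X.mean(axis=0)) / X.std(axis=0)  # z = (x - u) / s
z_sklearn = StandardScaler().fit_transform(X)    # same result
print(np.allclose(z_manual, z_sklearn))          # True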
24. Standardization and Normalization (3)
• MinMax Scaler
• The MinMaxScaler transforms features by scaling each feature to a given
range.
• This scaler works better for cases where the distribution is not Gaussian or
the standard deviation is very small.
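A minimal MinMaxScaler sketch (toy values, default [0, 1] range):

import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.array([[1.0], [5.0], [9.0]])
print(MinMaxScaler(feature_range=(0, 1)).fit_transform(X))
# (x - min) / (max - min) -> [[0.], [0.5], [1.]]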
25. Standardization and Normalization (4)
• MaxAbs Scaler
• The MaxAbsScaler works very similarly to the MinMaxScaler but automatically
scales the data to a [-1,1] range based on the absolute maximum.
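A minimal MaxAbsScaler sketch (toy values):

import numpy as np
from sklearn.preprocessing import MaxAbsScaler

X = np.array([[-4.0], [2.0], [8.0]])
print(MaxAbsScaler().fit_transform(X))  # x / max(|x|) -> [[-0.5], [0.25], [1.]]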
26. Standardization and Normalization (5)
• Robust Scaler
• If your data contains many outliers, scaling with the mean and standard
deviation of the data is unlikely to work well.
• In these cases, you can use the RobustScaler. It removes the median and
scales the data according to the quantile range.
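A minimal RobustScaler sketch on toy data with one deliberate outlier:

import numpy as np
from sklearn.preprocessing import RobustScaler

X = np.array([[1.0], [2.0], [3.0], [4.0], [100.0]])  # 100 is an outlier
print(RobustScaler().fit_transform(X))
# (x - median) / IQR, so the single outlier no longer dominates the scale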
Homework: Try to explain what Z-score standardization is.
27. Standardization and Normalization (6)
• Normalization is the process of scaling individual samples to have unit
norm.
• l1 (L1 norm): divides each sample by the sum of the absolute values of its
components, giving equal penalty to all components and enforcing sparsity.
• l2 (L2 norm): divides each sample by the square root of the sum of its
squared components (the Euclidean length).
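A minimal Normalizer sketch contrasting the two norms on one toy sample:

import numpy as np
from sklearn.preprocessing import Normalizer

X = np.array([[3.0, 4.0]])
print(Normalizer(norm="l1").fit_transform(X))  # [[3/7, 4/7]]: absolute values sum to 1
print(Normalizer(norm="l2").fit_transform(X))  # [[0.6, 0.8]]: unit Euclidean length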