Prof. Pier Luca Lanzi
Clustering
Data Mining and Text Mining (UIC 583 @ Politecnico di Milano)
Prof. Pier Luca Lanzi
Readings
• Mining of Massive Datasets (Chapter 7, Section 3.5)
2
Prof. Pier Luca Lanzi
Clustering algorithms group a collection of data points
into “clusters” according to some distance measure
Data points in the same cluster should have
a small distance from one another
Data points in different clusters should be at
a large distance from one another.
Prof. Pier Luca Lanzi
Clustering searches for “natural” grouping/structure in unlabeled data
(Unsupervised Learning)
Prof. Pier Luca Lanzi
What is Cluster Analysis?
• A cluster is a collection of data objects
§Similar to one another within the same cluster
§Dissimilar to the objects in other clusters
• Cluster analysis
§Given a set of data points, try to understand their structure
§Finds similarities between data according to the characteristics
found in the data
§Groups similar data objects into clusters
§It is unsupervised learning since there are no predefined classes
• Typical applications
§Stand-alone tool to get insight into data
§Preprocessing step for other algorithms
8
Prof. Pier Luca Lanzi
What Is Good Clustering?
• A good clustering consists of high quality clusters with
§High intra-class similarity
§Low inter-class similarity
• The quality of a clustering result depends on both the similarity
measure used by the method and its implementation
• The quality of a clustering method is also measured by its ability
to discover some or all of the hidden patterns
• Evaluation
§Various measures of intra/inter-cluster similarity (see the sketch below)
§Manual inspection
§Benchmarking on existing labels
9
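As a concrete, minimal illustration of the intra/inter-cluster comparison mentioned above, the following Python sketch computes the average pairwise distance within and between clusters for a toy labeled dataset; the function name and the toy data are illustrative assumptions, not part of the original slides.

```python
import numpy as np

def intra_inter_distances(points, labels):
    """Average pairwise Euclidean distance within vs. between clusters."""
    points = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))             # full pairwise distance matrix
    same = labels[:, None] == labels[None, :]            # True where both points share a cluster
    upper = np.triu(np.ones_like(dist, dtype=bool), 1)   # count each pair once
    return dist[same & upper].mean(), dist[~same & upper].mean()

# Two well-separated toy clusters: intra should be small, inter large
pts = [[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]]
lbl = [0, 0, 0, 1, 1, 1]
intra, inter = intra_inter_distances(pts, lbl)
print(f"intra = {intra:.2f}, inter = {inter:.2f}")
```

A good clustering of this data shows exactly the pattern the slide asks for: small intra-cluster distances and large inter-cluster distances.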
Prof. Pier Luca Lanzi
Measure the Quality of Clustering
• Dissimilarity/Similarity metric: similarity is expressed in terms of a
distance function, typically a metric d(i, j)
• There is a separate “quality” function that measures the “goodness” of
a cluster
• The definitions of distance functions are usually very different for
interval-scaled, Boolean, categorical, ordinal, ratio, and vector variables
• Weights should be associated with different variables based on
applications and data semantics (see the weighted-distance sketch below)
• It is hard to define “similar enough” or “good enough” as the answer is
typically highly subjective
10
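The point about type-specific distance functions and per-variable weights can be made concrete with a small Gower-style sketch; the attribute names, weights, and the convention that numeric values are pre-scaled to [0, 1] are illustrative assumptions, not something prescribed in the slides.

```python
def mixed_dissimilarity(a, b, kinds, weights):
    """Weighted dissimilarity for records with mixed attribute types.

    kinds[f] is "numeric" (values assumed pre-scaled to [0, 1], so absolute
    differences are comparable across attributes) or "categorical"
    (simple 0/1 mismatch). Weights encode application-specific importance."""
    total = wsum = 0.0
    for x, y, kind, w in zip(a, b, kinds, weights):
        d = abs(x - y) if kind == "numeric" else float(x != y)
        total += w * d
        wsum += w
    return total / wsum

# Hypothetical records: (income scaled to [0, 1], outlook, windy)
r1 = (0.30, "sunny", False)
r2 = (0.45, "overcast", False)
print(mixed_dissimilarity(r1, r2,
                          kinds=("numeric", "categorical", "categorical"),
                          weights=(1.0, 2.0, 0.5)))
```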
Prof. Pier Luca Lanzi
Clustering Applications
• Marketing
§Help marketers discover distinct groups in their customer bases,
and then use this knowledge to develop targeted marketing
programs
• Land use
§Identification of areas of similar land use in an earth observation
database
• Insurance
§Identifying groups of motor insurance policy holders with a high
average claim cost
• City-planning
§Identifying groups of houses according to their house type, value,
and geographical location
• Earthquake studies
§Observed earthquake epicenters should be clustered along
continental faults
11
Prof. Pier Luca Lanzi
Clustering Methods
• Hierarchical vs. point assignment
• Numeric and/or symbolic data
• Deterministic vs. probabilistic
• Exclusive vs. overlapping
• Hierarchical vs. flat
• Top-down vs. bottom-up
12
Prof. Pier Luca Lanzi
Data Structures

Data Matrix (n objects described by p attributes; xif is the value of attribute f for object i):

  | x11 ... x1f ... x1p |
  | ... ... ... ... ... |
  | xi1 ... xif ... xip |
  | ... ... ... ... ... |
  | xn1 ... xnf ... xnp |

Example data matrix (weather data):

  Outlook   Temp  Humidity  Windy  Play
  Sunny     Hot   High      False  No
  Sunny     Hot   High      True   No
  Overcast  Hot   High      False  Yes
  …         …     …         …      …

Dis/Similarity Matrix (n x n; d(i, j) is the distance between objects i and j; the matrix is symmetric with a zero diagonal, so only the lower triangle is shown):

  |   0                          |
  | d(2,1)   0                   |
  | d(3,1)  d(3,2)   0           |
  |   :       :       :          |
  | d(n,1)  d(n,2)  ...  ...  0  |

13
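A minimal sketch of how the two structures relate: starting from a small numeric data matrix, build the n x n dissimilarity matrix of Euclidean distances. NumPy is assumed and the toy values are arbitrary.

```python
import numpy as np

# Toy data matrix: n = 3 objects (rows) described by p = 2 numeric attributes
X = np.array([[1.0, 2.0],
              [2.0, 0.0],
              [4.0, 4.0]])

# n x n dissimilarity matrix of Euclidean distances d(i, j); it is symmetric
# with zeros on the diagonal, so only the lower triangle really needs storing
diff = X[:, None, :] - X[None, :, :]
D = np.sqrt((diff ** 2).sum(axis=-1))
print(np.round(np.tril(D), 3))
```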
Prof. Pier Luca Lanzi
Distance Measures
Prof. Pier Luca Lanzi
Distance Measures
• Given a space and a set of points in that space, a distance
measure d(x,y) maps two points x and y to a real number
and satisfies the following axioms (spot-checked numerically in the sketch below)
• d(x,y) ≥ 0
• d(x,y) = 0 if and only if x = y
• d(x,y) = d(y,x)
• d(x,y) ≤ d(x,z) + d(z,y)
17
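The axioms can be spot-checked numerically, for instance for the Euclidean distance on random points; the sketch below is only a sanity test on a sample, not a proof.

```python
import itertools
import math
import random

def euclidean(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

# Sanity check of the axioms on a random sample of 3-D points
random.seed(0)
pts = [tuple(random.uniform(-5, 5) for _ in range(3)) for _ in range(15)]
for x, y, z in itertools.product(pts, repeat=3):
    assert euclidean(x, y) >= 0                                         # non-negativity
    assert (euclidean(x, y) == 0) == (x == y)                           # d(x,y) = 0 iff x = y
    assert math.isclose(euclidean(x, y), euclidean(y, x))               # symmetry
    assert euclidean(x, y) <= euclidean(x, z) + euclidean(z, y) + 1e-9  # triangle inequality
print("all four axioms hold on the sample")
```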
Prof. Pier Luca Lanzi
Euclidean Distances (from Mining of Massive Datasets, Section 3.5.2)
• Lr-norm: for any constant r, the Lr-norm is the distance measure defined by
d([x1, x2, …, xn], [y1, y2, …, yn]) = ( Σ_{i=1..n} |xi − yi|^r )^{1/r}
• Euclidean distance (r=2): the most familiar case, the L2-norm,
d([x1, x2, …, xn], [y1, y2, …, yn]) = sqrt( Σ_{i=1..n} (xi − yi)^2 )
that is, square the difference in each dimension, sum the squares, and take
the positive square root
• Manhattan distance (r=1): the L1-norm, the sum of the magnitudes of the
differences in each dimension; it is called “Manhattan distance” because it is
the distance one would have to travel between two points if constrained to
move along the grid-like streets of a city such as Manhattan
• L∞-norm: the limit of the Lr-norm as r grows, i.e., the maximum of |xi − yi|
over all dimensions
• For the L2-norm the first three axioms are easy to verify: the distance cannot
be negative because the positive square root is taken; any i with xi ≠ yi makes
it strictly positive, and it is 0 when xi = yi for all i; symmetry follows from
(xi − yi)^2 = (yi − xi)^2; the triangle inequality requires more algebra but is a
well-known property of Euclidean spaces
18
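A small sketch of the Lr-norm family, reusing the vectors from the cosine-distance example later in the deck; the helper name is mine, and r = math.inf is used for the L∞ case.

```python
import math

def lr_distance(x, y, r):
    """Lr-norm distance between two numeric vectors (r = math.inf for L-infinity)."""
    if r == math.inf:
        return max(abs(a - b) for a, b in zip(x, y))
    return sum(abs(a - b) ** r for a, b in zip(x, y)) ** (1.0 / r)

x, y = (1, 2, -1), (2, 1, 1)
print(lr_distance(x, y, 2))         # Euclidean (L2): sqrt(6) ≈ 2.449
print(lr_distance(x, y, 1))         # Manhattan (L1): 4
print(lr_distance(x, y, math.inf))  # L-infinity: largest per-dimension difference, 2
```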
Prof. Pier Luca Lanzi
Jaccard Distance
• Jaccard distance is defined as
d(x,y) = 1 – SIM(x,y)
• SIM(x,y) is the Jaccard similarity, SIM(x,y) = |x ∩ y| / |x ∪ y|,
which can also be interpreted as the percentage of identical
attributes
19
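A minimal sketch of Jaccard similarity and distance on sets of attributes; the toy sets are illustrative.

```python
def jaccard_similarity(x, y):
    """SIM(x, y) = |x ∩ y| / |x ∪ y| for two sets of attributes."""
    x, y = set(x), set(y)
    return len(x & y) / len(x | y) if (x or y) else 1.0

def jaccard_distance(x, y):
    return 1.0 - jaccard_similarity(x, y)

# Toy attribute sets: 2 shared attributes out of 4 distinct ones overall
a = {"red", "round", "small"}
b = {"red", "round", "large"}
print(jaccard_similarity(a, b))  # 0.5
print(jaccard_distance(a, b))    # 0.5
```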
Prof. Pier Luca Lanzi
Cosine Distance
• The cosine distance between x and y is the angle that the vectors to
those points make
• This angle will be in the range 0 to 180 degrees, regardless of
how many dimensions the space has
• Example: given x = (1,2,-1) and y = (2,1,1), cos θ = x·y / (‖x‖‖y‖) =
3/(√6·√6) = 1/2, so the angle between the two vectors is 60 degrees
20
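The slide's example can be reproduced directly; this is a sketch whose helper returns the angle in degrees.

```python
import math

def cosine_distance_degrees(x, y):
    """Angle (in degrees) between the vectors pointing to x and y."""
    dot = sum(a * b for a, b in zip(x, y))
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_y = math.sqrt(sum(b * b for b in y))
    return math.degrees(math.acos(dot / (norm_x * norm_y)))

# Slide example: cos θ = 3 / (sqrt(6) * sqrt(6)) = 1/2, so the angle is 60 degrees
print(cosine_distance_degrees((1, 2, -1), (2, 1, 1)))
```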
Prof. Pier Luca Lanzi
Edit Distance
• The edit distance between two strings x=x1x2…xn and y=y1y2…ym is
the smallest number of insertions and deletions of single
characters that will transform x into y
• Alternatively, the edit distance d(x,y) can be computed from the
longest common subsequence (LCS) of x and y as
d(x,y) = |x| + |y| – 2|LCS(x,y)|
• Example
§The edit distance between x=abcde and y=acfdeg is 3
(delete b, insert f, insert g); the LCS is acde, so
|x| + |y| – 2|LCS| = 5 + 6 – 8 = 3, consistent with the previous result
21
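A sketch of the LCS-based computation, reproducing the slide's example; the LCS length is computed with standard dynamic programming, and the distance counts insertions and deletions only.

```python
def lcs_length(x, y):
    """Length of the longest common subsequence, via dynamic programming."""
    table = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i, a in enumerate(x, 1):
        for j, b in enumerate(y, 1):
            table[i][j] = (table[i - 1][j - 1] + 1 if a == b
                           else max(table[i - 1][j], table[i][j - 1]))
    return table[len(x)][len(y)]

def edit_distance(x, y):
    """Insert/delete-only edit distance: |x| + |y| - 2 |LCS(x, y)|."""
    return len(x) + len(y) - 2 * lcs_length(x, y)

# Slide example: LCS("abcde", "acfdeg") = "acde", so d = 5 + 6 - 2*4 = 3
print(lcs_length("abcde", "acfdeg"), edit_distance("abcde", "acfdeg"))
```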
Prof. Pier Luca Lanzi
Hamming Distance
• The Hamming distance between two vectors is the number of
components in which they differ
• Or equivalently, given the number of variables p and the number
m of matching components, we can define the normalized Hamming
distance as d(x,y) = (p – m) / p
• Example: the (normalized) Hamming distance between the vectors 10101 and
11110 is 3/5, since they differ in 3 of the 5 components
22
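A sketch of the Hamming distance reproducing the 3/5 example; the normalized flag switches between the raw count and the fraction of differing components.

```python
def hamming_distance(x, y, normalized=True):
    """Number (or fraction, if normalized) of components in which two
    equal-length vectors differ."""
    if len(x) != len(y):
        raise ValueError("vectors must have the same length")
    differing = sum(a != b for a, b in zip(x, y))
    return differing / len(x) if normalized else differing

# Slide example: 10101 vs 11110 differ in 3 of 5 components, i.e. 3/5
print(hamming_distance("10101", "11110"))                     # 0.6
print(hamming_distance("10101", "11110", normalized=False))   # 3
```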
Prof. Pier Luca Lanzi
Requisites for Clustering Algorithms
• Scalability
• Ability to deal with different types of attributes
• Ability to handle dynamic data
• Discovery of clusters with arbitrary shape
• Minimal requirements for domain knowledge to determine input
parameters
• Ability to deal with noise and outliers
• Insensitivity to the order of input records
• Ability to handle high dimensionality
• Incorporation of user-specified constraints
• Interpretability and usability
23
Prof. Pier Luca Lanzi
Curse of Dimensionality
in high dimensions, almost all pairs of points
are equally far away from one another
almost any two vectors are almost
orthogonal
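Both effects are easy to see empirically. The following sketch (NumPy assumed; the sample size and dimensions are arbitrary choices) shows that as the dimension grows, the spread of pairwise distances shrinks relative to their mean, and the cosines between random unit vectors concentrate near zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300  # number of sample points, arbitrary

for d in (2, 10, 100, 1000):
    # Pairwise distances between random points in the d-dimensional unit cube,
    # computed via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a·b
    X = rng.uniform(size=(n, d))
    sq = (X ** 2).sum(axis=1)
    dist = np.sqrt(np.clip(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0, None))
    pair = dist[np.triu_indices(n, k=1)]
    # Cosines between random unit vectors
    V = rng.normal(size=(n, d))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    cos = (V @ V.T)[np.triu_indices(n, k=1)]
    # As d grows: distances concentrate (std/mean -> 0), cosines pile up near 0
    print(f"d={d:4d}  dist std/mean={pair.std() / pair.mean():.3f}  "
          f"mean |cos|={np.abs(cos).mean():.3f}")
```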
