Clustering Analysis
What is Clustering in Data Mining?
• Cluster: a collection of data objects
– Objects in the same cluster are similar to one another (Similarity)
– Objects in different clusters are dissimilar to, i.e., distant from, one another (Dissimilarity or Distance)
• Cluster Analysis
– The process of grouping a set of data objects into clusters
• Clustering, unlike classification (Classification), uses no predefined class labels, so it is known as unsupervised classification
Cluster Analysis
How many clusters? The same set of points can reasonably be grouped into two, four, or six clusters (figure).
What is Good Clustering?
• A good clustering minimizes the distances within each cluster (Minimize Intra-Cluster Distances) and maximizes the distances between clusters (Maximize Inter-Cluster Distances).
(Figure: inter-cluster distances are maximized; intra-cluster distances are minimized)
Types of Clustering
• Partitional Clustering
– A division of data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset
(Figure: Original Points vs. A Partitional Clustering)
Types of Clustering
• Hierarchical clustering
– A set of nested clusters organized as a hierarchical tree
(Figure: two hierarchical clusterings of points p1–p4, Hierarchical Clustering #1 and #2, with their Traditional Dendrograms 1 and 2)
Types of Clustering
• Exclusive versus non-exclusive
– In non-exclusive clusterings, points may belong to multiple clusters.
– Can represent multiple classes or 'border' points
• Fuzzy versus non-fuzzy
– In fuzzy clustering, a point belongs to every cluster with some weight between 0 and 1
– Weights must sum to 1
– Probabilistic clustering has similar characteristics
• Partial versus complete
– In some cases, we only want to cluster some of the data
• Heterogeneous versus homogeneous
– Clusters of widely different sizes, shapes, and densities
Characteristics of Cluster
• Well-Separated Clusters:
– A cluster is a set of points such that any point in a cluster is
closer (or more similar) to every other point in the cluster than
to any point not in the cluster.
(Figure: 3 well-separated clusters)
Characteristics of Cluster
• Center-based
– A cluster is a set of objects such that an object in a cluster is closer (more similar) to the "center" of the cluster than to the center of any other cluster.
– The center of a cluster is often a centroid, the average of all
the points in the cluster, or a medoid, the most
“representative” point of a cluster.
(Figure: 4 center-based clusters)
Characteristics of Cluster
• Density-based
– A cluster is a dense region of points, separated from other regions of high density by low-density regions.
– Used when the clusters are irregular, and when noise and
outliers are present.
(Figure: 6 density-based clusters)
Characteristics of Cluster
• Shared Property or Conceptual Clusters
– Finds clusters that share some common property
or represent a particular concept.
(Figure: 2 overlapping circles)
Clustering Algorithms
• K-means clustering
• Hierarchical clustering
K-means Clustering
• K-means partitions a database D of n objects into k clusters, where the number of clusters k is given by the user.
• Each of the k clusters is represented by the mean value (centroid) of its objects.
K-means Clustering Algorithm
Algorithm: The k-Means algorithm for partitioning, based on the mean value of the objects in the cluster.
Input: The number of clusters k and a database containing n objects.
Output: A set of k clusters that minimizes the squared-error criterion.
K-means Clustering Algorithm
Method
1) Randomly choose k objects as the initial cluster centers (centroids);
2) Repeat
3) (Re)assign each object to the cluster to which the object is most similar, based on the mean value of the objects in the cluster;
4) Update the cluster means, i.e., calculate the mean value of the objects for each cluster;
5) Until the centroids (center points) no longer change.
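To make the pseudocode concrete, here is a minimal Python sketch of the same loop (an illustration added alongside the slides, not part of them; the function name and the use of squared Euclidean distance are my own choices):

    import random

    def kmeans(points, k, max_iter=100):
        # 1) Randomly choose k objects as the initial centroids.
        centroids = random.sample(points, k)
        for _ in range(max_iter):
            # 3) (Re)assign each object to the cluster with the nearest centroid.
            clusters = [[] for _ in range(k)]
            for p in points:
                d = [sum((pi - ci) ** 2 for pi, ci in zip(p, c)) for c in centroids]
                clusters[d.index(min(d))].append(p)
            # 4) Update each cluster mean (keep the old centroid if a cluster is empty).
            new_centroids = [
                tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
                for i, cl in enumerate(clusters)
            ]
            # 5) Stop when the centroids no longer change.
            if new_centroids == centroids:
                break
            centroids = new_centroids
        return centroids, clusters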
Example: K-Mean Clustering
• Problem: Cluster the following eight points (with (x, y) representing locations) into three clusters: A1(2, 10), A2(2, 5), A3(8, 4), A4(5, 8), A5(7, 5), A6(6, 4), A7(1, 2), A8(4, 9).
(Scatter plot of the eight points, x from 0 to 9, y from 0 to 12)
Example: K-Mean Clustering
• Randomly choose k objects as the initial cluster centers;
• k = 3; c1(2, 10), c2(5, 8), and c3(1, 2).
(Scatter plot with the three initial centers c1, c2, c3 marked +)
Example: K-Mean Clustering
• The distance function between two points a = (x1, y1) and b = (x2, y2) is defined as:
distance(a, b) = |x2 – x1| + |y2 – y1|
• Fill in the distance table below, using the initial means Mean 1 = (2, 10), Mean 2 = (5, 8), and Mean 3 = (1, 2):
Point       Dist Mean 1   Dist Mean 2   Dist Mean 3   Cluster
A1 (2, 10)
A2 (2, 5)
A3 (8, 4)
A4 (5, 8)
A5 (7, 5)
A6 (6, 4)
A7 (1, 2)
A8 (4, 9)
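For reference, the slide's distance function written as a small Python helper (a sketch; the name manhattan is mine, since this is the Manhattan distance):

    def manhattan(a, b):
        # distance(a, b) = |x2 - x1| + |y2 - y1|
        return abs(b[0] - a[0]) + abs(b[1] - a[1])

    print(manhattan((2, 10), (5, 8)))  # 5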
Example: K-Mean Clustering
• Step 2: Calculate the distance from point A1 (2, 10) to each mean using the distance function.
distance(A1, mean1) = |2 – 2| + |10 – 10| = 0 + 0 = 0, with mean1 = (2, 10)
distance(A1, mean2) = |5 – 2| + |8 – 10| = 3 + 2 = 5, with mean2 = (5, 8)
distance(A1, mean3) = |1 – 2| + |2 – 10| = 1 + 8 = 9, with mean3 = (1, 2)
Example: K-Mean Clustering
Means: (2, 10), (5, 8), (1, 2)
Point       Dist Mean 1   Dist Mean 2   Dist Mean 3   Cluster
A1 (2, 10)       0             5             9            1
A2 (2, 5)
A3 (8, 4)
A4 (5, 8)
A5 (7, 5)
A6 (6, 4)
A7 (1, 2)
A8 (4, 9)
Example: K-Mean Clustering
• Calculate the distances for point A2 (2, 5) using the distance function.
distance(A2, mean1) = |2 – 2| + |10 – 5| = 0 + 5 = 5, with mean1 = (2, 10)
distance(A2, mean2) = |5 – 2| + |8 – 5| = 3 + 3 = 6, with mean2 = (5, 8)
distance(A2, mean3) = |1 – 2| + |2 – 5| = 1 + 3 = 4, with mean3 = (1, 2)
Example: K-Mean Clustering
Means: (2, 10), (5, 8), (1, 2)
Point       Dist Mean 1   Dist Mean 2   Dist Mean 3   Cluster
A1 (2, 10)       0             5             9            1
A2 (2, 5)        5             6             4            3
A3 (8, 4)
A4 (5, 8)
A5 (7, 5)
A6 (6, 4)
A7 (1, 2)
A8 (4, 9)
Example: K-Mean Clustering
• Iteration #1, with means (2, 10), (5, 8), and (1, 2):
Point       Dist Mean 1   Dist Mean 2   Dist Mean 3   Cluster
A1 (2, 10)       0             5             9            1
A2 (2, 5)        5             6             4            3
A3 (8, 4)       12             7             9            2
A4 (5, 8)        5             0            10            2
A5 (7, 5)       10             5             9            2
A6 (6, 4)       10             5             7            2
A7 (1, 2)        9            10             0            3
A8 (4, 9)        3             2            10            2
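This table can be reproduced with a short script (illustrative only; it reuses the manhattan helper sketched earlier):

    points = {'A1': (2, 10), 'A2': (2, 5), 'A3': (8, 4), 'A4': (5, 8),
              'A5': (7, 5), 'A6': (6, 4), 'A7': (1, 2), 'A8': (4, 9)}
    means = [(2, 10), (5, 8), (1, 2)]
    for name, p in points.items():
        d = [manhattan(p, m) for m in means]
        # Each point joins the cluster of its nearest mean (1-based index).
        print(name, d, 'cluster', d.index(min(d)) + 1)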
Example: K-Mean Clustering
Cluster 1: A1(2, 10)
Cluster 2: A3(8, 4), A4(5, 8), A5(7, 5), A6(6, 4), A8(4, 9)
Cluster 3: A2(2, 5), A7(1, 2)
(Scatter plot of the three clusters with the centers c1, c2, c3 marked +)
Example: K-Mean Clustering
• Re-compute the new cluster centers (means). We do so by taking the mean of all points in each cluster.
• For Cluster 1, we only have one point, A1(2, 10), which was the old mean, so the cluster center remains the same.
• For Cluster 2, we have ( (8+5+7+6+4)/5, (4+8+5+4+9)/5 ) = (6, 6)
• For Cluster 3, we have ( (2+1)/2, (5+2)/2 ) = (1.5, 3.5)
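The same update in a few lines of Python (illustrative, continuing from the points dictionary above):

    assignments = {1: ['A1'], 2: ['A3', 'A4', 'A5', 'A6', 'A8'], 3: ['A2', 'A7']}
    for c, members in assignments.items():
        xs = [points[m][0] for m in members]
        ys = [points[m][1] for m in members]
        # New centroid = mean of the member coordinates.
        print(c, (sum(xs) / len(xs), sum(ys) / len(ys)))
    # Prints (2.0, 10.0), (6.0, 6.0), and (1.5, 3.5)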
Example: K-Mean Clustering
(Scatter plot after iteration 1, with the updated centers c1 = (2, 10), c2 = (6, 6), and c3 = (1.5, 3.5) marked +)
Example: K-Mean Clustering
• Iteration #2, with means (2, 10), (6, 6), and (1.5, 3.5):
Point       Dist Mean 1   Dist Mean 2   Dist Mean 3   Cluster
A1 (2, 10)
A2 (2, 5)
A3 (8, 4)
A4 (5, 8)
A5 (7, 5)
A6 (6, 4)
A7 (1, 2)
A8 (4, 9)
Example: K-Mean Clustering (Iteration #2)
Cluster 1 = {A1, A8}, Cluster 2 = {A3, A4, A5, A6}, Cluster 3 = {A2, A7}.
(Scatter plot of the iteration-2 assignment)
Re-compute the new cluster centers (means):
C1 = ( (2+4)/2, (10+9)/2 ) = (3, 9.5)
C2 = (6.5, 5.25)
C3 = (1.5, 3.5)
Example: K-Mean Clustering (Iteration #3)
Which points now fall in Cluster 1, Cluster 2, and Cluster 3?
(Scatter plot of the iteration-3 assignment)
Re-compute the new cluster centers (means), and repeat until the centroids no longer change.
Distance Functions
• Minkowski distance:
d(i, j) = ( |xi1 – xj1|^q + |xi2 – xj2|^q + ... + |xip – xjp|^q )^(1/q)
• When q = 1, d is the Manhattan distance:
d(i, j) = |xi1 – xj1| + |xi2 – xj2| + ... + |xip – xjp|
• When q = 2, d is the Euclidean distance:
d(i, j) = sqrt( (xi1 – xj1)^2 + (xi2 – xj2)^2 + ... + (xip – xjp)^2 )
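All three distances as one parameterized Python function (a sketch; minkowski is my own name for it):

    def minkowski(a, b, q):
        return sum(abs(ai - bi) ** q for ai, bi in zip(a, b)) ** (1 / q)

    print(minkowski((2, 10), (5, 8), 1))  # Manhattan: 5.0
    print(minkowski((2, 10), (5, 8), 2))  # Euclidean: about 3.61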
Evaluating K-means Clusters
• The most common measure is the Sum of Squared Error (SSE)
– For each point, the error is the distance to the nearest cluster center
– To get SSE, we square these errors and sum them:
SSE = sum over i = 1..K of sum over x in Ci of dist(mi, x)^2
where
– x is a data point in cluster Ci
– mi is the centroid of cluster Ci
• One can show that the mi that minimizes SSE is the mean of the points in cluster Ci
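SSE in Python, following the formula above (an illustrative sketch using squared Euclidean distance):

    def sse(clusters, centroids):
        # clusters: list of point lists; centroids: list of matching centers.
        total = 0.0
        for cl, m in zip(clusters, centroids):
            for x in cl:
                total += sum((xi - mi) ** 2 for xi, mi in zip(x, m))
        return total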
Limitations of K-Mean
• K-means has problems when the natural clusters differ in:
– Size
– Density
– Shape (non-globular)
Limitations of K-means: Differing Sizes
• K-means does not handle clusters of widely differing sizes well.
(Figure: Original Points vs. K-means (3 Clusters))
Limitations of K-means: Differing Density
• K-means does not handle clusters of differing densities well.
(Figure: Original Points vs. K-means (3 Clusters))
Limitations of K-means: Non-globular Shapes
• K-means does not handle non-globular cluster shapes well.
(Figure: Original Points vs. K-means (2 Clusters))
Overcoming K-means Limitations
(Figure: Original Points vs. K-means Clusters)
One solution is to use many clusters: find parts of clusters, then put them together.
Overcoming K-means Limitations
(Figures: two further examples of Original Points vs. K-means Clusters using many clusters)
Hierarchical Clustering
• Produces a set of nested clusters that can be visualized as a dendrogram
• A dendrogram is a tree diagram that records the sequence of merges: lower levels group points into subclusters, and higher levels group subclusters into larger clusters
(Figure: a nested clustering of points 1–6 and the corresponding dendrogram, with merge heights between 0 and 0.2)
Hierarchical Clustering
There are two main types of hierarchical clustering:
1. Agglomerative (bottom-up): start with each point as an individual cluster and, at each step, merge the closest pair of clusters until only one cluster (or k clusters) remains.
2. Divisive (top-down): the reverse of agglomerative; start with one all-inclusive cluster and, at each step, split a cluster until each cluster contains a single point (a singleton cluster).
Agglomerative Clustering Algorithm
Basic algorithm is straightforward
1. Compute the proximity matrix
2. Let each data point be a cluster
3. Repeat
4. Merge the two closest clusters
5. Update the proximity matrix
6. Until only a single cluster remains
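In practice this loop is usually delegated to a library. A minimal sketch with SciPy's hierarchical clustering (an illustration, not part of the original slides; assumes numpy and scipy are installed, and the sample coordinates are made up):

    import numpy as np
    from scipy.cluster.hierarchy import linkage, dendrogram
    from scipy.spatial.distance import pdist

    X = np.array([[0.40, 0.53], [0.22, 0.38], [0.35, 0.32],
                  [0.26, 0.19], [0.08, 0.41], [0.45, 0.30]])
    D = pdist(X)                     # step 1: the (condensed) proximity matrix
    Z = linkage(D, method='single')  # steps 2-6: repeatedly merge the two closest clusters
    print(Z)                         # each row: the two clusters merged and the merge distance
    dendrogram(Z)                    # draws the tree (requires matplotlib)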
Example:
(Figures: six two-dimensional points p1–p6 and their pairwise Euclidean distance matrix; these distances are used in the linkage examples that follow)
How to Define Inter-Cluster Similarity
Given the proximity matrix of points p1, p2, p3, p4, p5, ..., the similarity between two clusters can be defined by:
 MIN
 MAX
 Group Average
 Ward's Method (uses squared error)
Cluster Similarity: MIN or Single Link
• In single-link (MIN) hierarchical clustering, the similarity of two clusters is determined by the two closest points in the different clusters; i.e., the inter-cluster distance is the minimum pairwise distance.
Cluster Similarity: MIN or Single Link
Step 1: the two closest points, 3 and 6, are merged first, at distance 0.11.
(Figure: the six points and the single-link dendrogram under construction, heights 0 to 0.2)
Cluster Similarity: MIN or Single Link
Dist({3,6},{2}) = min(dist(3,2), dist(6,2)) = min(0.15, 0.25) = 0.15
Dist({3,6},{5}) = min(dist(3,5), dist(6,5)) = 0.28
Dist({3,6},{4}) = min(dist(3,4), dist(6,4)) = 0.15
Dist({3,6},{1}) = min(dist(3,1), dist(6,1)) = 0.22
(Figure: the dendrogram after merging {3,6} at 0.11)
Cluster Similarity: MIN or Single Link
Dist({3,6},{2}) = min(dist(3,2), dist(6,2)) = min(0.15, 0.25) = 0.15
Dist({3,6},{4}) = min(dist(3,4), dist(6,4)) = 0.15
Both exceed dist(5,2) = 0.14, so points 2 and 5 are merged next, at 0.14.
(Figure: the dendrogram after merging {2,5} at 0.14)
Cluster Similarity: MIN or Single Link
Dist({3,6},{2,5}) = min(dist(3,2), dist(6,2), dist(3,5), dist(6,5)) = min(0.15, 0.25, 0.28, 0.39) = 0.15
Dist({3,6},{1}) = min(dist(3,1), dist(6,1)) = min(0.22, 0.23) = 0.22
Dist({3,6},{4}) = min(dist(3,4), dist(6,4)) = min(0.15, 0.22) = 0.15
{3,6} is merged with {2,5} at 0.15.
(Figure: the dendrogram after merging {3,6} and {2,5} at 0.15)
Cluster Similarity: MIN or Single Link
Dist({3,6,2,5},{1}) = min(dist(3,1), dist(6,1), dist(2,1), dist(5,1)) = min(0.22, 0.23, 0.24, 0.34) = 0.22
Dist({3,6,2,5},{4}) = min(dist(3,4), dist(6,4), dist(2,4), dist(5,4)) = min(0.15, 0.22, 0.20, 0.29) = 0.15
{4} joins {3,6,2,5} at 0.15.
(Figure: the dendrogram after the fourth merge)
Cluster Similarity: MIN or Single Link
Dist({3,6,2,5,4},{1}) = min(dist(3,1), dist(6,1), dist(2,1), dist(5,1), dist(4,1)) = min(0.22, 0.23, 0.24, 0.34, 0.37) = 0.22
Finally, {1} joins at 0.22, completing the single-link dendrogram.
(Figure: the complete single-link dendrogram, leaf order 3 6 2 5 4 1)
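The 15 pairwise distances quoted in the steps above determine this example completely, so the whole single-link merge sequence can be checked mechanically (an illustrative sketch; assumes scipy is installed):

    import numpy as np
    from scipy.cluster.hierarchy import linkage

    # Condensed distance matrix for points 1..6, in pdist order:
    # d(1,2), d(1,3), d(1,4), d(1,5), d(1,6), d(2,3), d(2,4), d(2,5),
    # d(2,6), d(3,4), d(3,5), d(3,6), d(4,5), d(4,6), d(5,6)
    D = np.array([0.24, 0.22, 0.37, 0.34, 0.23, 0.15, 0.20, 0.14,
                  0.25, 0.15, 0.28, 0.11, 0.29, 0.39, 0.22])
    Z = linkage(D, method='single')
    print(Z[:, 2])  # merge heights: 0.11, 0.14, 0.15, 0.15, 0.22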
Strength of MIN
(Figure: Original Points vs. Two Clusters)
• Can handle non-elliptical shapes
Limitations of MIN
(Figure: Original Points vs. Two Clusters)
• Sensitive to noise and outliers
Cluster Similarity: MAX or Complete Linkage
• In complete-link (MAX) hierarchical clustering, the similarity of two clusters is determined by the two most distant points in the different clusters; i.e., the inter-cluster distance is the maximum pairwise distance.
Cluster Similarity: MAX or Complete Linkage
Step 1: as with MIN, the two closest points, 3 and 6, are merged first, at distance 0.11.
(Figure: the six points and the complete-link dendrogram under construction, heights 0 to 0.4)
Cluster Similarity: MAX or Complete Linkage
Dist({3,6},{1}) = max(dist(3,1), dist(6,1)) = 0.23
Dist({3,6},{2}) = max(dist(3,2), dist(6,2)) = 0.25
Dist({3,6},{4}) = max(dist(3,4), dist(6,4)) = 0.22 **
Dist({3,6},{5}) = max(dist(3,5), dist(6,5)) = 0.39
(Figure: the dendrogram after merging {3,6} at 0.11)
Cluster Similarity: MAX or Complete Linkage
Dist({3,6},{4}) = max(dist(3,4), dist(6,4)) = 0.22 > Dist({2},{5}) = 0.14,
so points 2 and 5 are merged next, at 0.14.
(Figure: the dendrogram after merging {2,5} at 0.14)
Cluster Similarity: MAX or Complete Linkage
Dist({3,6},{1}) = max(dist(3,1), dist(6,1)) = 0.23
Dist({3,6},{2,5}) = max(dist(3,2), dist(3,5), dist(6,2), dist(6,5)) = 0.39
Dist({3,6},{4}) = max(dist(3,4), dist(6,4)) = 0.22 **
Dist({2,5},{4}) = max(dist(2,4), dist(5,4)) = 0.29
Dist({2,5},{1}) = max(dist(2,1), dist(5,1)) = 0.34
{4} joins {3,6} at 0.22.
(Figure: the dendrogram after merging {3,6} and {4} at 0.22)
Cluster Similarity: MAX or Complete Linkage
Dist({2,5},{1}) = max(dist(2,1), dist(5,1)) = 0.34 **
Dist({2,5},{3,6,4}) = max(dist(2,3), dist(2,6), dist(2,4), dist(5,3), dist(5,6), dist(5,4)) = max(0.15, 0.25, 0.20, 0.28, 0.39, 0.29) = 0.39
{1} joins {2,5} at 0.34.
(Figure: the dendrogram after merging {2,5} and {1} at 0.34)
Cluster Similarity: MAX or Complete Linkage
Dist({2,5,1},{3,6,4}) = max(dist(2,3), dist(2,6), dist(2,4), dist(5,3), dist(5,6), dist(5,4), dist(1,3), dist(1,6), dist(1,4)) = max(0.15, 0.25, 0.20, 0.28, 0.39, 0.29, 0.22, 0.23, 0.37) = 0.39
The final merge joins {2,5,1} and {3,6,4} at 0.39, completing the complete-link dendrogram.
(Figure: the complete complete-link dendrogram, leaf order 3 6 4 1 2 5)
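Continuing the sketch from the single-link section, complete linkage is a one-line change on the same condensed matrix D, and it reproduces the merge heights above (illustrative):

    Z = linkage(D, method='complete')
    print(Z[:, 2])  # merge heights: 0.11, 0.14, 0.22, 0.34, 0.39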
Strength of MAX
(Figure: Original Points vs. Two Clusters)
• Less susceptible to noise and outliers
Limitations of MAX
(Figure: Original Points vs. Two Clusters)
• Tends to break large clusters
• Biased towards globular clusters
Cluster Similarity: Group Average
• In group-average hierarchical clustering, the similarity of two clusters is the average of all pairwise distances between points in the two clusters; it is a compromise between single link and complete link.
Cluster Similarity: Group Average Linkage
Step 1: the two closest points, 3 and 6, are merged first, at distance 0.11.
(Figure: the six points and the group-average dendrogram under construction, heights 0 to 0.25)
Cluster Similarity: Group Average
Dist({3,6},{1}) = avg(dist(3,1), dist(6,1)) = (0.22+0.23)/(2*1) = 0.225
Dist({3,6},{2}) = avg(dist(3,2), dist(6,2)) = (0.15+0.25)/(2*1) = 0.20
Dist({3,6},{4}) = avg(dist(3,4), dist(6,4)) = (0.15+0.22)/(2*1) = 0.185 **
Dist({3,6},{5}) = avg(dist(3,5), dist(6,5)) = (0.28+0.39)/(2*1) = 0.335
(Figure: the dendrogram after merging {3,6} at 0.11)
Cluster Similarity: Group Average
Dist({3,6},{4}) = avg(dist(3,4), dist(6,4)) = (0.15+0.22)/(2*1) = 0.185 > Dist({2},{5}) = 0.14,
so points 2 and 5 are merged next, at 0.14.
(Figure: the dendrogram after merging {2,5} at 0.14)
Cluster Similarity: Group Average
Dist({3,6},{1}) = avg(dist(3,1), dist(6,1)) = (0.22+0.23)/(2*1) = 0.225
Dist({3,6},{2}) = avg(dist(3,2), dist(6,2)) = (0.15+0.25)/(2*1) = 0.20
Dist({3,6},{4}) = avg(dist(3,4), dist(6,4)) = (0.15+0.22)/(2*1) = 0.185 **
Dist({3,6},{5}) = avg(dist(3,5), dist(6,5)) = (0.28+0.39)/(2*1) = 0.335
{4} joins {3,6} at 0.185.
(Figure: the dendrogram after merging {3,6} and {4} at 0.185)
Cluster Similarity: Group Average
Dist({3,6,4},{1}) = avg(dist(3,1), dist(6,1), dist(4,1)) = (0.22+0.23+0.37)/(3*1) = 0.273
Dist({3,6,4},{2,5}) = avg(dist(3,2), dist(3,5), dist(6,2), dist(6,5), dist(4,2), dist(4,5)) = (0.15+0.28+0.25+0.39+0.20+0.29)/(3*2) = 0.26
{3,6,4} is merged with {2,5} at 0.26.
(Figure: the dendrogram after merging {3,6,4} and {2,5} at 0.26)
Cluster Similarity: Group Average
Dist({3,6,4,2,5},{1}) = avg(dist(3,1), dist(6,1), dist(4,1), dist(2,1), dist(5,1)) = (0.22+0.23+0.37+0.24+0.34)/(5*1) = 0.28
The final merge joins {1} at 0.28, completing the group-average dendrogram.
(Figure: the complete group-average dendrogram)
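Likewise, group average on the same condensed matrix D reproduces this merge sequence (illustrative):

    Z = linkage(D, method='average')
    print(Z[:, 2])  # merge heights: 0.11, 0.14, 0.185, 0.26, 0.28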
Hierarchical Clustering: Group Average
• Compromise between Single and Complete Link
• Strengths
– Less susceptible to noise and outliers
• Limitations
– Biased towards globular clusters
Hierarchical Clustering: Comparison
(Figure: the clusterings produced by MIN, MAX, and Group Average on the six-point example)
Hierarchical Clustering: Comparison
(Figure: the MIN, MAX, and Group Average dendrograms side by side; note the different merge heights and leaf orders)
Internal Measures: Cohesion and Separation (graph-based clusters)
• A graph-based cluster approach can be evaluated by cohesion and separation measures.
– Cluster cohesion is the sum of the weights of all links within a cluster.
– Cluster separation is the sum of the weights of links between nodes in the cluster and nodes outside the cluster.
(Figure: cohesion vs. separation in a graph of linked points)
Cohesion and Separation (center-based clusters)
• A center-based cluster approach can be evaluated by cohesion and separation measures.
Cohesion and Separation (center-based clustering)
• Cluster Cohesion: measures how closely related the objects in a cluster are
– Cohesion is measured by the within-cluster sum of squares (SSE):
WSS = sum over i of sum over x in Ci of (x – mi)^2
• Cluster Separation: measures how distinct or well-separated a cluster is from other clusters
– Separation is measured by the between-cluster sum of squares:
BSS = sum over i of |Ci| * (m – mi)^2
where |Ci| is the size of cluster i, mi is the centroid of cluster i, and m is the overall mean of the data.
Example: Cohesion and Separation
• Example: WSS + BSS = Total SSE (constant), for the data points 1, 2, 4, 5 with overall mean m = 3.
K = 1 cluster (centroid m = 3):
WSS = (1–3)^2 + (2–3)^2 + (4–3)^2 + (5–3)^2 = 10
BSS = 4 × (3–3)^2 = 0
Total = 10 + 0 = 10
K = 2 clusters (centroids m1 = 1.5 for {1, 2} and m2 = 4.5 for {4, 5}):
WSS = (1–1.5)^2 + (2–1.5)^2 + (4–4.5)^2 + (5–4.5)^2 = 1
BSS = 2 × (3–1.5)^2 + 2 × (4.5–3)^2 = 9
Total = 1 + 9 = 10
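A quick numeric check of the identity in Python (illustrative; the data points and clusters are those of the example above):

    xs = [1, 2, 4, 5]
    m = sum(xs) / len(xs)                            # overall mean = 3
    clusters = [[1, 2], [4, 5]]                      # K = 2
    centroids = [sum(c) / len(c) for c in clusters]  # 1.5 and 4.5
    wss = sum((x - mi) ** 2 for c, mi in zip(clusters, centroids) for x in c)
    bss = sum(len(c) * (m - mi) ** 2 for c, mi in zip(clusters, centroids))
    print(wss, bss, wss + bss)                       # 1.0 9.0 10.0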
HW#8
• Database Segmentation: apply K-means clustering with K = 3 to the data table below, using the initial centers C1 = (1, 5), C2 = (3, 12), and C3 = (2, 13). Show the pattern of cluster assignments produced by K-means.
HW#8 (continued)
• What is a cluster?
• What is good clustering?
• How many types of clustering are there?
• What are the characteristics of a cluster?
• What is K-means clustering?
• What are the limitations of K-means?
• Please explain the methods of hierarchical clustering.
Data for HW#8:
ID X Y
A1 1 5
A2 4 9
A3 8 15
A4 6 2
A5 3 12
A6 10 7
A7 7 7
A8 11 4
A9 13 10
A10 2 13
LAB 8
• Use the Weka program to construct a K-means clustering from the given file.
• Weka Explorer → Open file → bank.arff
• Cluster → Choose button → SimpleKMeans → Next, click on the text box to the right of the "Choose" button to get the pop-up window.