Lecture-41
What is Cluster Analysis?

General Applications of Clustering
Pattern Recognition
Spatial Data Analysis
- create thematic maps in GIS by clustering feature spaces
- detect spatial clusters and explain them in spatial data mining
Image Processing
Economic Science (market research)
WWW
- Document classification
- Cluster Weblog data to discover groups of similar access patterns
Examples of Clustering Applications
Marketing: Help marketers discover distinct groups in their customer bases, and then use this knowledge to develop targeted marketing programs
Land use: Identification of areas of similar land use in an earth observation database
Insurance: Identifying groups of motor insurance policy holders with a high average claim cost
City-planning: Identifying groups of houses according to their house type, value, and geographical location
Earthquake studies: Observed earthquake epicenters should be clustered along continent faults

What Is Good Clustering?
A good clustering method will produce high quality clusters with
- high intra-class similarity
- low inter-class similarity
The quality of a clustering result depends on both the similarity measure used by the method and its implementation.
The quality of a clustering method is also measured by its ability to discover some or all of the hidden patterns.

Requirements of Clustering in Data Mining
Scalability
Ability to deal with different types of attributes
Discovery of clusters with arbitrary shape
Minimal requirements for domain knowledge to determine input parameters
Ability to deal with noise and outliers
Insensitivity to the order of input records
High dimensionality
Incorporation of user-specified constraints
Interpretability and usability
Lecture-42
Types of Data in Cluster Analysis

Data Structures
Data matrix (two modes): n objects described by p variables

$$\begin{bmatrix} x_{11} & \cdots & x_{1f} & \cdots & x_{1p} \\ \vdots & & \vdots & & \vdots \\ x_{i1} & \cdots & x_{if} & \cdots & x_{ip} \\ \vdots & & \vdots & & \vdots \\ x_{n1} & \cdots & x_{nf} & \cdots & x_{np} \end{bmatrix}$$

Dissimilarity matrix (one mode): pairwise dissimilarities d(i, j) between the n objects

$$\begin{bmatrix} 0 & & & & \\ d(2,1) & 0 & & & \\ d(3,1) & d(3,2) & 0 & & \\ \vdots & \vdots & \vdots & & \\ d(n,1) & d(n,2) & \cdots & \cdots & 0 \end{bmatrix}$$
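To make the two structures concrete, here is a minimal sketch (the data values and the choice of Euclidean distance are illustrative; other dissimilarity measures are discussed below) that derives the one-mode dissimilarity matrix from a two-mode data matrix:

```python
import numpy as np

# Two-mode data matrix: n objects (rows) described by p variables (columns)
X = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [5.0, 1.0]])

n = X.shape[0]
# One-mode dissimilarity matrix: d(i, j) for every pair of objects
D = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        D[i, j] = np.linalg.norm(X[i] - X[j])  # Euclidean distance used here

print(D)  # symmetric with a zero diagonal; usually only the lower triangle is stored
```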
Measure the Quality of Clustering
Dissimilarity/Similarity metric: Similarity is expressed in terms of a distance function, which is typically metric: d(i, j)
There is a separate “quality” function that measures the “goodness” of a cluster.
The definitions of distance functions are usually very different for interval-scaled, boolean, categorical, ordinal and ratio variables.
Weights should be associated with different variables based on applications and data semantics.
It is hard to define “similar enough” or “good enough” - the answer is typically highly subjective.

Type of Data in Clustering Analysis
Interval-scaled variables
Binary variables
Categorical, ordinal, and ratio-scaled variables
Variables of mixed types
Interval-valued Variables
Standardize data
- Calculate the mean absolute deviation:
  $s_f = \frac{1}{n}\left(|x_{1f} - m_f| + |x_{2f} - m_f| + \cdots + |x_{nf} - m_f|\right)$
  where $m_f = \frac{1}{n}\left(x_{1f} + x_{2f} + \cdots + x_{nf}\right)$
- Calculate the standardized measurement (z-score):
  $z_{if} = \frac{x_{if} - m_f}{s_f}$
Using the mean absolute deviation is more robust than using the standard deviation
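A small sketch of this standardization (the sample values are illustrative):

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # values of one variable f

m_f = x.mean()                   # mean of variable f
s_f = np.mean(np.abs(x - m_f))   # mean absolute deviation (not the standard deviation)
z = (x - m_f) / s_f              # standardized measurement (z-score)

print(m_f, s_f, z)
```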
Similarity and Dissimilarity Between Objects
Distances are normally used to measure the similarity or dissimilarity between two data objects
Some popular ones include the Minkowski distance:
$d(i,j) = \sqrt[q]{|x_{i1} - x_{j1}|^q + |x_{i2} - x_{j2}|^q + \cdots + |x_{ip} - x_{jp}|^q}$
where $i = (x_{i1}, x_{i2}, \ldots, x_{ip})$ and $j = (x_{j1}, x_{j2}, \ldots, x_{jp})$ are two p-dimensional data objects, and q is a positive integer
If q = 1, d is the Manhattan distance:
$d(i,j) = |x_{i1} - x_{j1}| + |x_{i2} - x_{j2}| + \cdots + |x_{ip} - x_{jp}|$
Similarity and Dissimilarity Between Objects
If q = 2, d is the Euclidean distance:
$d(i,j) = \sqrt{|x_{i1} - x_{j1}|^2 + |x_{i2} - x_{j2}|^2 + \cdots + |x_{ip} - x_{jp}|^2}$
Properties
- d(i,j) ≥ 0
- d(i,i) = 0
- d(i,j) = d(j,i)
- d(i,j) ≤ d(i,k) + d(k,j)
Also one can use weighted distance, parametric Pearson product-moment correlation, or other dissimilarity measures.
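A short sketch of the Minkowski distance and its q = 1 and q = 2 special cases (the vectors are illustrative):

```python
import numpy as np

def minkowski(i, j, q):
    """Minkowski distance between two p-dimensional objects i and j."""
    i, j = np.asarray(i, dtype=float), np.asarray(j, dtype=float)
    return np.sum(np.abs(i - j) ** q) ** (1.0 / q)

a, b = [1.0, 2.0, 3.0], [4.0, 6.0, 3.0]
print(minkowski(a, b, 1))  # q = 1: Manhattan distance -> 7.0
print(minkowski(a, b, 2))  # q = 2: Euclidean distance -> 5.0
```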
Binary Variables
A contingency table for binary data:

              object j
                1     0    sum
object i  1     a     b    a+b
          0     c     d    c+d
         sum   a+c   b+d    p

Simple matching coefficient (invariant, if the binary variable is symmetric):
$d(i,j) = \frac{b + c}{a + b + c + d}$
Jaccard coefficient (noninvariant if the binary variable is asymmetric):
$d(i,j) = \frac{b + c}{a + b + c}$
Dissimilarity between Binary Variables
Example
- gender is a symmetric attribute
- the remaining attributes are asymmetric binary
- let the values Y and P be set to 1, and the value N be set to 0

Name Gender Fever Cough Test-1 Test-2 Test-3 Test-4
Jack M Y N P N N N
Mary F Y N P N P N
Jim M Y P N N N N

$d(\text{jack}, \text{mary}) = \frac{0 + 1}{2 + 0 + 1} = 0.33$
$d(\text{jack}, \text{jim}) = \frac{1 + 1}{1 + 1 + 1} = 0.67$
$d(\text{jim}, \text{mary}) = \frac{1 + 2}{1 + 1 + 2} = 0.75$
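The example values above can be reproduced with a small sketch of the Jaccard coefficient for asymmetric binary variables (gender, being symmetric, is left out; Y and P are coded as 1 and N as 0):

```python
def asymmetric_binary_dissim(x, y):
    """Jaccard dissimilarity d = (b + c) / (a + b + c) for asymmetric binary variables."""
    a = sum(1 for u, v in zip(x, y) if u == 1 and v == 1)  # 1-1 matches
    b = sum(1 for u, v in zip(x, y) if u == 1 and v == 0)
    c = sum(1 for u, v in zip(x, y) if u == 0 and v == 1)
    return (b + c) / (a + b + c)

# Fever, Cough, Test-1 .. Test-4 with Y/P -> 1 and N -> 0
jack = [1, 0, 1, 0, 0, 0]
mary = [1, 0, 1, 0, 1, 0]
jim  = [1, 1, 0, 0, 0, 0]

print(round(asymmetric_binary_dissim(jack, mary), 2))  # 0.33
print(round(asymmetric_binary_dissim(jack, jim), 2))   # 0.67
print(round(asymmetric_binary_dissim(jim, mary), 2))   # 0.75
```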
Categorical Variables
A generalization of the binary variable in that it can take more than 2 states, e.g., red, yellow, blue, green
Method 1: Simple matching
$d(i,j) = \frac{p - m}{p}$
where m is the number of matches and p is the total number of variables
Method 2: use a large number of binary variables
- creating a new binary variable for each of the M nominal states
Ordinal Variables
An ordinal variable can be discrete or continuous; order is important, e.g., rank
Can be treated like interval-scaled variables
- replace $x_{if}$ by its rank $r_{if} \in \{1, \ldots, M_f\}$
- map the range of each variable onto [0, 1] by replacing the i-th object in the f-th variable by
  $z_{if} = \frac{r_{if} - 1}{M_f - 1}$
- compute the dissimilarity using methods for interval-scaled variables

Ratio-Scaled Variables
Ratio-scaled variable: a positive measurement on a nonlinear scale, approximately at exponential scale, such as $Ae^{Bt}$ or $Ae^{-Bt}$
Methods:
- treat them like interval-scaled variables
- apply a logarithmic transformation: $y_{if} = \log(x_{if})$
- treat them as continuous ordinal data and treat their ranks as interval-scaled
Variables of Mixed Types
A database may contain all six types of variables
- symmetric binary, asymmetric binary, nominal, ordinal, interval and ratio
One may use a weighted formula to combine their effects:
$d(i,j) = \frac{\sum_{f=1}^{p} \delta_{ij}^{(f)}\, d_{ij}^{(f)}}{\sum_{f=1}^{p} \delta_{ij}^{(f)}}$
- f is binary or nominal: $d_{ij}^{(f)} = 0$ if $x_{if} = x_{jf}$, otherwise $d_{ij}^{(f)} = 1$
- f is interval-based: use the normalized distance
- f is ordinal or ratio-scaled: compute the ranks $r_{if}$ and $z_{if} = \frac{r_{if} - 1}{M_f - 1}$, and treat $z_{if}$ as interval-scaled
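A minimal sketch of the weighted mixed-type formula, assuming only nominal and interval-scaled variables for brevity (ordinal and ratio-scaled variables would first be converted to z-scores as described above); the function and variable names are illustrative:

```python
def mixed_dissim(xi, xj, types, ranges):
    """d(i,j) = sum_f delta_f * d_f / sum_f delta_f over variables of mixed types.

    types[f] is 'nominal' or 'interval'; ranges[f] is (max - min) of interval variable f.
    delta_f is 0 when a value is missing (None), otherwise 1.
    """
    num, den = 0.0, 0.0
    for f, (a, b) in enumerate(zip(xi, xj)):
        if a is None or b is None:          # missing value: delta_f = 0, variable skipped
            continue
        if types[f] == 'nominal':
            d_f = 0.0 if a == b else 1.0    # mismatch contributes 1
        else:
            d_f = abs(a - b) / ranges[f]    # normalized interval distance
        num += d_f
        den += 1.0
    return num / den

# Two objects described by one nominal and two interval-scaled variables
types = ['nominal', 'interval', 'interval']
ranges = [None, 10.0, 100.0]
print(mixed_dissim(['red', 3.0, 40.0], ['blue', 7.0, None], types, ranges))  # 0.7
```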
Lecture-43
A Categorization of Major Clustering Methods

Major Clustering Approaches
Partitioning algorithms
- Construct various partitions and then evaluate them by some criterion
Hierarchical algorithms
- Create a hierarchical decomposition of the set of data (or objects) using some criterion
Density-based
- based on connectivity and density functions
Grid-based
- based on a multiple-level granularity structure
Model-based
- A model is hypothesized for each of the clusters, and the idea is to find the best fit of the data to that model
Lecture-44
Partitioning Methods

Partitioning Algorithms: Basic Concept
Partitioning method: Construct a partition of a database D of n objects into a set of k clusters
Given a k, find a partition of k clusters that optimizes the chosen partitioning criterion
- Global optimal: exhaustively enumerate all partitions
- Heuristic methods: k-means and k-medoids algorithms
- k-means: each cluster is represented by the center of the cluster
- k-medoids or PAM (Partition around medoids): each cluster is represented by one of the objects in the cluster
The K-Means Clustering Method
Given k, the k-means algorithm is implemented in 4 steps:
- Partition objects into k nonempty subsets
- Compute seed points as the centroids of the clusters of the current partition. The centroid is the center (mean point) of the cluster.
- Assign each object to the cluster with the nearest seed point.
- Go back to Step 2; stop when no new assignments are made (a short sketch of these steps follows below).
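A minimal sketch of these four steps (the initialization by picking k random objects and the sample data are illustrative choices, not part of the lecture):

```python
import numpy as np

def k_means(X, k, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: choose k objects at random as the initial seed points (cluster centers)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = None
    for _ in range(max_iter):
        # Step 3: assign each object to the cluster with the nearest seed point
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        # Step 4: stop when no assignment changes any more
        if labels is not None and np.array_equal(new_labels, labels):
            break
        labels = new_labels
        # Step 2: recompute each centroid as the mean point of its cluster
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = X[labels == c].mean(axis=0)
    return labels, centroids

X = np.array([[1.0, 1.0], [1.5, 2.0], [1.0, 0.5], [8.0, 8.0], [9.0, 8.5]])
print(k_means(X, k=2))
```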
The K-Means Clustering Method
Example
(Figure: four scatter plots over a 0-10 × 0-10 range illustrating successive iterations of k-means.)
The K-Means Method
Strength
- Relatively efficient: O(tkn), where n is the number of objects, k is the number of clusters, and t is the number of iterations. Normally, k, t << n.
- Often terminates at a local optimum. The global optimum may be found using techniques such as deterministic annealing and genetic algorithms.
Weakness
- Applicable only when the mean is defined; then what about categorical data?
- Need to specify k, the number of clusters, in advance
- Unable to handle noisy data and outliers
- Not suitable to discover clusters with non-convex shapes
Variations of the K-Means Method
A few variants of k-means which differ in
- Selection of the initial k means
- Dissimilarity calculations
- Strategies to calculate cluster means
Handling categorical data: k-modes
- Replacing means of clusters with modes
- Using new dissimilarity measures to deal with categorical objects
- Using a frequency-based method to update modes of clusters
- A mixture of categorical and numerical data: k-prototype method
The K-Medoids Clustering Method
Find representative objects, called medoids, in clusters
PAM (Partitioning Around Medoids, 1987)
- starts from an initial set of medoids and iteratively replaces one of the medoids by one of the non-medoids if it improves the total distance of the resulting clustering
- PAM works effectively for small data sets, but does not scale well for large data sets
CLARA (Kaufmann & Rousseeuw, 1990)
CLARANS (Ng & Han, 1994): Randomized sampling
Focusing + spatial data structure (Ester et al., 1995)
PAM (Partitioning Around Medoids) (1987)
PAM (Kaufman and Rousseeuw, 1987), built into Splus
Uses real objects to represent the clusters
- Select k representative objects arbitrarily
- For each pair of non-selected object h and selected object i, calculate the total swapping cost TC_ih
- For each pair of i and h, if TC_ih < 0, i is replaced by h; then assign each non-selected object to the most similar representative object
- Repeat steps 2-3 until there is no change (a simplified sketch follows below)
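A simplified sketch of the PAM idea: instead of maintaining the incremental costs C_jih, the swapping cost TC_ih is computed here directly as the change in total distance, which is equivalent but less efficient than the book-kept version. Names and data are illustrative:

```python
import numpy as np

def total_cost(D, medoids):
    """Sum over all objects of the distance to the closest medoid."""
    return D[:, medoids].min(axis=1).sum()

def pam(D, k, seed=0):
    rng = np.random.default_rng(seed)
    n = len(D)
    medoids = list(rng.choice(n, size=k, replace=False))   # k representatives, chosen arbitrarily
    improved = True
    while improved:
        improved = False
        for i in list(medoids):
            for h in range(n):
                if h in medoids:
                    continue
                candidate = [h if m == i else m for m in medoids]
                # TC_ih: change in total cost if medoid i is swapped with non-medoid h
                tc = total_cost(D, candidate) - total_cost(D, medoids)
                if tc < 0:
                    medoids = candidate
                    improved = True
        # assign each non-selected object to the most similar representative object
        labels = D[:, medoids].argmin(axis=1)
    return medoids, labels

X = np.array([[1.0, 1.0], [1.2, 0.8], [8.0, 8.0], [8.5, 7.5], [9.0, 9.0]])
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distance matrix
print(pam(D, k=2))
```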
PAM Clustering: Total swapping cost
$TC_{ih} = \sum_j C_{jih}$
(Figure: four scatter plots illustrating the cases below, where i is the medoid considered for replacement, h is the candidate replacement, t is another current medoid, and j is a non-selected object.)
- $C_{jih} = 0$
- $C_{jih} = d(j, h) - d(j, i)$
- $C_{jih} = d(j, t) - d(j, i)$
- $C_{jih} = d(j, h) - d(j, t)$
CLARA (Clustering Large Applications) (1990)
CLARA (Kaufmann and Rousseeuw in 1990)
- Built into statistical analysis packages, such as S+
It draws multiple samples of the data set, applies PAM on each sample, and gives the best clustering as the output
Strength: deals with larger data sets than PAM
Weakness:
- Efficiency depends on the sample size
- A good clustering based on samples will not necessarily represent a good clustering of the whole data set if the sample is biased

CLARANS ("Randomized" CLARA) (1994)
CLARANS (A Clustering Algorithm based on Randomized Search) draws a sample of neighbors dynamically
The clustering process can be presented as searching a graph where every node is a potential solution, that is, a set of k medoids
If a local optimum is found, CLARANS starts with a new randomly selected node in search of a new local optimum
It is more efficient and scalable than both PAM and CLARA
Focusing techniques and spatial access structures may further improve its performance
Lecture-45
Hierarchical Methods

Hierarchical Clustering
Uses a distance matrix as the clustering criterion. This method does not require the number of clusters k as an input, but needs a termination condition.
(Figure: objects a, b, c, d, e merged step by step into {a,b}, {d,e}, {c,d,e}, and finally {a,b,c,d,e}; read left to right (step 0 to step 4) for agglomerative clustering (AGNES) and right to left for divisive clustering (DIANA).)
AGNES (Agglomerative Nesting)
Introduced in Kaufmann and Rousseeuw (1990)
Implemented in statistical analysis packages, e.g., Splus
Use the single-link method and the dissimilarity matrix
Merge nodes that have the least dissimilarity
Go on in a non-descending fashion
Eventually all nodes belong to the same cluster
(Figure: three scatter plots showing nearby points being merged into progressively larger clusters.)
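A minimal single-link (AGNES-style) example using SciPy's hierarchical clustering routines; the data set is illustrative:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.array([[1.0, 1.0], [1.2, 1.1], [5.0, 5.0], [5.1, 4.9], [9.0, 1.0]])

# Single-link (minimum distance) agglomerative clustering: repeatedly merge the
# two clusters with the least dissimilarity until all objects are in one cluster.
Z = linkage(X, method='single', metric='euclidean')
print(Z)  # each row: the two clusters merged, their distance, and the new cluster size

# Cutting the resulting dendrogram at a chosen distance yields a flat clustering.
print(fcluster(Z, t=2.0, criterion='distance'))
```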
A Dendrogram Shows How the Clusters are Merged Hierarchically
Decompose data objects into several levels of nested partitioning (a tree of clusters), called a dendrogram.
A clustering of the data objects is obtained by cutting the dendrogram at the desired level; each connected component then forms a cluster.
DIANA (Divisive Analysis)
Introduced in Kaufmann and Rousseeuw (1990)
Implemented in statistical analysis packages, e.g., Splus
Inverse order of AGNES
Eventually each node forms a cluster on its own
(Figure: three scatter plots showing one cluster being progressively split until each object stands alone.)
More on Hierarchical Clustering Methods
Major weaknesses of agglomerative clustering methods
- do not scale well: time complexity of at least O(n²), where n is the number of total objects
- can never undo what was done previously
Integration of hierarchical with distance-based clustering
- BIRCH (1996): uses a CF-tree and incrementally adjusts the quality of sub-clusters
- CURE (1998): selects well-scattered points from the cluster and then shrinks them towards the center of the cluster by a specified fraction
- CHAMELEON (1999): hierarchical clustering using dynamic modeling
BIRCH (1996)
BIRCH: Balanced Iterative Reducing and Clustering using Hierarchies, by Zhang, Ramakrishnan, Livny (SIGMOD'96)
Incrementally constructs a CF (Clustering Feature) tree, a hierarchical data structure for multiphase clustering
- Phase 1: scan DB to build an initial in-memory CF tree (a multi-level compression of the data that tries to preserve the inherent clustering structure of the data)
- Phase 2: use an arbitrary clustering algorithm to cluster the leaf nodes of the CF-tree
Scales linearly: finds a good clustering with a single scan and improves the quality with a few additional scans
Weakness: handles only numeric data, and is sensitive to the order of the data records
Clustering Feature Vector
Clustering Feature: CF = (N, LS, SS)
- N: number of data points
- LS: $\sum_{i=1}^{N} X_i$ (linear sum of the N points)
- SS: $\sum_{i=1}^{N} X_i^2$ (square sum of the N points)
Example: for the five points (3,4), (2,6), (4,5), (4,7), (3,8),
CF = (5, (16,30), (54,190))
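The example CF can be checked with a few lines; the split into two sub-clusters below is only meant to illustrate that CF vectors add up, which is what lets BIRCH maintain the tree incrementally:

```python
import numpy as np

points = np.array([(3, 4), (2, 6), (4, 5), (4, 7), (3, 8)], dtype=float)

N = len(points)                  # number of data points
LS = points.sum(axis=0)          # linear sum of the points
SS = (points ** 2).sum(axis=0)   # square sum of the points (per dimension)
print(N, LS, SS)                 # 5, [16. 30.], [54. 190.] -> CF = (5, (16,30), (54,190))

# CF vectors are additive: the CF of two merged sub-clusters is the sum of their CFs.
cf_a = (2, points[:2].sum(axis=0), (points[:2] ** 2).sum(axis=0))
cf_b = (3, points[2:].sum(axis=0), (points[2:] ** 2).sum(axis=0))
merged = (cf_a[0] + cf_b[0], cf_a[1] + cf_b[1], cf_a[2] + cf_b[2])
print(merged)
```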
CF Tree
(Figure: a CF tree with branching factor B = 7 and at most L = 6 entries per leaf. The root and non-leaf nodes hold entries CF1, CF2, ... each pointing to a child node; leaf nodes hold CF entries and are chained together by prev/next pointers.)
CURE (Clustering Using REpresentatives)
CURE: proposed by Guha, Rastogi & Shim, 1998
- Stops the creation of a cluster hierarchy if a level consists of k clusters
- Uses multiple representative points to evaluate the distance between clusters, adjusts well to arbitrarily shaped clusters and avoids the single-link effect

Drawbacks of Distance-Based Methods
Drawbacks of square-error based clustering methods
- Consider only one point as representative of a cluster
- Good only for convex-shaped clusters of similar size and density, and only if k can be reasonably estimated
CURE: The Algorithm
- Draw a random sample s.
- Partition the sample into p partitions, each of size s/p.
- Partially cluster each partition into s/pq clusters.
- Eliminate outliers:
  - by random sampling
  - if a cluster grows too slowly, eliminate it.
- Cluster the partial clusters.
- Label the data on disk.
Data Partitioning and Clustering
- s = 50
- p = 2
- s/p = 25
- s/pq = 5
(Figure: the sample split into two partitions, each partially clustered into small groups.)

CURE: Shrinking Representative Points
Shrink the multiple representative points towards the gravity center by a fraction of α.
Multiple representatives capture the shape of the cluster.
(Figure: representative points on the cluster boundary being pulled towards the cluster center.)
Clustering Categorical Data: ROCK
ROCK: Robust Clustering using linKs, by S. Guha, R. Rastogi, K. Shim (ICDE'99)
- Uses links to measure similarity/proximity
- Not distance based
- Computational complexity: $O(n^2 + n\,m_m m_a + n^2 \log n)$
Basic ideas:
- Similarity function and neighbors: let $T_1 = \{1,2,3\}$, $T_2 = \{3,4,5\}$
  $Sim(T_1, T_2) = \frac{|T_1 \cap T_2|}{|T_1 \cup T_2|}$
  $Sim(T_1, T_2) = \frac{|\{3\}|}{|\{1,2,3,4,5\}|} = \frac{1}{5} = 0.2$
ROCK: Algorithm
Links: the number of common neighbours of the two points.
Example: for the transactions
{1,2,3}, {1,2,4}, {1,2,5}, {1,3,4}, {1,3,5}
{1,4,5}, {2,3,4}, {2,3,5}, {2,4,5}, {3,4,5}
link({1,2,3}, {1,2,4}) = 3
Algorithm
- Draw a random sample
- Cluster with links
- Label the data on disk
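A small sketch that reproduces the link count above, assuming a neighbor threshold of 0.5 on the Jaccard similarity and not counting the two points themselves as common neighbors (both conventions are assumptions here, not stated on the slide):

```python
transactions = [{1,2,3}, {1,2,4}, {1,2,5}, {1,3,4}, {1,3,5},
                {1,4,5}, {2,3,4}, {2,3,5}, {2,4,5}, {3,4,5}]

def sim(a, b):
    """Jaccard similarity |a ∩ b| / |a ∪ b| used by ROCK to define neighbors."""
    return len(a & b) / len(a | b)

theta = 0.5  # similarity threshold for being a neighbor (assumed value)

def neighbors(t):
    return {i for i, u in enumerate(transactions) if u != t and sim(t, u) >= theta}

def link(a, b):
    """Number of common neighbors of a and b."""
    return len(neighbors(a) & neighbors(b))

print(sim({1,2,3}, {3,4,5}))    # 0.2, as in the example above
print(link({1,2,3}, {1,2,4}))   # 3 common neighbors
```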
CHAMELEON
CHAMELEON: hierarchical clustering using dynamic modeling, by G. Karypis, E. H. Han and V. Kumar '99
Measures the similarity based on a dynamic model
- Two clusters are merged only if the interconnectivity and closeness (proximity) between the two clusters are high relative to the internal interconnectivity of the clusters and the closeness of items within the clusters
A two-phase algorithm
- 1. Use a graph partitioning algorithm: cluster objects into a large number of relatively small sub-clusters
- 2. Use an agglomerative hierarchical clustering algorithm: find the genuine clusters by repeatedly combining these sub-clusters

Overall Framework of CHAMELEON
Data Set → Construct Sparse Graph → Partition the Graph → Merge Partitions → Final Clusters
Lecture-46
Density-Based Methods

Density-Based Clustering Methods
Clustering based on density (a local cluster criterion), such as density-connected points
Major features:
- Discover clusters of arbitrary shape
- Handle noise
- One scan
- Need density parameters as a termination condition
Several methods
- DBSCAN
- OPTICS
- DENCLUE
- CLIQUE
Density-Based Clustering
Two parameters:
- Eps: maximum radius of the neighbourhood
- MinPts: minimum number of points in an Eps-neighbourhood of that point
$N_{Eps}(p) = \{q \in D \mid dist(p, q) \le Eps\}$
Directly density-reachable: A point p is directly density-reachable from a point q wrt. Eps, MinPts if
- 1) p belongs to $N_{Eps}(q)$
- 2) core point condition: $|N_{Eps}(q)| \ge MinPts$
(Figure: p lies in the Eps-neighbourhood of core point q, with MinPts = 5 and Eps = 1 cm.)

Density-reachable:
- A point p is density-reachable from a point q wrt. Eps, MinPts if there is a chain of points $p_1, \ldots, p_n$, with $p_1 = q$ and $p_n = p$, such that $p_{i+1}$ is directly density-reachable from $p_i$
Density-connected:
- A point p is density-connected to a point q wrt. Eps, MinPts if there is a point o such that both p and q are density-reachable from o wrt. Eps and MinPts.
DBSCAN: Density Based Spatial Clustering of Applications with Noise
Relies on a density-based notion of cluster: a cluster is defined as a maximal set of density-connected points
Discovers clusters of arbitrary shape in spatial databases with noise
(Figure: core, border, and outlier points for Eps = 1 cm and MinPts = 5.)
DBSCAN: The Algorithm
- Arbitrarily select a point p.
- Retrieve all points density-reachable from p wrt Eps and MinPts.
- If p is a core point, a cluster is formed.
- If p is a border point, no points are density-reachable from p and DBSCAN visits the next point of the database.
- Continue the process until all of the points have been processed (a compact sketch follows below).
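A compact sketch of this procedure (a direct O(n²) implementation on a toy data set; parameter values are illustrative):

```python
import numpy as np

def dbscan(X, eps, min_pts):
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    labels = np.full(n, -1)          # -1 = noise / not yet assigned
    cluster = 0
    for p in range(n):               # arbitrarily select a point p
        if labels[p] != -1:
            continue
        seeds = list(np.where(D[p] <= eps)[0])   # Eps-neighbourhood of p
        if len(seeds) < min_pts:     # p is not a core point: border point or noise
            continue
        cluster += 1
        labels[p] = cluster
        # expand the cluster: retrieve all points density-reachable from p
        while seeds:
            q = seeds.pop()
            if labels[q] == -1:
                labels[q] = cluster
                q_neighbors = np.where(D[q] <= eps)[0]
                if len(q_neighbors) >= min_pts:  # q is itself a core point
                    seeds.extend(q_neighbors.tolist())
    return labels                    # points still labelled -1 are noise

X = np.array([[1, 1], [1.1, 1.0], [0.9, 1.1], [5, 5], [5.1, 5.0], [4.9, 5.1], [9, 9]])
print(dbscan(X, eps=0.5, min_pts=3))  # two clusters and one noise point
```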
OPTICS: A Cluster-Ordering Method
OPTICS: Ordering Points To Identify the Clustering Structure
- Ankerst, Breunig, Kriegel, and Sander (SIGMOD'99)
- Produces a special order of the database wrt its density-based clustering structure
- This cluster-ordering contains information equivalent to the density-based clusterings corresponding to a broad range of parameter settings
- Good for both automatic and interactive cluster analysis, including finding the intrinsic clustering structure
- Can be represented graphically or using visualization techniques
OPTICS: Some Extensions from DBSCAN
Index-based:
- k = number of dimensions
- N = 20
- p = 75%
- M = N(1 − p) = 5
- Complexity: O(kN²)
Core distance
Reachability distance: max(core-distance(o), d(o, p))
(Figure: with MinPts = 5 and ε = 3 cm, r(p1, o) = 2.8 cm and r(p2, o) = 4 cm.)

(Figure: reachability-distance plot over the cluster-ordering of the objects, with thresholds ε and ε′; reachability is undefined for some points.)
DENCLUE: Using Density Functions
DENsity-based CLUstEring by Hinneburg & Keim (KDD'98)
Major features
- Solid mathematical foundation
- Good for data sets with large amounts of noise
- Allows a compact mathematical description of arbitrarily shaped clusters in high-dimensional data sets
- Significantly faster than existing algorithms (faster than DBSCAN by a factor of up to 45)
- But needs a large number of parameters
DENCLUE: Technical Essence
Uses grid cells, but only keeps information about grid cells that actually contain data points, and manages these cells in a tree-based access structure.
Influence function: describes the impact of a data point within its neighborhood.
The overall density of the data space can be calculated as the sum of the influence functions of all data points.
Clusters can be determined mathematically by identifying density attractors.
Density attractors are local maxima of the overall density function.
Gradient: The Steepness of a Slope
Example (Gaussian influence function):
$f_{Gaussian}(x, y) = e^{-\frac{d(x, y)^2}{2\sigma^2}}$
$f^{D}_{Gaussian}(x) = \sum_{i=1}^{N} e^{-\frac{d(x, x_i)^2}{2\sigma^2}}$
$\nabla f^{D}_{Gaussian}(x, x_i) = \sum_{i=1}^{N} (x_i - x) \cdot e^{-\frac{d(x, x_i)^2}{2\sigma^2}}$

Density Attractor
(Figure: density attractors as local maxima of the overall density function.)

Center-Defined and Arbitrary
(Figure: examples of center-defined clusters and arbitrary-shape clusters.)
Lecture-47
Grid-Based Methods

Grid-Based Clustering Method
Uses a multi-resolution grid data structure
Several interesting methods
- STING (a STatistical INformation Grid approach)
- WaveCluster: a multi-resolution clustering approach using the wavelet method
- CLIQUE

STING: A Statistical Information Grid Approach
Wang, Yang and Muntz (VLDB'97)
The spatial area is divided into rectangular cells
There are several levels of cells corresponding to different levels of resolution
STING: A Statistical Information Grid Approach
- Each cell at a high level is partitioned into a number of smaller cells in the next lower level
- Statistical info of each cell is calculated and stored beforehand and is used to answer queries
- Parameters of higher level cells can be easily calculated from parameters of lower level cells:
  count, mean, s, min, max
  type of distribution: normal, uniform, etc.
- Use a top-down approach to answer spatial data queries
- Start from a pre-selected layer, typically with a small number of cells
- For each cell in the current level compute the confidence interval
- Remove the irrelevant cells from further consideration
- When finished examining the current layer, proceed to the next lower level
- Repeat this process until the bottom layer is reached
- Advantages:
  Query-independent, easy to parallelize, incremental update
  O(K), where K is the number of grid cells at the lowest level
- Disadvantages:
  All the cluster boundaries are either horizontal or vertical, and no diagonal boundary is detected
WaveCluster
Sheikholeslami, Chatterjee, and Zhang (VLDB'98)
A multi-resolution clustering approach which applies a wavelet transform to the feature space
- A wavelet transform is a signal processing technique that decomposes a signal into different frequency sub-bands.
Both grid-based and density-based
Input parameters:
- number of grid cells for each dimension
- the wavelet, and the number of applications of the wavelet transform

WaveCluster
How to apply the wavelet transform to find clusters
- Summarize the data by imposing a multidimensional grid structure onto the data space
- These multidimensional spatial data objects are represented in an n-dimensional feature space
- Apply the wavelet transform on the feature space to find the dense regions in the feature space
- Apply the wavelet transform multiple times, which results in clusters at different scales from fine to coarse

What Is Wavelet
Quantization
Transformation
(Figures illustrating the wavelet transform, the quantization of the feature space, and the transformed space.)
WaveCluster
Why is the wavelet transformation useful for clustering
- Unsupervised clustering
  It uses hat-shaped filters to emphasize regions where points cluster, while simultaneously suppressing weaker information on their boundaries
- Effective removal of outliers
- Multi-resolution
- Cost efficiency
Major features:
- Complexity O(N)
- Detects arbitrarily shaped clusters at different scales
- Not sensitive to noise, not sensitive to input order
- Only applicable to low-dimensional data
CLIQUE (Clustering In QUEst)
Agrawal, Gehrke, Gunopulos, Raghavan (SIGMOD'98)
Automatically identifies subspaces of a high dimensional data space that allow better clustering than the original space
CLIQUE can be considered as both density-based and grid-based
- It partitions each dimension into the same number of equal-length intervals
- It partitions an m-dimensional data space into non-overlapping rectangular units
- A unit is dense if the fraction of total data points contained in the unit exceeds the input model parameter
- A cluster is a maximal set of connected dense units within a subspace

CLIQUE: The Major Steps
- Partition the data space and find the number of points that lie inside each cell of the partition.
- Identify the subspaces that contain clusters using the Apriori principle.
- Identify clusters:
  - Determine dense units in all subspaces of interest
  - Determine connected dense units in all subspaces of interest
- Generate minimal descriptions for the clusters:
  - Determine maximal regions that cover a cluster of connected dense units for each cluster
  - Determine the minimal cover for each cluster
(Figure: CLIQUE example over the dimensions age (20-60), Salary (×10,000), and Vacation (weeks), with density threshold τ = 3; dense units in the individual dimensions combine into a dense region in the (age, Vacation, Salary) subspace for ages roughly 30-50.)
Strength and Weakness of CLIQUE
Strength
- It automatically finds subspaces of the highest dimensionality such that high-density clusters exist in those subspaces
- It is insensitive to the order of records in input and does not presume some canonical data distribution
- It scales linearly with the size of input and has good scalability as the number of dimensions in the data increases
Weakness
- The accuracy of the clustering result may be degraded at the expense of simplicity of the method
Lecture-48
Model-Based Clustering Methods

Model-Based Clustering Methods
Attempt to optimize the fit between the data and some mathematical model
Statistical and AI approaches
- Conceptual clustering
  A form of clustering in machine learning
  Produces a classification scheme for a set of unlabeled objects
  Finds a characteristic description for each concept (class)
- COBWEB
  A popular and simple method of incremental conceptual learning
  Creates a hierarchical clustering in the form of a classification tree
  Each node refers to a concept and contains a probabilistic description of that concept

COBWEB Clustering Method
(Figure: a classification tree.)
More on Statistical-Based Clustering
Limitations of COBWEB
- The assumption that the attributes are independent of each other is often too strong because correlations may exist
- Not suitable for clustering large database data: skewed tree and expensive probability distributions
CLASSIT
- an extension of COBWEB for incremental clustering of continuous data
- suffers from similar problems as COBWEB
AutoClass (Cheeseman and Stutz, 1996)
- Uses Bayesian statistical analysis to estimate the number of clusters
- Popular in industry
Other Model-Based Clustering Methods
Neural network approaches
- Represent each cluster as an exemplar, acting as a “prototype” of the cluster
- New objects are distributed to the cluster whose exemplar is the most similar according to some distance measure
Competitive learning
- Involves a hierarchical architecture of several units (neurons)
- Neurons compete in a “winner-takes-all” fashion for the object currently being presented
Self-Organizing Feature Maps (SOMs)
Clustering is also performed by having several units competing for the current object
The unit whose weight vector is closest to the current object wins
The winner and its neighbors learn by having their weights adjusted
SOMs are believed to resemble processing that can occur in the brain
Useful for visualizing high-dimensional data in 2- or 3-D space
Lecture-49
Outlier Analysis

What Is Outlier Discovery?
What are outliers?
- A set of objects that are considerably dissimilar from the remainder of the data
- Example: in sports, Michael Jordan, Wayne Gretzky, ...
Problem
- Find the top n outlier points
Applications:
- Credit card fraud detection
- Telecom fraud detection
- Customer segmentation
- Medical analysis
Outlier Discovery: Statistical Approaches
Assume a model of the underlying distribution that generates the data set (e.g., a normal distribution)
Use discordancy tests, which depend on
- the data distribution
- the distribution parameters (e.g., mean, variance)
- the number of expected outliers
Drawbacks
- Most tests apply to a single attribute
- In many cases, the data distribution may not be known
Lecture-49 - Outlier Analysis
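As a hedged illustration of a simple single-attribute discordancy test under a normality assumption (matching the drawbacks noted above), the sketch below flags values whose z-score exceeds a threshold. The 3-sigma cutoff and the synthetic data are assumptions, not values given in the lecture.

```python
# Simple single-attribute discordancy sketch under a normal-distribution assumption.
# The 3-sigma threshold and the synthetic data are illustrative choices only.
import numpy as np

def zscore_outliers(values, threshold=3.0):
    values = np.asarray(values, dtype=float)
    mean, std = values.mean(), values.std(ddof=1)
    z = np.abs(values - mean) / std          # distance from the mean in standard deviations
    return np.flatnonzero(z > threshold)     # indices of suspected outliers

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(10.0, 0.3, size=30), [25.0]])  # one clearly discordant value
print(zscore_outliers(data))                                     # -> [30]
```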
Outlier Discovery: Distance-Based Approach
Introduced to counter the main limitations of statistical methods
- We need multi-dimensional analysis without knowing the data distribution
Distance-based outlier: a DB(p, D)-outlier is an object O in a dataset T such that at least a fraction p of the objects in T lie at a distance greater than D from O
Algorithms for mining distance-based outliers
- Index-based algorithm
- Nested-loop algorithm
- Cell-based algorithm
Lecture-49 - Outlier Analysis
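A hedged sketch of the DB(p, D) definition in the simplest (nested-loop style) form: an object is reported as an outlier if at least a fraction p of the other objects lie farther than D from it. The O(n²) double loop, the Euclidean distance, and the parameter values in the demo are illustrative assumptions.

```python
# Naive nested-loop sketch of DB(p, D)-outlier detection (O(n^2), Euclidean distance).
# p and D below are illustrative parameter choices, not values from the lecture.
import numpy as np

def db_outliers(X, p, D):
    X = np.asarray(X, dtype=float)
    n = len(X)
    outliers = []
    for i in range(n):
        dists = np.linalg.norm(X - X[i], axis=1)   # distances from object i to all objects
        far = np.count_nonzero(dists > D)          # objects lying farther than D from i
        if far >= p * (n - 1):                     # at least a fraction p of the other objects
            outliers.append(i)
    return outliers

X = np.array([[0, 0], [0.2, 0.1], [0.1, 0.3], [0.3, 0.2], [5.0, 5.0]])
print(db_outliers(X, p=0.9, D=1.0))                # -> [4]
```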
Outlier Discovery: Deviation-Based Approach
Identifies outliers by examining the main characteristics of the objects in a group
Objects that "deviate" from this description are considered outliers
Sequential exception technique
- Simulates the way humans distinguish unusual objects from a series of supposedly similar objects
OLAP data cube technique
- Uses data cubes to identify regions of anomalies in large multidimensional data
Lecture-49 - Outlier Analysis
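As a loose, hedged stand-in for the sequential exception technique (not the full smoothing-factor algorithm), the sketch below uses variance as the dissimilarity function and flags the object whose removal most reduces it; the data and the single-object simplification are assumptions for illustration.

```python
# Simplified deviation-based sketch: variance is the dissimilarity function,
# and we flag the single object whose removal most reduces it (a rough,
# hedged stand-in for the sequential exception technique).
import numpy as np

def deviation_outlier(values):
    values = np.asarray(values, dtype=float)
    base = values.var()                                   # dissimilarity of the full set
    reductions = []
    for i in range(len(values)):
        rest = np.delete(values, i)
        reductions.append(base - rest.var())              # drop in dissimilarity if i is removed
    return int(np.argmax(reductions))                     # object contributing most to deviation

data = [52, 50, 51, 49, 50, 90]                            # 90 deviates from the group
print(deviation_outlier(data))                             # -> 5
```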
Cluster analysis

  • 1. Lecture-41Lecture-41 What is Cluster Analysis?What is Cluster Analysis?
  • 2. General Applications of ClusteringGeneral Applications of Clustering Pattern RecognitionPattern Recognition Spatial Data AnalysisSpatial Data Analysis  create thematic maps in GIS by clustering featurecreate thematic maps in GIS by clustering feature spacesspaces  detect spatial clusters and explain them in spatial datadetect spatial clusters and explain them in spatial data miningmining Image ProcessingImage Processing Economic Science ( market research)Economic Science ( market research) WWWWWW  Document classificationDocument classification  Cluster Weblog data to discover groups of similarCluster Weblog data to discover groups of similar access patternsaccess patterns Lecture-41 - What is Cluster Analysis?Lecture-41 - What is Cluster Analysis?
  • 3. Examples of Clustering ApplicationsExamples of Clustering Applications Marketing: Help marketers discover distinct groups in theirMarketing: Help marketers discover distinct groups in their customer bases, and then use this knowledge to developcustomer bases, and then use this knowledge to develop targeted marketing programstargeted marketing programs Land use: Identification of areas of similar land use in anLand use: Identification of areas of similar land use in an earth observation databaseearth observation database Insurance: Identifying groups of motor insurance policyInsurance: Identifying groups of motor insurance policy holders with a high average claim costholders with a high average claim cost City-planning: Identifying groups of houses according toCity-planning: Identifying groups of houses according to their house type, value, and geographical locationtheir house type, value, and geographical location Earth-quake studies: Observed earth quake epicentersEarth-quake studies: Observed earth quake epicenters should be clustered along continent faultsshould be clustered along continent faults Lecture-41 - What is Cluster Analysis?Lecture-41 - What is Cluster Analysis?
  • 4. What Is Good Clustering?What Is Good Clustering? A good clustering method will produce highA good clustering method will produce high quality clusters withquality clusters with  highhigh intra-classintra-class similaritysimilarity  lowlow inter-classinter-class similaritysimilarity The quality of a clustering result depends onThe quality of a clustering result depends on both the similarity measure used by the methodboth the similarity measure used by the method and its implementation.and its implementation. The quality of a clustering method is alsoThe quality of a clustering method is also measured by its ability to discover some or all ofmeasured by its ability to discover some or all of the hidden patterns.the hidden patterns. Lecture-41 - What is Cluster Analysis?Lecture-41 - What is Cluster Analysis?
  • 5. Requirements of Clustering in Data MiningRequirements of Clustering in Data Mining ScalabilityScalability Ability to deal with different types of attributesAbility to deal with different types of attributes Discovery of clusters with arbitrary shapeDiscovery of clusters with arbitrary shape Minimal requirements for domain knowledge toMinimal requirements for domain knowledge to determine input parametersdetermine input parameters Able to deal with noise and outliersAble to deal with noise and outliers Insensitive to order of input recordsInsensitive to order of input records High dimensionalityHigh dimensionality Incorporation of user-specified constraintsIncorporation of user-specified constraints Interpretability and usabilityInterpretability and usability Lecture-41 - What is Cluster Analysis?Lecture-41 - What is Cluster Analysis?
  • 6. Lecture-42Lecture-42 Types of Data in Cluster AnalysisTypes of Data in Cluster Analysis
  • 7. Data StructuresData Structures Data matrixData matrix  (two modes)(two modes) Dissimilarity matrixDissimilarity matrix  (one mode)(one mode)                   npx...nfx...n1x ............... ipx...ifx...i1x ............... 1px...1fx...11x                 0...)2,()1,( ::: )2,3() ...ndnd 0dd(3,1 0d(2,1) 0 Lecture-42 - Types of Data in Cluster AnalysisLecture-42 - Types of Data in Cluster Analysis
  • 8. Measure the Quality of ClusteringMeasure the Quality of Clustering Dissimilarity/Similarity metric: Similarity is expressed inDissimilarity/Similarity metric: Similarity is expressed in terms of a distance function, which is typically metric:terms of a distance function, which is typically metric: dd((i, ji, j)) There is a separate “quality” function that measures theThere is a separate “quality” function that measures the “goodness” of a cluster.“goodness” of a cluster. The definitions of distance functions are usually veryThe definitions of distance functions are usually very different for interval-scaled, boolean, categorical, ordinaldifferent for interval-scaled, boolean, categorical, ordinal and ratio variables.and ratio variables. Weights should be associated with different variablesWeights should be associated with different variables based on applications and data semantics.based on applications and data semantics. It is hard to define “similar enough” or “good enough”It is hard to define “similar enough” or “good enough”  the answer is typically highly subjective.the answer is typically highly subjective. Lecture-42 - Types of Data in Cluster AnalysisLecture-42 - Types of Data in Cluster Analysis
  • 9. Type of data in clustering analysisType of data in clustering analysis Interval-scaled variablesInterval-scaled variables Binary variablesBinary variables Categorical, Ordinal, and Ratio ScaledCategorical, Ordinal, and Ratio Scaled variablesvariables Variables of mixed typesVariables of mixed types Lecture-42 - Types of Data in Cluster AnalysisLecture-42 - Types of Data in Cluster Analysis
  • 10. Interval-valued variablesInterval-valued variables Standardize dataStandardize data  Calculate the mean absolute deviation:Calculate the mean absolute deviation: wherewhere  Calculate the standardized measurement (Calculate the standardized measurement (z-scorez-score)) Using mean absolute deviation is more robustUsing mean absolute deviation is more robust than using standard deviationthan using standard deviation .)... 21 1 nffff xx(xnm +++= |)|...|||(|1 21 fnffffff mxmxmxns −++−+−= f fif if s mx z − = Lecture-42 - Types of Data in Cluster AnalysisLecture-42 - Types of Data in Cluster Analysis
  • 11. Similarity and Dissimilarity ObjectsSimilarity and Dissimilarity Objects Distances are normally used to measure the similarity orDistances are normally used to measure the similarity or dissimilarity between two data objectsdissimilarity between two data objects Some popular ones include:Some popular ones include: Minkowski distanceMinkowski distance:: wherewhere ii = (= (xxi1i1,, xxi2i2, …,, …, xxipip) and) and jj = (= (xxj1j1,, xxj2j2, …,, …, xxjpjp) are two) are two pp-dimensional-dimensional data objects, anddata objects, and qq is a positive integeris a positive integer IfIf qq == 11,, dd is Manhattan distanceis Manhattan distance q q pp qq j x i x j x i x j x i xjid )||...|||(|),( 2211 −++−+−= ||...||||),( 2211 pp j x i x j x i x j x i xjid −++−+−= Lecture-42 - Types of Data in Cluster AnalysisLecture-42 - Types of Data in Cluster Analysis
  • 12. Similarity and Dissimilarity ObjectsSimilarity and Dissimilarity Objects If qIf q == 22,, dd is Euclidean distance:is Euclidean distance:  PropertiesProperties d(i,j)d(i,j) ≥≥ 00 d(i,i)d(i,i) = 0= 0 d(i,j)d(i,j) == d(j,i)d(j,i) d(i,j)d(i,j) ≤≤ d(i,k)d(i,k) ++ d(k,j)d(k,j) Also one can use weighted distance, parametricAlso one can use weighted distance, parametric Pearson product moment correlation, or otherPearson product moment correlation, or other disimilarity measures.disimilarity measures. )||...|||(|),( 22 22 2 11 pp j x i x j x i x j x i xjid −++−+−= Lecture-42 - Types of Data in Cluster AnalysisLecture-42 - Types of Data in Cluster Analysis
  • 13. Binary VariablesBinary Variables A contingency table for binary dataA contingency table for binary data Simple matching coefficient (invariant, if theSimple matching coefficient (invariant, if the binary variable isbinary variable is symmetricsymmetric):): Jaccard coefficient (noninvariant if the binaryJaccard coefficient (noninvariant if the binary variable isvariable is asymmetricasymmetric):): dcba cbjid +++ +=),( pdbcasum dcdc baba sum ++ + + 0 1 01 cba cbjid ++ +=),( Object i Object j Lecture-42 - Types of Data in Cluster AnalysisLecture-42 - Types of Data in Cluster Analysis
  • 14. Dissimilarity between BinaryDissimilarity between Binary VariablesVariables ExampleExample  gender is a symmetric attributegender is a symmetric attribute  the remaining attributes are asymmetric binarythe remaining attributes are asymmetric binary  let the values Y and P be set to 1, and the value N be set to 0let the values Y and P be set to 1, and the value N be set to 0 Name Gender Fever Cough Test-1 Test-2 Test-3 Test-4 Jack M Y N P N N N Mary F Y N P N P N Jim M Y P N N N N 75.0 211 21 ),( 67.0 111 11 ),( 33.0 102 10 ),( = ++ + = = ++ + = = ++ + = maryjimd jimjackd maryjackd Lecture-42 - Types of Data in Cluster AnalysisLecture-42 - Types of Data in Cluster Analysis
  • 15. Categorical VariablesCategorical Variables A generalization of the binary variable in that itA generalization of the binary variable in that it can take more than 2 states, e.g., red, yellow,can take more than 2 states, e.g., red, yellow, blue, greenblue, green Method 1: Simple matchingMethod 1: Simple matching  MM is no of matches,is no of matches, pp is total no of variablesis total no of variables Method 2: use a large number of binary variablesMethod 2: use a large number of binary variables  creating a new binary variable for each of thecreating a new binary variable for each of the MM nominal statesnominal states p mpjid −=),( Lecture-42 - Types of Data in Cluster AnalysisLecture-42 - Types of Data in Cluster Analysis
  • 16. Ordinal VariablesOrdinal Variables An ordinal variable can be discrete or continuousAn ordinal variable can be discrete or continuous order is important, e.g., rankorder is important, e.g., rank Can be treated like interval-scaledCan be treated like interval-scaled  replacingreplacing xxifif by their rankby their rank  map the range of each variable onto [0, 1] by replacingmap the range of each variable onto [0, 1] by replacing ii-th object in the-th object in the ff-th variable by-th variable by  compute the dissimilarity using methods for interval-compute the dissimilarity using methods for interval- scaled variablesscaled variables 1 1 − − = f if if M r z },...,1{ fif Mr ∈ Lecture-42 - Types of Data in Cluster AnalysisLecture-42 - Types of Data in Cluster Analysis
  • 17. Ratio-Scaled VariablesRatio-Scaled Variables Ratio-scaled variable: a positive measurement onRatio-scaled variable: a positive measurement on a nonlinear scale, approximately at exponentiala nonlinear scale, approximately at exponential scale, such asscale, such as AeAeBtBt oror AeAe-Bt-Bt Methods:Methods:  treat them like interval-scaled variablestreat them like interval-scaled variables  apply logarithmic transformationapply logarithmic transformation yyifif == log(xlog(xifif))  treat them as continuous ordinal data treat their rank astreat them as continuous ordinal data treat their rank as interval-scaled.interval-scaled. Lecture-42 - Types of Data in Cluster AnalysisLecture-42 - Types of Data in Cluster Analysis
  • 18. Variables of MixedVariables of Mixed TypesTypesA database may contain all the six types of variablesA database may contain all the six types of variables  symmetric binary, asymmetric binary, nominal, ordinal,symmetric binary, asymmetric binary, nominal, ordinal, interval and ratio.interval and ratio. One may use a weighted formula to combine theirOne may use a weighted formula to combine their effects.effects.  ff is binary or nominal:is binary or nominal: ddijij (f)(f) = 0 if x= 0 if xifif = x= xjfjf , or d, or dijij (f)(f) = 1 o.w.= 1 o.w.  ff is interval-based: use the normalized distanceis interval-based: use the normalized distance  ff is ordinal or ratio-scaledis ordinal or ratio-scaled compute ranks rcompute ranks rifif andand and treat zand treat zifif as interval-scaledas interval-scaled )( 1 )()( 1 ),( f ij p f f ij f ij p f d jid δ δ = = Σ Σ = 1 1 − − = f if M rzif Lecture-42 - Types of Data in Cluster AnalysisLecture-42 - Types of Data in Cluster Analysis
  • 19. Lecture-43Lecture-43 A Categorization of MajorA Categorization of Major Clustering MethodsClustering Methods
  • 20. Major Clustering ApproachesMajor Clustering Approaches Partitioning algorithmsPartitioning algorithms  Construct various partitions and then evaluate them by someConstruct various partitions and then evaluate them by some criterioncriterion Hierarchy algorithmsHierarchy algorithms  Create a hierarchical decomposition of the set of data (or objects)Create a hierarchical decomposition of the set of data (or objects) using some criterionusing some criterion Density-basedDensity-based  based on connectivity and density functionsbased on connectivity and density functions Grid-basedGrid-based  based on a multiple-level granularity structurebased on a multiple-level granularity structure Model-basedModel-based  A model is hypothesized for each of the clusters and the idea is toA model is hypothesized for each of the clusters and the idea is to find the best fit of that model to each otherfind the best fit of that model to each other Lecture-43 - A Categorization of Major Clustering MethodsLecture-43 - A Categorization of Major Clustering Methods
  • 22. Partitioning Algorithms: Basic ConceptPartitioning Algorithms: Basic Concept Partitioning method: Construct a partition of a databasePartitioning method: Construct a partition of a database DD ofof nn objects into a set ofobjects into a set of kk clustersclusters Given aGiven a kk, find a partition of, find a partition of k clustersk clusters that optimizes thethat optimizes the chosen partitioning criterionchosen partitioning criterion  Global optimal: exhaustively enumerate all partitionsGlobal optimal: exhaustively enumerate all partitions  Heuristic methods:Heuristic methods: k-meansk-means andand k-medoidsk-medoids algorithmsalgorithms  k-meansk-means - Each cluster is represented by the center of- Each cluster is represented by the center of the clusterthe cluster  k-medoidsk-medoids or PAM (Partition around medoids) - Eachor PAM (Partition around medoids) - Each cluster is represented by one of the objects in thecluster is represented by one of the objects in the clustercluster Lecture-44 - Partitioning MethodsLecture-44 - Partitioning Methods
  • 23. TheThe K-MeansK-Means Clustering MethodClustering Method GivenGiven kk, the, the k-meansk-means algorithm isalgorithm is implemented in 4 steps:implemented in 4 steps:  Partition objects intoPartition objects into kk nonempty subsetsnonempty subsets  Compute seed points as the centroids of theCompute seed points as the centroids of the clusters of the current partition. The centroid is theclusters of the current partition. The centroid is the center (mean point) of the cluster.center (mean point) of the cluster.  Assign each object to the cluster with the nearestAssign each object to the cluster with the nearest seed point.seed point.  Go back to Step 2, stop when no more newGo back to Step 2, stop when no more new assignment.assignment. Lecture-44 - Partitioning MethodsLecture-44 - Partitioning Methods
  • 24. TheThe K-MeansK-Means Clustering MethodClustering Method ExampleExample 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 Lecture-44 - Partitioning MethodsLecture-44 - Partitioning Methods
  • 25. thethe K-MeansK-Means MethodMethod StrengthStrength  Relatively efficientRelatively efficient:: OO((tkntkn), where), where nn is no of objects,is no of objects, kk isis no of clusters, andno of clusters, and tt is no of iterations. Normally,is no of iterations. Normally, kk,, tt <<<< nn..  Often terminates at aOften terminates at a local optimumlocal optimum. The. The global optimumglobal optimum may be found using techniques such as:may be found using techniques such as: deterministicdeterministic annealingannealing andand genetic algorithmsgenetic algorithms WeaknessWeakness  Applicable only whenApplicable only when meanmean is defined, then what aboutis defined, then what about categorical data?categorical data?  Need to specifyNeed to specify k,k, thethe numbernumber of clusters, in advanceof clusters, in advance  Unable to handle noisy data andUnable to handle noisy data and outliersoutliers  Not suitable to discover clusters withNot suitable to discover clusters with non-convex shapesnon-convex shapes Lecture-44 - Partitioning MethodsLecture-44 - Partitioning Methods
  • 26. Variations of theVariations of the K-MeansK-Means MethodMethod A few variants of theA few variants of the k-meansk-means which differ inwhich differ in  Selection of the initialSelection of the initial kk meansmeans  Dissimilarity calculationsDissimilarity calculations  Strategies to calculate cluster meansStrategies to calculate cluster means Handling categorical data:Handling categorical data: k-modesk-modes  Replacing means of clusters with modesReplacing means of clusters with modes  Using new dissimilarity measures to deal withUsing new dissimilarity measures to deal with categorical objectscategorical objects  Using a frequency-based method to update modes ofUsing a frequency-based method to update modes of clustersclusters  A mixture of categorical and numerical data:A mixture of categorical and numerical data: k-prototypek-prototype methodmethod Lecture-44 - Partitioning MethodsLecture-44 - Partitioning Methods
  • 27. TheThe KK--MedoidsMedoids Clustering MethodClustering Method FindFind representativerepresentative objects, calledobjects, called medoidsmedoids, in clusters, in clusters PAMPAM (Partitioning Around Medoids, 1987)(Partitioning Around Medoids, 1987)  starts from an initial set of medoids and iterativelystarts from an initial set of medoids and iteratively replaces one of the medoids by one of the non-replaces one of the medoids by one of the non- medoids if it improves the total distance of the resultingmedoids if it improves the total distance of the resulting clusteringclustering  PAMPAM works effectively for small data sets, but does notworks effectively for small data sets, but does not scale well for large data setsscale well for large data sets CLARACLARA (Kaufmann & Rousseeuw, 1990)(Kaufmann & Rousseeuw, 1990) CLARANSCLARANS (Ng & Han, 1994): Randomized sampling(Ng & Han, 1994): Randomized sampling Focusing + spatial data structure (Ester et al., 1995)Focusing + spatial data structure (Ester et al., 1995) Lecture-44 - Partitioning MethodsLecture-44 - Partitioning Methods
  • 28. PAM (Partitioning Around Medoids) (1987)PAM (Partitioning Around Medoids) (1987) PAM (Kaufman and Rousseeuw, 1987), built inPAM (Kaufman and Rousseeuw, 1987), built in SplusSplus Use real object to represent the clusterUse real object to represent the cluster  SelectSelect kk representative objects arbitrarilyrepresentative objects arbitrarily  For each pair of non-selected objectFor each pair of non-selected object hh and selectedand selected objectobject ii, calculate the total swapping cost, calculate the total swapping cost TCTCihih  For each pair ofFor each pair of ii andand hh,, IfIf TCTCihih < 0,< 0, ii is replaced byis replaced by hh Then assign each non-selected object to the mostThen assign each non-selected object to the most similar representative objectsimilar representative object  repeat steps 2-3 until there is no changerepeat steps 2-3 until there is no change Lecture-44 - Partitioning MethodsLecture-44 - Partitioning Methods
  • 29. PAM Clustering:PAM Clustering: Total swapping costTotal swapping cost TCTCihih==∑∑jjCCjihjih 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 j i h t Cjih = 0 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 t i h j Cjih = d(j, h) - d(j, i) 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 h i t j Cjih = d(j, t) - d(j, i) 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 t i h j Cjih = d(j, h) - d(j, t)Lecture-44 - Partitioning MethodsLecture-44 - Partitioning Methods
  • 30. CLARACLARA (Clustering Large Applications)(Clustering Large Applications) (1990)(1990) CLARACLARA (Kaufmann and Rousseeuw in 1990)(Kaufmann and Rousseeuw in 1990)  Built in statistical analysis packages, such as S+Built in statistical analysis packages, such as S+ It drawsIt draws multiple samplesmultiple samples of the data set, appliesof the data set, applies PAMPAM onon each sample, and gives the best clustering as the outputeach sample, and gives the best clustering as the output StrengthStrength: deals with larger data sets than: deals with larger data sets than PAMPAM Weakness:Weakness:  Efficiency depends on the sample sizeEfficiency depends on the sample size  A good clustering based on samples will notA good clustering based on samples will not necessarily represent a good clustering of the wholenecessarily represent a good clustering of the whole data set if the sample is biaseddata set if the sample is biased Lecture-44 - Partitioning MethodsLecture-44 - Partitioning Methods
  • 31. CLARANSCLARANS (“Randomized” CLARA)(“Randomized” CLARA) (1994)(1994) CLARANSCLARANS (A Clustering Algorithm based on Randomized(A Clustering Algorithm based on Randomized Search) CLARANS draws sample of neighborsSearch) CLARANS draws sample of neighbors dynamicallydynamically The clustering process can be presented as searching aThe clustering process can be presented as searching a graph where every node is a potential solution, that is, agraph where every node is a potential solution, that is, a set ofset of kk medoidsmedoids If the local optimum is found,If the local optimum is found, CLARANSCLARANS starts with newstarts with new randomly selected node in search for a new local optimumrandomly selected node in search for a new local optimum It is more efficient and scalable than bothIt is more efficient and scalable than both PAMPAM andand CLARACLARA Focusing techniques and spatial access structures mayFocusing techniques and spatial access structures may further improve its performancefurther improve its performance Lecture-44 - Partitioning MethodsLecture-44 - Partitioning Methods
  • 33. Hierarchical ClusteringHierarchical Clustering Use distance matrix as clustering criteria. ThisUse distance matrix as clustering criteria. This method does not require the number of clustersmethod does not require the number of clusters kk as an input, but needs a termination conditionas an input, but needs a termination condition Step 0 Step 1 Step 2 Step 3 Step 4 b d c e a a b d e c d e a b c d e Step 4 Step 3 Step 2 Step 1 Step 0 agglomerative (AGNES) divisive (DIANA) Lecture-45 - Hierarchical MethodsLecture-45 - Hierarchical Methods
  • 34. AGNES (Agglomerative Nesting)AGNES (Agglomerative Nesting) Introduced in Kaufmann and Rousseeuw (1990)Introduced in Kaufmann and Rousseeuw (1990) Implemented in statistical analysis packages, e.g.,Implemented in statistical analysis packages, e.g., SplusSplus Use the Single-Link method and the dissimilarityUse the Single-Link method and the dissimilarity matrix.matrix. Merge nodes that have the least dissimilarityMerge nodes that have the least dissimilarity Go on in a non-descending fashionGo on in a non-descending fashion Eventually all nodes belong to the same clusterEventually all nodes belong to the same cluster 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 Lecture-45 - Hierarchical MethodsLecture-45 - Hierarchical Methods
  • 35. A Dendrogram Shows How the Clusters are Merged Hierarchically Decompose data objects into a several levels of nested partitioning (tree of clusters), called a dendrogram. A clustering of the data objects is obtained by cutting the dendrogram at the desired level, then each connected component forms a cluster. Lecture-45 - Hierarchical MethodsLecture-45 - Hierarchical Methods
  • 36. DIANA (Divisive Analysis)DIANA (Divisive Analysis) Introduced in Kaufmann and Rousseeuw (1990)Introduced in Kaufmann and Rousseeuw (1990) Implemented in statistical analysis packages,Implemented in statistical analysis packages, e.g., Spluse.g., Splus Inverse order of AGNESInverse order of AGNES Eventually each node forms a cluster on its ownEventually each node forms a cluster on its own 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 Lecture-45 - Hierarchical MethodsLecture-45 - Hierarchical Methods
  • 37. More on Hierarchical Clustering MethodsMore on Hierarchical Clustering Methods Major weakness of agglomerative clustering methodsMajor weakness of agglomerative clustering methods  do not scaledo not scale well: time complexity of at leastwell: time complexity of at least OO((nn22 ),), wherewhere nn is the number of total objectsis the number of total objects  can never undo what was done previouslycan never undo what was done previously Integration of hierarchical with distance-based clusteringIntegration of hierarchical with distance-based clustering  BIRCH (1996)BIRCH (1996): uses CF-tree and incrementally adjusts: uses CF-tree and incrementally adjusts the quality of sub-clustersthe quality of sub-clusters  CURE (1998CURE (1998): selects well-scattered points from the): selects well-scattered points from the cluster and then shrinks them towards the center of thecluster and then shrinks them towards the center of the cluster by a specified fractioncluster by a specified fraction  CHAMELEON (1999)CHAMELEON (1999): hierarchical clustering using: hierarchical clustering using dynamic modelingdynamic modeling Lecture-45 - Hierarchical MethodsLecture-45 - Hierarchical Methods
  • 38. BIRCH (1996)BIRCH (1996) Birch: Balanced Iterative Reducing and Clustering usingBirch: Balanced Iterative Reducing and Clustering using Hierarchies, by Zhang, Ramakrishnan, Livny (SIGMODHierarchies, by Zhang, Ramakrishnan, Livny (SIGMOD’’96)96) Incrementally construct a CF (Clustering Feature) tree, aIncrementally construct a CF (Clustering Feature) tree, a hierarchical data structure for multiphase clusteringhierarchical data structure for multiphase clustering  Phase 1: scan DB to build an initial in-memory CF treePhase 1: scan DB to build an initial in-memory CF tree (a multi-level compression of the data that tries to(a multi-level compression of the data that tries to preserve the inherent clustering structure of the data)preserve the inherent clustering structure of the data)  Phase 2: use an arbitrary clustering algorithm to clusterPhase 2: use an arbitrary clustering algorithm to cluster the leaf nodes of the CF-treethe leaf nodes of the CF-tree Scales linearlyScales linearly: finds a good clustering with a single scan: finds a good clustering with a single scan and improves the quality with a few additional scansand improves the quality with a few additional scans Weakness:Weakness: handles only numeric data, and sensitive to thehandles only numeric data, and sensitive to the order of the data record.order of the data record. Lecture-45 - Hierarchical MethodsLecture-45 - Hierarchical Methods
  • 39. Clustering FeatureClustering Feature VectorVector Clustering Feature: CF = (N, LS, SS) N: Number of data points LS: ∑N i=1=Xi SS: ∑N i=1=Xi 2 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 CF = (5, (16,30),(54,190)) (3,4) (2,6) (4,5) (4,7) (3,8) Lecture-45 - Hierarchical MethodsLecture-45 - Hierarchical Methods
  • 40. CF TreeCF Tree CF1 child1 CF3 child3 CF2 child2 CF6 child6 CF1 child1 CF3 child3 CF2 child2 CF5 child5 CF1 CF2 CF6 prev next CF1 CF2 CF4 prev next B = 7 L = 6 Root Non-leaf node Leaf node Leaf node Lecture-45 - Hierarchical MethodsLecture-45 - Hierarchical Methods
  • 41. CURECURE (Clustering(Clustering UsingUsing REpresentatives )REpresentatives ) CURE: proposed by Guha, Rastogi & Shim, 1998CURE: proposed by Guha, Rastogi & Shim, 1998  Stops the creation of a cluster hierarchy if a levelStops the creation of a cluster hierarchy if a level consists ofconsists of kk clustersclusters  Uses multiple representative points to evaluate theUses multiple representative points to evaluate the distance between clusters, adjusts well to arbitrarydistance between clusters, adjusts well to arbitrary shaped clusters and avoids single-link effectshaped clusters and avoids single-link effect Lecture-45 - Hierarchical MethodsLecture-45 - Hierarchical Methods
  • 42. Drawbacks of Distance-Drawbacks of Distance- Based MethodBased Method Drawbacks of square-error based clusteringDrawbacks of square-error based clustering methodmethod  Consider only one point as representative of a clusterConsider only one point as representative of a cluster  Good only for convex shaped, similar size andGood only for convex shaped, similar size and density, and ifdensity, and if kk can be reasonably estimatedcan be reasonably estimatedLecture-45 - Hierarchical MethodsLecture-45 - Hierarchical Methods
  • 43. Cure: The AlgorithmCure: The Algorithm  Draw random sampleDraw random sample ss..  Partition sample toPartition sample to pp partitions with sizepartitions with size s/ps/p  Partially cluster partitions intoPartially cluster partitions into s/pqs/pq clustersclusters  Eliminate outliersEliminate outliers By random samplingBy random sampling If a cluster grows too slow, eliminate it.If a cluster grows too slow, eliminate it.  Cluster partial clusters.Cluster partial clusters.  Label data in diskLabel data in disk Lecture-45 - Hierarchical MethodsLecture-45 - Hierarchical Methods
  • 44. Data Partitioning andData Partitioning and ClusteringClustering  s = 50s = 50  p = 2p = 2  s/p = 25s/p = 25 x x x y y y y x y x s/pq = 5 Lecture-45 - Hierarchical MethodsLecture-45 - Hierarchical Methods
  • 45. Cure: Shrinking RepresentativeCure: Shrinking Representative PointsPoints Shrink the multiple representative points towardsShrink the multiple representative points towards the gravity center by a fraction ofthe gravity center by a fraction of αα.. Multiple representatives capture the shape of theMultiple representatives capture the shape of the cluster x y x y Lecture-45 - Hierarchical MethodsLecture-45 - Hierarchical Methods
  • 46. Clustering CategoricalClustering Categorical Data: ROCKData: ROCK ROCK: Robust Clustering using linKs,ROCK: Robust Clustering using linKs, by S. Guha, R. Rastogi, K. Shim (ICDE’99).by S. Guha, R. Rastogi, K. Shim (ICDE’99).  Use links to measure similarity/proximityUse links to measure similarity/proximity  Not distance basedNot distance based  Computational complexity:Computational complexity: Basic ideas:Basic ideas:  Similarity function and neighbors:Similarity function and neighbors: LetLet TT11 = {1,2,3},= {1,2,3}, TT22={3,4,5}={3,4,5} O n nm m n nm a( log )2 2 + + Sim T T T T T T ( , )1 2 1 2 1 2 = ∩ ∪ Sim T T( , ) { } { , , , , } .1 2 3 1 2 3 4 5 1 5 0 2= = = Lecture-45 - Hierarchical MethodsLecture-45 - Hierarchical Methods
  • 47. Rock: AlgorithmRock: Algorithm Links: The number of common neighboursLinks: The number of common neighbours for the two points.for the two points. AlgorithmAlgorithm  Draw random sampleDraw random sample  Cluster with linksCluster with links  Label data in diskLabel data in disk {1,2,3}, {1,2,4}, {1,2,5}, {1,3,4}, {1,3,5} {1,4,5}, {2,3,4}, {2,3,5}, {2,4,5}, {3,4,5} {1,2,3} {1,2,4}3 Lecture-45 - Hierarchical MethodsLecture-45 - Hierarchical Methods
• 48. CHAMELEON
CHAMELEON: hierarchical clustering using dynamic modeling, by G. Karypis, E. H. Han and V. Kumar '99
- Measures the similarity based on a dynamic model
- Two clusters are merged only if the interconnectivity and closeness (proximity) between the two clusters are high relative to the internal interconnectivity of the clusters and the closeness of items within the clusters
A two-phase algorithm:
1. Use a graph-partitioning algorithm: cluster objects into a large number of relatively small sub-clusters
2. Use an agglomerative hierarchical clustering algorithm: find the genuine clusters by repeatedly combining these sub-clusters
Lecture-45 - Hierarchical Methods
• 49. Overall Framework of CHAMELEON
Data Set → Construct Sparse Graph → Partition the Graph → Merge Partitions → Final Clusters
Lecture-45 - Hierarchical Methods
• 51. Density-Based Clustering Methods
Clustering based on density (a local cluster criterion), such as density-connected points
Major features:
- Discover clusters of arbitrary shape
- Handle noise
- One scan
- Need density parameters as a termination condition
Several methods:
- DBSCAN
- OPTICS
- DENCLUE
- CLIQUE
Lecture-46 - Density-Based Methods
• 52. Density-Based Clustering
Two parameters:
- Eps: maximum radius of the neighbourhood
- MinPts: minimum number of points in an Eps-neighbourhood of that point
N_Eps(p): {q belongs to D | dist(p, q) <= Eps}
Directly density-reachable: a point p is directly density-reachable from a point q wrt. Eps, MinPts if
1) p belongs to N_Eps(q)
2) core point condition: |N_Eps(q)| >= MinPts
(Figure: p and q with MinPts = 5, Eps = 1 cm)
Lecture-46 - Density-Based Methods
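These two definitions translate directly into code. A small helper, assuming points are tuples and Euclidean distance (the example data set and parameter values are illustrative only):

```python
import math

def eps_neighborhood(D, p, eps):
    """N_Eps(p) = {q in D | dist(p, q) <= Eps}."""
    return [q for q in D if math.dist(p, q) <= eps]

def directly_density_reachable(p, q, D, eps, min_pts):
    """p is directly density-reachable from q (wrt Eps, MinPts) iff
    p is in N_Eps(q) and q is a core point: |N_Eps(q)| >= MinPts."""
    n_q = eps_neighborhood(D, q, eps)
    return p in n_q and len(n_q) >= min_pts

D = [(0, 0), (0.5, 0), (0, 0.5), (0.4, 0.4), (0.2, 0.3), (5, 5)]
print(directly_density_reachable((0.5, 0), (0, 0), D, eps=1.0, min_pts=5))  # True: (0,0) is a core point
print(directly_density_reachable((0, 0), (5, 5), D, eps=1.0, min_pts=5))    # False: (5,5) is not a core point
```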
• 53. Density-Based Clustering
Density-reachable:
- A point p is density-reachable from a point q wrt. Eps, MinPts if there is a chain of points p1, ..., pn, with p1 = q and pn = p, such that p_{i+1} is directly density-reachable from p_i
Density-connected:
- A point p is density-connected to a point q wrt. Eps, MinPts if there is a point o such that both p and q are density-reachable from o wrt. Eps and MinPts
(Figures: a chain from q to p; p and q both reachable from o)
Lecture-46 - Density-Based Methods
• 54. DBSCAN: Density-Based Spatial Clustering of Applications with Noise
- Relies on a density-based notion of cluster: a cluster is defined as a maximal set of density-connected points
- Discovers clusters of arbitrary shape in spatial databases with noise
(Figure: core, border and outlier points, with Eps = 1 cm, MinPts = 5)
Lecture-46 - Density-Based Methods
• 55. DBSCAN: The Algorithm
- Arbitrarily select a point p
- Retrieve all points density-reachable from p wrt. Eps and MinPts
- If p is a core point, a cluster is formed
- If p is a border point, no points are density-reachable from p, and DBSCAN visits the next point of the database
- Continue the process until all of the points have been processed
Lecture-46 - Density-Based Methods
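A compact sketch of this loop, assuming the points fit in memory as a NumPy array and labelling noise as -1; no spatial index is used, so every region query is a linear scan:

```python
import numpy as np

def dbscan(X, eps, min_pts):
    """Plain DBSCAN: expand a cluster around every unvisited core point."""
    n = len(X)
    labels = np.full(n, -1)            # -1 = noise / not yet assigned
    visited = np.zeros(n, dtype=bool)
    cluster_id = 0

    def region_query(i):               # indices of points within eps of X[i]
        return np.where(np.linalg.norm(X - X[i], axis=1) <= eps)[0]

    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        seeds = region_query(i)
        if len(seeds) < min_pts:       # border point or noise, for now
            continue
        labels[i] = cluster_id         # i is a core point: start a new cluster
        queue = list(seeds)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster_id
            if not visited[j]:
                visited[j] = True
                neighbors = region_query(j)
                if len(neighbors) >= min_pts:   # j is also a core point
                    queue.extend(neighbors)
        cluster_id += 1
    return labels

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2), rng.randn(50, 2) + 8, [[30, 30]]])
print(np.unique(dbscan(X, eps=1.0, min_pts=5)))   # typically two clusters plus noise (-1)
```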
• 56. OPTICS: A Cluster-Ordering Method
OPTICS: Ordering Points To Identify the Clustering Structure
- Ankerst, Breunig, Kriegel, and Sander (SIGMOD'99)
- Produces a special ordering of the database wrt. its density-based clustering structure
- This cluster ordering contains information equivalent to the density-based clusterings corresponding to a broad range of parameter settings
- Good for both automatic and interactive cluster analysis, including finding the intrinsic clustering structure
- Can be represented graphically or using visualization techniques
Lecture-46 - Density-Based Methods
• 57. OPTICS: Some Extensions from DBSCAN
Index-based:
- k = number of dimensions, N = 20, p = 75%, M = N(1 - p) = 5
- Complexity: O(kN²)
Core distance: the smallest distance that makes an object o a core object (undefined if o is not a core object within ε)
Reachability distance of p wrt. o: max(core-distance(o), d(o, p))
(Figure: MinPts = 5, ε = 3 cm; r(p1, o) = 2.8 cm, r(p2, o) = 4 cm)
Lecture-46 - Density-Based Methods
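A small sketch of the two distances OPTICS records per object, assuming Euclidean points and following the definitions above (whether the object is counted as its own neighbour varies between formulations; here it is not):

```python
import math

def core_distance(D, o, eps, min_pts):
    """Distance to the MinPts-th nearest neighbour of o, or None (undefined)
    if o has fewer than MinPts neighbours within eps."""
    dists = sorted(math.dist(o, q) for q in D if q != o)
    if sum(d <= eps for d in dists) < min_pts:
        return None
    return dists[min_pts - 1]

def reachability_distance(D, p, o, eps, min_pts):
    """max(core-distance(o), d(o, p)); undefined if o is not a core object."""
    cd = core_distance(D, o, eps, min_pts)
    return None if cd is None else max(cd, math.dist(o, p))

D = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0), (0.5, 0.5), (9, 9)]
print(core_distance(D, (0, 0), eps=3, min_pts=5))                 # 2.0
print(reachability_distance(D, (2, 0), (0, 0), eps=3, min_pts=5)) # max(2.0, 2.0) = 2.0
```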
• 58. (Figure: reachability-distance plotted against the cluster order of the objects, for ε and a smaller ε'; the reachability value is undefined where no ε-neighbourhood exists)
Lecture-46 - Density-Based Methods
• 59. DENCLUE: Using Density Functions
DENsity-based CLUstEring by Hinneburg & Keim (KDD'98)
Major features:
- Solid mathematical foundation
- Good for data sets with large amounts of noise
- Allows a compact mathematical description of arbitrarily shaped clusters in high-dimensional data sets
- Significantly faster than existing algorithms (faster than DBSCAN by a factor of up to 45)
- But needs a large number of parameters
Lecture-46 - Density-Based Methods
• 60. DENCLUE: Technical Essence
- Uses grid cells, but only keeps information about grid cells that actually contain data points, and manages these cells in a tree-based access structure
- Influence function: describes the impact of a data point within its neighborhood
- The overall density of the data space can be calculated as the sum of the influence functions of all data points
- Clusters can be determined mathematically by identifying density attractors
- Density attractors are local maxima of the overall density function
Lecture-46 - Density-Based Methods
• 61. Gradient: The Steepness of a Slope
Example (Gaussian influence function):
f_Gaussian(x, y) = exp(-d(x, y)² / (2σ²))
f^D_Gaussian(x) = Σ_{i=1..N} exp(-d(x, x_i)² / (2σ²))
∇f^D_Gaussian(x, x_i) = Σ_{i=1..N} (x_i - x) · exp(-d(x, x_i)² / (2σ²))
Lecture-46 - Density-Based Methods
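These three formulas transcribe directly to NumPy. A sketch, assuming Euclidean distance d and a single bandwidth σ (the data set and query point below are illustrative):

```python
import numpy as np

def influence_gaussian(x, y, sigma=1.0):
    """f_Gaussian(x, y) = exp(-d(x, y)^2 / (2*sigma^2))"""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def density(x, data, sigma=1.0):
    """Overall density at x: sum of the influence of every data point x_i."""
    return sum(influence_gaussian(x, xi, sigma) for xi in data)

def density_gradient(x, data, sigma=1.0):
    """Gradient of the density: sum_i (x_i - x) * f_Gaussian(x, x_i).
    Hill-climbing along this gradient leads to a density attractor."""
    return sum((xi - x) * influence_gaussian(x, xi, sigma) for xi in data)

data = np.random.RandomState(1).randn(200, 2)
x = np.array([0.5, -0.5])
print(density(x, data), density_gradient(x, data))
```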
• 62. Density Attractor
Lecture-46 - Density-Based Methods
• 63. Center-Defined and Arbitrary
Lecture-46 - Density-Based Methods
• 65. Grid-Based Clustering Methods
Using a multi-resolution grid data structure
Several interesting methods:
- STING (a STatistical INformation Grid approach)
- WaveCluster: a multi-resolution clustering approach using the wavelet method
- CLIQUE
Lecture-47 - Grid-Based Methods
• 66. STING: A Statistical Information Grid Approach
- Wang, Yang and Muntz (VLDB'97)
- The spatial area is divided into rectangular cells
- There are several levels of cells corresponding to different levels of resolution
Lecture-47 - Grid-Based Methods
• 67. STING: A Statistical Information Grid Approach
- Each cell at a high level is partitioned into a number of smaller cells at the next lower level
- Statistical info of each cell is calculated and stored beforehand and is used to answer queries
- Parameters of higher-level cells can easily be calculated from the parameters of lower-level cells: count, mean, s, min, max, type of distribution (normal, uniform, etc.)
- Use a top-down approach to answer spatial data queries
- Start from a pre-selected layer, typically one with a small number of cells
- For each cell in the current level, compute the confidence interval
Lecture-47 - Grid-Based Methods
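A sketch of how a parent cell's statistics can be derived from its children's without revisiting the raw points, assuming each cell stores (count, mean, s, min, max) with s the standard deviation (the dictionary layout and example values are illustrative, not from the STING paper):

```python
import math

def merge_cells(children):
    """Combine (count, mean, s, min, max) statistics of child cells
    into the statistics of their parent cell."""
    n = sum(c["count"] for c in children)
    mean = sum(c["count"] * c["mean"] for c in children) / n
    # E[X^2] of the parent, recovered from each child's mean and std.
    ex2 = sum(c["count"] * (c["s"] ** 2 + c["mean"] ** 2) for c in children) / n
    return {
        "count": n,
        "mean": mean,
        "s": math.sqrt(max(ex2 - mean ** 2, 0.0)),
        "min": min(c["min"] for c in children),
        "max": max(c["max"] for c in children),
    }

c1 = {"count": 10, "mean": 2.0, "s": 0.5, "min": 1.0, "max": 3.0}
c2 = {"count": 30, "mean": 4.0, "s": 1.0, "min": 2.5, "max": 7.0}
print(merge_cells([c1, c2]))
```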
• 68. STING: A Statistical Information Grid Approach
- Remove the irrelevant cells from further consideration
- When the current layer has been examined, proceed to the next lower level
- Repeat this process until the bottom layer is reached
Advantages:
- Query-independent, easy to parallelize, incremental update
- O(K), where K is the number of grid cells at the lowest level
Disadvantages:
- All cluster boundaries are either horizontal or vertical; no diagonal boundary is detected
Lecture-47 - Grid-Based Methods
• 69. WaveCluster
- Sheikholeslami, Chatterjee, and Zhang (VLDB'98)
- A multi-resolution clustering approach which applies a wavelet transform to the feature space
- A wavelet transform is a signal-processing technique that decomposes a signal into different frequency sub-bands
- Both grid-based and density-based
Input parameters:
- Number of grid cells for each dimension
- The wavelet, and the number of applications of the wavelet transform
Lecture-47 - Grid-Based Methods
• 70. WaveCluster
How to apply the wavelet transform to find clusters:
- Summarize the data by imposing a multidimensional grid structure onto the data space
- These multidimensional spatial data objects are represented in an n-dimensional feature space
- Apply the wavelet transform on the feature space to find the dense regions in the feature space
- Apply the wavelet transform multiple times, which results in clusters at different scales from fine to coarse
Lecture-47 - Grid-Based Methods
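A toy sketch of the first two steps, assuming 2-D points quantized into a fixed grid of cell counts; the transform below is a single averaging (Haar low-pass) step standing in for a full wavelet transform, only to show how dense regions survive while isolated points are smoothed away:

```python
import numpy as np

def quantize(points, cells=8):
    """Impose a cells x cells grid on the 2-D data space and count the
    number of points falling into each cell (the feature space)."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    idx = np.floor((points - mins) / (maxs - mins + 1e-12) * cells).astype(int)
    idx = np.clip(idx, 0, cells - 1)
    grid = np.zeros((cells, cells))
    for i, j in idx:
        grid[i, j] += 1
    return grid

def haar_lowpass(grid):
    """One level of a 2-D Haar low-pass step: average 2x2 blocks.
    Dense regions keep high values; sparse noise is attenuated."""
    return (grid[0::2, 0::2] + grid[1::2, 0::2]
            + grid[0::2, 1::2] + grid[1::2, 1::2]) / 4.0

pts = np.vstack([np.random.RandomState(0).randn(200, 2),
                 np.random.RandomState(1).randn(200, 2) + 6])
g = quantize(pts, cells=8)
print(haar_lowpass(g))   # a coarser 4x4 view of the dense regions
```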
• 71. What Is a Wavelet?
Lecture-47 - Grid-Based Methods
• 72. Quantization
Lecture-47 - Grid-Based Methods
• 73. Transformation
Lecture-47 - Grid-Based Methods
• 74. WaveCluster
Why is the wavelet transformation useful for clustering?
- Unsupervised clustering
- It uses hat-shaped filters to emphasize regions where points cluster, and simultaneously suppresses weaker information at their boundaries
- Effective removal of outliers
- Multi-resolution
- Cost efficiency
Major features:
- Complexity O(N)
- Detects arbitrarily shaped clusters at different scales
- Not sensitive to noise, not sensitive to input order
- Only applicable to low-dimensional data
Lecture-47 - Grid-Based Methods
• 75. CLIQUE (Clustering In QUEst)
- Agrawal, Gehrke, Gunopulos, Raghavan (SIGMOD'98)
- Automatically identifies subspaces of a high-dimensional data space that allow better clustering than the original space
CLIQUE can be considered as both density-based and grid-based:
- It partitions each dimension into the same number of equal-length intervals
- It partitions an m-dimensional data space into non-overlapping rectangular units
- A unit is dense if the fraction of total data points contained in the unit exceeds the input model parameter
- A cluster is a maximal set of connected dense units within a subspace
Lecture-47 - Grid-Based Methods
• 76. CLIQUE: The Major Steps
- Partition the data space and find the number of points that lie inside each cell of the partition
- Identify the subspaces that contain clusters using the Apriori principle
- Identify clusters:
  - Determine dense units in all subspaces of interest
  - Determine connected dense units in all subspaces of interest
- Generate a minimal description for the clusters:
  - Determine maximal regions that cover a cluster of connected dense units for each cluster
  - Determine the minimal cover for each cluster
Lecture-47 - Grid-Based Methods
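A sketch of the first two steps for 1-D units plus the Apriori-style candidate generation, assuming ξ equal-length intervals per dimension and a density threshold τ on the number of points per unit; the function names and the candidate check are illustrative, not taken from the CLIQUE paper:

```python
import numpy as np
from itertools import combinations

def dense_units_1d(X, xi=10, tau=3):
    """For each dimension, partition it into xi equal-length intervals and
    keep the intervals containing more than tau points (dense 1-D units)."""
    dense = {}
    for d in range(X.shape[1]):
        col = X[:, d]
        edges = np.linspace(col.min(), col.max(), xi + 1)
        counts, _ = np.histogram(col, bins=edges)
        dense[d] = {i for i, c in enumerate(counts) if c > tau}
    return dense

def candidate_2d_units(dense):
    """Apriori principle: a unit can only be dense in subspace (d1, d2) if its
    projections onto d1 and d2 are dense; generate only those candidates."""
    return {(d1, d2): {(u1, u2) for u1 in dense[d1] for u2 in dense[d2]}
            for d1, d2 in combinations(sorted(dense), 2)}

X = np.random.RandomState(0).randn(300, 3)
d1 = dense_units_1d(X, xi=10, tau=3)
print({d: sorted(u) for d, u in d1.items()})
print(len(candidate_2d_units(d1)[(0, 1)]))  # candidates still need their own count check
```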
• 77. (Figure: CLIQUE example over age (20-60), salary (in units of 10,000) and vacation (weeks), with density threshold τ = 3; dense units in the (age, salary) and (age, vacation) subspaces overlap around ages 30-50)
Lecture-47 - Grid-Based Methods
• 78. Strengths and Weaknesses of CLIQUE
Strengths:
- It automatically finds subspaces of the highest dimensionality such that high-density clusters exist in those subspaces
- It is insensitive to the order of records in the input and does not presume some canonical data distribution
- It scales linearly with the size of the input and has good scalability as the number of dimensions in the data increases
Weakness:
- The accuracy of the clustering result may be degraded at the expense of the simplicity of the method
Lecture-47 - Grid-Based Methods
• 80. Model-Based Clustering Methods
Attempt to optimize the fit between the data and some mathematical model
Statistical and AI approach:
- Conceptual clustering
  - A form of clustering in machine learning
  - Produces a classification scheme for a set of unlabeled objects
  - Finds a characteristic description for each concept (class)
- COBWEB
  - A popular and simple method of incremental conceptual learning
  - Creates a hierarchical clustering in the form of a classification tree
  - Each node refers to a concept and contains a probabilistic description of that concept
Lecture-48 - Model-Based Clustering Methods
• 81. COBWEB Clustering Method
(Figure: a classification tree)
Lecture-48 - Model-Based Clustering Methods
• 82. More on Statistical-Based Clustering
Limitations of COBWEB:
- The assumption that the attributes are independent of each other is often too strong because correlations may exist
- Not suitable for clustering large database data: skewed tree and expensive probability distributions
CLASSIT:
- An extension of COBWEB for incremental clustering of continuous data
- Suffers from similar problems as COBWEB
AutoClass (Cheeseman and Stutz, 1996):
- Uses Bayesian statistical analysis to estimate the number of clusters
- Popular in industry
Lecture-48 - Model-Based Clustering Methods
• 83. Other Model-Based Clustering Methods
Neural network approaches:
- Represent each cluster as an exemplar, acting as a "prototype" of the cluster
- New objects are distributed to the cluster whose exemplar is the most similar, according to some distance measure
Competitive learning:
- Involves a hierarchical architecture of several units (neurons)
- Neurons compete in a "winner-takes-all" fashion for the object currently being presented
Lecture-48 - Model-Based Clustering Methods
• 84. Model-Based Clustering Methods
Lecture-48 - Model-Based Clustering Methods
• 85. Self-Organizing Feature Maps (SOMs)
- Clustering is also performed by having several units competing for the current object
- The unit whose weight vector is closest to the current object wins
- The winner and its neighbors learn by having their weights adjusted
- SOMs are believed to resemble processing that can occur in the brain
- Useful for visualizing high-dimensional data in 2- or 3-D space
Lecture-48 - Model-Based Clustering Methods
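A minimal sketch of one SOM training step on a 2-D grid of units, assuming a Gaussian neighborhood around the winning unit and a fixed learning rate; it shows the "closest weight vector wins, winner and neighbors get adjusted" rule in code (grid size, rate and radius are illustrative):

```python
import numpy as np

def som_step(weights, x, lr=0.1, radius=1.0):
    """One update: find the best-matching unit for x, then pull the winner
    and its grid neighbors towards x, weighted by distance on the grid."""
    rows, cols, dim = weights.shape
    # Winner = unit whose weight vector is closest to the input x.
    dists = np.linalg.norm(weights - x, axis=2)
    wi, wj = np.unravel_index(np.argmin(dists), dists.shape)
    # Gaussian neighborhood on the 2-D grid of units.
    ii, jj = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    grid_d2 = (ii - wi) ** 2 + (jj - wj) ** 2
    h = np.exp(-grid_d2 / (2 * radius ** 2))[..., None]
    return weights + lr * h * (x - weights)

rng = np.random.RandomState(0)
W = rng.rand(5, 5, 2)                 # 5x5 map of 2-D weight vectors
for x in rng.rand(500, 2):            # present the objects one at a time
    W = som_step(W, x, lr=0.05, radius=1.0)
print(W[0, 0], W[4, 4])               # opposite corners end up in different regions
```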
• 87. What Is Outlier Discovery?
What are outliers?
- A set of objects that are considerably dissimilar from the remainder of the data
- Example: sports: Michael Jordan, Wayne Gretzky, ...
Problem:
- Find the top n outlier points
Applications:
- Credit card fraud detection
- Telecom fraud detection
- Customer segmentation
- Medical analysis
Lecture-49 - Outlier Analysis
• 88. Outlier Discovery: Statistical Approaches
- Assume a model of the underlying distribution that generates the data set (e.g. normal distribution)
- Use discordancy tests depending on:
  - the data distribution
  - the distribution parameters (e.g., mean, variance)
  - the number of expected outliers
Drawbacks:
- Most tests are for a single attribute
- In many cases, the data distribution may not be known
Lecture-49 - Outlier Analysis
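A toy example of a discordancy-style check under the normality assumption: a plain z-score cutoff on a single attribute. This is a simplification; real discordancy tests (e.g. Grubbs' test) use critical values that depend on the sample size:

```python
import numpy as np

def zscore_outliers(x, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean,
    assuming the data come from a (roughly) normal distribution."""
    z = np.abs((x - x.mean()) / x.std())
    return np.where(z > threshold)[0]

x = np.concatenate([np.random.RandomState(0).normal(50, 5, 1000), [95.0]])
print(zscore_outliers(x))   # indices of discordant values, including the injected 95
```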
• 89. Outlier Discovery: Distance-Based Approach
- Introduced to counter the main limitations imposed by statistical methods: we need multi-dimensional analysis without knowing the data distribution
- Distance-based outlier: a DB(p, D)-outlier is an object O in a dataset T such that at least a fraction p of the objects in T lie at a distance greater than D from O
Algorithms for mining distance-based outliers:
- Index-based algorithm
- Nested-loop algorithm
- Cell-based algorithm
Lecture-49 - Outlier Analysis
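A sketch in the nested-loop spirit, assuming the data fit in memory; the real nested-loop algorithm blocks the data to minimize I/O, whereas this version simply checks the DB(p, D) condition for every object (parameter values are illustrative):

```python
import numpy as np

def db_outliers(X, p=0.95, D=5.0):
    """DB(p, D)-outliers: objects O such that at least a fraction p of the
    objects in X lies at distance greater than D from O."""
    n = len(X)
    outliers = []
    for i in range(n):
        far = np.sum(np.linalg.norm(X - X[i], axis=1) > D)
        if far >= p * n:           # at least a fraction p is farther than D
            outliers.append(i)
    return outliers

rng = np.random.RandomState(0)
X = np.vstack([rng.randn(200, 2), [[15, 15]]])
print(db_outliers(X, p=0.95, D=5.0))   # flags the isolated point at (15, 15)
```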
• 90. Outlier Discovery: Deviation-Based Approach
- Identifies outliers by examining the main characteristics of objects in a group
- Objects that "deviate" from this description are considered outliers
Sequential exception technique:
- Simulates the way in which humans can distinguish unusual objects from among a series of supposedly similar objects
OLAP data cube technique:
- Uses data cubes to identify regions of anomalies in large multidimensional data
Lecture-49 - Outlier Analysis