
{{DISPLAYTITLE:k-medoids}}

The {{math|k}}-medoids or PAM algorithm is a clustering algorithm reminiscent of the {{math|k}}-means algorithm. Both the {{math|k}}-means and {{math|k}}-medoids algorithms are partitional (they break the data set up into groups) and both attempt to minimize the distance between points labeled to be in a cluster and a point designated as the center of that cluster. In contrast to the {{math|k}}-means algorithm, {{math|k}}-medoids chooses data points as centers (medoids or exemplars) and can be used with arbitrary distances, while in {{math|k}}-means the center of a cluster is not necessarily one of the input data points (it is the average of the points in the cluster). The PAM method was proposed in 1987[1] for the work with the {{math|L1}} norm and other distances.

{{math|k}}-medoids is a classical partitioning technique of clustering that splits a data set of {{math|n}} objects into {{math|k}} clusters, where the number {{math|k}} of clusters is assumed known a priori (which implies that the programmer must specify {{math|k}} before the execution of the algorithm). The "goodness" of a given value of {{math|k}} can be assessed with methods such as the silhouette.

It is more robust to noise and outliers than {{math|k}}-means because it minimizes a sum of pairwise dissimilarities instead of a sum of squared Euclidean distances.

A medoid can be defined as the object of a cluster whose average dissimilarity to all the objects in the cluster is minimal, that is, it is a most centrally located point in the cluster.

Algorithms

The most common realisation of {{math|k}}-medoids clustering is the Partitioning Around Medoids (PAM) algorithm. PAM uses a greedy search which may not find the optimum solution, but it is faster than exhaustive search. It works as follows (a code sketch is given after the list):

  1. Initialize: select {{mvar|k}} of the {{mvar|n}} data points as the medoids
  2. Associate each data point to the closest medoid.
  3. While the cost of the configuration decreases:
    1. For each medoid {{mvar|m}}, for each non-medoid data point {{mvar|o}}:
    2. Swap {{mvar|m}} and {{mvar|o}}, associate each data point to the closest medoid, recompute the cost (sum of distances of points to their medoid)
    3. If the total cost of the configuration increased in the previous step, undo the swap
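
The following is a minimal Python sketch of this greedy swap search, assuming Manhattan distance and a naive initialisation from the first {{mvar|k}} points; the names pam, total_cost and manhattan are illustrative rather than the API of any library, and the sketch accepts any improving swap rather than only the best swap per pass:

<syntaxhighlight lang="python">
def manhattan(a, b):
    """Manhattan (L1) distance between two points given as tuples."""
    return sum(abs(x - y) for x, y in zip(a, b))

def total_cost(points, medoids, dist=manhattan):
    """Sum of the distances of every point to its closest medoid."""
    return sum(min(dist(p, m) for m in medoids) for p in points)

def pam(points, k, dist=manhattan):
    """Greedy PAM sketch: start from the first k points as medoids, then
    repeatedly try medoid/non-medoid swaps and keep any swap that lowers the cost."""
    medoids = list(points[:k])                    # step 1: naive initialisation
    best = total_cost(points, medoids, dist)
    improved = True
    while improved:                               # step 3: while the cost decreases
        improved = False
        for i in range(k):
            for o in points:
                if o in medoids:
                    continue
                candidate = medoids[:i] + [o] + medoids[i + 1:]   # swap medoid i with o
                cost = total_cost(points, candidate, dist)
                if cost < best:                   # keep the swap only if it helps
                    medoids, best = candidate, cost
                    improved = True
    # final assignment (step 2): label each point with its closest medoid
    labels = [min(range(k), key=lambda j: dist(p, medoids[j])) for p in points]
    return medoids, labels, best
</syntaxhighlight>

For example, pam(data, 2) on the ten-point data set used in the demonstration below returns two medoids, a cluster label for every point, and the final configuration cost.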

The runtime complexity of the original PAM algorithm per iteration of (3) is {{math|O(k(n − k)²)}}, but this can be reduced to {{math|O(n²)}}.[2]

Algorithms other than PAM have also been suggested in the literature, including the following Voronoi iteration method (sketched in code below):[3][5]

  1. Select initial medoids
  2. Iterate while the cost decreases:
    1. In each cluster, make the point that minimizes the sum of distances within the cluster the medoid
    2. Reassign each point to the cluster defined by the closest medoid determined in the previous step.

However, Voronoi iteration finds much worse results, as it does not allow reassigning points to other clusters while changing means, and thus only explores a much smaller search space.[2]
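
For comparison, here is a minimal Python sketch of the Voronoi iteration scheme described above, again assuming Manhattan distance; voronoi_iteration is an illustrative name and empty clusters are not handled:

<syntaxhighlight lang="python">
def manhattan(a, b):
    """Manhattan (L1) distance between two points given as tuples."""
    return sum(abs(x - y) for x, y in zip(a, b))

def voronoi_iteration(points, init_medoids, dist=manhattan):
    """Alternate between reassigning points to the nearest medoid and
    re-electing each cluster's medoid, until the total cost stops decreasing.
    Empty clusters are not handled in this simple sketch."""
    medoids = list(init_medoids)
    best = sum(min(dist(p, m) for m in medoids) for p in points)
    while True:
        # assign every point to the cluster of its closest current medoid
        clusters = [[] for _ in medoids]
        for p in points:
            nearest = min(range(len(medoids)), key=lambda j: dist(p, medoids[j]))
            clusters[nearest].append(p)
        # within each cluster, the new medoid is the point with the smallest
        # sum of distances to the other points of that cluster
        medoids = [min(c, key=lambda cand: sum(dist(cand, q) for q in c))
                   for c in clusters]
        cost = sum(min(dist(p, m) for m in medoids) for p in points)
        if cost >= best:        # stop once no further improvement
            return medoids, best
        best = cost
</syntaxhighlight>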

The approximate algorithms CLARA and CLARANS trade runtime for optimality. CLARA applies PAM on multiple subsamples, keeping the best result. CLARANS works on the entire data set, but only explores a subset of the possible swaps of medoids and non-medoids.

Demonstration of PAM


Cluster the following data set of ten objects into two clusters, i.e. {{math|k {{=}} 2}}. The ten objects, given as points {{math|(x, y)}}, are:

Object   x   y
X1       2   6
X2       3   4
X3       3   8
X4       4   7
X5       6   2
X6       6   4
X7       7   3
X8       7   4
X9       8   5
X10      7   6
{{Clear}}

Step 1

Two observations, {{math|c1 {{=}} X2 {{=}} (3, 4)}} and {{math|c2 {{=}} X8 {{=}} (7, 4)}}, are randomly selected as medoids (cluster centers). Manhattan distances are calculated from each data object to each medoid, and each object is associated with its nearest medoid (the smaller of the two distances in each row below).

Data object        Distance to c1 (3, 4)   Distance to c2 (7, 4)
X1  (2, 6)         3                       7
X2  (3, 4)         0                       4
X3  (3, 8)         4                       8
X4  (4, 7)         4                       6
X5  (6, 2)         5                       3
X6  (6, 4)         3                       1
X7  (7, 3)         5                       1
X8  (7, 4)         4                       0
X9  (8, 5)         6                       2
X10 (7, 6)         6                       2
Cost               11                      9

The points {{math|(2, 6)}}, {{math|(3, 4)}}, {{math|(3, 8)}} and {{math|(4, 7)}} are closer to {{math|c1}} and therefore form one cluster, whilst the remaining points form the other. Therefore, the clusters become:

{{math|Cluster1 {{=}} {(2,6)(3,4)(3,8)(4,7)} }}

{{math|Cluster2 {{=}} {(7,4)(6,2)(6,4)(7,3)(8,5)(7,6)} }}

The total cost of this clustering is the sum of the distances between each data point and its cluster center: {{math|total cost {{=}} 11 + 9 {{=}} 20}} (the short computation below verifies these numbers).
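
Assuming Manhattan distance as above, a few lines of illustrative Python reproduce the two column costs and the total:

<syntaxhighlight lang="python">
# Coordinates of the ten objects and the two initial medoids c1 = X2, c2 = X8.
data = {"X1": (2, 6), "X2": (3, 4), "X3": (3, 8), "X4": (4, 7), "X5": (6, 2),
        "X6": (6, 4), "X7": (7, 3), "X8": (7, 4), "X9": (8, 5), "X10": (7, 6)}
c1, c2 = data["X2"], data["X8"]

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

# Cost of each cluster: sum of distances of the points assigned to that medoid.
cost1 = sum(manhattan(p, c1) for p in data.values()
            if manhattan(p, c1) <= manhattan(p, c2))
cost2 = sum(manhattan(p, c2) for p in data.values()
            if manhattan(p, c2) < manhattan(p, c1))
print(cost1, cost2, cost1 + cost2)   # prints: 11 9 20
</syntaxhighlight>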

Step 2

Select one of the non-medoid points, {{mvar|O′}}.

Let us assume {{math|O′ {{=}} (7,3)}}, i.e. {{math|x7}}.

So now the medoids are {{math|c1(3,4)}} and {{math|O′(7,3)}}

With {{tmath|c_1}} and {{mvar|O′}} as the new medoids, the total cost involved is recalculated using the same distance formula as in step 1:

Distances to medoid {{math|c1 {{=}} (3, 4)}}:

i    c1        Data object (Xi)   Cost (distance)
1    (3, 4)    (2, 6)             3
3    (3, 4)    (3, 8)             4
4    (3, 4)    (4, 7)             4
5    (3, 4)    (6, 2)             5
6    (3, 4)    (6, 4)             3
8    (3, 4)    (7, 4)             4
9    (3, 4)    (8, 5)             6
10   (3, 4)    (7, 6)             6

Distances to medoid {{math|O′ {{=}} (7, 3)}}:

i    O′        Data object (Xi)   Cost (distance)
1    (7, 3)    (2, 6)             8
3    (7, 3)    (3, 8)             9
4    (7, 3)    (4, 7)             7
5    (7, 3)    (6, 2)             2
6    (7, 3)    (6, 4)             2
8    (7, 3)    (7, 4)             1
9    (7, 3)    (8, 5)             3
10   (7, 3)    (7, 6)             3
{{Clear}}

Each non-medoid point is assigned to the nearer of the two candidate medoids, giving a new total cost of {{math|3 + 4 + 4 + 2 + 2 + 1 + 3 + 3 {{=}} 22}}. So the cost of swapping the medoid from {{math|c2}} to {{mvar|O′}} is

{{math|S {{=}} current total cost − past total cost {{=}} 22 − 20 {{=}} 2 > 0}}

Since the swap would increase the total cost, moving to {{mvar|O′}} would be a bad idea, so the previous choice of medoids is kept. The algorithm tries the other non-medoid points in the same way and stops when no swap decreases the total cost (the computation below checks this swap).
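
The same kind of illustrative Python check, again assuming Manhattan distance, reproduces the swap evaluation:

<syntaxhighlight lang="python">
data = {"X1": (2, 6), "X2": (3, 4), "X3": (3, 8), "X4": (4, 7), "X5": (6, 2),
        "X6": (6, 4), "X7": (7, 3), "X8": (7, 4), "X9": (8, 5), "X10": (7, 6)}

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def total_cost(medoids):
    # Every object contributes its distance to the nearest medoid.
    return sum(min(manhattan(p, m) for m in medoids) for p in data.values())

old = total_cost([data["X2"], data["X8"]])   # medoids (3, 4) and (7, 4): cost 20
new = total_cost([data["X2"], data["X7"]])   # medoids (3, 4) and (7, 3): cost 22
print(new - old)                             # S = 2 > 0, so the swap is rejected
</syntaxhighlight>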

During this process, some data points may shift from one cluster to another, depending on their closeness to the updated medoids.

In some standard situations, k-medoids demonstrates better performance than k-means.

The most time-consuming part of the k-medoids algorithm is the calculation of the distances between objects. If quadratic preprocessing time and storage are acceptable, the distance matrix can be precomputed to achieve a consequent speed-up (a brief sketch is given below). See for example,[5] where the authors also introduce a heuristic to choose the initial {{mvar|k}} medoids.
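
As an illustration of this idea, the following NumPy sketch (not taken from any particular library) precomputes the Manhattan distance matrix for the demonstration data set and evaluates the cost of a medoid configuration by table lookup:

<syntaxhighlight lang="python">
import numpy as np

# The ten demonstration objects, one row per point.
X = np.array([[2, 6], [3, 4], [3, 8], [4, 7], [6, 2],
              [6, 4], [7, 3], [7, 4], [8, 5], [7, 6]])

# Precompute the n-by-n Manhattan distance matrix once (O(n^2) time and memory);
# every subsequent cost evaluation is then just a lookup into D.
D = np.abs(X[:, None, :] - X[None, :, :]).sum(axis=-1)

medoid_idx = [1, 7]                          # indices of the medoids (3, 4) and (7, 4)
print(D[:, medoid_idx].min(axis=1).sum())    # total cost of this configuration: 20
</syntaxhighlight>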

Software

  • ELKI includes several k-means variants, including an EM-based k-medoids and the original PAM algorithm.
  • Julia contains a k-medoid implementation in the JuliaStats clustering package.
  • KNIME includes a k-medoid implementation supporting a variety of efficient matrix distance measures, as well as a number of native (and integrated third-party) k-means implementations
  • R includes variants of k-means in the "flexclust" package and PAM is implemented in the "cluster" package.
  • RapidMiner has an operator named KMedoids, but it does not implement the k-medoids algorithm correctly. Instead, it is a k-means variant that substitutes the mean with the closest data point (which is not the medoid).
  • MATLAB implements PAM, CLARA, and two other algorithms to solve the k-medoid clustering problem.

References

1. ^Kaufman, L. and Rousseeuw, P.J. (1987), Clustering by means of Medoids, in Statistical Data Analysis Based on the L1-Norm and Related Methods, edited by Y. Dodge, North-Holland, 405–416.
2. ^{{cite arxiv|last=Schubert|first=Erich|last2=Rousseeuw|first2=Peter J.|date=2018-10-12|title=Faster k-Medoids Clustering: Improving the PAM, CLARA, and CLARANS Algorithms|eprint=1810.05691|class=cs.LG}}
3. ^T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning, Springer (2001), 468–469.
4. ^The illustration was prepared with the Java applet, E.M. Mirkes, K-means and K-medoids: applet. University of Leicester, 2011.
5. ^H.S. Park, C.H. Jun, A simple and fast algorithm for K-medoids clustering, Expert Systems with Applications, 36 (2) (2009), 3336–3341.
{{DEFAULTSORT:K-Medoids}}

Category: Cluster analysis algorithms
