The k-Means Algorithm
Using scikit-learn
Important constructor parameters of KMeans
Important attributes of KMeans:
Important instance methods of KMeans:
Option 1: fit first, then compute distances and labels separately
Option 2: fit_predict, then compute the distances
Option 3: fit_transform, then predict to get the labels
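A minimal sketch of the three workflows, using a made-up toy DataFrame `df` with columns `x` and `y` (the same column names that reappear in the snippets further down); each option is normally used on its own:

```python
import pandas as pd
from sklearn.cluster import KMeans

# Toy data, just for illustration
df = pd.DataFrame({"x": [1, 1, 1, 10, 10, 10],
                   "y": [2, 4, 0, 2, 4, 0]})

kmeans = KMeans(n_clusters=2, n_init="auto", random_state=42)

# Option 1: fit first, then compute distances and labels separately
kmeans.fit(df[["x", "y"]])
distances = kmeans.transform(df[["x", "y"]])
labels = kmeans.predict(df[["x", "y"]])

# Option 2: fit_predict gives the labels; transform gives the distances
labels2 = kmeans.fit_predict(df[["x", "y"]])
distances2 = kmeans.transform(df[["x", "y"]])

# Option 3: fit_transform gives the distances; predict gives the labels
distances3 = kmeans.fit_transform(df[["x", "y"]])
labels3 = kmeans.predict(df[["x", "y"]])

# With a fixed random_state and identical data, all three produce the same
# centers, distances, and labels.
```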
## Comparison of the Three Approaches
### Functional Equivalence
The three approaches are **functionally equivalent**, provided the model is fitted only once. They all yield the same cluster centers, distance matrix, and labels.
### Differences and How to Choose
1. **Readability and explicitness**:
– Option 1 is the most explicit: every step is separate, which makes the workflow easy to follow
– Options 2 and 3 combine steps, so the code is more concise
2. **Intended use**:
– If you mainly care about the labels, Option 2 is the most direct
– If you mainly care about the distance matrix, Option 3 is the more natural fit
– If you need full control over every step, Option 1 is more suitable
3. **Performance**:
– All three perform essentially the same, because the underlying implementation is shared
– `fit_transform()` and `fit_predict()` both run `fit()` first and then the corresponding transformation or prediction
## A Closer Look at Each Method
1. **`fit()`**:
– Purpose: trains the model and computes the cluster centers
– Returns: the fitted model instance
2. **`fit_transform()`**:
– Purpose: trains the model and computes each point's distance to every cluster center
– Returns: a distance matrix of shape [n_samples, n_clusters]
3. **`fit_predict()`**:
– Purpose: trains the model and assigns a cluster label to each point
– Returns: a label array of shape [n_samples]
4. **`transform()`**:
– Purpose: computes the distance from data points to the existing cluster centers
– Prerequisite: the model must already be fitted
5. **`predict()`**:
– Purpose: assigns each data point to its nearest cluster
– Prerequisite: the model must already be fitted (see the sketch after this list)
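The prerequisite for `transform()` and `predict()` is easiest to see on new, unseen points. A minimal sketch (the toy data here is made up for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1, 2], [1, 4], [1, 0],
              [10, 2], [10, 4], [10, 0]])
km = KMeans(n_clusters=2, n_init="auto", random_state=0).fit(X)  # must fit first

new_points = np.array([[0, 0], [12, 3]])
print(km.predict(new_points))    # nearest-cluster label for each new point
print(km.transform(new_points))  # distance from each new point to every center
```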
## Recommended Practice
For your K-means application, my recommendations are:
1. **If you need both the labels and the distances**:
```python
# The clearest option: one explicit call per step
kmeans.fit(df[["x", "y"]])
distances = kmeans.transform(df[["x", "y"]])
labels = kmeans.predict(df[["x", "y"]])
```
2. **If you mainly need the labels**, Option 2 is a good choice:
```python
labels = kmeans.fit_predict(df[["x", "y"]])
# Compute the distances only when they are needed
distances = kmeans.transform(df[["x", "y"]])
```
3. **If you are running many experiments**, Option 1 may make the code easier to understand and debug.
Whichever approach you choose, the key point is to fit the model only once; this keeps the results consistent and avoids redundant computation.
Part of the sample data:
| chinese | english | math |
|---------|---------|------|
| 77 | 89 | 63 |
| 73 | 40 | 60 |
| 69 | 57 | 50 |
| 85 | 67 | 60 |
| 55 | 55 | 55 |
| 80 | 84 | 83 |
| 80 | 70 | 70 |
| 60 | 61 | 60 |
| 60 | 80 | 70 |
| 75 | 91 | 53 |
| 62 | 62 | 67 |
| 66 | 75 | 75 |
| 67 | 40 | 89 |
| 72 | 60 | 42 |
| 74 | 62 | 67 |
| 78 | 86 | 85 |
| 70 | 63 | 60 |
| 78 | 80 | 69 |
| 82 | 82 | 78 |
The Excel/CSV file spans rows 1–46; the first row is the header row.
df.shape = (45, 3)
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.cluster import KMeans
fpath = r"C:\Python\P107\doc\student_grades_real.csv"
df = pd.read_csv(fpath)
#print(df) ; df.shape = (45, 3)
lisChiCent = [] ; lisEngCent = [] ; lisMathCent = []
Xtrain, Xtest = train_test_split(
    df, test_size=0.33,
    random_state=42, shuffle=True)
# Only one DataFrame (df) is passed in, so only Xtrain and Xtest come out.
# The earlier supervised examples passed in both X and y, which is why they
# returned four pieces: Xtrain, Xtest, ytrain, ytest.
print("Xtrain:", type(Xtrain), Xtrain.shape)
#<class 'pandas.core.frame.DataFrame'> (30, 3)
print("Xtest:", type(Xtest), Xtest.shape)
# <class 'pandas.core.frame.DataFrame'> (15, 3)
kmean = KMeans(n_clusters=2, random_state=42)
distMatrix = kmean.fit_transform(Xtrain)
"""
distMatrix.shape = (30, 2) ; numpy.ndarray
Xtrain has 30 rows, split into 2 clusters.
fit_transform() builds the model and, at the same time, returns a matrix
describing each sample's distance to every cluster center.
"""
cluster_cent = pd.DataFrame(
    kmean.cluster_centers_,
    columns=df.columns.tolist())
"""
kmean.cluster_centers_
array([[76.26666667, 83.13333333, 75.13333333],
       [64.93333333, 59.26666667, 56.8       ]])
Two groups; each group's center has three coordinates
(x, y, z) = (chinese, english, math).
"""
print("cluster center:\n", cluster_cent)
print("The inertia (sum of squared distances):", kmean.inertia_)
print("Totally", kmean.n_iter_,
      "iterations were executed to find the stable clusters")
print("The distance matrix from raw data to clusters:\n",
      pd.DataFrame(distMatrix,
                   columns=["to cluster#0", "to cluster#1"]))
XtrainNew = pd.DataFrame(Xtrain,
                         columns=df.columns.tolist())
#columns = ['chinese', 'english', 'math']
XtrainNew.insert(loc=df.columns.size,
                 column="groupID",
                 value=kmean.labels_)
print("The updated Xtrain:\n", XtrainNew)
#kmean.labels_
#array([1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1,
#       1, 0, 0, 0, 1, 1, 1, 0])
#ndarray shape = (30,)
Part of the output
(the distMatrix DataFrame is truncated):
distMatrix.shape = (30, 2)
distMatrix = kmean.fit_transform(Xtrain)
For example, student #16 is about 21.8 away from the centroid of cluster 0
and about 15 away from the centroid of cluster 1.
The separation is not dramatic, but mathematically we simply compare the two
distances and assign the student to the nearer cluster, cluster 1
(the quick check below confirms this rule).
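The "compare the distances and pick the smaller one" rule can be verified directly on the fitted model: the labels are just the column-wise argmin of the distance matrix. A quick sketch that reuses `kmean` and `distMatrix` from the code above:

```python
import numpy as np

# Each row of distMatrix holds one student's distances to the two centroids;
# the assigned label is the index of the smaller distance.
print(np.array_equal(distMatrix.argmin(axis=1), kmean.labels_))  # expected: True
```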
Reference solution:
Continuing from the previous code:
lisChiCent = [] ; lisEngCent = [] ; lisMathCent = []
for item in range(2):
    chi = kmean.cluster_centers_[item, 0]
    lisChiCent.append(chi)
    eng = kmean.cluster_centers_[item, 1]
    lisEngCent.append(eng)
    math = kmean.cluster_centers_[item, 2]
    lisMathCent.append(math)
"""cluster_centers_
A 2-D ndarray listing the center point of every cluster:
kmean.cluster_centers_
array([[76.26666667, 83.13333333, 75.13333333],
       [64.93333333, 59.26666667, 56.8       ]])
lisChiCent, lisEngCent, lisMathCent each end up with length 2.
Since kmean.cluster_centers_ is an ndarray, the for loop is actually
unnecessary: slicing can extract the coordinates of the two centroids
(lisChiCent, lisEngCent, lisMathCent) directly, as sketched below.
"""
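# The three centroid-coordinate lists can also be built without the for loop,
# by slicing the ndarray column by column (a sketch; same values as above):
# lisChiCent = kmean.cluster_centers_[:, 0].tolist()
# lisEngCent = kmean.cluster_centers_[:, 1].tolist()
# lisMathCent = kmean.cluster_centers_[:, 2].tolist()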
# lisChi_1 = [] ; lisChi_2 = []
# lisEng_1 = [] ; lisEng_2 = []
# lisMath_1 = [] ; lisMath_2 = []
groupIDary = XtrainNew["groupID"].values
lisTrueIdx = groupIDary.nonzero()[0].tolist()
"""
The following approach obtains the same lisTrueIdx,
and comes in handy when groupID has three or more distinct values:
"""
XtrainNew.index = range(len(XtrainNew))  # reset to 0..n-1 so positions and labels line up
XtrainNew1 = XtrainNew.iloc[lisTrueIdx, :]
XtrainNew0 = XtrainNew.drop(lisTrueIdx, axis=0)
# Split XtrainNew by groupID into
# XtrainNew1 (group 1) and XtrainNew0 (group 0);
# a more direct alternative is sketched just below
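# Alternative to the index-based split (a sketch using boolean masks;
# it also scales naturally to three or more groups):
# XtrainNew0 = XtrainNew[XtrainNew["groupID"] == 0]
# XtrainNew1 = XtrainNew[XtrainNew["groupID"] == 1]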
lisChi0 = XtrainNew0["chinese"].tolist()
lisChi1 = XtrainNew1["chinese"].tolist()
lisEng0 = XtrainNew0["english"].tolist()
lisEng1 = XtrainNew1["english"].tolist()
lisMath0 = XtrainNew0["math"].tolist()
lisMath1 = XtrainNew1["math"].tolist()
fig = plt.figure()
ax = plt.axes(projection="3d")
ax.scatter(lisChi0, lisEng0, lisMath0,
           label="student cluster0", color="b", marker="^")
ax.scatter(lisChi1, lisEng1, lisMath1,
           label="student cluster1", color="g", marker="*")
ax.scatter(lisChiCent, lisEngCent, lisMathCent,
           label="cluster center", color="r", marker="o")
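# Finishing touches (a sketch, not part of the original walkthrough):
# label the axes, show the legend, and render the figure.
ax.set_xlabel("chinese")
ax.set_ylabel("english")
ax.set_zlabel("math")
ax.legend()
plt.show()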
Reference solution:
3D scatter plot:
Visualization: the student clusters shown in color:
Predicting with the remaining test samples:
Continuing from the code above:
label_test = kmean.predict(Xtest)
#array([0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0])
# shape = (15,)
# For the training samples we read kmean.labels_;
# for the test samples we call kmean.predict(Xtest).
print("kmean.predict(Xtest):\n", label_test)
dfXtest = pd.DataFrame(data=Xtest,
                       columns=df.columns.tolist())
dfXtest.insert(loc=dfXtest.columns.size,
               column="groupID",
               value=label_test)
print("The updated df Xtest:\n", dfXtest)
print("The score of the test samples =",
      kmean.score(Xtest))
"""Although .predict() is available here just as in the earlier examples,
this differs from the supervised-learning cases:
an unsupervised clustering algorithm has no ground-truth labels
to compare its predictions against, i.e. no "correct answer".
score() instead evaluates the negative sum of squared distances from the
samples to their nearest cluster centers; the closer to 0, the better."""
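# Sanity check (a sketch, not in the original walkthrough): score() is the
# negative sum of squared distances from each sample to its nearest centroid,
# so it can be recomputed from transform().
dXtest = kmean.transform(Xtest)
print("Recomputed score:", -(dXtest.min(axis=1) ** 2).sum())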
Output:
Other clustering algorithms:
https://scikit-learn.org/stable/modules/clustering.html
Init signature:
KMeans(
    n_clusters=8,
    *,
    init='k-means++',
    n_init='auto',
    max_iter=300,
    tol=0.0001,
    verbose=0,
    random_state=None,
    copy_x=True,
    algorithm='lloyd',
)
Docstring:
K-Means clustering.
Read more in the :ref:`User Guide <k_means>`.
Parameters
----------
n_clusters : int, default=8
The number of clusters to form as well as the number of
centroids to generate.
For an example of how to choose an optimal value for `n_clusters` refer to
:ref:`sphx_glr_auto_examples_cluster_plot_kmeans_silhouette_analysis.py`.
init : {'k-means++', 'random'}, callable or array-like of shape (n_clusters, n_features), default='k-means++'
Method for initialization:
* 'k-means++' : selects initial cluster centroids using sampling based on an empirical probability distribution of the points' contribution to the overall inertia. This technique speeds up convergence. The algorithm implemented is "greedy k-means++". It differs from the vanilla k-means++ by making several trials at each sampling step and choosing the best centroid among them.
* 'random': choose `n_clusters` observations (rows) at random from data for the initial centroids.
* If an array is passed, it should be of shape (n_clusters, n_features) and gives the initial centers.
* If a callable is passed, it should take arguments X, n_clusters and a random state and return an initialization.
For an example of how to use the different `init` strategies, see
:ref:`sphx_glr_auto_examples_cluster_plot_kmeans_digits.py`.
For an evaluation of the impact of initialization, see the example
:ref:`sphx_glr_auto_examples_cluster_plot_kmeans_stability_low_dim_dense.py`.
n_init : 'auto' or int, default='auto'
Number of times the k-means algorithm is run with different centroid
seeds. The final result is the best output of `n_init` consecutive runs
in terms of inertia. Several runs are recommended for sparse
high-dimensional problems (see :ref:`kmeans_sparse_high_dim`).
When `n_init='auto'`, the number of runs depends on the value of init:
10 if using `init='random'` or `init` is a callable;
1 if using `init='k-means++'` or `init` is an array-like.
.. versionadded:: 1.2
Added 'auto' option for `n_init`.
.. versionchanged:: 1.4
Default value for `n_init` changed to `'auto'`.
max_iter : int, default=300
Maximum number of iterations of the k-means algorithm for a
single run.
tol : float, default=1e-4
Relative tolerance with regards to Frobenius norm of the difference
in the cluster centers of two consecutive iterations to declare
convergence.
verbose : int, default=0
Verbosity mode.
random_state : int, RandomState instance or None, default=None
Determines random number generation for centroid initialization. Use
an int to make the randomness deterministic.
See :term:`Glossary <random_state>`.
copy_x : bool, default=True
When pre-computing distances it is more numerically accurate to center
the data first. If copy_x is True (default), then the original data is
not modified. If False, the original data is modified, and put back
before the function returns, but small numerical differences may be
introduced by subtracting and then adding the data mean. Note that if
the original data is not C-contiguous, a copy will be made even if
copy_x is False. If the original data is sparse, but not in CSR format,
a copy will be made even if copy_x is False.
algorithm : {"lloyd", "elkan"}, default="lloyd"
K-means algorithm to use. The classical EM-style algorithm is `"lloyd"`.
The `"elkan"` variation can be more efficient on some datasets with
well-defined clusters, by using the triangle inequality. However it's
more memory intensive due to the allocation of an extra array of shape
`(n_samples, n_clusters)`.
.. versionchanged:: 0.18
Added Elkan algorithm
.. versionchanged:: 1.1
Renamed "full" to "lloyd", and deprecated "auto" and "full".
Changed "auto" to use "lloyd" instead of "elkan".
Attributes
----------
cluster_centers_ : ndarray of shape (n_clusters, n_features)
Coordinates of cluster centers. If the algorithm stops before fully
converging (see ``tol`` and ``max_iter``), these will not be
consistent with ``labels_``.
labels_ : ndarray of shape (n_samples,)
Labels of each point
inertia_ : float
Sum of squared distances of samples to their closest cluster center,
weighted by the sample weights if provided.
n_iter_ : int
Number of iterations run.
n_features_in_ : int
Number of features seen during :term:`fit`.
.. versionadded:: 0.24
feature_names_in_ : ndarray of shape (`n_features_in_`,)
Names of features seen during :term:`fit`. Defined only when `X`
has feature names that are all strings.
.. versionadded:: 1.0
See Also
--------
MiniBatchKMeans : Alternative online implementation that does incremental
updates of the centers positions using mini-batches.
For large scale learning (say n_samples > 10k) MiniBatchKMeans is
probably much faster than the default batch implementation.
Notes
-----
The k-means problem is solved using either Lloyd's or Elkan's algorithm.
The average complexity is given by O(k n T), where n is the number of
samples and T is the number of iterations.
The worst case complexity is given by O(n^(k+2/p)) with
n = n_samples, p = n_features.
Refer to :doi:`"How slow is the k-means method?" D. Arthur and S. Vassilvitskii -
SoCG2006.<10.1145/1137856.1137880>` for more details.
In practice, the k-means algorithm is very fast (one of the fastest
clustering algorithms available), but it falls in local minima. That's why
it can be useful to restart it several times.
If the algorithm stops before fully converging (because of ``tol`` or
``max_iter``), ``labels_`` and ``cluster_centers_`` will not be consistent,
i.e. the ``cluster_centers_`` will not be the means of the points in each
cluster. Also, the estimator will reassign ``labels_`` after the last
iteration to make ``labels_`` consistent with ``predict`` on the training
set.
Examples
--------
>>> from sklearn.cluster import KMeans
>>> import numpy as np
>>> X = np.array([[1, 2], [1, 4], [1, 0],
… [10, 2], [10, 4], [10, 0]])
>>> kmeans = KMeans(n_clusters=2, random_state=0, n_init="auto").fit(X)
>>> kmeans.labels_
array([1, 1, 1, 0, 0, 0], dtype=int32)
>>> kmeans.predict([[0, 0], [12, 3]])
array([1, 0], dtype=int32)
>>> kmeans.cluster_centers_
array([[10., 2.],
[ 1., 2.]])
For examples of common problems with K-Means and how to address them see
:ref:`sphx_glr_auto_examples_cluster_plot_kmeans_assumptions.py`.
For a demonstration of how K-Means can be used to cluster text documents see
:ref:`sphx_glr_auto_examples_text_plot_document_clustering.py`.
For a comparison between K-Means and MiniBatchKMeans refer to example
:ref:`sphx_glr_auto_examples_cluster_plot_mini_batch_kmeans.py`.
For a comparison between K-Means and BisectingKMeans refer to example
:ref:`sphx_glr_auto_examples_cluster_plot_bisect_kmeans.py`.