Python Unsupervised Machine Learning: Distance-Based Clustering (k-Means Algorithm); Using scikit-learn; Clustering Students; from sklearn.cluster import KMeans


The k-Means Algorithm
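k-Means alternates between two steps: assign every sample to its nearest centroid, then move each centroid to the mean of the samples assigned to it, repeating until the assignments stop changing. A minimal NumPy sketch of one such iteration (illustrative only; everything below uses scikit-learn's implementation instead):

import numpy as np

def kmeans_step(X, centers):
    # Assignment step: index of the nearest center for each sample.
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Update step: each center moves to the mean of its assigned samples
    # (a real implementation must also handle empty clusters).
    new_centers = np.array([X[labels == k].mean(axis=0)
                            for k in range(len(centers))])
    return labels, new_centers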

Using scikit-learn


Important KMeans constructor parameters

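The constructor defaults, taken from the full signature reproduced at the end of this post; the only ones this article actually sets are n_clusters and random_state:

KMeans(
    n_clusters=8,        # number of clusters (and centroids) to form
    init='k-means++',    # centroid initialization strategy
    n_init='auto',       # how many times to rerun with different seeds
    max_iter=300,        # iteration cap for a single run
    random_state=None,   # set an int for reproducible centroids
)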

Important KMeans attributes:

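The attributes exercised later in this post (meanings taken from the docstring reproduced at the end); kmean is the estimator fitted in the code below:

kmean.cluster_centers_  # ndarray (n_clusters, n_features): centroid coordinates
kmean.labels_           # ndarray (n_samples,): cluster index of each training sample
kmean.inertia_          # float: sum of squared distances to the closest centroid
kmean.n_iter_           # int: number of iterations run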

Important KMeans instance methods:

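And the instance methods the walkthrough relies on (standard scikit-learn estimator API):

kmean.fit(Xtrain)            # learn the centroids
kmean.fit_transform(Xtrain)  # fit, then return each sample's distance to every centroid
kmean.transform(Xtest)       # distances of new samples to every centroid
kmean.predict(Xtest)         # nearest-centroid label for new samples
kmean.score(Xtest)           # negative inertia on the given samples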

https://pse.is/3wuxml


Part of the test data:

chinese english math
77 89 63
73 40 60
69 57 50
85 67 60
55 55 55
80 84 83
80 70 70
60 61 60
60 80 70
75 91 53
62 62 67
66 75 75
67 40 89
72 60 42
74 62 67
78 86 85
70 63 60
78 80 69
82 82 78

In Excel the data occupies rows 1~46, with row 1 as the header row.

df.shape = (45, 3)

 

import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.cluster import KMeans

fpath = r"C:\Python\P107\doc\student_grades_real.csv"
df = pd.read_csv(fpath)
# print(df)  # df.shape = (45, 3)

lisChiCent = [] ; lisEngCent = [] ; lisMathCent = []

Xtrain, Xtest = train_test_split(
    df, test_size=0.33,
    random_state=42, shuffle=True)
# Only one DataFrame (df) is passed in, so only two outputs
# (Xtrain, Xtest) come back. Earlier examples passed both X and y,
# which produced four outputs: Xtrain, Xtest, ytrain, ytest.
print("Xtrain:", type(Xtrain), Xtrain.shape)
# <class 'pandas.core.frame.DataFrame'> (30, 3)
print("Xtest:", type(Xtest), Xtest.shape)
# <class 'pandas.core.frame.DataFrame'> (15, 3)

kmean = KMeans(n_clusters=2, random_state=42)
distMatrix = kmean.fit_transform(Xtrain)
"""
distMatrix.shape = (30, 2) ; numpy.ndarray
Xtrain has 30 rows, split into 2 clusters.
fit_transform() builds the model and also returns the matrix of
distances from every sample to each cluster center.
"""
cluster_cent = pd.DataFrame(
    kmean.cluster_centers_,
    columns=df.columns.tolist())
"""
kmean.cluster_centers_
array([[76.26666667, 83.13333333, 75.13333333],
       [64.93333333, 59.26666667, 56.8       ]])
2 groups; each group's center has
x, y, z (chinese, english, math) coordinates.
"""
print("cluster center:\n", cluster_cent)
print("The distance:", kmean.inertia_)
print("Totally", kmean.n_iter_,
      "iterations executed for finding the stable cluster")
print("The distance matrix from raw data to cluster:\n",
      pd.DataFrame(distMatrix,
                   columns=["to cluster#0", "to cluster#1"]))
XtrainNew = pd.DataFrame(Xtrain,
                         columns=df.columns.tolist())
# columns = ['chinese', 'english', 'math']
XtrainNew.insert(loc=df.columns.size,
                 column="groupID",
                 value=kmean.labels_)
print("The updated Xtrain:\n", XtrainNew)

# kmean.labels_
# array([1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 1,
#        1, 0, 0, 0, 1, 1, 1, 0])
# ndarray, shape = (30,)


Partial output (the distMatrix DataFrame is truncated):

distMatrix.shape = (30, 2)


distMatrix = kmean.fit_transform(Xtrain)

For example, student #16 is 21.8 away from the centroid of cluster 0 but only 15 away from the centroid of cluster 1. The separation is not dramatic, but mathematically we simply compare the two distances and assign the student to the nearer cluster, cluster 1.
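That assignment rule is easy to check numerically. A small sketch, assuming the code above has already run (np.argmin picks the column with the smaller distance, which reproduces kmean.labels_):

import numpy as np

print(distMatrix[15])             # distances of student #16 (0-based row 15)
print(np.argmin(distMatrix[15]))  # 1 -> assigned to cluster 1
# The same comparison over every row reproduces the training labels:
print((np.argmin(distMatrix, axis=1) == kmean.labels_).all())  # True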


Reference solution:


Recommended hahow online Python course: https://igrape.net/30afN


Continuing from the earlier code:

lisChiCent = [] ; lisEngCent = [] ; lisMathCent = []
for item in range(2):
    chi = kmean.cluster_centers_[item, 0]
    lisChiCent.append(chi)
    eng = kmean.cluster_centers_[item, 1]
    lisEngCent.append(eng)
    math = kmean.cluster_centers_[item, 2]
    lisMathCent.append(math)
"""
cluster_centers_ is a 2-D ndarray listing every cluster center:
kmean.cluster_centers_
array([[76.26666667, 83.13333333, 75.13333333],
       [64.93333333, 59.26666667, 56.8       ]])
lisChiCent, lisEngCent, and lisMathCent each have length 2.

Since kmean.cluster_centers_ is an ndarray, the for loop is not
really needed; slicing can pull out the coordinates of the two
centroids (lisChiCent, lisEngCent, lisMathCent) directly.
"""
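# As noted above, slicing does the same job without the loop
# (a minimal equivalent; each column of cluster_centers_ is one subject):
lisChiCent = kmean.cluster_centers_[:, 0].tolist()
lisEngCent = kmean.cluster_centers_[:, 1].tolist()
lisMathCent = kmean.cluster_centers_[:, 2].tolist()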

# lisChi_1 = [] ; lisChi_2 = []
# lisEng_1 = [] ; lisEng_2 = []
# lisMath_1 = [] ; lisMath_2 = []

groupIDary = XtrainNew["groupID"].values

lisTrueIdx = groupIDary.nonzero()[0].tolist()

"""
An alternative method (sketched below) yields the same lisTrueIdx
and comes in handy when groupID has three or more values:
"""
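# One such alternative, sketched with np.where (unlike nonzero(), it
# works for any group ID, so it scales to three or more groups):
import numpy as np
lisTrueIdx = np.where(groupIDary == 1)[0].tolist()
# idxByGroup = {g: np.where(groupIDary == g)[0].tolist()
#               for g in np.unique(groupIDary)}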
XtrainNew.index = range(len(XtrainNew))
XtrainNew1 = XtrainNew.iloc[lisTrueIdx, :]
XtrainNew0 = XtrainNew.drop(lisTrueIdx, axis=0)
# Split XtrainNew by groupID into
# XtrainNew1 (group 1) and XtrainNew0 (group 0).
lisChi0 = XtrainNew0["chinese"].tolist()
lisChi1 = XtrainNew1["chinese"].tolist()

lisEng0 = XtrainNew0["english"].tolist()
lisEng1 = XtrainNew1["english"].tolist()

lisMath0 = XtrainNew0["math"].tolist()
lisMath1 = XtrainNew1["math"].tolist()

fig = plt.figure()
ax = plt.axes(projection="3d")
ax.scatter(lisChi0, lisEng0, lisMath0,
           label="student cluster0", color="b", marker="^")
ax.scatter(lisChi1, lisEng1, lisMath1,
           label="student cluster1", color="g", marker="*")
ax.scatter(lisChiCent, lisEngCent, lisMathCent,
           label="cluster center", color="r", marker="o")
ax.legend()  # show the scatter labels defined above
plt.show()


Reference solution:


3D scatter plot:


Visualization: the student clusters shown in color:



Predicting with the remaining validation samples:


Continuing from the code above:

label_test = kmean.predict(Xtest)
# array([0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0])
# shape = (15,)
# The training samples used kmean.labels_;
# the test samples use kmean.predict(Xtest).
print("kmean.predict(Xtest):\n", label_test)

dfXtest = pd.DataFrame(data=Xtest,
                       columns=df.columns.tolist())
dfXtest.insert(loc=dfXtest.columns.size,
               column="groupid",
               value=label_test)
print("The updated df Xtest:\n", dfXtest)
print("The score of the test sample =",
      kmean.score(Xtest))

"""
.predict() is available just as in supervised learning, but
unlike supervised learning, the predictions of an unsupervised
clustering algorithm have no ground-truth labels to compare
against; there is no "correct answer".
score() evaluates the negative sum of squared distances from each
sample to its nearest centroid (negative inertia):
the closer to 0, the better.
"""
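That relationship between score() and the distance matrix can be verified directly. A small sketch, assuming the code above has run (transform() returns each sample's distances to the centroids; score() is the negative sum of the squared nearest distances):

import numpy as np

dist_test = kmean.transform(Xtest)  # shape (15, 2)
neg_inertia = -np.sum(np.min(dist_test, axis=1) ** 2)
print(neg_inertia)                  # matches kmean.score(Xtest)
print(kmean.score(Xtest))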


Output:


 

Other clustering algorithms

https://scikit-learn.org/stable/modules/clustering.html
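scikit-learn's other clustering estimators share the same fit/predict-style interface, so swapping one in takes little code. A brief sketch on the same Xtrain (AgglomerativeClustering and DBSCAN are real sklearn.cluster classes; the eps and min_samples values here are illustrative guesses, not tuned for this data):

from sklearn.cluster import AgglomerativeClustering, DBSCAN

agg = AgglomerativeClustering(n_clusters=2)
print(agg.fit_predict(Xtrain))      # labels, analogous to kmean.labels_

db = DBSCAN(eps=10, min_samples=3)  # illustrative parameters only
print(db.fit_predict(Xtrain))       # -1 marks noise points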



The full KMeans docstring below can be pulled up in IPython with:

KMeans?

Init signature:
KMeans(
    n_clusters=8,
    *,
    init='k-means++',
    n_init='auto',
    max_iter=300,
    tol=0.0001,
    verbose=0,
    random_state=None,
    copy_x=True,
    algorithm='lloyd',
)
Docstring:
K-Means clustering.

Read more in the :ref:`User Guide <k_means>`.

Parameters
----------

n_clusters : int, default=8
The number of clusters to form as well as the number of
centroids to generate.

For an example of how to choose an optimal value for `n_clusters` refer to
:ref:`sphx_glr_auto_examples_cluster_plot_kmeans_silhouette_analysis.py`.

init : {'k-means++', 'random'}, callable or array-like of shape (n_clusters, n_features), default='k-means++'
Method for initialization:

* 'k-means++' : selects initial cluster centroids using sampling based on an empirical probability distribution of the points' contribution to the overall inertia. This technique speeds up convergence. The algorithm implemented is "greedy k-means++". It differs from the vanilla k-means++ by making several trials at each sampling step and choosing the best centroid among them.

* 'random': choose `n_clusters` observations (rows) at random from data for the initial centroids.

* If an array is passed, it should be of shape (n_clusters, n_features) and gives the initial centers.

* If a callable is passed, it should take arguments X, n_clusters and a random state and return an initialization.

For an example of how to use the different `init` strategies, see
:ref:`sphx_glr_auto_examples_cluster_plot_kmeans_digits.py`.

For an evaluation of the impact of initialization, see the example
:ref:`sphx_glr_auto_examples_cluster_plot_kmeans_stability_low_dim_dense.py`.

n_init : 'auto' or int, default='auto'
Number of times the k-means algorithm is run with different centroid
seeds. The final result is the best output of `n_init` consecutive runs
in terms of inertia. Several runs are recommended for sparse
high-dimensional problems (see :ref:`kmeans_sparse_high_dim`).

When `n_init='auto'`, the number of runs depends on the value of init:
10 if using `init='random'` or `init` is a callable;
1 if using `init='k-means++'` or `init` is an array-like.

.. versionadded:: 1.2
Added 'auto' option for `n_init`.

.. versionchanged:: 1.4
Default value for `n_init` changed to `'auto'`.

max_iter : int, default=300
Maximum number of iterations of the k-means algorithm for a
single run.

tol : float, default=1e-4
Relative tolerance with regards to Frobenius norm of the difference
in the cluster centers of two consecutive iterations to declare
convergence.

verbose : int, default=0
Verbosity mode.

random_state : int, RandomState instance or None, default=None
Determines random number generation for centroid initialization. Use
an int to make the randomness deterministic.
See :term:`Glossary <random_state>`.

copy_x : bool, default=True
When pre-computing distances it is more numerically accurate to center
the data first. If copy_x is True (default), then the original data is
not modified. If False, the original data is modified, and put back
before the function returns, but small numerical differences may be
introduced by subtracting and then adding the data mean. Note that if
the original data is not C-contiguous, a copy will be made even if
copy_x is False. If the original data is sparse, but not in CSR format,
a copy will be made even if copy_x is False.

algorithm : {"lloyd", "elkan"}, default="lloyd"
K-means algorithm to use. The classical EM-style algorithm is `"lloyd"`.
The `"elkan"` variation can be more efficient on some datasets with
well-defined clusters, by using the triangle inequality. However it's
more memory intensive due to the allocation of an extra array of shape
`(n_samples, n_clusters)`.

.. versionchanged:: 0.18
Added Elkan algorithm

.. versionchanged:: 1.1
Renamed "full" to "lloyd", and deprecated "auto" and "full".
Changed "auto" to use "lloyd" instead of "elkan".

Attributes
----------
cluster_centers_ : ndarray of shape (n_clusters, n_features)
Coordinates of cluster centers. If the algorithm stops before fully
converging (see ``tol`` and ``max_iter``), these will not be
consistent with ``labels_``.

labels_ : ndarray of shape (n_samples,)
Labels of each point

inertia_ : float
Sum of squared distances of samples to their closest cluster center,
weighted by the sample weights if provided.

n_iter_ : int
Number of iterations run.

n_features_in_ : int
Number of features seen during :term:`fit`.

.. versionadded:: 0.24

feature_names_in_ : ndarray of shape (`n_features_in_`,)
Names of features seen during :term:`fit`. Defined only when `X`
has feature names that are all strings.

.. versionadded:: 1.0

See Also
--------
MiniBatchKMeans : Alternative online implementation that does incremental
updates of the centers positions using mini-batches.
For large scale learning (say n_samples > 10k) MiniBatchKMeans is
probably much faster than the default batch implementation.

Notes
-----
The k-means problem is solved using either Lloyd's or Elkan's algorithm.

The average complexity is given by O(k n T), where n is the number of
samples and T is the number of iterations.

The worst case complexity is given by O(n^(k+2/p)) with
n = n_samples, p = n_features.
Refer to :doi:`"How slow is the k-means method?" D. Arthur and S. Vassilvitskii -
SoCG2006.<10.1145/1137856.1137880>` for more details.

In practice, the k-means algorithm is very fast (one of the fastest
clustering algorithms available), but it falls in local minima. That's why
it can be useful to restart it several times.

If the algorithm stops before fully converging (because of ``tol`` or
``max_iter``), ``labels_`` and ``cluster_centers_`` will not be consistent,
i.e. the ``cluster_centers_`` will not be the means of the points in each
cluster. Also, the estimator will reassign ``labels_`` after the last
iteration to make ``labels_`` consistent with ``predict`` on the training
set.

Examples
--------

>>> from sklearn.cluster import KMeans
>>> import numpy as np
>>> X = np.array([[1, 2], [1, 4], [1, 0],
...               [10, 2], [10, 4], [10, 0]])
>>> kmeans = KMeans(n_clusters=2, random_state=0, n_init="auto").fit(X)
>>> kmeans.labels_
array([1, 1, 1, 0, 0, 0], dtype=int32)
>>> kmeans.predict([[0, 0], [12, 3]])
array([1, 0], dtype=int32)
>>> kmeans.cluster_centers_
array([[10.,  2.],
       [ 1.,  2.]])

For examples of common problems with K-Means and how to address them see
:ref:`sphx_glr_auto_examples_cluster_plot_kmeans_assumptions.py`.

For a demonstration of how K-Means can be used to cluster text documents see
:ref:`sphx_glr_auto_examples_text_plot_document_clustering.py`.

For a comparison between K-Means and MiniBatchKMeans refer to example
:ref:`sphx_glr_auto_examples_cluster_plot_mini_batch_kmeans.py`.

For a comparison between K-Means and BisectingKMeans refer to example
:ref:`sphx_glr_auto_examples_cluster_plot_bisect_kmeans.py`.
File: c:\users\iec120639\appdata\local\anaconda3\lib\site-packages\sklearn\cluster\_kmeans.py
Type: ABCMeta
Subclasses:

