We start by importing a few important Python modules:
- NumPy: for numerical computations (linear algebra, etc.)
- Pandas: for reading CSV files, etc.
import numpy as np
import pandas as pa
import warnings
warnings.filterwarnings("ignore")
- Two additional modules, mainly used to visualize the data:
from matplotlib import pyplot as plt
import seaborn as sns
sns.set()
Here begin the modules used for machine learning. First, sklearn (scikit-learn), one of the richest libraries for doing ML.
In particular, we import:
- the function that generates the confusion matrix (see the lecture)
- the function that computes the accuracy
- a function to split the data into two parts: one for training and one for testing.
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
We will illustrate our first learning algorithm by working with the breast cancer dataset. For now, we will use a few of the datasets bundled with sklearn. We will see later how to load CSV files with pandas.
Note that several datasets are available in this sklearn submodule.
from sklearn.datasets import load_breast_cancer
The sklearn datasets are, in general, well formatted. They come with a structure that makes them easy to visualize and explore.
breast_cancer = load_breast_cancer()
print(breast_cancer.DESCR)
.. _breast_cancer_dataset:

Breast cancer wisconsin (diagnostic) dataset
--------------------------------------------

**Data Set Characteristics:**

:Number of Instances: 569
:Number of Attributes: 30 numeric, predictive attributes and the class
:Attribute Information:
    - radius (mean of distances from center to points on the perimeter)
    - texture (standard deviation of gray-scale values)
    - perimeter
    - area
    - smoothness (local variation in radius lengths)
    - compactness (perimeter^2 / area - 1.0)
    - concavity (severity of concave portions of the contour)
    - concave points (number of concave portions of the contour)
    - symmetry
    - fractal dimension ("coastline approximation" - 1)

    The mean, standard error, and "worst" or largest (mean of the three
    worst/largest values) of these features were computed for each image,
    resulting in 30 features. For instance, field 0 is Mean Radius, field
    10 is Radius SE, field 20 is Worst Radius.

    - class:
        - WDBC-Malignant
        - WDBC-Benign

:Summary Statistics:

===================================== ====== ======
                                        Min    Max
===================================== ====== ======
radius (mean):                         6.981  28.11
texture (mean):                        9.71   39.28
perimeter (mean):                      43.79  188.5
area (mean):                           143.5  2501.0
smoothness (mean):                     0.053  0.163
compactness (mean):                    0.019  0.345
concavity (mean):                      0.0    0.427
concave points (mean):                 0.0    0.201
symmetry (mean):                       0.106  0.304
fractal dimension (mean):              0.05   0.097
radius (standard error):               0.112  2.873
texture (standard error):              0.36   4.885
perimeter (standard error):            0.757  21.98
area (standard error):                 6.802  542.2
smoothness (standard error):           0.002  0.031
compactness (standard error):          0.002  0.135
concavity (standard error):            0.0    0.396
concave points (standard error):       0.0    0.053
symmetry (standard error):             0.008  0.079
fractal dimension (standard error):    0.001  0.03
radius (worst):                        7.93   36.04
texture (worst):                       12.02  49.54
perimeter (worst):                     50.41  251.2
area (worst):                          185.2  4254.0
smoothness (worst):                    0.071  0.223
compactness (worst):                   0.027  1.058
concavity (worst):                     0.0    1.252
concave points (worst):                0.0    0.291
symmetry (worst):                      0.156  0.664
fractal dimension (worst):             0.055  0.208
===================================== ====== ======

:Missing Attribute Values: None
:Class Distribution: 212 - Malignant, 357 - Benign
:Creator: Dr. William H. Wolberg, W. Nick Street, Olvi L. Mangasarian
:Donor: Nick Street
:Date: November, 1995

This is a copy of UCI ML Breast Cancer Wisconsin (Diagnostic) datasets.
https://goo.gl/U2Uwz2

Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image.

Separating plane described above was obtained using Multisurface Method-Tree (MSM-T) [K. P. Bennett, "Decision Tree Construction Via Linear Programming." Proceedings of the 4th Midwest Artificial Intelligence and Cognitive Science Society, pp. 97-101, 1992], a classification method which uses linear programming to construct a decision tree. Relevant features were selected using an exhaustive search in the space of 1-4 features and 1-3 separating planes.

The actual linear program used to obtain the separating plane in the 3-dimensional space is that described in: [K. P. Bennett and O. L. Mangasarian: "Robust Linear Programming Discrimination of Two Linearly Inseparable Sets", Optimization Methods and Software 1, 1992, 23-34].

This database is also available through the UW CS ftp server:
ftp ftp.cs.wisc.edu
cd math-prog/cpo-dataset/machine-learn/WDBC/

.. topic:: References

   - W.N. Street, W.H. Wolberg and O.L. Mangasarian. Nuclear feature extraction for breast tumor diagnosis. IS&T/SPIE 1993 International Symposium on Electronic Imaging: Science and Technology, volume 1905, pages 861-870, San Jose, CA, 1993.
   - O.L. Mangasarian, W.N. Street and W.H. Wolberg. Breast cancer diagnosis and prognosis via linear programming. Operations Research, 43(4), pages 570-577, July-August 1995.
   - W.H. Wolberg, W.N. Street, and O.L. Mangasarian. Machine learning techniques to diagnose breast cancer from fine-needle aspirates. Cancer Letters 77 (1994) 163-171.
print(breast_cancer.feature_names)
['mean radius' 'mean texture' 'mean perimeter' 'mean area' 'mean smoothness' 'mean compactness' 'mean concavity' 'mean concave points' 'mean symmetry' 'mean fractal dimension' 'radius error' 'texture error' 'perimeter error' 'area error' 'smoothness error' 'compactness error' 'concavity error' 'concave points error' 'symmetry error' 'fractal dimension error' 'worst radius' 'worst texture' 'worst perimeter' 'worst area' 'worst smoothness' 'worst compactness' 'worst concavity' 'worst concave points' 'worst symmetry' 'worst fractal dimension']
We are not going to work with the whole dataset. We will illustrate the method using only two attributes and the class. Keep in mind that we are looking for a model, which is in fact a function F such that y = F(X).
Note that Categorical.from_codes maps the numeric targets to the class names, and get_dummies then encodes the two classes as 0 and 1.
X = pa.DataFrame(breast_cancer.data, columns=breast_cancer.feature_names)
X = X[['mean area', 'mean compactness']]
y = pa.Categorical.from_codes(breast_cancer.target, breast_cancer.target_names)
y = pa.get_dummies(y, drop_first=True)
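Before splitting, it can be useful to take a quick look at the prepared data (a minimal sketch; the output is not reproduced here):
# Inspect the two selected features and the encoded target
print(X.head())
print(y['benign'].value_counts())  # 1 = benign, 0 = malignant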
We split the data into two parts: one for training and one for testing. Note that by default the test set represents 25% of the data, but this default split can be changed with the test_size parameter.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3, random_state=1)
Here begins the use of k-NN:
from sklearn.neighbors import KNeighborsClassifier
k = 3
knn = KNeighborsClassifier(n_neighbors=k, metric='euclidean')
knn.fit(X_train, y_train)
KNeighborsClassifier(metric='euclidean', n_neighbors=3)
Once the model is trained, we use it to predict the classes of X_test:
y_pred = knn.predict(X_test)
sns.scatterplot(
    x='mean area',
    y='mean compactness',
    hue='benign',
    data=X_test.join(y_test, how='outer')
)
<Axes: xlabel='mean area', ylabel='mean compactness'>
plt.scatter(
    X_test['mean area'],
    X_test['mean compactness'],
    c=y_pred,
    cmap='coolwarm',
    alpha=0.7
)
<matplotlib.collections.PathCollection at 0xffff57343010>
We measure the performance of our model. To do so, we compute the confusion matrix (see the lecture).
cm = confusion_matrix(y_test, y_pred)
print(cm)
[[50 13]
 [16 92]]
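Since seaborn is already imported, the confusion matrix can also be displayed as a heatmap (a minimal sketch):
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues')
plt.xlabel('Predicted class')
plt.ylabel('True class')
plt.show()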
Then we compute the metrics of interest. Here we compute the accuracy of our model:
acc = np.sum(np.diag(cm))/np.sum(cm)
print('{:.2%}'.format(acc))
83.04%
k-NN is a simple algorithm in the sense that it has a single hyperparameter (k). We can therefore write a simple program to search for the best model:
def best_model_search(X_train, X_test, y_train, y_test, n=10, patience=3):
    k = 1
    best_acc = 0
    best_k = 1
    best_model = None
    p = patience
    while k < n and p > 0:
        knn = KNeighborsClassifier(n_neighbors=k, metric='euclidean')
        knn.fit(X_train, y_train)
        y_pred = knn.predict(X_test)
        cm = confusion_matrix(y_test, y_pred)
        acc = np.sum(np.diag(cm)) / np.sum(cm)
        print('k: ', k, '{:.2%}'.format(acc))
        if acc > best_acc:
            # new best model: remember it and reset the patience counter
            best_acc = acc
            best_k = k
            best_model = knn
            p = patience + 1
        p = p - 1
        k = k + 1
    return best_model, best_k, best_acc
knn, k, acc = best_model_search(X_train, X_test, y_train, y_test, patience=5, n = 100)
k: 1 79.53%
k: 2 80.70%
k: 3 83.04%
k: 4 81.87%
k: 5 85.38%
k: 6 83.04%
k: 7 84.21%
k: 8 82.46%
k: 9 85.96%
k: 10 83.63%
k: 11 85.96%
k: 12 83.63%
k: 13 86.55%
k: 14 85.38%
k: 15 85.38%
k: 16 85.38%
k: 17 84.80%
k: 18 85.38%
print(k, '{:.2%}'.format(acc))
13 86.55%
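Note that selecting k on the test set, as above, slightly biases the performance estimate. scikit-learn also provides GridSearchCV to run this search with cross-validation on the training set only (a sketch, not used in the rest of this notebook; the parameter range is illustrative):
from sklearn.model_selection import GridSearchCV
param_grid = {'n_neighbors': range(1, 30)}
search = GridSearchCV(KNeighborsClassifier(metric='euclidean'), param_grid, cv=5)
search.fit(X_train, y_train.values.ravel())  # ravel() flattens the one-column DataFrame
print(search.best_params_, search.best_score_)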
Is accuracy the right performance measure?
# Visualize the class distribution:
values, counts = np.unique(y, return_counts=True)
plt.bar(values, counts)
<BarContainer object of 2 artists>
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
              precision    recall  f1-score   support

       False       0.76      0.79      0.78        63
        True       0.88      0.85      0.86       108

    accuracy                           0.83       171
   macro avg       0.82      0.82      0.82       171
weighted avg       0.83      0.83      0.83       171
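The precision and recall reported above can be recomputed by hand from the 2x2 confusion matrix, which makes their meaning explicit (a minimal sketch for the positive class True, i.e. benign; the values should match the True row of the report):
tn, fp, fn, tp = cm.ravel()       # confusion matrix layout: [[TN FP], [FN TP]]
precision = tp / (tp + fp)        # among predicted positives, fraction truly positive
recall = tp / (tp + fn)           # among actual positives, fraction retrieved
print('precision: {:.2f}  recall: {:.2f}'.format(precision, recall))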
Limitations of the k-NN algorithm:
Curse of dimensionality
from sklearn.datasets import load_digits
import time
digits = load_digits()
dir(digits)
X = digits.data
y = digits.target
print(X.shape)
print(y.shape)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
(1797, 64)
(1797,)
k = 3
knn = KNeighborsClassifier(n_neighbors=k, metric='euclidean')
t = time.perf_counter()
knn.fit(X_train, y_train)
dt = time.perf_counter() - t
print(dt)
0.0027325829723849893
t = time.perf_counter()
y_pred = knn.predict(X_test)
dt = time.perf_counter() - t
print(dt)
0.054978166008368134
cm = confusion_matrix(y_test, y_pred)
print(cm)
acc = accuracy_score(y_test, y_pred)
print('accuracy: {:.2%}'.format(acc))
[[53  0  0  0  0  0  0  0  0  0]
 [ 0 42  0  0  0  0  0  0  0  0]
 [ 0  0 40  0  0  0  0  1  0  0]
 [ 0  0  0 52  0  0  0  0  0  0]
 [ 0  0  0  0 47  0  0  0  0  0]
 [ 0  0  0  0  0 37  0  0  0  2]
 [ 0  0  0  0  0  0 43  0  0  0]
 [ 0  0  0  0  0  0  0 47  0  1]
 [ 0  0  0  0  0  0  0  0 37  0]
 [ 0  0  0  0  0  0  0  0  0 48]]
accuracy: 99.11%
from keras.datasets import mnist
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1] * X_train.shape[2]))
X_test = X_test.reshape((X_test.shape[0], X_test.shape[1] * X_test.shape[2]))
k = 2
knn = KNeighborsClassifier(n_neighbors=k, metric='euclidean')
t = time.perf_counter()
knn.fit(X_train, y_train)
dt = time.perf_counter() - t
print(dt)
0.010209625004790723
t = time.perf_counter()
y_pred = knn.predict(X_test)
dt = time.perf_counter() - t
print(dt)
10.455791797023267
cm = confusion_matrix(y_test, y_pred)
print(cm)
acc = accuracy_score(y_test, y_pred)
print('accuracy: {:.2%}'.format(acc))
[[ 976    1    1    0    0    1    0    1    0    0]
 [   0 1133    2    0    0    0    0    0    0    0]
 [  11   10  995    1    2    0    0   12    1    0]
 [   1    1    8  981    1    9    0    6    2    1]
 [   3    7    0    0  959    0    2    3    0    8]
 [   6    2    0   25    2  850    2    1    1    3]
 [   7    3    0    0    5    4  939    0    0    0]
 [   0   29    8    2    3    0    0  981    0    5]
 [  10    2    8   28    9   29    4    5  876    3]
 [   6    6    3    9   19    4    1   22    2  937]]
accuracy: 96.27%
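The timings above illustrate the problem: fit is almost instantaneous because k-NN simply stores the training set, while predict must compare every test point with every training point, which costs roughly n_test x n_train distance computations, each over d dimensions. With MNIST (60,000 training images of 784 pixels) this dominates the run time, and it only gets worse as the dimension grows. A rough back-of-the-envelope count (a sketch, assuming brute-force search):
# Rough cost of one full brute-force k-NN prediction pass on MNIST
n_train, d = X_train.shape
n_test = X_test.shape[0]
print('about', n_test * n_train, 'distance computations, each in', d, 'dimensions')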
Need for normalization
(not really a limitation of k-NN)
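Since k-NN relies on distances, a feature with a large numeric range dominates one with a small range unless the data are rescaled: in the breast cancer data, 'mean area' goes up to about 2501 while 'mean compactness' stays below 0.35. A minimal sketch using scikit-learn's StandardScaler combined with the classifier in a pipeline (the variable names reuse the train/test split from earlier; this would typically be applied to the breast cancer features):
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Standardize each feature (zero mean, unit variance) before computing distances
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3, metric='euclidean'))
model.fit(X_train, y_train)
print('accuracy: {:.2%}'.format(model.score(X_test, y_test)))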