Analyse en composantes pricipales¶

Pourquoi l'ACP¶

  • Visualisation : au dela de dim=3, impossible de visualiser des nuages de points
  • Entrainement/Test des modèles de ML (voir section suivante)
  • Compression de données (voir plus loin)

Entrainement/Test de modèles ML¶

Données :¶
In [104]:
from sklearn.datasets import make_classification
X, y = make_classification(n_samples=200000, n_features=1000, n_classes=2, n_redundant=0, n_informative=2, random_state=1)
In [105]:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.5, random_state=1)

Un knn avec les données brutes¶

In [106]:
from time import time
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5)

t = time()
knn.fit(X_train, y_train)
dt = time() - t
print('Training time: ', dt)

t = time()
y_pred = knn.predict(X_test)
dt = time() - t
print('Testing time: ', dt)
Training time:  0.03301429748535156
Testing time:  79.41274762153625

Observation : le test prend beaucoup de temps ....

Quid du la qualité du modèle ?¶
In [107]:
from sklearn.metrics import classification_report
report = classification_report(y_test, y_pred)
print(report)
              precision    recall  f1-score   support

           0       0.61      0.56      0.59     49939
           1       0.60      0.64      0.62     50061

    accuracy                           0.60    100000
   macro avg       0.60      0.60      0.60    100000
weighted avg       0.60      0.60      0.60    100000

On y reviendra ...¶

ACP (from scratch)¶

In [109]:
from sklearn.datasets import make_classification
X, _ = make_classification(n_samples=20000, n_features=10, n_classes=2, n_redundant=0, n_informative=2, random_state=1)
  1. On centre les données :
In [110]:
import numpy as np
X_bar = np.mean(X)
Z = X - X_bar
#Z
#np.mean(Z, axis=0)
  1. On calcule la "scatter matrix" :
In [111]:
S = np.cov(Z, rowvar=False)
S
Out[111]:
array([[ 9.97594645e-01,  7.92169927e-03,  1.21494505e-03,
         1.50922259e-03,  9.25261635e-03, -4.76758407e-03,
         5.32380215e-03, -4.72297355e-03,  1.56248209e-02,
         3.17164756e-03],
       [ 7.92169927e-03,  9.80467151e-01,  1.70067003e-02,
         4.21527039e-03,  4.28054737e-03,  4.86348394e-03,
        -1.17521976e-02,  1.84708175e-02, -6.05532909e-03,
        -5.02653464e-03],
       [ 1.21494505e-03,  1.70067003e-02,  1.02635808e+00,
        -2.98751905e-03,  4.96814074e-03,  2.09749272e-03,
         1.52015549e-04,  5.27420850e-03, -9.37313033e-04,
        -1.23103420e-02],
       [ 1.50922259e-03,  4.21527039e-03, -2.98751905e-03,
         1.56423409e+00,  9.47682087e-04,  3.03885962e-03,
        -1.05738274e-03, -1.15093942e-03, -1.31818131e-03,
        -7.59157940e-03],
       [ 9.25261635e-03,  4.28054737e-03,  4.96814074e-03,
         9.47682087e-04,  9.98579314e-01,  1.07037215e-03,
         2.98493458e-03, -1.95075747e-03, -1.28613003e-02,
        -4.78037739e-03],
       [-4.76758407e-03,  4.86348394e-03,  2.09749272e-03,
         3.03885962e-03,  1.07037215e-03,  9.96213064e-01,
        -3.09503795e-05,  5.33047833e-03, -1.84751795e-03,
        -1.05975239e-03],
       [ 5.32380215e-03, -1.17521976e-02,  1.52015549e-04,
        -1.05738274e-03,  2.98493458e-03, -3.09503795e-05,
         9.90215670e-01,  2.75362982e-03, -6.00190355e-03,
        -7.46852939e-03],
       [-4.72297355e-03,  1.84708175e-02,  5.27420850e-03,
        -1.15093942e-03, -1.95075747e-03,  5.33047833e-03,
         2.75362982e-03,  1.52799450e+00, -6.42693898e-03,
         3.54767997e-03],
       [ 1.56248209e-02, -6.05532909e-03, -9.37313033e-04,
        -1.31818131e-03, -1.28613003e-02, -1.84751795e-03,
        -6.00190355e-03, -6.42693898e-03,  9.77202689e-01,
         1.18644161e-02],
       [ 3.17164756e-03, -5.02653464e-03, -1.23103420e-02,
        -7.59157940e-03, -4.78037739e-03, -1.05975239e-03,
        -7.46852939e-03,  3.54767997e-03,  1.18644161e-02,
         9.87738072e-01]])
  1. On calcule les valeurs propres et vecteurs propres de S, et on les trie selon les valeurs propres :
In [112]:
from numpy.linalg import eig
lambdas, vects = eig(S)
#print('lambdas: ', lambdas)
#print('vecteurs propres : ', vects)
idx = np.argsort(lambdas)[::-1]
lambdas_tries = lambdas[idx]
vects_tries = vects[:, idx]
  1. On détermine k : on se fixe un pourcentage de variance à garder (disons 70%)
In [113]:
k = 0
total_variance_ratio = 0
total_lambdas = np.sum(lambdas)
while k < len(lambdas) and total_variance_ratio < .7:
    k += 1
    total_variance_ratio += lambdas_tries[k] / total_lambdas
    #print(total_variance_ratio)
print('variance gardée avec ', k, ' composantes : ', total_variance_ratio)
variance gardée avec  8  composantes :  0.771926899602811
In [115]:
base = vects_tries[0:k]
base
Out[115]:
array([[-2.82405959e-03, -8.69449463e-03,  5.76025882e-02,
         7.47965444e-01,  3.54883017e-01,  6.27214044e-02,
         2.35648071e-01, -2.78743884e-01, -2.54656023e-02,
        -4.16405418e-01],
       [-6.38628187e-03,  3.41592515e-02,  3.14050383e-01,
         1.27196948e-01, -7.55652035e-02,  3.51243496e-01,
        -1.67536037e-01, -4.14879562e-01, -5.71482092e-01,
         4.77617426e-01],
       [ 5.28719855e-03,  1.13516016e-02,  8.54204352e-01,
         1.19679722e-01, -2.90747906e-01, -2.30972498e-01,
         6.47300942e-02,  3.10513421e-01,  4.53608797e-02,
        -1.23717652e-01],
       [-9.99407657e-01,  2.97114135e-02, -3.36209441e-03,
         4.30396900e-03, -5.09225094e-03, -6.29565899e-03,
        -1.31948993e-03,  1.39741312e-02, -2.41872184e-03,
        -1.81783132e-03],
       [-1.98719474e-03, -3.09476277e-03,  2.36113642e-01,
        -1.80846829e-02,  6.96836031e-01,  3.60413001e-01,
        -2.68092342e-01,  3.87935257e-01,  2.37841288e-01,
         2.22477042e-01],
       [-5.11196436e-03,  1.06241811e-02,  8.84001248e-02,
        -2.63837264e-01, -1.02757366e-01,  6.17680496e-01,
         7.18235455e-01,  2.88186068e-02,  8.98298109e-02,
        -7.47393083e-02],
       [ 1.88315361e-03,  4.26599751e-03,  8.31787431e-03,
        -1.31083726e-01,  4.32616070e-01, -5.13094949e-01,
         5.28746574e-01,  1.16987175e-01, -3.84926798e-01,
         3.01455560e-01],
       [ 2.98639690e-02,  9.98713405e-01, -2.02516709e-02,
         8.17628540e-03,  8.61138804e-03, -1.36307198e-02,
        -9.27742925e-04,  5.55767489e-03,  2.85218571e-02,
        -9.87411070e-03]])
  1. Et on projette sur la nouvelle base :
In [116]:
Z_new = np.dot(base, Z.T)
In [62]:
#Z_new
Application simple (pour comprendre), d'une dim=2 à une dim=1 :¶
In [117]:
X, _ = make_classification(n_samples=200, n_features=2, n_classes=2, n_redundant=0, n_informative=2, random_state=1)
Z = X - np.mean(X)
S = np.cov(Z, rowvar=False)
ls, vs = eig(S)
idx = np.argsort(ls)[::-1]
ls_tries = ls[idx]
vs_tries = vs[:, idx]
print('pourcentage de variance gardée : {:2.2%}'.format(ls_tries[0]/np.sum(ls_tries)))
Z_new = np.dot(vs_tries[0], Z.T)
pourcentage de variance gardée : 60.26%
In [118]:
import matplotlib.pyplot as plt
plt.grid()
plt.scatter(Z[:,0], Z[:,1])
Out[118]:
<matplotlib.collections.PathCollection at 0xffff63a30390>
No description has been provided for this image
In [119]:
plt.grid()
plt.scatter(Z_new, [np.mean(Z_new) for i in range(len(Z_new))])
Out[119]:
<matplotlib.collections.PathCollection at 0xffff60f62150>
No description has been provided for this image
In [121]:
def droite_vd(x, v, p): # une fonction qui génère une droite de vecteur directeur v passant par le point p
    # rappel : l'équation est donnée par y = v2/v1 * (x - p0) + p1
    y = v[1] / v[0] * (x - p[0]) + p[1]
    return y
v = vs_tries[0]
print(v)
z_bar = np.mean(Z, axis=0)
print(z_bar)
#ZZ = np.dot(Z_new + 
y_new = droite_vd(Z, v, z_bar) 
len(y_new)
[-0.40421611 -0.91466351]
[ 0.01591383 -0.01591383]
Out[121]:
200
In [122]:
plt.grid()
plt.scatter(Z[:,0], Z[:,1])
#plt.scatter(Z_new, [np.mean(Z_new) for i in range(len(Z_new))])
plt.scatter(Z, y_new, marker='x')
Out[122]:
<matplotlib.collections.PathCollection at 0xffff60fed010>
No description has been provided for this image

ACP avec sklearn¶

Données¶

In [97]:
from sklearn.datasets import make_classification
X, _ = make_classification(n_samples=20000, n_features=10, n_classes=2, n_redundant=0, n_informative=2, random_state=1)
In [ ]:
 

Prétraitement¶

In [98]:
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
Z = scaler.fit_transform(X)

ACP¶

In [99]:
from sklearn.decomposition import PCA
pca = PCA()
pca.fit(Z)
pca.explained_variance_ratio_
Out[99]:
array([0.10369607, 0.10208325, 0.10154752, 0.10034768, 0.10011433,
       0.1000217 , 0.09963678, 0.09815812, 0.09770615, 0.09668838])

On cherche le nombre de composantes à garder, pour cela, nous utilisons le critère du coude :

In [100]:
import matplotlib.pyplot as plt
%matplotlib inline

plt.grid()
plt.plot(range(1, Z.shape[1] + 1), pca.explained_variance_) 
Out[100]:
[<matplotlib.lines.Line2D at 0xffff63c565d0>]
No description has been provided for this image
In [103]:
k = 5
pca = PCA(n_components=4)
pca.fit_transform(Z)
np.sum(pca.explained_variance_ratio_)
Out[103]:
0.40767452580168706
In [ ]: