Principal Component Analysis (PCA):¶

  • How many factors should we keep, and how do we decide?
  • What is the impact on classification?

Loading the data:¶

We start by loading the dataset:

In [1]:
from sklearn.datasets import load_wine
wine = load_wine()
print(wine.keys())
dict_keys(['data', 'target', 'frame', 'target_names', 'DESCR', 'feature_names'])
In [2]:
print(wine.DESCR)
.. _wine_dataset:

Wine recognition dataset
------------------------

**Data Set Characteristics:**

    :Number of Instances: 178
    :Number of Attributes: 13 numeric, predictive attributes and the class
    :Attribute Information:
 		- Alcohol
 		- Malic acid
 		- Ash
		- Alcalinity of ash  
 		- Magnesium
		- Total phenols
 		- Flavanoids
 		- Nonflavanoid phenols
 		- Proanthocyanins
		- Color intensity
 		- Hue
 		- OD280/OD315 of diluted wines
 		- Proline

    - class:
            - class_0
            - class_1
            - class_2
		
    :Summary Statistics:
    
    ============================= ==== ===== ======= =====
                                   Min   Max   Mean     SD
    ============================= ==== ===== ======= =====
    Alcohol:                      11.0  14.8    13.0   0.8
    Malic Acid:                   0.74  5.80    2.34  1.12
    Ash:                          1.36  3.23    2.36  0.27
    Alcalinity of Ash:            10.6  30.0    19.5   3.3
    Magnesium:                    70.0 162.0    99.7  14.3
    Total Phenols:                0.98  3.88    2.29  0.63
    Flavanoids:                   0.34  5.08    2.03  1.00
    Nonflavanoid Phenols:         0.13  0.66    0.36  0.12
    Proanthocyanins:              0.41  3.58    1.59  0.57
    Colour Intensity:              1.3  13.0     5.1   2.3
    Hue:                          0.48  1.71    0.96  0.23
    OD280/OD315 of diluted wines: 1.27  4.00    2.61  0.71
    Proline:                       278  1680     746   315
    ============================= ==== ===== ======= =====

    :Missing Attribute Values: None
    :Class Distribution: class_0 (59), class_1 (71), class_2 (48)
    :Creator: R.A. Fisher
    :Donor: Michael Marshall (MARSHALL%PLU@io.arc.nasa.gov)
    :Date: July, 1988

This is a copy of UCI ML Wine recognition datasets.
https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data

The data is the results of a chemical analysis of wines grown in the same
region in Italy by three different cultivators. There are thirteen different
measurements taken for different constituents found in the three types of
wine.

Original Owners: 

Forina, M. et al, PARVUS - 
An Extendible Package for Data Exploration, Classification and Correlation. 
Institute of Pharmaceutical and Food Analysis and Technologies,
Via Brigata Salerno, 16147 Genoa, Italy.

Citation:

Lichman, M. (2013). UCI Machine Learning Repository
[https://archive.ics.uci.edu/ml]. Irvine, CA: University of California,
School of Information and Computer Science. 

|details-start|
**References**
|details-split|

(1) S. Aeberhard, D. Coomans and O. de Vel, 
Comparison of Classifiers in High Dimensional Settings, 
Tech. Rep. no. 92-02, (1992), Dept. of Computer Science and Dept. of  
Mathematics and Statistics, James Cook University of North Queensland. 
(Also submitted to Technometrics). 

The data was used with many others for comparing various 
classifiers. The classes are separable, though only RDA 
has achieved 100% correct classification. 
(RDA : 100%, QDA 99.4%, LDA 98.9%, 1NN 96.1% (z-transformed data)) 
(All results using the leave-one-out technique) 

(2) S. Aeberhard, D. Coomans and O. de Vel, 
"THE CLASSIFICATION PERFORMANCE OF RDA" 
Tech. Rep. no. 92-01, (1992), Dept. of Computer Science and Dept. of 
Mathematics and Statistics, James Cook University of North Queensland. 
(Also submitted to Journal of Chemometrics).

|details-end|
In [3]:
X = wine.data
X.shape
Out[3]:
(178, 13)
In [4]:
import numpy as np
y = wine.target
print(np.unique(y))
[0 1 2]
In [5]:
print(wine.target_names)
['class_0' 'class_1' 'class_2']
In [6]:
print(wine.feature_names)
['alcohol', 'malic_acid', 'ash', 'alcalinity_of_ash', 'magnesium', 'total_phenols', 'flavanoids', 'nonflavanoid_phenols', 'proanthocyanins', 'color_intensity', 'hue', 'od280/od315_of_diluted_wines', 'proline']

To recap:¶

  • There are 178 instances.
  • There are 13 features.
  • Each instance belongs to class 0, 1, or 2.

Data preprocessing:¶

We center and scale the original data:

In [7]:
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
z = scaler.fit_transform(X)
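
As a quick sanity check (a sketch, not part of the original notebook), each standardized column should now have a mean close to 0 and a standard deviation close to 1:

print(z.mean(axis=0).round(3))   # column means, expected ≈ 0
print(z.std(axis=0).round(3))    # column standard deviations, expected ≈ 1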

And now we run the PCA:¶

In [8]:
from sklearn.decomposition import PCA
pca = PCA()
print(pca)
PCA()
In [9]:
pca.fit_transform(z)
Out[9]:
array([[ 3.31675081e+00, -1.44346263e+00, -1.65739045e-01, ...,
        -4.51563395e-01,  5.40810414e-01, -6.62386309e-02],
       [ 2.20946492e+00,  3.33392887e-01, -2.02645737e+00, ...,
        -1.42657306e-01,  3.88237741e-01,  3.63650247e-03],
       [ 2.51674015e+00, -1.03115130e+00,  9.82818670e-01, ...,
        -2.86672847e-01,  5.83573183e-04,  2.17165104e-02],
       ...,
       [-2.67783946e+00, -2.76089913e+00, -9.40941877e-01, ...,
         5.12492025e-01,  6.98766451e-01,  7.20776948e-02],
       [-2.38701709e+00, -2.29734668e+00, -5.50696197e-01, ...,
         2.99821968e-01,  3.39820654e-01, -2.18657605e-02],
       [-3.20875816e+00, -2.76891957e+00,  1.01391366e+00, ...,
        -2.29964331e-01, -1.88787963e-01, -3.23964720e-01]])
In [10]:
print(pca.n_components_)
13

By default, PCA creates as many components as there are original variables:

In [11]:
print(pca.explained_variance_)
[4.73243698 2.51108093 1.45424187 0.92416587 0.85804868 0.64528221
 0.55414147 0.35046627 0.29051203 0.25232001 0.22706428 0.16972374
 0.10396199]

So 100% of the variability is explained (well, 99.99%, but close enough...):

In [12]:
print(pca.explained_variance_ratio_.sum())
1.0
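
To see how this variance accumulates component by component (again a sketch, not an original cell), we can print the cumulative sum of the ratios:

# Cumulative share of the total variance explained by the first k components.
print(np.cumsum(pca.explained_variance_ratio_))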

We can retrieve all the eigenvalues (recall that the eigenvalues are nothing other than the variances along the principal axes):

In [14]:
eigval = pca.explained_variance_
p = X.shape[1]
p
Out[14]:
13
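
As a quick check (a sketch, not an original cell), these eigenvalues should match the variances of the standardized data projected on the principal axes (scikit-learn uses the unbiased estimator, hence ddof=1):

# The eigenvalues reported by PCA are the variances of the projected data.
scores = pca.transform(z)  # coordinates of the observations on the 13 axes
print(np.allclose(scores.var(axis=0, ddof=1), pca.explained_variance_))  # expected: True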

We then draw the scree plot of the eigenvalues:

In [15]:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
#scree plot
plt.grid()
plt.plot(np.arange(1,p+1),eigval) 
plt.title("Scree plot") 
plt.ylabel("Eigen values") 
plt.xlabel("Factor number") 
plt.show()
[Figure: scree plot of the eigenvalues (eigenvalue vs. factor number)]

Applying the elbow criterion to this plot, we should keep 4 axes:

In [16]:
n_components = 4
pca = PCA(n_components=n_components)
pca_wine = pca.fit_transform(z)
total_variance = pca.explained_variance_ratio_.sum()
print('Total Explained Variance: ', total_variance)
print(pca_wine.shape)
Total Explained Variance:  0.7359899907589927
(178, 4)

which keeps more than 73% of the total variance.
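
For comparison, another common rule of thumb (not used in this notebook) is the Kaiser criterion: on standardized data, keep the components whose eigenvalue is greater than 1. A minimal sketch:

# Kaiser criterion: count the eigenvalues above 1 (meaningful on standardized data).
kaiser_k = int((eigval > 1).sum())
print("Components kept by the Kaiser criterion:", kaiser_k)

With the eigenvalues printed above, this would keep 3 axes, one fewer than the elbow criterion.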

However, if we want to visualize the data, we can keep only two axes:

In [17]:
n_components = 2
pca = PCA(n_components=n_components)
pca_wine = pca.fit_transform(z)
total_variance = pca.explained_variance_ratio_.sum()
print('Total Explained Variance: ', total_variance)
print(pca_wine.shape)
Total Explained Variance:  0.5540633835693528
(178, 2)

The question is then whether we lose anything for classification... Let's plot the point cloud:

In [19]:
plt.grid()
plt.scatter(pca_wine[:, 0], pca_wine[:, 1], c=y)
Out[19]:
<matplotlib.collections.PathCollection at 0xffff2eb3f9d0>
[Figure: scatter plot of the first two principal components, points colored by class]

and we can see that the classes are clearly separable...
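
To answer the classification question more directly, here is a minimal sketch (not part of the original notebook) that compares the cross-validated accuracy of a simple classifier on the 13 standardized features and on the 2 retained components; the choice of logistic regression and of 5-fold cross-validation are assumptions made for illustration:

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

clf = LogisticRegression(max_iter=1000)

# Accuracy on the full standardized data vs. on the 2 principal components.
acc_full = cross_val_score(clf, z, y, cv=5).mean()
acc_pca2 = cross_val_score(clf, pca_wine, y, cv=5).mean()
print(f"5-fold CV accuracy, 13 standardized features : {acc_full:.3f}")
print(f"5-fold CV accuracy, 2 principal components   : {acc_pca2:.3f}")

A cleaner comparison would place the StandardScaler and the PCA inside a Pipeline so that they are refit within each fold; the sketch above reuses the already transformed arrays for brevity.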
