CNN with keras

In this notebook, we will use keras to design and train a neural network with a CNN architecture.

Loading, splitting, and preprocessing the data in general remain the same as before. What essentially changes is the architecture of the network.

In [1]:
import tensorflow as tf
import keras
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from keras.datasets import mnist
Using TensorFlow backend.
In [2]:
(X_train, Y_train), (X_test, Y_test) = mnist.load_data()

#To feed our values into the network's Conv2D layer, we need to reshape the datasets, i.e.,
# pass from (60000, 28, 28) to (60000, 28, 28, 1) where 1 is the number of channels of our images
img_rows, img_cols = X_train.shape[1], X_train.shape[2]
X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)

X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
Y_train = Y_train.astype('float32')
Y_test = Y_test.astype('float32')
X_train  = X_train / 255
X_test  = X_test / 255
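A quick sanity check on the resulting shapes (a minimal sketch; assumes the cell above has run):

print(X_train.shape)   # expected: (60000, 28, 28, 1)
print(X_test.shape)    # expected: (10000, 28, 28, 1)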

Here the actual use of CNNs begins. We need to import a few more components from the keras library:

In [3]:
from keras.models import Sequential
from keras.layers import Dense, Dropout, Conv2D, MaxPooling2D, Flatten
from keras.optimizers import Adam  # canonical class name; compile() below uses the 'adam' string shortcut

We first one-hot encode the labels, then define our network:

In [4]:
num_classes = 10

#Convert class vectors to binary class matrices ("one hot encoding")
## Doc : https://keras.io/utils/#to_categorical
Y_train = keras.utils.to_categorical(Y_train, num_classes)
Y_test = keras.utils.to_categorical(Y_test, num_classes)
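To see what to_categorical produces, here is a small illustration (a sketch, separate from the pipeline): each integer label becomes a vector with a 1 at the label's index and 0 elsewhere.

print(keras.utils.to_categorical([0, 1, 3], num_classes=4))
# [[1. 0. 0. 0.]
#  [0. 1. 0. 0.]
#  [0. 0. 0. 1.]]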
In [5]:
def cnn():
    model = Sequential()
    model.add(Conv2D(32,
                     kernel_size=(3,3),
                     activation='relu',
                     input_shape=(28, 28, 1)))
    model.add(Conv2D(64,
                     kernel_size=(3,3),
                     activation='relu'))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dense(num_classes, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
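Note that Dropout is imported above but never used in cnn(). A common variant (a sketch, not the model trained below) inserts Dropout layers to randomly zero activations during training, which helps against the overfitting visible in the training logs further down:

def cnn_with_dropout():
    model = Sequential()
    model.add(Conv2D(32, kernel_size=(3,3), activation='relu', input_shape=(28, 28, 1)))
    model.add(Conv2D(64, kernel_size=(3,3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(Dropout(0.25))   # drop 25% of the pooled activations during training
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(0.5))    # stronger dropout before the output layer
    model.add(Dense(num_classes, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model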

We then create our network:

In [6]:
model = cnn()
model.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 26, 26, 32)        320       
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 24, 24, 64)        18496     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 12, 12, 64)        0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 9216)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 128)               1179776   
_________________________________________________________________
dense_2 (Dense)              (None, 10)                1290      
=================================================================
Total params: 1,199,882
Trainable params: 1,199,882
Non-trainable params: 0
_________________________________________________________________
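The parameter counts in the summary can be reproduced by hand: a Conv2D layer has kernel_height * kernel_width * input_channels weights per filter plus one bias per filter, and a Dense layer has (inputs + 1) weights per unit. A quick check:

print((3*3*1 + 1) * 32)      # conv2d_1: 320
print((3*3*32 + 1) * 64)     # conv2d_2: 18496
print((12*12*64 + 1) * 128)  # dense_1: 1179776 (12*12*64 = 9216 flattened features)
print((128 + 1) * 10)        # dense_2: 1290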

and we train it:

In [7]:
batch_size=64
epochs=10

hist = model.fit(X_train, Y_train,
            validation_data=(X_test, Y_test),
            epochs=epochs,
            batch_size=batch_size)            
Train on 60000 samples, validate on 10000 samples
Epoch 1/10
60000/60000 [==============================] - 62s 1ms/step - loss: 0.1229 - accuracy: 0.9623 - val_loss: 0.0491 - val_accuracy: 0.9835
Epoch 2/10
60000/60000 [==============================] - 65s 1ms/step - loss: 0.0389 - accuracy: 0.9878 - val_loss: 0.0344 - val_accuracy: 0.9885
Epoch 3/10
60000/60000 [==============================] - 65s 1ms/step - loss: 0.0230 - accuracy: 0.9926 - val_loss: 0.0434 - val_accuracy: 0.9879
Epoch 4/10
60000/60000 [==============================] - 70s 1ms/step - loss: 0.0168 - accuracy: 0.9946 - val_loss: 0.0314 - val_accuracy: 0.9898
Epoch 5/10
60000/60000 [==============================] - 71s 1ms/step - loss: 0.0111 - accuracy: 0.9964 - val_loss: 0.0310 - val_accuracy: 0.9904
Epoch 6/10
60000/60000 [==============================] - 70s 1ms/step - loss: 0.0092 - accuracy: 0.9967 - val_loss: 0.0423 - val_accuracy: 0.9877
Epoch 7/10
60000/60000 [==============================] - 66s 1ms/step - loss: 0.0061 - accuracy: 0.9979 - val_loss: 0.0451 - val_accuracy: 0.9889
Epoch 8/10
60000/60000 [==============================] - 62s 1ms/step - loss: 0.0066 - accuracy: 0.9978 - val_loss: 0.0430 - val_accuracy: 0.9894
Epoch 9/10
60000/60000 [==============================] - 58s 970us/step - loss: 0.0054 - accuracy: 0.9982 - val_loss: 0.0480 - val_accuracy: 0.9892
Epoch 10/10
60000/60000 [==============================] - 60s 994us/step - loss: 0.0051 - accuracy: 0.9983 - val_loss: 0.0475 - val_accuracy: 0.9887
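Before evaluating on the whole test set, we can sanity-check inference on a single image (a minimal sketch; the predicted class is the argmax of the softmax output):

probs = model.predict(X_test[:1])
print('predicted digit:', probs.argmax(axis=1)[0])
print('true digit:     ', int(Y_test[0].argmax()))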
In [8]:
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test loss: ', score[0])
print('Test accuracy: ', score[1])
# plot train and validation accuracies
plt.plot(hist.history['accuracy'])
plt.plot(hist.history['val_accuracy'])  # the key is 'val_accuracy' in this Keras version, not 'val_acc'
plt.title('model accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
Test loss:  0.04750595308612313
Test accuracy:  0.9886999726295471
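The same hist.history object also records the loss curves; plotting them makes the overfitting visible (val_loss bottoms out around epoch 5 and then rises, while the training loss keeps dropping). A companion sketch:

plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('model loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend(['train', 'test'], loc='upper left')
plt.show()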