A decision tree is a tree structure in which each internal node represents a feature, each branch represents a decision rule, and each leaf node represents an outcome. The topmost node of the tree is called the root node. The tree learns to partition the data on the basis of attribute values, and it does so recursively; this process is called recursive partitioning. The resulting structure supports decision making: it can be visualized as a flowchart-like diagram that mimics human-level reasoning, which is why decision trees are easy to understand and interpret.
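To make this concrete, here is a small optional sketch (not part of the lab itself) that trains a very shallow tree on scikit-learn's built-in iris dataset and prints it as text; you can see the root node at the top, the decision rules on the branches and the predicted classes at the leaves:
# A small illustration: a shallow decision tree printed as text.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
iris = load_iris()
toy_tree = DecisionTreeClassifier(max_depth=2, random_state=0)
toy_tree.fit(iris.data, iris.target)
print(export_text(toy_tree, feature_names=list(iris.feature_names)))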
For this lab, we will use the pima-indians-diabetes dataset. It is well described at the following address:
As usual, we need to import the necessary Python modules:
import numpy as np
import pandas as pd
Two additional modules for data visualisation:
from matplotlib import pyplot as plt
import seaborn as sns
sns.set()
And then, load the dataset:
col_names = ['pregnant', 'glucose', 'bp', 'skin', 'insulin', 'bmi', 'pedigree', 'age', 'label']
pima = pd.read_csv('https://www.labri.fr/~zemmari/datasets/pima-indians-diabetes.csv',
                   header=None, names=col_names)
pima.head()
Then, we select the feature columns and the target variable:
feature_cols = ['pregnant', 'insulin', 'bmi', 'age', 'glucose', 'bp', 'pedigree']
X = pima[feature_cols]
y = pima.label
Now, we will split the dataset into two subsets: one for training and the other for testing. For this, we import the necessary function:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3, random_state=109)
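As a side note, the split above is purely random; if you want both subsets to keep the same proportion of diabetic and non-diabetic examples, train_test_split accepts a stratify argument. This is only an optional variant, the rest of the lab keeps the split above:
# Optional variant (not used below): a stratified split keeps the class
# proportions of y in both the training and the test subsets.
X_tr_s, X_te_s, y_tr_s, y_te_s = train_test_split(X, y, test_size=.3,
                                                  random_state=109, stratify=y)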
Now, we can train our supervised learning model using a decision tree classifier:
import warnings
warnings.filterwarnings("ignore")
from sklearn import tree
from sklearn import metrics
dt = tree.DecisionTreeClassifier()
dt.fit(X_train, y_train)
Once the model is trained, we can use it to predict the values for the test subset:
y_pred = dt.predict(X_test)
print(y_pred.shape)
print(y_test.shape)
We can compute the confusion matrix:
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
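Since seaborn was imported above, we can also display the confusion matrix as a heatmap, which is often easier to read than the raw array (this is optional):
# Optional: display the confusion matrix as an annotated heatmap.
plt.figure(figsize=(4, 3))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues',
            xticklabels=['0', '1'], yticklabels=['0', '1'])
plt.xlabel('Predicted label')
plt.ylabel('True label')
plt.show()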
And then, compute the accuracy of the model. Note that this time we can directly use the sklearn metrics module, which gives us the accuracy of the model:
scores = metrics.accuracy_score(y_test, y_pred)
print('Accuracy: ','{:2.2%}'.format(scores))
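Accuracy alone can be misleading on a dataset where the classes are imbalanced; if you also want per-class precision and recall, sklearn provides an optional report:
# Optional: precision, recall and F1-score for each class.
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))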
We can also visualize the trained tree.
from sklearn.tree import export_graphviz
from io import StringIO  # sklearn.externals.six was removed in recent scikit-learn versions
from IPython.display import Image
import pydotplus
dot_data = StringIO()
export_graphviz(dt, out_file=dot_data,
                filled=True, rounded=True,
                special_characters=True, feature_names=feature_cols, class_names=['0', '1'])
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
graph.write_png('diabetes.png')
Image(graph.create_png())
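If pydotplus or Graphviz is not installed on your machine, a simpler alternative (assuming a reasonably recent scikit-learn) is the built-in plot_tree function, which only needs matplotlib:
# Alternative visualization without Graphviz (scikit-learn >= 0.21).
from sklearn.tree import plot_tree
plt.figure(figsize=(20, 10))
plot_tree(dt, feature_names=feature_cols, class_names=['0', '1'],
          filled=True, rounded=True)
plt.show()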
Random forest is a supervised learning algorithm. It is one of the most flexible and easiest-to-use algorithms. A forest is made up of trees and, roughly speaking, the more trees it has, the more robust the forest is. A random forest builds decision trees on randomly selected data samples, gets a prediction from each tree and selects the best solution by voting. It also provides a pretty good indicator of feature importance.
In the previous section, we used a decision tree classifier and got an accuracy of 69.70%.
Let's try to do better and use a random forest classifier:
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier()
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)
scores = metrics.accuracy_score(y_test, y_pred)
print('Accuracy: ','{:2.2%}'.format(scores))
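As mentioned above, a random forest also gives an estimate of how important each feature is. Here is a short optional sketch that displays the importances of our features (the exact values will vary from run to run, since the forest is randomized):
# Feature importances estimated by the trained random forest.
importances = pd.Series(rf.feature_importances_, index=feature_cols).sort_values(ascending=False)
print(importances)
importances.plot(kind='bar')
plt.ylabel('Importance')
plt.show()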
Not so bad. We improved the accuracy of our model. Let's take a look at the parameters of our model:
rf
We can see that we have many parameters ... Let's change some of them and see what happens:
rf = RandomForestClassifier(bootstrap=True, max_depth=None)
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)
scores = metrics.accuracy_score(y_test, y_pred)
print('Accuracy: ','{:2.2%}'.format(scores))
We could try many parameter combinations by hand, but this would be time-consuming. Instead, we can use a randomized search over a parameter grid to look for good hyperparameters:
import numpy as np
from sklearn.model_selection import RandomizedSearchCV
# Number of trees in random forest
n_estimators = [int(x) for x in np.linspace(start = 200, stop = 2000, num = 10)]
# Number of features to consider at every split
max_features = ['sqrt', 'log2']  # 'auto' was removed in recent scikit-learn versions
# Maximum number of levels in tree
max_depth = [int(x) for x in np.linspace(10, 110, num = 11)]
max_depth.append(None)
# Minimum number of samples required to split a node
min_samples_split = [2, 5, 10]
# Minimum number of samples required at each leaf node
min_samples_leaf = [1, 2, 4]
# Method of selecting samples for training each tree
bootstrap = [True, False]
# Create the random grid
random_grid = {'n_estimators': n_estimators,
               'max_features': max_features,
               'max_depth': max_depth,
               'min_samples_split': min_samples_split,
               'min_samples_leaf': min_samples_leaf,
               'bootstrap': bootstrap}
print(random_grid)
# Use the random grid to search for best hyperparameters
# First create the base model to tune
rf = RandomForestClassifier()
# Random search of parameters, using 3 fold cross validation,
# search across 100 different combinations, and use all available cores
rf_random = RandomizedSearchCV(estimator=rf, param_distributions=random_grid, n_iter=100,
                               cv=3, verbose=2, random_state=42, n_jobs=-1)
rf_random.fit(X_train, y_train)
rf_random.best_params_
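Note that, instead of copying the best parameters by hand, we could also use the best estimator kept by RandomizedSearchCV, since it is refitted on the whole training set by default:
# Optional: RandomizedSearchCV refits the best model on the training set
# (refit=True by default), so it can be used directly for prediction.
y_pred = rf_random.best_estimator_.predict(X_test)
print('Accuracy: ', '{:2.2%}'.format(metrics.accuracy_score(y_test, y_pred)))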
We can now train a new classifier with the best parameters found (the values below come from one run of the search; yours may differ slightly):
best_rf = RandomForestClassifier(bootstrap=True,
                                 max_depth=80,
                                 max_features='sqrt',
                                 min_samples_leaf=4,
                                 min_samples_split=5,
                                 n_estimators=1400)
best_rf.fit(X_train, y_train)
y_pred = best_rf.predict(X_test)
scores = metrics.accuracy_score(y_test, y_pred)
print('Accuracy: ','{:2.2%}'.format(scores))