Building a Scikit-Learn Linear Model: ElasticNet

 · 10 mins read

Introduction

Go to Repository

In this notebook we will learn how to use scikit-learn to build a linear regression model. We use the Gapminder dataset, which records population and other indicators for different regions along with average life expectancy; you will see more about the dataset below. We split the data into training and test sets, build a model with scikit-learn's ElasticNet linear model, evaluate it on the test set, and use grid-search cross-validation to optimize it.

Importing Required Packages

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

from sklearn.model_selection import cross_val_score
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import ElasticNet
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV

Reading the data in

data = pd.read_csv('data/gm_2008_region.csv')
data.head()
|   | population | fertility | HIV | CO2 | BMI_male | GDP | BMI_female | life | child_mortality | Region |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 34811059.0 | 2.73 | 0.1 | 3.328945 | 24.59620 | 12314.0 | 129.9049 | 75.3 | 29.5 | Middle East & North Africa |
| 1 | 19842251.0 | 6.43 | 2.0 | 1.474353 | 22.25083 | 7103.0 | 130.1247 | 58.3 | 192.0 | Sub-Saharan Africa |
| 2 | 40381860.0 | 2.24 | 0.5 | 4.785170 | 27.50170 | 14646.0 | 118.8915 | 75.5 | 15.4 | America |
| 3 | 2975029.0 | 1.40 | 0.1 | 1.804106 | 25.35542 | 7383.0 | 132.8108 | 72.5 | 20.0 | Europe & Central Asia |
| 4 | 21370348.0 | 1.96 | 0.1 | 18.016313 | 27.56373 | 41312.0 | 117.3755 | 81.5 | 5.2 | East Asia & Pacific |

Data Exploration

Before we build our model, let's explore the data descriptively. We will visualize how the features correlate with one another, and also plot some features against life expectancy. Visualization gives us a better understanding of the data and helps us select appropriate features for the model.

# Summarize our data
data.describe()
|   | population | fertility | HIV | CO2 | BMI_male | GDP | BMI_female | life | child_mortality |
|---|---|---|---|---|---|---|---|---|---|
| count | 1.390000e+02 | 139.000000 | 139.000000 | 139.000000 | 139.000000 | 139.000000 | 139.000000 | 139.000000 | 139.000000 |
| mean | 3.549977e+07 | 3.005108 | 1.915612 | 4.459874 | 24.623054 | 16638.784173 | 126.701914 | 69.602878 | 45.097122 |
| std | 1.095121e+08 | 1.615354 | 4.408974 | 6.268349 | 2.209368 | 19207.299083 | 4.471997 | 9.122189 | 45.724667 |
| min | 2.773150e+05 | 1.280000 | 0.060000 | 0.008618 | 20.397420 | 588.000000 | 117.375500 | 45.200000 | 2.700000 |
| 25% | 3.752776e+06 | 1.810000 | 0.100000 | 0.496190 | 22.448135 | 2899.000000 | 123.232200 | 62.200000 | 8.100000 |
| 50% | 9.705130e+06 | 2.410000 | 0.400000 | 2.223796 | 25.156990 | 9938.000000 | 126.519600 | 72.000000 | 24.000000 |
| 75% | 2.791973e+07 | 4.095000 | 1.300000 | 6.589156 | 26.497575 | 23278.500000 | 130.275900 | 76.850000 | 74.200000 |
| max | 1.197070e+09 | 7.590000 | 25.900000 | 48.702062 | 28.456980 | 126076.000000 | 135.492000 | 82.600000 | 192.000000 |
# Let's select some of the features to explore more
chosen_data = data[['population', 'fertility', 'HIV', 'CO2', 'GDP', 'child_mortality', 'life']]
chosen_data.head()
|   | population | fertility | HIV | CO2 | GDP | child_mortality | life |
|---|---|---|---|---|---|---|---|
| 0 | 34811059.0 | 2.73 | 0.1 | 3.328945 | 12314.0 | 29.5 | 75.3 |
| 1 | 19842251.0 | 6.43 | 2.0 | 1.474353 | 7103.0 | 192.0 | 58.3 |
| 2 | 40381860.0 | 2.24 | 0.5 | 4.785170 | 14646.0 | 15.4 | 75.5 |
| 3 | 2975029.0 | 1.40 | 0.1 | 1.804106 | 7383.0 | 20.0 | 72.5 |
| 4 | 21370348.0 | 1.96 | 0.1 | 18.016313 | 41312.0 | 5.2 | 81.5 |
# Plot a histogram of each of these features
chosen_data.hist()
plt.show()

[Figure: histograms of the selected features]

Let's now plot each of these features against life expectancy to see their relationship with it.

plt.scatter(chosen_data.HIV, chosen_data.life)
plt.xlabel('HIV')
plt.ylabel('Life')
plt.show()

[Figure: scatter plot of HIV vs Life]

plt.scatter(chosen_data.GDP, chosen_data.life)
plt.xlabel('GDP')
plt.ylabel('Life')
plt.show()

[Figure: scatter plot of GDP vs Life]

plt.scatter(chosen_data.child_mortality, chosen_data.life)
plt.xlabel('Child Mortality')
plt.ylabel('Life')
plt.show()

[Figure: scatter plot of Child Mortality vs Life]

plt.scatter(chosen_data.CO2, chosen_data.life)
plt.xlabel('CO2')
plt.ylabel('Life')
plt.show()

[Figure: scatter plot of CO2 vs Life]

# Correlation heatmap of the features
sns.heatmap(data.corr(), square=True, cmap='RdYlGn')
plt.show()

[Figure: correlation heatmap of the features]
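To complement the heatmap, we can also rank the numeric features by their correlation with life expectancy. A minimal sketch; the numeric_only flag is an assumption for newer pandas versions, where the string-valued Region column would otherwise cause corr() to fail:

# Rank numeric features by their correlation with life expectancy
corr_with_life = data.corr(numeric_only=True)['life'].drop('life')
print(corr_with_life.sort_values(ascending=False))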

Data preprocessing

As I mentioned earlier, we will be using scikit-learn to build our model. We have to make sure that no missing values exist in our dataset, and that all of our feature data are numerical, since categorical data are not accepted directly by scikit-learn models. In the following we address and fix these issues.

# Determine whether any values are missing
data.count()
population         139
fertility          139
HIV                139
CO2                139
BMI_male           139
GDP                139
BMI_female         139
life               139
child_mortality    139
Region             139
dtype: int64
# Determine whether any values are null
data.isna().sum()
population         0
fertility          0
HIV                0
CO2                0
BMI_male           0
GDP                0
BMI_female         0
life               0
child_mortality    0
Region             0
dtype: int64
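There are no missing values in this dataset, but if there were, one simple option would be mean imputation. A hedged sketch using scikit-learn's SimpleImputer, shown only for illustration (not actually needed here):

# Hypothetical: fill missing numeric values with the column mean
# (not needed here, since data.isna().sum() shows no missing values)
from sklearn.impute import SimpleImputer

numeric_cols = data.select_dtypes(include='number').columns
imputer = SimpleImputer(strategy='mean')
data[numeric_cols] = imputer.fit_transform(data[numeric_cols])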
# Identify the categorical column
data.Region.head()
0    Middle East & North Africa
1            Sub-Saharan Africa
2                       America
3         Europe & Central Asia
4           East Asia & Pacific
Name: Region, dtype: object
# Convert the categorical Region column into dummy variables
data = pd.get_dummies(data, drop_first=True)
data.head()
|   | population | fertility | HIV | CO2 | BMI_male | GDP | BMI_female | life | child_mortality | Region_East Asia & Pacific | Region_Europe & Central Asia | Region_Middle East & North Africa | Region_South Asia | Region_Sub-Saharan Africa |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 34811059.0 | 2.73 | 0.1 | 3.328945 | 24.59620 | 12314.0 | 129.9049 | 75.3 | 29.5 | 0 | 0 | 1 | 0 | 0 |
| 1 | 19842251.0 | 6.43 | 2.0 | 1.474353 | 22.25083 | 7103.0 | 130.1247 | 58.3 | 192.0 | 0 | 0 | 0 | 0 | 1 |
| 2 | 40381860.0 | 2.24 | 0.5 | 4.785170 | 27.50170 | 14646.0 | 118.8915 | 75.5 | 15.4 | 0 | 0 | 0 | 0 | 0 |
| 3 | 2975029.0 | 1.40 | 0.1 | 1.804106 | 25.35542 | 7383.0 | 132.8108 | 72.5 | 20.0 | 0 | 1 | 0 | 0 | 0 |
| 4 | 21370348.0 | 1.96 | 0.1 | 18.016313 | 27.56373 | 41312.0 | 117.3755 | 81.5 | 5.2 | 1 | 0 | 0 | 0 | 0 |
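As a quick check, every column should now be numeric (depending on your pandas version, the dummy columns may be integer or boolean):

# Confirm that no object (string) columns remain
print(data.dtypes)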

Creating train and test sets

Train/test split involves partitioning the dataset into mutually exclusive training and testing sets. We then train with the training set and test with the testing set. This provides a more accurate evaluation of out-of-sample accuracy, because the testing set is not part of the data used to train the model, which is more realistic for real-world problems.

This means that we know the outcome of each data point in the test set, making it great to test with. And since this data has not been used to train the model, the model has no knowledge of the outcome of these points. So, in essence, it is truly out-of-sample testing.

# Choosing target feature
y = data[['life']]
print(y.head(3))

# Choosing input features
X = data.drop('life', axis=1)
print(X.head(3))
   life
0  75.3
1  58.3
2  75.5
   population  fertility  HIV       CO2  BMI_male      GDP  BMI_female  \
0  34811059.0       2.73  0.1  3.328945  24.59620  12314.0    129.9049   
1  19842251.0       6.43  2.0  1.474353  22.25083   7103.0    130.1247   
2  40381860.0       2.24  0.5  4.785170  27.50170  14646.0    118.8915   

   child_mortality  Region_East Asia & Pacific  Region_Europe & Central Asia  \
0             29.5                           0                             0   
1            192.0                           0                             0   
2             15.4                           0                             0   

   Region_Middle East & North Africa  Region_South Asia  \
0                                  1                  0   
1                                  0                  0   
2                                  0                  0   

   Region_Sub-Saharan Africa  
0                          0  
1                          1  
2                          0  
# Create train and test sets
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.4, random_state=42)
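A quick sanity check on the split: with 139 rows and test_size=0.4 we expect roughly 83 training rows and 56 test rows, each with 13 feature columns:

# Verify the shapes of the train and test sets
print(X_train.shape, X_test.shape)
print(y_train.shape, y_test.shape)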

Building and Evaluating the Model

As mentioned at the beginning, we use ElasticNet with grid-search cross-validation. You can find a thorough explanation of both in the scikit-learn documentation.
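For reference, ElasticNet combines the L1 (lasso) and L2 (ridge) penalties. In scikit-learn's parameterization, the objective it minimizes is

1 / (2 * n_samples) * ||y - Xw||^2_2 + alpha * l1_ratio * ||w||_1 + 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2

so l1_ratio = 1 gives a pure lasso penalty and l1_ratio = 0 a pure ridge-style penalty. Below we tune only l1_ratio with grid search, leaving alpha at its default of 1.0.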

# Create the hyperparameter grid
l1_space = np.linspace(0, 1, 30)
param_grid = {'l1_ratio': l1_space}

# Instantiate the ElasticNet regressor: elastic_net
elastic_net = ElasticNet()

# Setup the GridSearchCV object: gm_cv
gm_cv = GridSearchCV(elastic_net, param_grid, cv=5)

# Fit it to the training data
gm_cv.fit(X_train, y_train)

# Predict on the test set and compute metrics
y_pred = gm_cv.predict(X_test)
r2 = gm_cv.score(X_test, y_test)
mse = mean_squared_error(y_test, y_pred)
print("Tuned ElasticNet l1 ratio: {}".format(gm_cv.best_params_))
print("Tuned ElasticNet R squared: {}".format(r2))
print("Tuned ElasticNet MSE: {}".format(mse))
Tuned ElasticNet l1 ratio: {'l1_ratio': 0.0}
Tuned ElasticNet R squared: 0.8697529413665848
Tuned ElasticNet MSE: 9.837193188072188
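The tuned l1_ratio of 0.0 means the grid search preferred a pure L2 (ridge-style) penalty for this data, and the model explains roughly 87% of the variance in life expectancy on the held-out test set. The fitted GridSearchCV object can be used directly for prediction; a small illustrative example using the first held-out row:

# Predict life expectancy for a single held-out sample
sample = X_test.iloc[[0]]
print(gm_cv.predict(sample))

Note that ElasticNet is sensitive to feature scale, so standardizing the features (for example with a StandardScaler inside a Pipeline) would be a natural next experiment.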