import pandas as pd
import numpy as np

pd.set_option('display.expand_frame_repr', False)

Baseline model

Replicate validation score

Replicate validation score (Exercise)

You've seen both validation and Public Leaderboard scores in the video. However, the code examples are available only for the test data. To get the validation scores, you have to repeat the same process on the holdout set.

Throughout this chapter, you will work with New York City Taxi competition data. The problem is to predict the fare amount for a taxi ride in New York City. The competition metric is the root mean squared error.
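
For reference, a minimal sketch of the metric (assuming y_true and y_pred are equal-length array-likes):

import numpy as np

def rmse(y_true, y_pred):
    # Root mean squared error: square root of the mean squared residual
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

print(rmse([5.0, 10.0], [6.0, 8.0]))  # 1.5811...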

The first goal is to evaluate the Baseline model on the validation data. You will replicate the simplest Baseline based on the mean of "fare_amount". Recall that as a validation strategy we used a 30% holdout split with validation_train as train and validation_test as holdout DataFrames. Both of them are available in your workspace.

Instructions:

  • Calculate the mean of "fare_amount" over the whole validation_train DataFrame.
  • Assign this naive prediction value to all the holdout predictions. Store them in the "pred" column.
import pandas as pd
from sklearn.model_selection import train_test_split

train = pd.read_csv('./datasets/taxi_train_chapter_4.csv')
test = pd.read_csv('./datasets/taxi_test_chapter_4.csv')

validation_train, validation_test = train_test_split(train, test_size=0.3, random_state=123)
import numpy as np
from sklearn.metrics import mean_squared_error
from math import sqrt

# Calculate the mean fare_amount on the validation_train data
naive_prediction = np.mean(validation_train['fare_amount'])

# Assign the naive prediction to all the holdout observations
validation_test = validation_test.copy()  # avoid SettingWithCopyWarning
validation_test['pred'] = naive_prediction

# Measure the local RMSE
rmse = sqrt(mean_squared_error(validation_test['fare_amount'], validation_test['pred']))
print('Validation RMSE for Baseline I model: {:.3f}'.format(rmse))
Validation RMSE for Baseline I model: 9.986

It's exactly the same number you've seen in the slides, well done! So, to avoid overfitting to the Public Leaderboard, you should fully replicate your models using the validation data. Go forward and create a couple of other baselines!

Baseline based on the date

Baseline based on the date (Exercise). We've already built 3 different baseline models. To get more practice, let's build a couple more. The first model is based on grouping variables. It's clear that the ride fare could depend on the part of the day. For example, prices could be higher during rush hours.

Your goal is to build a baseline model that will assign the average "fare_amount" for the corresponding hour. For now, you will create the model for the whole train data and make predictions for the test dataset.

The train and test DataFrames are available in your workspace. Moreover, the "pickup_datetime" column in both DataFrames is already converted to a datetime object for you.

Instructions:

  • Get the hour from the "pickup_datetime" column for the train and test DataFrames.
  • Calculate the mean "fare_amount" for each hour on the train data.
  • Make test predictions using pandas' map() method and the grouping obtained.
  • Write predictions to the file.
train['pickup_datetime'] = pd.to_datetime(train['pickup_datetime'])
test['pickup_datetime'] = pd.to_datetime(test['pickup_datetime'])

# Get pickup hour from the pickup_datetime column
train['hour'] = train['pickup_datetime'].dt.hour
test['hour'] = test['pickup_datetime'].dt.hour
# Calculate the mean fare_amount for each pickup hour
hour_groups = train.groupby('hour')['fare_amount'].mean()

# Make predictions on the test set
test['fare_amount'] = test.hour.map(hour_groups)

# Write predictions
test[['id','fare_amount']].to_csv('hour_mean_sub.csv', index=False)

validation_train, validation_test = train_test_split(train, test_size=0.3, random_state=123)

# Get the pickup hour for both validation splits (copies avoid SettingWithCopyWarning)
validation_train = validation_train.copy()
validation_test = validation_test.copy()
validation_train['hour'] = validation_train['pickup_datetime'].dt.hour
validation_test['hour'] = validation_test['pickup_datetime'].dt.hour

# Calculate the mean fare_amount for each hour on the validation_train data
hour_groups_val = validation_train.groupby('hour')['fare_amount'].mean()

# Make predictions on the holdout set
validation_test['pred2'] = validation_test.hour.map(hour_groups_val)

# Measure the local RMSE
rmse2 = sqrt(mean_squared_error(validation_test['fare_amount'], validation_test['pred2']))
print('Validation RMSE for Baseline II model: {:.3f}'.format(rmse2))
Validation RMSE for Baseline II model: 9.985

Baseline based on the Random Forest

Baseline based on the Random Forest (Exercise).

Let's build a final baseline based on the Random Forest. You've seen a huge score improvement moving from the grouping baseline to the Gradient Boosting in the video. Now, you will use sklearn's Random Forest to further improve this score.

The goal of this exercise is to take numeric features and train a Random Forest model without any tuning. After that, you could make test predictions and validate the result on the Public Leaderboard. Note that you've already got an "hour" feature which could also be used as an input to the model.

Instructions:

  • Add the "hour" feature to the list of numeric features.
  • Fit the RandomForestRegressor on the train data with the numeric features and "fare_amount" as a target.
  • Use the trained Random Forest model to make predictions on the test data.
from sklearn.ensemble import RandomForestRegressor

# Select only numeric features
features = ['pickup_longitude', 'pickup_latitude', 'dropoff_longitude',
            'dropoff_latitude', 'passenger_count', 'hour']

# Train a Random Forest model
rf = RandomForestRegressor()
rf.fit(train[features], train.fare_amount)

# Make predictions on the test data
test['fare_amount'] = rf.predict(test[features])

# Write predictions
test[['id','fare_amount']].to_csv('rf_sub.csv', index=False)

# Train a Random Forest model on the validation split
rf2 = RandomForestRegressor()
rf2.fit(validation_train[features], validation_train.fare_amount)

# Make predictions on the holdout set
validation_test['pred3'] = rf2.predict(validation_test[features])

# Measure the local RMSE
rmse3 = sqrt(mean_squared_error(validation_test['fare_amount'], validation_test['pred3']))
print('Validation RMSE for Baseline III model: {:.3f}'.format(rmse3))
Validation RMSE for Baseline III model: 5.510

Congratulations! This final baseline achieves the 1051st place on the Public Leaderboard, which is slightly better than the Gradient Boosting from the video. So, now you know how to build fast and simple baseline models to validate your initial pipeline.

Hyperparameter tuning

Once we have the baseline model results, we can start improving the pipeline: creating new features and tuning the model's hyperparameters.

Grid search (Exercise): Recall that we've created a baseline Gradient Boosting model in the previous lesson. Your goal now is to find the best max_depth hyperparameter value for this Gradient Boosting model. This hyperparameter limits the depth, and hence the number of nodes, of each individual tree. You will use K-fold cross-validation to measure the model's local performance for each hyperparameter value.

You're given a function get_cv_score(), which takes the train dataset and a dictionary of model parameters as arguments and returns the overall validation RMSE score over 3-fold cross-validation.

Instructions:

  • Specify the grid for possible max_depth values with 3, 6, 9, 12 and 15.
  • Pass each hyperparameter candidate in the grid to the model params dictionary.
from sklearn.model_selection import KFold
from sklearn.ensemble import GradientBoostingRegressor

def get_cv_score(train, params):
    # Note: uses the global `features` list defined earlier
    # Create KFold object
    kf = KFold(n_splits=3, shuffle=True, random_state=123)

    rmse_scores = []
    
    # Loop through each split
    for train_index, test_index in kf.split(train):
        cv_train, cv_test = train.iloc[train_index], train.iloc[test_index]
    
        # Train a Gradient Boosting model
        gb = GradientBoostingRegressor(random_state=123, **params).fit(cv_train[features], cv_train.fare_amount)
    
        # Make predictions on the test data
        pred = gb.predict(cv_test[features])
    
        fold_score = np.sqrt(mean_squared_error(cv_test['fare_amount'], pred))
        rmse_scores.append(fold_score)
    
    # Conservative overall estimate: mean plus one standard deviation across folds
    return np.round(np.mean(rmse_scores) + np.std(rmse_scores), 5)

max_depth_grid = [3, 6, 9, 12, 15]
results = {}

# For each value in the grid
for max_depth_candidate in max_depth_grid:
    # Specify parameters for the model
    params = {'max_depth': max_depth_candidate}

    # Calculate validation score for a particular hyperparameter
    validation_score = get_cv_score(train, params)

    # Save the results for each max depth value
    results[max_depth_candidate] = validation_score   
print(results)
{3: 5.67296, 6: 5.36925, 9: 5.35641, 12: 5.50111, 15: 5.70245}
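
To pick the winning depth programmatically, take the key with the minimal score (max_depth=9 here):

best_depth = min(results, key=results.get)
print(best_depth, results[best_depth])  # 9 5.35641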

2D grid search (Exercise)

The drawback of tuning each hyperparameter independently is a potential dependency between different hyperparameters. The better approach is to try all possible hyperparameter combinations. However, in such cases, the grid search space expands rapidly. For example, if we have 2 parameters with 10 possible values each, it will yield 100 experiment runs.
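
As a toy check of that arithmetic (not part of the exercise), itertools' product() enumerates all the combinations:

import itertools

# 2 hyperparameters with 10 candidate values each yield 10 * 10 = 100 runs
print(len(list(itertools.product(range(10), range(10)))))  # 100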

Your goal is to find the best hyperparameter couple of max_depth and subsample for the Gradient Boosting model. subsample is a fraction of observations to be used for fitting the individual trees.

You're given a function get_cv_score(), which takes the train dataset and a dictionary of model parameters as arguments and returns the overall validation RMSE score over 3-fold cross-validation.

Instructions:

  • Specify the grids for possible max_depth and subsample values. For max_depth: 3, 5 and 7. For subsample: 0.8, 0.9 and 1.0.
  • Apply the product() function from the itertools package to the hyperparameter grids. It returns all possible combinations for these two grids.
  • Pass each candidate couple of hyperparameters to the model params dictionary.
import itertools

# Hyperparameter grids
max_depth_grid = [3, 5, 7]
subsample_grid = [0.8, 0.9, 1.0]
results = {}

# For each couple in the grid
for max_depth_candidate, subsample_candidate in itertools.product(max_depth_grid, subsample_grid):
    params = {'max_depth': max_depth_candidate,
              'subsample': subsample_candidate}
    validation_score = get_cv_score(train, params)
    # Save the results for each couple
    results[(max_depth_candidate, subsample_candidate)] = validation_score   
print(results)
{(3, 0.8): 5.65813, (3, 0.9): 5.65228, (3, 1.0): 5.67296, (5, 0.8): 5.34947, (5, 0.9): 5.44506, (5, 1.0): 5.3132, (7, 0.8): 5.38994, (7, 0.9): 5.40631, (7, 1.0): 5.3591}
sorted(results.items(), key=lambda item: item[1])  # sort ascending by score
[((5, 1.0), 5.3132),
 ((5, 0.8), 5.34947),
 ((7, 1.0), 5.3591),
 ((7, 0.8), 5.38994),
 ((7, 0.9), 5.40631),
 ((5, 0.9), 5.44506),
 ((3, 0.9), 5.65228),
 ((3, 0.8), 5.65813),
 ((3, 1.0), 5.67296)]

Model ensembling

The blending approach is to take an average of multiple models' predictions.

Stacking, demonstrated in the next exercises, follows a 6-step recipe. To illustrate: train 3 models A, B and C on the Part 1 set, then use models A, B and C to predict on both the validation (Part 2) and test sets, and finally train a 2nd level model on those predictions.

Model blending

Model blending (Exercise). You will start creating model ensembles with the blending technique.

Your goal is to train 2 different models on the New York City Taxi competition data. Make predictions on the test data and then blend them using a simple arithmetic mean.

The train and test DataFrames are already available in your workspace. features is a list of columns to be used for training and it is also available in your workspace. The target variable name is "fare_amount".

Instructions:

  • Train a Gradient Boosting model on the train data using features list, and the "fare_amount" column as a target variable.
  • Train a Random Forest model in the same manner.
  • Make predictions on the test data using both Gradient Boosting and Random Forest models.
  • Find the average of both models' predictions.
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
import pandas as pd 

train = pd.read_csv('./datasets/taxi_train_distance.csv')
test = pd.read_csv('./datasets/taxi_test_distance.csv')
features = ['pickup_longitude', 'pickup_latitude', 'dropoff_longitude', 'dropoff_latitude', 
            'passenger_count', 'distance_km', 'hour']
            
# Train a Gradient Boosting model
gb = GradientBoostingRegressor().fit(train[features], train.fare_amount)

# Train a Random Forest model
rf = RandomForestRegressor().fit(train[features], train.fare_amount)

# Make predictions on the test data
test['gb_pred'] = gb.predict(test[features])
test['rf_pred'] = rf.predict(test[features])

# Find mean of model predictions
test['blend'] = (test['gb_pred'] + test['rf_pred']) / 2
print(test[['gb_pred', 'rf_pred', 'blend']].head(3))
    gb_pred  rf_pred     blend
0  9.661374    9.313  9.487187
1  9.304288    8.238  8.771144
2  5.795140    4.835  5.315070

Blending allows you to get additional score improvements almost for free, just by averaging multiple models' predictions. Now, let's explore model stacking!
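
A natural extension, not covered in this exercise, is a weighted blend; the weight w below is a hypothetical value that you would tune on the validation data:

# Hypothetical weighted blend; w = 0.6 is an assumed value, tune it on validation data
w = 0.6
test['weighted_blend'] = w * test['gb_pred'] + (1 - w) * test['rf_pred']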

Model stacking I

Model stacking I (Exercise): Now it's time for stacking. To implement the stacking approach, you will follow the 6 steps we've discussed in the previous video:

  1. Split train data into two parts
  2. Train multiple models on Part 1
  3. Make predictions on Part 2
  4. Make predictions on the test data
  5. Train a new model on Part 2 using predictions as features
  6. Make predictions on the test data using the 2nd level model

The train and test DataFrames are already available in your workspace. features is a list of columns to be used for training on the Part 1 data and is also available in your workspace. The target variable name is "fare_amount".

Instructions:

  • Split the train DataFrame into two equal parts: part_1 and part_2. Use the train_test_split() function with test_size equal to 0.5.
  • Train Gradient Boosting and Random Forest models on the part_1 data.
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor

# Split train data into two parts
part_1, part_2 = train_test_split(train, test_size=0.5, random_state=123)

# Train a Gradient Boosting model on Part 1
gb2 = GradientBoostingRegressor().fit(part_1[features], part_1.fare_amount)

# Train a Random Forest model on Part 1
rf2 = RandomForestRegressor().fit(part_1[features], part_1.fare_amount)

  • Make Gradient Boosting and Random Forest predictions on the part_2 data.
  • Make Gradient Boosting and Random Forest predictions on the test data.
# Make predictions on the Part 2 data
part_2 = part_2.copy()
part_2['gb_pred'] = gb2.predict(part_2[features])
part_2['rf_pred'] = rf2.predict(part_2[features])

# Make predictions on the test data
test = test.copy()
test['gb_pred'] = gb2.predict(test[features])
test['rf_pred'] = rf2.predict(test[features])

Model stacking II

OK, what you've done so far in the stacking implementation:

  1. Split train data into two parts
  2. Train multiple models on Part 1
  3. Make predictions on Part 2
  4. Make predictions on the test data

Now, your goal is to create a second level model using predictions from steps 3 and 4 as features. So, this model is trained on Part 2 data and then you can make stacking predictions on the test data.

part_2 and test DataFrames are already available in your workspace. Gradient Boosting and Random Forest predictions are stored in these DataFrames under the names "gb_pred" and "rf_pred", respectively.

Instructions:

  • Train a Linear Regression model on the Part 2 data using Gradient Boosting and Random Forest models predictions as features.
  • Make predictions on the test data using Gradient Boosting and Random Forest models predictions as features.
from sklearn.linear_model import LinearRegression

# Create linear regression model without the intercept
lr = LinearRegression(fit_intercept=False)

# Train 2nd level model on the Part 2 data
lr.fit(part_2[['gb_pred', 'rf_pred']], part_2.fare_amount)

# Make stacking predictions on the test data
test['stacking'] = lr.predict(test[['gb_pred', 'rf_pred']])

# Look at the model coefficients
print(lr.coef_)
[0.14050404 0.86266408]

Congratulations, now your toolbox contains ensembling techniques! Usually, the 2nd level model is some simple model like Linear or Logistic Regression. Also, note that the Linear Regression was fit without an intercept so that it purely combines the base models' predictions.
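
As a quick observation, the coefficients printed above sum to roughly one (0.1405 + 0.8627 ≈ 1.003), so the intercept-free 2nd level model acts like a weighted blend of the base models:

# Sanity check: without an intercept, coefficients summing to ~1 mean
# the stacking prediction is close to a weighted average of the base predictions
print(lr.coef_.sum())  # ~1.003 for the coefficients above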

Final tips

Testing Kaggle forum ideas

Testing Kaggle forum ideas (Exercise). Unfortunately, not all Forum posts and Kernels are useful for your model. So, instead of blindly incorporating ideas into your pipeline, you should test them first.

You're given a function get_cv_score(), which takes a train dataset as an argument and returns the overall validation root mean squared error over 3-fold cross-validation. The train DataFrame is already available in your workspace.

You should try different suggestions from the Kaggle Forum and check whether they improve your validation score.

  • Suggestion 1: the passenger_count feature is useless. Let's see! Drop this feature and compare the scores.
from sklearn.model_selection import KFold
from sklearn.ensemble import GradientBoostingRegressor
import numpy as np
from sklearn.metrics import mean_squared_error

def get_cv_score(train):
    features = ['pickup_longitude', 'pickup_latitude',
                'dropoff_longitude', 'dropoff_latitude',
                'passenger_count', 'distance_km', 'hour', 'weird_feature']

    # Keep only the features that are present in the given DataFrame
    features = [x for x in features if x in train.columns]
    
    # Create KFold object
    kf = KFold(n_splits=3, shuffle=True, random_state=123)

    rmse_scores = []
    
    # Loop through each split
    for train_index, test_index in kf.split(train):
        cv_train, cv_test = train.iloc[train_index], train.iloc[test_index]
    
        # Train a Gradient Boosting model
        gb = GradientBoostingRegressor(random_state=123).fit(cv_train[features], cv_train.fare_amount)
    
        # Make predictions on the test data
        pred = gb.predict(cv_test[features])
    
        fold_score = np.sqrt(mean_squared_error(cv_test['fare_amount'], pred))
        rmse_scores.append(fold_score)
    
    return np.round(np.mean(rmse_scores) + np.std(rmse_scores), 5)

# Drop the "passenger_count" feature
new_train_1 = train.drop('passenger_count', axis=1)

# Compare validation scores
initial_score = get_cv_score(train)
new_score = get_cv_score(new_train_1)

print('Initial score is {} and the new score is {}'.format(initial_score, new_score))
Initial score is 6.49932 and the new score is 6.42315

The first suggestion worked: dropping "passenger_count" improved the validation score from 6.49932 to 6.42315.

  • Suggestion 2: Sum of pickup_latitude and distance_km is a good feature. Let's try it!
new_train_2 = train.copy()

# Find sum of pickup latitude and ride distance
new_train_2['weird_feature'] = new_train_2['pickup_latitude'] + new_train_2['distance_km']

# Compare validation scores
initial_score = get_cv_score(train)
new_score = get_cv_score(new_train_2)

print('Initial score is {} and the new score is {}'.format(initial_score, new_score))
Initial score is 6.49932 and the new score is 6.50495

Be aware that not all publicly shared ideas will work for you! In this particular case, dropping the "passenger_count" feature helped, while adding the sum of pickup latitude and ride distance did not. The last action you perform in any Kaggle competition is selecting final submissions. Go on to practice it!

Select final submissions

Final thoughts