4 Techniques for Evaluating the Performance of Deep Learning Models Using Validation


How to evaluate the performance of your model during training.


Validation is a technique in machine learning for evaluating the performance of models during training. It is done by separating the dataset into training and validation sets and then evaluating the performance of the model (a deep neural network in this case) on the validation set.

It is important to note that the validation set is quite different from the test set: the validation set is commonly used to evaluate the model’s performance while it is being trained, whereas the test set is used to evaluate the model’s performance on data it has never seen before.

This article mainly covers different techniques used in validating the model.

Why should we validate models?

This is often a pertinent question, especially as the performance of a model can be simply (and truly) evaluated on the test data after training, but testing the model while training (validation) is important for:

1. Detecting overfitting and underfitting: Overfitting and underfitting can be detected by visualizing the performance of the model (using any chosen metric) during training (but that is an entirely different story).

2. Tuning the parameters for optimal performance: It is possible for models to be too simple or too complex for the input data; validating the model gives a clearer picture of the parameter values that optimize it.

3. Getting more evaluation metrics: Test evaluation results in a single value of the model’s performance, while model validation gives a list of values showing the performance of the model on every epoch of training, which gives a larger scope of the model’s performance.

Model validation is also in line with the best practices of model evaluation in machine learning.

Now that you have seen reasons why we should validate our models, let’s discuss how we can validate deep learning models.

There are quite a few ways to do this but we will implement the common ones.

1. Manually setting aside part of the training data for validation.

This is simply done by allocating part of the training data for validation; it is usually used for considerably large datasets.

We will be using the IMDB movie rating dataset for our implementation; you can see a sketch of the implementation below.
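A minimal sketch of loading and preparing the data, assuming the Keras IMDB loader, a 10,000-word vocabulary, multi-hot vectorization, and a small dense classifier (all illustrative choices, not the author’s exact code):

```python
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import imdb

# Load the IMDB reviews, keeping only the 10,000 most frequent words
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)

def vectorize_sequences(sequences, dimension=10000):
    # Multi-hot encode each review into a fixed-length vector
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.0
    return results

x_train = vectorize_sequences(train_data)
y_train = np.asarray(train_labels).astype("float32")

# A small dense network for binary classification of reviews
model = models.Sequential([
    layers.Dense(16, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=["accuracy"])
```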

With this large number of observations, we can afford to set apart some of the training data (train_data and train_labels in this case) for validation and pass it to the validation_data parameter of the model’s fit function.

Here we have set apart 10,000 observations from the training data for validation (x_val, y_val) and passed them to the validation_data parameter.
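A sketch of the split and the call to fit, continuing from the variables defined above (the epoch count and batch size are illustrative assumptions):

```python
# Set apart the first 10,000 observations for validation
x_val = x_train[:10000]
y_val = y_train[:10000]
partial_x_train = x_train[10000:]
partial_y_train = y_train[10000:]

# Train on the remaining data and validate on the held-out set
history = model.fit(
    partial_x_train,
    partial_y_train,
    epochs=20,
    batch_size=512,
    validation_data=(x_val, y_val),
)
```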

Whatever metrics (loss, MSE, MAE, accuracy, etc.) you passed during compilation will be displayed during training. The fit method also returns a History object that stores data about everything that happened during training; its history attribute (a dictionary) can be used to get a list of metric values for the validation data across all epochs.

Getting a list of the training and validation loss from the history dictionary
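A minimal sketch, assuming the history object returned by the fit call above:

```python
# history.history is a dictionary of per-epoch metric lists
history_dict = history.history
training_loss = history_dict["loss"]
validation_loss = history_dict["val_loss"]
```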

This data can be used for any of the purposes previously discussed.

2. Using the validation_split parameter of the model’s fit function.

Rather than manually splitting the training data into training and validation sets, you can pass a float value between 0 and 1 (the fraction of the training data to be used as validation data) to the validation_split parameter of the fit function. This automatically sets aside part of the training data for validation, just as with validation_data. It is similar to what train_test_split (from the Sklearn library) does, only in this case the function automatically uses the split sets for training and validation.

This is seen below:

Using the validation_split parameter to generate the validation set
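A minimal sketch, reusing the model and data from the first technique (the 0.2 split fraction, epoch count, and batch size are illustrative):

```python
# Keras holds out the last 20% of x_train/y_train (taken before shuffling)
history = model.fit(
    x_train,
    y_train,
    epochs=20,
    batch_size=512,
    validation_split=0.2,
)
```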

3. Using the cross-validation technique.

The two methods previously mentioned are good for validating large datasets, but would perform poorly on smaller datasets. This is because only a small part of the training data would be used for validation, which can cause high variance in the validation metrics (different results depending on the part of the dataset used for validation), thereby reducing the reliability of such results.

An example is the Boston Housing Price dataset, which has only 404 training samples; even if we decided to use half of the sample data for validation, we would only have about 200 data points, which is quite small.

A better approach to this problem is to use cross-validation. This is a technique that allows us to train the model on subsets of the training data and use the complementary subset for validation.

While there are quite a number of cross-validation techniques, we will use the most common one for smaller datasets: the k-fold cross-validation technique.

This technique separates the data into k (an integer) folds and uses one fold for validation and the rest for training. It then repeats the same process with a different fold for validation until every fold has been used for both validation and training, thereby giving an average metric that properly evaluates the performance of the model on the data during training.

A visual representation of k-fold cross-validation (credit: Mohamed Alameh).

An implementation of this technique using the KFold class from the Sklearn library is shown below:

Using KFold cross-validation from the Sklearn library
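A minimal sketch using KFold from scikit-learn together with the Keras Boston Housing data (the number of folds, network architecture, and training settings are illustrative assumptions):

```python
import numpy as np
from sklearn.model_selection import KFold
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import boston_housing

(train_data, train_targets), (test_data, test_targets) = boston_housing.load_data()

# Feature-wise normalization using training statistics
mean = train_data.mean(axis=0)
std = train_data.std(axis=0)
train_data = (train_data - mean) / std

def build_model():
    model = models.Sequential([
        layers.Dense(64, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(1),
    ])
    model.compile(optimizer="rmsprop", loss="mse", metrics=["mae"])
    return model

# Train and validate on each of the k folds, then average the scores
kfold = KFold(n_splits=4, shuffle=True, random_state=42)
fold_mae = []
for train_idx, val_idx in kfold.split(train_data):
    model = build_model()
    model.fit(train_data[train_idx], train_targets[train_idx],
              epochs=100, batch_size=16, verbose=0)
    _, mae = model.evaluate(train_data[val_idx], train_targets[val_idx], verbose=0)
    fold_mae.append(mae)

print("Average validation MAE across folds:", np.mean(fold_mae))
```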

4. Using a custom k-fold function.

Perhaps, rather than using the Sklearn library’s KFold class (for various reasons), you would prefer to build your own custom k-fold function; you could do something similar to the implementation below.
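A minimal sketch of a hand-rolled k-fold loop over the same train_data/train_targets arrays and build_model helper used in the Sklearn example above (the fold count and training settings are again illustrative):

```python
import numpy as np

k = 4
num_val_samples = len(train_data) // k
all_mae_scores = []

for i in range(k):
    # Slice out the i-th fold as the validation set
    val_data = train_data[i * num_val_samples:(i + 1) * num_val_samples]
    val_targets = train_targets[i * num_val_samples:(i + 1) * num_val_samples]

    # Concatenate the remaining folds into the training set
    partial_train_data = np.concatenate(
        [train_data[:i * num_val_samples],
         train_data[(i + 1) * num_val_samples:]], axis=0)
    partial_train_targets = np.concatenate(
        [train_targets[:i * num_val_samples],
         train_targets[(i + 1) * num_val_samples:]], axis=0)

    model = build_model()  # same helper as in the Sklearn example above
    model.fit(partial_train_data, partial_train_targets,
              epochs=100, batch_size=16, verbose=0)
    _, mae = model.evaluate(val_data, val_targets, verbose=0)
    all_mae_scores.append(mae)

print("Average validation MAE:", np.mean(all_mae_scores))
```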

Conclusion

Validation is quite important for optimizing a model and testing its reliability by evaluating its performance while training.

There are a number of ways to validate our models depending on the type and volume of data we have. You can also check out other cross-validation methods and compare them with the techniques used here.
