Python Tutorial - Model complexity and overfitting

Deciding how complex a model should be is one of the most critical skills a data scientist must have, and it is the subject of this lesson. Many classifiers have extra parameters that control their flexibility, or complexity.

One example can be found in the documentation of the random forest classifier, which lists a parameter called max_depth, short for maximum depth. A random forest classifier combines the predictions from a large number of decision trees; using deeper trees makes the classifier more complex. To illustrate this, let's start by fitting one classifier with depth two and another with depth four to the credit scoring dataset from the previous lesson.
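A minimal sketch of that setup is shown below. Since the credit scoring dataset itself is not included here, synthetic data from make_classification stands in for it:

```python
# Sketch: fit a shallow and a deeper random forest.
# The synthetic data below is a stand-in for the credit scoring dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Each tree may contain at most two nested decision rules.
rf_depth2 = RandomForestClassifier(max_depth=2, random_state=0).fit(X, y)

# Deeper trees make the ensemble more complex and flexible.
rf_depth4 = RandomForestClassifier(max_depth=4, random_state=0).fit(X, y)
```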

What would a typical tree from each classifier look like? We can access individual trees using the private estimators_ attribute. Trees of depth two contain at most two nested decision rules, whereas depth four produces much deeper rules. Although these trees come from the same classifier family and the same data, they look very different. Tuning a complexity parameter is therefore treated in the same way as model selection.
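Continuing the sketch above, plot_tree from sklearn.tree can draw the first tree from each fitted forest side by side for comparison:

```python
import matplotlib.pyplot as plt
from sklearn.tree import plot_tree

# estimators_ is a fitted attribute: it holds the individual decision
# trees and only exists after the forest has been fit.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 5))
plot_tree(rf_depth2.estimators_[0], ax=ax1)
ax1.set_title("A tree from the max_depth=2 forest")
plot_tree(rf_depth4.estimators_[0], ax=ax2)
ax2.set_title("A tree from the max_depth=4 forest")
plt.show()
```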

To do this, you split your data into training and test sets, fit several classifiers of different depths to the training data, and pick the one with the best test performance. You can also keep a separate holdout dataset to get a fresh final estimate of the winning classifier's accuracy. An alternative approach is cross-validation, which splits the data into several chunks and repeats the training-test step, picking a different chunk as the test set in each round.
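A sketch of this procedure, reusing the data from above with a hypothetical range of candidate depths, might look like this:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Fit one classifier per candidate depth and keep the best test score.
best_depth, best_score = None, 0.0
for depth in range(1, 6):
    clf = RandomForestClassifier(max_depth=depth, random_state=0)
    clf.fit(X_train, y_train)
    score = clf.score(X_test, y_test)  # accuracy on held-out test data
    if score > best_score:
        best_depth, best_score = depth, score

print(f"best depth: {best_depth}, test accuracy: {best_score:.3f}")
```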

Using this technique, each round uses one chunk as test data, shown here in yellow, while the remaining data, shown in blue, is used for training. Accuracy is averaged over all rounds, making this technique more stable. Cross-validation is implemented as cross_val_score in the scikit-learn model_selection module. The function takes as input a classifier instance and the full data, X and y, which it then proceeds to split several times, three times by default. The result is three estimates of accuracy, one for each run, which can be averaged using mean from NumPy.
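A short example of this call, continuing with the data above (cv=3 is set explicitly here, since newer scikit-learn versions default to five folds):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

clf = RandomForestClassifier(max_depth=4, random_state=0)

scores = cross_val_score(clf, X, y, cv=3)  # one accuracy per round
print(scores)           # three fold-level accuracy estimates
print(np.mean(scores))  # averaged accuracy over all rounds
```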

To easily optimize a hyperparameter like tree depth using cross-validation, you can use GridSearchCV. It takes as input a dictionary of parameters and values to try out, and a classifier instance. The resulting object is fitted to the entire dataset and stores the best-performing values in an attribute called best_params_.
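A sketch of tuning the depth this way:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Dictionary of parameter names and candidate values to try out.
param_grid = {"max_depth": list(range(1, 11))}

grid = GridSearchCV(
    RandomForestClassifier(random_state=0), param_grid, cv=3
)
grid.fit(X, y)  # cross-validates every candidate depth

print(grid.best_params_)  # the best-performing depth, e.g. {'max_depth': 4}
```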

Now, let's review the accuracy of our random forest as the depth ranges from 1 to 10. Accuracy on the same data used for training, known as in-sample accuracy, is shown here in blue. As the trees become deeper, the classifier becomes complex enough to almost memorize the training data; this way, it can reach nearly one hundred percent in-sample accuracy. Performance measured using cross-validation is also known as out-of-sample accuracy.

However, out-of-sample accuracy is much lower than in-sample performance, and it is a much more realistic estimate of future performance. The most important observation is that out-of-sample performance actually drops again at larger depths due to overfitting: trying too hard to memorize the training data leads to worse performance on the test data. This also happens in real life: if you memorize the answers to past exam questions, you will only do well on the exam if the same questions appear in exactly the same wording.
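The two curves described above can be reproduced with a sketch like the following, which computes both accuracies for each depth on the stand-in data:

```python
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

depths = range(1, 11)
in_sample, out_of_sample = [], []
for depth in depths:
    clf = RandomForestClassifier(max_depth=depth, random_state=0)
    # In-sample: score on the very data the model was fit to (optimistic).
    in_sample.append(clf.fit(X, y).score(X, y))
    # Out-of-sample: averaged cross-validated accuracy (realistic).
    out_of_sample.append(cross_val_score(clf, X, y, cv=3).mean())

plt.plot(depths, in_sample, label="in-sample accuracy")
plt.plot(depths, out_of_sample, label="out-of-sample (CV) accuracy")
plt.xlabel("max_depth")
plt.ylabel("accuracy")
plt.legend()
plt.show()
```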

You're already wiser than the average data scientist, because you know that complex models are not always better than simple ones. The exercises that follow will help you develop more intuition about this.

"WEBVTTKind: captionsLanguage: encongratulations are deciding to continue with this course deciding how complex a model should be is one of the most critical skills a data scientist must have and is the subject of this lesson often classifiers have extra parameters that control their flexibility or complexity for example inspecting the documentation of the random forest classifier you will notice an entry for max underscore depth this stands for maximum depth a random forest classifier combines the predictions from a large number of decision trees using deeper trees makes the classifier more complex let's start by fitting a classifier with depth two and one with depth four to the credit scoring data set from the previous lesson how would a typical tree from each classifier look like we can access individual trees using the private estimators underscore attribute trees of depth to contain at most two nested decision rules whereas depth 4 produces much deeper rules although these trees come from the same classifier family and data they look very different tuning a complexity parameter is treated in the same way as model selection you need to split your data into training and test fit several classifiers of different depths to the training data and pick the one with the best test performance you can also keep a separate holdout dataset in order to get a fresh final estimate of the accuracy of the winning classifier an alternative approach is cross-validation which splits the data into several chunks and repeats the training test step by picking a different chunk per round to use us test data shown here in yellow while using the remaining data for training shown in blue accuracy is averaged over all rounds making this technique more stable cross-validation is implemented as Crossville score in the psychic learn model selection module the function takes as input a classifier instance and the full data x and y which it then proceeds to split several times three times by default the result are three estimates of accuracy one for each run that can be averaged using mean from numpy to easily optimize a hyper parameter like tree depth using cross-validation you can use the function grid search CV which takes as input a dictionary of parameters and values to try out and a classifier instance the resulting object is fitted to the entire dataset and stores the best-performing values in an attribute called underscore best underscore params let's now review the accuracy of our random forest as the depth ranges from 1 to 10 accuracy on the same data used for training known as in sample accuracy is shown here in blue as the trees become deeper the classifier becomes so complex that it can now almost memorize the training data this way it can reach a hundred percent in sample accuracy performance using cross-validation also known as out-of-sample accuracy is much lower than in sample performance and a much more realistic estimate of future performance the most important observation is that out-of-sample performance actually drops for depths greater than 10 due to overfitting trying too hard to memorize the training data leads to worse performance on the test data this also happens in real life if you memorize the answers to past exam questions you will only do well on the exam if the same questions appear in exactly the same wording you're already wiser than the average data scientist because you know that complex models are not always better than simple ones the exercises that follow lets you develop more 
intuition about this insidecongratulations are deciding to continue with this course deciding how complex a model should be is one of the most critical skills a data scientist must have and is the subject of this lesson often classifiers have extra parameters that control their flexibility or complexity for example inspecting the documentation of the random forest classifier you will notice an entry for max underscore depth this stands for maximum depth a random forest classifier combines the predictions from a large number of decision trees using deeper trees makes the classifier more complex let's start by fitting a classifier with depth two and one with depth four to the credit scoring data set from the previous lesson how would a typical tree from each classifier look like we can access individual trees using the private estimators underscore attribute trees of depth to contain at most two nested decision rules whereas depth 4 produces much deeper rules although these trees come from the same classifier family and data they look very different tuning a complexity parameter is treated in the same way as model selection you need to split your data into training and test fit several classifiers of different depths to the training data and pick the one with the best test performance you can also keep a separate holdout dataset in order to get a fresh final estimate of the accuracy of the winning classifier an alternative approach is cross-validation which splits the data into several chunks and repeats the training test step by picking a different chunk per round to use us test data shown here in yellow while using the remaining data for training shown in blue accuracy is averaged over all rounds making this technique more stable cross-validation is implemented as Crossville score in the psychic learn model selection module the function takes as input a classifier instance and the full data x and y which it then proceeds to split several times three times by default the result are three estimates of accuracy one for each run that can be averaged using mean from numpy to easily optimize a hyper parameter like tree depth using cross-validation you can use the function grid search CV which takes as input a dictionary of parameters and values to try out and a classifier instance the resulting object is fitted to the entire dataset and stores the best-performing values in an attribute called underscore best underscore params let's now review the accuracy of our random forest as the depth ranges from 1 to 10 accuracy on the same data used for training known as in sample accuracy is shown here in blue as the trees become deeper the classifier becomes so complex that it can now almost memorize the training data this way it can reach a hundred percent in sample accuracy performance using cross-validation also known as out-of-sample accuracy is much lower than in sample performance and a much more realistic estimate of future performance the most important observation is that out-of-sample performance actually drops for depths greater than 10 due to overfitting trying too hard to memorize the training data leads to worse performance on the test data this also happens in real life if you memorize the answers to past exam questions you will only do well on the exam if the same questions appear in exactly the same wording you're already wiser than the average data scientist because you know that complex models are not always better than simple ones the exercises that follow lets you develop more 
intuition about this inside\n"