

How to Use scikit-learn ‘eli5’ Library to Compute Permutation Importance?

Last Updated on July 20, 2023 by Editorial Team

Author(s): Abhinav Prakash

Originally published on Towards AI.

Feature Permutation Importance with ‘eli5’ | Towards AI

Understanding the workings of scikit-learn’s ‘eli5’ library to compute feature importance on a sample housing dataset and interpreting its results


Many data scientists treat their machine learning model as a black box: they don’t know what is happening under the hood.
They load their data, clean and prepare it manually, fit it to an ML model, train the model, and predict the target values (in a regression problem).

But they often cannot answer one question: which features does the model think are important?

This is where Permutation Importance comes into the picture.

What is it?

Permutation Importance is an algorithm that computes an importance score for each feature variable of a dataset.
The scores are determined by measuring how sensitive the model is to random permutations of each feature’s values.

How does it work?

The concept is really straightforward:
We measure the importance of a feature by calculating the increase in the model’s prediction error after permuting that feature’s values.
A feature is “important” if shuffling its values increases the model error, because in this case the model relied on the feature for its predictions.
A feature is “unimportant” if shuffling its values leaves the model error unchanged, because in this case the model ignored the feature.
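The idea above can be sketched in a few lines of plain scikit-learn and NumPy, using toy data where the target depends on one feature and ignores the other (all names and numbers here are illustrative, not from the article’s dataset):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# y depends only on x0; x1 is pure noise the model should ignore
X = rng.normal(size=(200, 2))
y = 3 * X[:, 0] + rng.normal(scale=0.1, size=200)

model = LinearRegression().fit(X, y)
baseline = mean_squared_error(y, model.predict(X))

increases = {}
for col in (0, 1):
    X_perm = X.copy()
    X_perm[:, col] = rng.permutation(X_perm[:, col])  # shuffle one column
    # importance = how much the error grows after the shuffle
    increases[col] = mean_squared_error(y, model.predict(X_perm)) - baseline

print(increases)  # shuffling x0 hurts a lot; shuffling x1 barely matters
```

Shuffling the column the model relies on (x0) produces a large error increase, while shuffling the ignored column (x1) leaves the error essentially unchanged.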

Should I compute importance on training data or test (validation) data?

The answer: always measure permutation importance on test data.
Permutation importance based on training data is misleading. It can make us mistakenly believe that features are important for the predictions when in reality the model was just overfitting to them and the features were not important at all.

eli5 — a Python library:-

eli5 is a Python library for inspecting and debugging machine learning models. It works with scikit-learn estimators (it is not part of scikit-learn itself) and can be used to compute permutation importance.

Cautions to take before using eli5:-

1. Permutation importance is calculated after a model has been fitted.

2. We always compute permutation importance on test (validation) data.

3. The output of eli5 is in HTML format, so it can only be rendered in an IPython notebook environment (i.e., Jupyter Notebook, Google Colab, Kaggle kernels, etc.).

Now, let us get a taste of the code 😋

I’ve built a rudimentary model (RandomForestRegressor) to predict the sale price on the housing dataset.
This is a good dataset for demonstrating permutation importance because it has a lot of features.
So, we can see which features make an impact on the predictions and which do not.
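The training step can be sketched as follows. Since the article’s CSV isn’t reproduced here, this uses a small synthetic stand-in for the housing data; the column names (OverallQual, GrLivArea, etc.) are illustrative, loosely modeled on common housing features:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for the housing CSV (column names are illustrative)
rng = np.random.default_rng(42)
n = 400
X = pd.DataFrame({
    "OverallQual": rng.integers(1, 11, size=n),
    "GrLivArea": rng.normal(1500, 400, size=n),
    "YearBuilt": rng.integers(1900, 2010, size=n),
    "LotArea": rng.normal(9000, 2000, size=n),  # pure noise in this toy setup
})
# SalePrice driven mostly by quality and living area, plus noise
y = 20000 * X["OverallQual"] + 50 * X["GrLivArea"] + rng.normal(0, 10000, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print(round(model.score(X_test, y_test), 2))  # R^2 on held-out data
```

With the real dataset, the same pattern applies: load the CSV into a DataFrame, split off a test set, and fit the regressor on the training portion only.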

Now, we use the ‘eli5’ library to calculate Permutation importance.

You can see the output of the above code below:

Interpreting Results:-

Features are listed in decreasing order of importance, top to bottom.
The first number in each row shows how much the model’s performance drops when that feature is reshuffled.
The second number measures how much that performance drop varies across different reshuffles of the feature column.
The OverallQual (overall quality) feature of the housing dataset makes the biggest impact on the model’s predicted sale price.

You can get the housing dataset in .csv format from my GitHub profile.

You can also get the .ipynb file (Kaggle kernel) from my GitHub profile.


If you enjoyed my article, then do clap and follow me ❤️.


Published via Towards AI
