Find features that really explain your data with Kydavra PValueSelector
Last Updated on September 27, 2020 by Editorial Team
Author(s): Vasile Păpăluță
Machine Learning
Feature selection is a very important part of machine learning development. It allows you to keep your models as simple as possible while retaining as much information as possible. Unfortunately, it can sometimes require solid mathematical knowledge and good practical programming skills. However, at Sigmoid we decided to build a library that makes feature selection as easy as implementing models in scikit-learn.
Using PValueSelector from the Kydavra library.
For those who are here mostly for the solution to their problem, here are the commands and the code.
To install kydavra, just run the following in the command line:
pip install kydavra
After you have cleaned the data (NaN-value imputation, outlier elimination, and so on), you can apply the selector:
from kydavra import PValueSelector
selector = PValueSelector()
new_columns = selector.select(df, 'target')
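As a minimal end-to-end sketch (the file name, cleaning steps, and target column name here are assumptions for illustration, not part of the library):

import pandas as pd
from kydavra import PValueSelector

# Hypothetical CSV path and target name, used only for illustration.
df = pd.read_csv('houses_to_rent.csv')

# Basic cleaning: drop missing values and keep numeric columns
# (replace with your own imputation / outlier handling as needed).
df = df.dropna().select_dtypes(include='number')

selector = PValueSelector()
new_columns = selector.select(df, 'rent_amount')  # 'rent_amount' is an assumed target column
print(new_columns)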
If we test the result of PValueSelector on the Brazilian houses-to-rent dataset, we don't see any growth in the performance of the algorithm. However, new_columns contains only 4 columns, so it can also be used on an already well-performing algorithm just to keep it smaller.
raw_mean_squared_error - 1.0797894705743087
new_mean_squared_error - 1.0620229254150797
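For context, here is a rough sketch of how such a comparison could be made with scikit-learn (the model, split, and target name are assumptions, not necessarily the exact setup behind the numbers above):

from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

target = 'rent_amount'  # assumed target column name
y = df[target]

# Compare the error of a model trained on all features vs. the selected ones.
for name, X in [('raw', df.drop(columns=[target])), ('new', df[new_columns])]:
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)
    model = LinearRegression().fit(X_train, y_train)
    mse = mean_squared_error(y_test, model.predict(X_test))
    print(f'{name}_mean_squared_error - {mse}')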
So how does it work?
Before we dig deeper into what p-values are, we first need to understand what the null hypothesis is.
The null hypothesis is a general statement that there is no relationship between two measured phenomena (in our case, features).
So, to find out whether features are related to the target, we need to see whether we can reject the null hypothesis. For this, we use p-values.
P-value is the probability, for a given statistical model and assuming the null hypothesis is true, of obtaining statistical observations greater than or equal in magnitude to the observed results.
Using the definition above, we can express it more simply as the probability of obtaining such observations from our dataset by chance. So if the p-value of a feature is large, there is little chance that using this feature in a production model will yield good results. That is why the selector sometimes does not improve accuracy, but it does reduce the number of features, keeping the model as simple as possible.
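To make the idea concrete, here is a small sketch of how per-feature p-values can be inspected with statsmodels (this illustrates the concept; it is not necessarily the exact procedure used inside PValueSelector, and the target name is an assumption):

import statsmodels.api as sm

# X: numeric feature matrix, y: target, taken from the cleaned DataFrame.
X = df.drop(columns=['rent_amount'])  # 'rent_amount' is an assumed target name
y = df['rent_amount']

# Fit an ordinary least squares model and look at the p-value of each coefficient.
model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.pvalues.sort_values())

# Features with large p-values (e.g. above 0.05) are candidates for removal,
# because we cannot reject the null hypothesis that they are unrelated to the target.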
Bonus!
You can plot the feature-selection process by simply running:
selector.plot_process(title='P-value')
It has the following parameters (a short usage sketch follows the list):
- title (default = 'P-Value Plot') - the title of the plot.
- save (default = False) - a boolean value; if True, the plot is saved to disk.
- file_path (default = None) - the file path for the newly created plot.
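A small usage sketch, assuming selector.select(...) has already been called (the plot title and file name below are just examples):

# Plot the selection process and save the figure to disk.
selector.plot_process(
    title='P-value selection on the houses-to-rent dataset',
    save=True,
    file_path='p_value_selection.png',
)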
If you want to dig deeper into notions such as the null hypothesis and p-values, or into how this feature selection works, below is a list of useful links.
If you have tried kydavra, we invite you to share your impressions by filling out this form.
Made with ❤ by Sigmoid.
Useful links:
- https://en.wikipedia.org/wiki/Null_hypothesis
- https://en.wikipedia.org/wiki/P-value
- https://towardsdatascience.com/feature-selection-correlation-and-p-value-da8921bfb3cf