
Find features that really explain your data with Kydavra PValueSelector

Last Updated on September 27, 2020 by Editorial Team

Author(s): Vasile Păpăluță


Image created by Sigmoid public association.

Feature selection is a very important part of machine learning development. It allows you to keep your models as simple as possible while preserving as much information as possible. Unfortunately, it can sometimes require solid mathematical knowledge and good practical programming skills. However, at Sigmoid we decided to build a library that makes feature selection as easy as applying models in scikit-learn.

Using PValueSelector from the Kydavra library.

For those who are here mostly for the solution to their problem, here are the commands and the code:

To install kydavra, just run the following command in the command line:

pip install kydavra

After you have cleaned the data (NaN-value imputation, outlier elimination, and so on), you can apply the selector:

from kydavra import PValueSelector
selector = PValueSelector()
new_columns = selector.select(df, 'target')

If we test the result of PValueSelector on the Brazilian houses-to-rent dataset, we don't see any growth in the performance of the algorithm. However, new_columns contains only 4 columns, so the selector can also be applied to an already well-performing model, just to keep it smaller.

raw_mean_squared_error - 1.0797894705743087
new_mean_squared_error - 1.0620229254150797
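
For reference, a minimal sketch of this experiment is shown below. It assumes a cleaned pandas DataFrame loaded from a hypothetical houses_to_rent.csv with a numeric 'target' column; the LinearRegression model and the train/test split are illustrative choices, not something Kydavra prescribes:

import pandas as pd
from kydavra import PValueSelector
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Load the cleaned dataset (the file name is hypothetical).
df = pd.read_csv('houses_to_rent.csv')

# Select the columns that pass the p-value test.
selector = PValueSelector()
new_columns = selector.select(df, 'target')

# Compare the error of a model trained on all features vs. the selected ones.
y = df['target']
for name, X in [('raw', df.drop(columns=['target'])), ('new', df[new_columns])]:
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)
    model = LinearRegression().fit(X_train, y_train)
    error = mean_squared_error(y_test, model.predict(X_test))
    print(f'{name}_mean_squared_error - {error}')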

So how does it work?

Before we dig deeper into what p-values are, we first need to understand what the null hypothesis is.

The null hypothesis is a general statement that there is no relationship between two measured phenomena (in our case, features).

So, to find out whether features are related, we need to see whether we can reject the null hypothesis. For this, we use p-values.

The p-value is the probability, for a given statistical model and assuming the null hypothesis is true, of obtaining a set of observations greater than or equal in magnitude to the observed results.

Using the definition above, we can express it more simply as the probability of getting such observations out of our dataset just by chance. So, if a feature's p-value is big, there is little chance that using this feature in a production model will give good results. That's why the selector sometimes doesn't improve accuracy, but it does reduce the number of features, keeping our model as simple as possible.
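
To make this concrete, a common way to apply p-values to feature selection is backward elimination: fit a linear model, drop the feature with the highest p-value above a threshold, and repeat. The sketch below illustrates this general technique with statsmodels; it is an assumption about how such a selector can work, not Kydavra's actual source code:

import pandas as pd
import statsmodels.api as sm

def p_value_selection(df: pd.DataFrame, target: str, threshold: float = 0.05):
    features = [col for col in df.columns if col != target]
    y = df[target]
    while features:
        # Fit an OLS model and read off the p-value of every coefficient.
        X = sm.add_constant(df[features])
        p_values = sm.OLS(y, X).fit().pvalues.drop('const')
        worst = p_values.idxmax()
        if p_values[worst] <= threshold:
            break  # all remaining features are statistically significant
        features.remove(worst)  # drop the least significant feature and refit
    return features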

Bonus!

To see the process of selecting features, you can plot it by just running:

selector.plot_process(title='P-value')
The plot created with Kydavra PValueSelector on the Brazilian houses-to-rent dataset.

It has the following parameters:

  • title (default = "P-Value Plot"): the title of the plot.
  • save (default = False): a boolean value; if set to True, the plot will be saved.
  • file_path (default = None): the file path of the newly created plot.
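
Putting all three parameters together, a call that also saves the plot to disk might look like this (the file name here is an illustrative choice):

selector.plot_process(
    title='P-value',
    save=True,
    file_path='p_value_plot.png',
)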

If you want to dig deeper into notions such as the null hypothesis and p-values, or into how this feature selection method works, below you have a list of links.

If you have tried kydavra, we invite you to share your impressions by filling out this form.

Made with ❤ by Sigmoid.

Useful links:

