
Find features that really explain your data with Kydavra PValueSelector

Last Updated on September 27, 2020 by Editorial Team

Author(s): Vasile Păpăluță

Machine Learning

Image created by Sigmoid public association.

Feature selection is a very important part of machine learning development. It allows you to keep your models as simple as possible while retaining as much information as possible. Unfortunately, it can sometimes require solid mathematical knowledge and good practical programming skills. At Sigmoid, we decided to build a library that makes feature selection as easy as implementing models in scikit-learn.

Using PValueSelector from the Kydavra library.

For those who are here mostly for the solution to their problem, here are the commands and the code:

To install kydavra, just run the following in the command line:

pip install kydavra

After you have cleaned the data (NaN-value imputation, outlier elimination, and so on), you can apply the selector:

from kydavra import PValueSelector
selector = PValueSelector()
new_columns = selector.select(df, 'target')

If we test the result of PValueSelector on the Brazilian houses-to-rent dataset, we don't see any growth in the performance of the algorithm. However, new_columns contains only 4 columns, so the selector can also be applied to an already well-performing model just to keep it smaller.

raw_mean_squared_error - 1.0797894705743087
new_mean_squared_error - 1.0620229254150797
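For context, the two numbers above compare the same regression model fitted once on all features and once only on the columns returned by the selector. Below is a minimal sketch of such a comparison; the cleaned dataset file, the 'target' column name, and the choice of LinearRegression are assumptions for illustration, not the author's exact script:

import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from kydavra import PValueSelector

# Assumption: a cleaned, fully numeric version of the Brazilian houses-to-rent data.
df = pd.read_csv('houses_to_rent_cleaned.csv')

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns=['target']), df['target'], random_state=42)

# Baseline model trained on all features.
raw_model = LinearRegression().fit(X_train, y_train)
raw_mse = mean_squared_error(y_test, raw_model.predict(X_test))

# Model trained only on the columns kept by PValueSelector.
new_columns = PValueSelector().select(df, 'target')
new_model = LinearRegression().fit(X_train[new_columns], y_train)
new_mse = mean_squared_error(y_test, new_model.predict(X_test))

print(raw_mse, new_mse)  # the errors stay close, but the second model uses fewer columns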

So how does it work?

Before we dig deeper into what p-values are, we first need to understand what the null hypothesis is.

The null hypothesis is a general statement that there is no relationship between two measured phenomena (in our case, a feature and the target).

So, to find out whether features are related, we need to check whether we can reject the null hypothesis. For this, we use p-values.

The p-value is the probability, for a given statistical model and assuming the null hypothesis is true, of obtaining a set of statistical observations greater than or equal in magnitude to the observed results.

Using the notion above, we can express it more simply as the probability of obtaining such observations from our dataset by chance. So if the p-value is large, there is little chance that using this feature in a production model will give good results. That's why the selector sometimes doesn't improve our accuracy, but it can reduce the number of features, keeping our model as simple as possible.
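To make this concrete, here is a small illustration of the general idea (not Kydavra's internal code): fit an ordinary least squares model with statsmodels and keep only the features whose coefficient p-values fall below a threshold. The 0.05 threshold and the helper name are assumptions of this sketch:

import pandas as pd
import statsmodels.api as sm

def select_by_pvalue(df, target, threshold=0.05):
    # Fit OLS on all features; model.pvalues holds one p-value per coefficient.
    X = sm.add_constant(df.drop(columns=[target]))
    model = sm.OLS(df[target], X).fit()
    pvalues = model.pvalues.drop('const')
    # Keep only the features for which we can reject the null hypothesis.
    return pvalues[pvalues < threshold].index.tolist()

In practice, such selectors often drop the weakest feature one at a time and refit (backward elimination), which is also why the selection can be visualized as a process, as shown in the Bonus section below.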

Bonus!

You can see the process of selecting features by plotting it, just by running:

selector.plot_process(title='P-value')
The plot created with Kydavra PValueSelector on the Brazilian houses-for-rent dataset.

It has the following parameters:

  • title (default = "P-Value Plot"): the title of the plot.
  • save (default = False): a boolean value; if True, the plot will be saved, otherwise it will not.
  • file_path (default = None): the file path for the newly created plot.
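Putting these parameters together, a call that both shows and saves the figure might look like this (the file name here is just a hypothetical example):

selector.plot_process(title='P-value', save=True, file_path='pvalue_plot.png')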

If you want to dig deeper into notions such as the null hypothesis and p-values, or into how this feature selection works, below is a list of links.

If you have tried kydavra, we invite you to share your impressions by filling out this form.

Made with ❀ by Sigmoid.

Useful links:


Find features that really explain your data with Kydavra PValueSelector was originally published in Towards AI - Multidisciplinary Science Journal on Medium, where people are continuing the conversation by highlighting and responding to this story.

Published via Towards AI
