Name: Towards AI
Legal Name: Towards AI, Inc.
Description: Towards AI is the world's leading artificial intelligence (AI) and technology publication. Read by thought-leaders and decision-makers around the world.
Phone Number: +1-650-246-9381
Email: [email protected]
228 Park Avenue South, New York, NY 10003, United States
Bayesian modeling of Fisher's dataset
The iris dataset must be the most used dataset ever. At least to me, it is the dataset that comes up whenever there is a new technique to demonstrate, or somebody wants to show what they did in terms of modeling. So, I thought, let's continue the tradition and also use iris for Bayesian modeling, if only for the fun of applying Bayes to a dataset made famous by Sir Ronald Fisher, Mr. Frequentist himself.
The iris dataset needs no explanation, and if it does, just Google it. It's about flowers.

A plot showing the relationship between three variables and their underlying species.

It is not hard to figure out why iris is so often used to showcase decomposition algorithms, like principal component analysis or K-means clustering. The dataset truly is a beauty. However, I want to apply regression to it, just because I can. Let's start with a simple linear regression, the old way, trying to connect Petal.Length and Species to Sepal.Length. The model looks surprisingly good in terms of its assumptions, but assumptions are not predictions.
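The full code is collected at the bottom of the original post; a minimal sketch of this frequentist starting point (using the model formula named above, with `iris` as shipped with base R) could look like:

```r
# Classic frequentist baseline: sepal length from petal length and species
fit_lm <- lm(Sepal.Length ~ Petal.Length + Species, data = iris)
summary(fit_lm)   # coefficients and R-squared

# Residual diagnostics: this is the "assumptions" part, not the predictions
par(mfrow = c(2, 2))
plot(fit_lm)
```

The diagnostic panels check residual normality and homoscedasticity, which is what "looks surprisingly good in terms of its assumptions" refers to.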
Now it's time to go Bayesian, and I will start with the rstanarm package, in which I indicate a prior R-squared of 0.75. It's an example I found somewhere else, and I do not know why anybody would ever pick such a prior, considering that an R-squared value by itself is fairly meaningless: you can get the same R-squared for relationships ranging from pretty decent to outright bad. Anyhow, let's see what happens, and then move on to something better.
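In rstanarm, a prior on R-squared is what `stan_lm()` takes via its `R2()` prior function; a sketch with the same formula as before (the `location = 0.75` value is the example prior discussed above, not a recommendation) could look like:

```r
library(rstanarm)

# stan_lm() places a prior on the model's R-squared;
# location = 0.75 encodes the (questionable) prior belief from the text
fit_bayes <- stan_lm(
  Sepal.Length ~ Petal.Length + Species,
  data  = iris,
  prior = R2(location = 0.75),
  seed  = 42   # for reproducible chains
)

summary(fit_bayes)   # reports n_eff and Rhat per parameter
```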
Posterior results coming from the Bayesian model.

And the results are visualized.

Effective sample size (n_eff) and Rhat metrics. The n_eff should be as high as possible, and Rhat should hover around one. Personally, I do not like these metrics; I prefer to look at the chains themselves.

Looking good! REMEMBER: look at the chains for variation within boundaries. You want to see noise. For the rest, the likelihood (y) and posterior draws (yrep) do NOT need to coincide. This is science, not some self-fulfilling prophecy hunt.

And the moment of truth: the posterior predictions do not even come close to the likelihood. Now, this is where most people panic, declare their model false, and either change the prior to sit very close to the likelihood, faint and fall back on a non-informative prior, or drop the Bayesian analysis altogether. The second and third options are effectively the same. But IF I believe my prior is CORRECT given the current evidence base, AND I believe I have sampled the new data in a fashion I can defend, then THIS is just your result. Be happy! You have found something extremely interesting. Modeling is not color-by-numbers; it is painting.

And more of the same plots, but then different.
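The chain and posterior-predictive plots described above come straight from rstanarm's plotting methods; assuming a fitted model object such as `fit_bayes` from before, a sketch would be:

```r
# Trace plots: healthy chains look like stationary noise within boundaries
plot(fit_bayes, plotfun = "trace")

# Posterior predictive check: overlays draws (yrep) on the observed data (y);
# as argued in the text, the two do NOT need to coincide
pp_check(fit_bayes)
```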
Alright, so, just like the maximum-likelihood models we see so often, we can also assess Bayesian models. But the metrics are no longer called AIC or BIC (although BIC does stand for Bayesian Information Criterion); instead we have the Pareto-k diagnostic and the entwined expected log predictive density (elpd), which is obtained via leave-one-out (loo) cross-validation. Just like AIC or BIC, the values mean little on their own, and only when comparing (nested) models does it make sense to look at them.
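A sketch of the loo workflow, assuming the fitted model from before and a simpler nested model to compare against (the `update()` call and model choice here are illustrative, not from the original post):

```r
library(loo)

# PSIS-LOO cross-validation: reports elpd and the Pareto-k diagnostics
loo_full <- loo(fit_bayes)
print(loo_full)

# elpd only becomes meaningful in comparison; fit a nested model and compare
fit_null <- update(fit_bayes, formula. = Sepal.Length ~ Species)
loo_null <- loo(fit_null)
loo_compare(loo_full, loo_null)   # higher elpd = better predictive fit
```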
Looking good. The left plot shows no pattern, and neither do the middle and right plots. Like I said, these metrics only make sense when comparing models. For in-model assessment, stick with chain assessment, look at the distributions of your prior, likelihood, and posterior, and especially at the changes between them.

Posterior distributions look good and stable, but when you compare predicted values to observed values, it is clear that the posterior draws from the model do not come close to overlapping the observations. There is no problem in that, besides freaking out some people into thinking your model is wrong. It could well be that your model is correct, but that the latest dataset was sampled through a completely different mechanism or from a completely different situation. The excitement!

Posterior draws for each of the species for the response, Sepal.Length.
Now it's time to get serious and throw in some priors. Not the non-informative stuff, but real priors that have an effect and that say: "I know my evidence". Here, I tell the model mathematically that my prior belief is that there is no link between Sepal.Length and Petal.Length or Petal.Width. For Sepal.Width I have no idea (which is nonsense, but still), and I believe there are different effects for versicolor and virginica compared to setosa.
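In rstanarm, such beliefs go in as vectors of locations and scales on the coefficients; the exact numbers below are my illustrative stand-ins for the beliefs described above (tight around zero for the petal terms, deliberately vague for Sepal.Width, shifted away from zero for the two species contrasts), not the values from the original post:

```r
library(rstanarm)

# Coefficient order follows the design matrix (check with model.matrix()):
# Petal.Length, Petal.Width, Sepal.Width, Speciesversicolor, Speciesvirginica
my_prior <- normal(
  location = c(0,   0,   0,   1, 1),   # "no link" for petals; species shifted
  scale    = c(0.1, 0.1, 2.5, 1, 1)    # tight vs. deliberately vague
)

fit_prior <- stan_glm(
  Sepal.Length ~ Petal.Length + Petal.Width + Sepal.Width + Species,
  data            = iris,
  family          = gaussian(),
  prior           = my_prior,
  prior_intercept = normal(5, 2.5),
  seed            = 42
)

prior_summary(fit_prior)   # your priors side by side with the defaults
```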
Priors as defined by the model, and priors as defined by me. Never use the model's default priors. Bring your own!

And the posterior results from the model.

Chains looking good.

The draws look good.

And the beautiful Stan code.

Error distributions to the left and the calibration plot to the right. Once again, deviations are NOT bad.

More sampling checks. It seems the Gaussian distribution for the response is the correct one to use.

And the conditional distributions for each of the variables of interest.

Prediction plots; these look quite unwieldy. A bit too much if you ask me.

Various distribution plots for each of the species, coming from the posterior draws. Perhaps overkill to show them all, but take your pick. As long as the sampling shows no chaotic developments, the plots look good.

Calibration plots.

More calibration plots. Not all are useful, but you can make them.

And the posterior, for each of the species and predictors included.

And, last but not least, posterior distributions of the difference between each of the species for Sepal.Length.
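Those last posterior-difference plots can be reproduced from the expected-value draws; a sketch, assuming the `fit_prior` model above and holding the other covariates at their means (my choice of conditioning, not necessarily the post's):

```r
# Expected sepal length per species at average covariate values
nd <- data.frame(
  Petal.Length = mean(iris$Petal.Length),
  Petal.Width  = mean(iris$Petal.Width),
  Sepal.Width  = mean(iris$Sepal.Width),
  Species      = factor(c("setosa", "versicolor", "virginica"))
)
epred <- posterior_epred(fit_prior, newdata = nd)  # draws x 3 matrix

# Posterior distribution of a pairwise difference, e.g. virginica - versicolor
diff_vv <- epred[, 3] - epred[, 2]
quantile(diff_vv, c(0.025, 0.5, 0.975))
hist(diff_vv, breaks = 50, main = "virginica - versicolor (Sepal.Length)")
```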
So, this is a way to use Bayesian analysis on the famous iris dataset. The code is at the bottom; if you are interested, just copy, paste, and run it all. There is more code at the bottom than I highlighted above, and I invite you to make it your own.
Let me know if something is amiss!
Good Old Iris was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.