
Good Old Iris

Last Updated on January 7, 2023 by Editorial Team

Author(s): Dr. Marc Jacobs


Bayesian modeling of Fisher’s dataset

The iris dataset must be the most used dataset ever. At least, to me, it is the dataset I see come up whenever there is a new sort of technique to showcase, or somebody wants to show what they did in terms of modeling. So, I thought, let's continue the tradition and also use iris for Bayesian modeling. If only for the fun of applying Bayes to a dataset made famous by Sir Ronald Fisher, Mr. Frequentist himself.

The iris dataset does not need any explanation, and if it does, just Google it. It's about flowers.
A plot showing the relationship between three variables and their underlying Species. Not so hard to figure out why iris is often used to showcase decomposition algorithms, like principal component analysis or K-means clustering. The dataset truly is a beauty. However, I want to apply regression to it. Just because I can.
Let's start with a simple linear regression, the old way, trying to connect petal.length and species to sepal.length. The model looks surprisingly good in terms of its assumptions, but assumptions are not predictions.
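For reference, a minimal sketch of such a classical fit in R (the exact formula is my assumption, not necessarily the one used here; the variable names follow the built-in iris data):

# Classical (frequentist) linear regression: sepal length explained by
# petal length and species, using the built-in iris data.
lm_fit <- lm(Sepal.Length ~ Petal.Length + Species, data = iris)
summary(lm_fit)        # coefficients, R-squared, F-test
par(mfrow = c(2, 2))
plot(lm_fit)           # residual and QQ plots to check the assumptions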

Now it's time to go Bayesian, and I will start with the rstanarm package, in which I indicate a prior R-squared of 0.75. It's an example I found somewhere else, and I do not know why somebody would ever pick such a prior, considering that an R-squared value by itself is completely meaningless. You can have the same R-squared value for a range of relationships, from pretty decent to outright bad. Anyhow, let's see what happens and then move on to something better.
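In rstanarm, an R-squared prior is set through stan_lm(). A minimal sketch, assuming the same formula as above (the seed and formula are my assumptions):

library(rstanarm)
# Bayesian linear model with a prior centred on R-squared = 0.75.
fit_r2 <- stan_lm(Sepal.Length ~ Petal.Length + Species,
                  data  = iris,
                  prior = R2(location = 0.75),
                  seed  = 123)
summary(fit_r2)   # posterior medians, MAD_SD, n_eff and Rhat per parameter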

Posterior results coming from the Bayesian model.
And the results are visualized.
Effective sample size (neff) and rhat metrics. The neff should be as high as possible, and rhat should hover around one. Personally, I do not like these metrics. I prefer to look at the chains themselves.
Looking good! REMEMBER: look at the chains for variation within boundaries. You want to see noise. For the rest, the likelihood (y) and posterior values (yrep) do NOT need to coincide. This is science, not some self-fulfilling prophecy hunt.
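Trace plots per chain can be drawn directly from the fitted object, for example with bayesplot (a sketch, assuming the fit_r2 object from above; pick whichever parameters you care about):

library(bayesplot)
# One panel per parameter, one colour per chain; well-mixed chains look
# like stationary noise within a common band.
mcmc_trace(as.array(fit_r2), pars = c("(Intercept)", "Petal.Length", "sigma"))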
And the moment of truth. The posterior predictions do not even come close to the likelihood. Now, this is where most people panic, declare their model false, and either change the prior to come very close to the likelihood, faint and use a non-informative prior, or drop the Bayesian analysis altogether. The second and third options are actually the same. Now, IF I believe my prior is CORRECT, given the current evidence base, AND I believe I have sampled new data in a fashion I can defend, then THIS is JUST your result. Be happy! You have found something extremely interesting. Modeling is not color-by-numbers, it is painting.
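That comparison of observed values (y) with posterior draws (yrep) is the standard posterior predictive check; in rstanarm it is one call (a sketch using the fit from above):

# Overlay the observed sepal lengths (dark line) with densities of
# replicated datasets drawn from the posterior (light lines).
pp_check(fit_r2, plotfun = "dens_overlay", nreps = 50)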
And more of the same plots, just presented differently.

Alright, so, just like the maximum likelihood models we see so often, we can also assess Bayesian models. But the metrics are no longer called AIC or BIC (although BIC does stand for Bayesian Information Criterion); instead, we have the Pareto-k diagnostic and the entwined expected log predictive density (elpd), which is obtained via leave-one-out (loo) cross-validation. Just like the AIC or BIC, the values mean little on their own, and only when comparing (nested) models does it make sense to look at them.
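Both diagnostics come from the loo package, which rstanarm wraps. A minimal sketch, assuming the fit from above; the second model (fit_alt) in the comparison is hypothetical:

library(loo)
loo_r2 <- loo(fit_r2)
print(loo_r2)     # elpd_loo, p_loo, looic and the Pareto-k table
plot(loo_r2)      # Pareto-k diagnostic per observation
# Only the comparison between models is really interpretable, e.g.:
# loo_compare(loo_r2, loo(fit_alt))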

Looking good. The left plot shows no pattern, and neither do the middle and right plots. Like I said, these metrics only make sense when comparing models. For in-model assessment, stick with chain assessment, and look at the distributions of your prior, likelihood, and posterior, and especially the changes between them.
Posterior distributions look good and stable, but when you compare predicted values to observed values, it is clear that the posterior draws from the model do not even come close to overlapping the observations. There is no problem in that, besides freaking out some people who think your model is wrong. However, it could be that your model is correct, but that the latest data were sampled under a completely different mechanism or come from a completely different situation. The excitement!
Posterior draws for each of the species for the response which is sepal.length.

Now it's time to get serious and throw in some priors. Not the non-informative stuff, but real priors that have an effect and that say: "I know my evidence". Here, I mathematically tell the model that my prior belief is that there is no link between sepal.length and petal.length or petal.width. For sepal.width I have no idea (which is nonsense, but still), and I believe there are different effects for versicolor and virginica compared to setosa.
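With rstanarm this translates into a vector of normal priors, one per coefficient, in the order they enter the formula. The locations and scales below are illustrative assumptions on my part, not the exact values used here:

library(rstanarm)
# Coefficient order: Petal.Length, Petal.Width, Sepal.Width,
# Speciesversicolor, Speciesvirginica.
my_prior <- normal(location = c(0,   0,   0,   0.5, 1),
                   scale    = c(0.1, 0.1, 2.5, 1,   1))
fit_informed <- stan_glm(Sepal.Length ~ Petal.Length + Petal.Width +
                           Sepal.Width + Species,
                         data            = iris,
                         family          = gaussian(),
                         prior           = my_prior,
                         prior_intercept = normal(0, 10),
                         seed            = 123)
prior_summary(fit_informed)   # default priors next to the ones specified above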

Priors as defined by the model, and priors as defined by me. Never use the model's default priors. Bring your own!
And the posterior results from the model.
Chains looking good.
The draws look good.
And the beautiful Stan code.
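For completeness, the underlying Stan program can be pulled out of the fitted object. A sketch of two common routes, depending on whether the model was fit with rstanarm or brms (both shown here as assumptions, using the hypothetical fit_informed object):

# rstanarm: the Stan program behind the (pre-compiled) fit
rstan::get_stancode(fit_informed$stanfit)
# brms: generate the Stan code for a formula and prior without fitting
# brms::make_stancode(Sepal.Length ~ Petal.Length + Species, data = iris)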
Error distributions to the left and the calibration plot. Once again, deviations are NOT bad.
More sampling checks. It seems the Gaussian distribution for the response is the correct one to use.
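Checks like these can be reproduced with posterior predictive test statistics; a sketch, again assuming the fit_informed object from above:

# Compare the observed mean and sd of Sepal.Length with the same
# statistics computed on each posterior predictive draw.
pp_check(fit_informed, plotfun = "stat_2d", stat = c("mean", "sd"))
pp_check(fit_informed, plotfun = "stat", stat = "median")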
And the conditional distributions for each of the variables of interest.
Prediction plots; they look quite unwieldy. A bit too much if you ask me.
Various distribution plots for each of the Species coming from the posterior draws. Perhaps overkill to show them all but take your pick. As long as the sampling shows no chaotic developments, the plots look good.
Calibration plots.
More calibration plots. Not all are useful, but you can make them.
And the posterior, for each of the Species and predictors included.
And, last but not least, posterior distributions of the difference for each of the Species for sepal.length.
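Those species contrasts can be computed straight from the posterior draws. A sketch, assuming the fit_informed model above, in which the dummy-coded species coefficients are already differences relative to setosa:

library(bayesplot)
draws <- as.matrix(fit_informed)
# Posterior distributions of the versicolor and virginica effects
# (each expressed as a difference from setosa).
mcmc_areas(draws, pars = c("Speciesversicolor", "Speciesvirginica"))
# Difference between virginica and versicolor, with a 95% interval:
diff_vv <- draws[, "Speciesvirginica"] - draws[, "Speciesversicolor"]
quantile(diff_vv, c(0.025, 0.5, 0.975))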

So, this is a way to use Bayesian analysis on the famous iris dataset. The code is at the bottom. If you are interested, just copy, paste, and run it all. There is more code at the bottom than I highlighted above, and I invite you to make it your own.

Let me know if something is amiss!

