Good Old Iris: Bayesian modeling of Fisher's dataset
The iris dataset must be the most used dataset ever. At least, to me, it is the dataset that comes up whenever there is a new sort of technique to try, or somebody wants to show what they did in terms of modeling. So, I thought, let's continue the tradition and use iris for Bayesian modeling as well. If only for the fun of applying Bayes to a dataset constructed by Sir Ronald Fisher, Mr. Frequentist himself.
The iris dataset does not need any explanation, and if it does, just Google it. It's about flowers.

[Figure: a plot showing the relationship between three of the variables and their underlying Species.]

It is not hard to figure out why iris is so often used to showcase decomposition algorithms, like principal component analysis or k-means clustering. The dataset truly is a beauty. However, I want to apply regression to it. Just because I can. Let's start with a simple linear regression, the old way, connecting petal.length and species to sepal.length. The model looks surprisingly good in terms of its assumptions, but assumptions are not predictions.
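For concreteness, here is a minimal sketch of that frequentist baseline, assuming the standard iris data frame that ships with R (where the columns are capitalised as Sepal.Length, Petal.Length, Species):

```r
# Frequentist baseline: sepal length explained by petal length and species
data(iris)
fit_lm <- lm(Sepal.Length ~ Petal.Length + Species, data = iris)
summary(fit_lm)       # coefficients and R-squared

par(mfrow = c(2, 2))
plot(fit_lm)          # residual, Q-Q, scale-location, and leverage plots
```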
Now it's time to go Bayesian. I will start with the rstanarm package, specifying a prior R-squared of 0.75. It's an example I found somewhere else, and I do not know why somebody would ever choose such a prior, considering that an R-squared value by itself is completely meaningless. You can get the same R-squared for a range of relationships, from pretty decent to outright bad. Anyhow, let's see what happens and then move on to something better.
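A sketch of how such a prior is typically specified in rstanarm. Note that the R-squared prior belongs to stan_lm(); the exact call and options in the original post's code appendix may differ:

```r
library(rstanarm)

# Bayesian linear model with a prior centred on an R-squared of 0.75
fit_bayes <- stan_lm(Sepal.Length ~ Petal.Length + Species,
                     data  = iris,
                     prior = R2(location = 0.75),
                     seed  = 2024)
summary(fit_bayes)    # posterior summaries plus n_eff and Rhat per parameter
```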
Posterior results coming from the Bayesian model, and the results visualized.

Effective sample size (n_eff) and Rhat metrics: n_eff should be as high as possible, and Rhat should sit close to one. Personally, I do not like these metrics. I prefer to look at the chains themselves.

Looking good! REMEMBER: look at the chains for variation within boundaries. You want to see noise. For the rest, the likelihood (y) and the posterior values (yrep) do NOT need to coincide. This is science, not some self-fulfilling prophecy hunt.

And the moment of truth: the posterior predictions do not even come close to the likelihood. This is where most people panic, declare their model false, and either change the prior to sit very close to the likelihood, faint and use a non-informative prior, or drop the Bayesian analysis altogether. The second and third options are actually the same. But IF I believe my prior is CORRECT, given the current evidence base, AND I believe I have sampled the new data in a fashion I can defend, then THIS is JUST the result. Be happy! You have found something extremely interesting. Modeling is not color-by-numbers, it is painting.

And more of the same plots, but then different.
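The checks described above can be reproduced with a few calls on the fitted object; a sketch, reusing the hypothetical fit_bayes from before:

```r
# Look at the chains themselves: well-mixed, noisy bands within boundaries are what you want
plot(fit_bayes, plotfun = "trace")

# n_eff and Rhat per parameter, if you do want the summary numbers
summary(fit_bayes)

# Posterior predictive check: observed sepal length (y) against model draws (yrep)
pp_check(fit_bayes)
```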
Alright, so, just like the maximum likelihood models we see so often, we can also assess Bayesian models. But the metrics are no longer called AIC or BIC (although BIC does stand for Bayesian Information Criterion); instead we have the Pareto-k diagnostic and the associated expected log predictive density (elpd), obtained via leave-one-out (loo) cross-validation. Just like the AIC or BIC, the values mean little on their own, and only when comparing (nested) models does it make sense to look at them.
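A sketch of those diagnostics using the loo machinery that rstanarm exposes; the comparison model here (dropping Species) is just an illustrative choice, not necessarily the one from the post:

```r
# Leave-one-out cross-validation: elpd and Pareto-k diagnostics
loo_full <- loo(fit_bayes)
print(loo_full)       # elpd_loo, p_loo, and counts of problematic Pareto-k values
plot(loo_full)        # Pareto-k per observation; values above ~0.7 deserve a look

# The numbers only become meaningful when comparing models
fit_reduced <- stan_lm(Sepal.Length ~ Petal.Length, data = iris,
                       prior = R2(location = 0.75), seed = 2024)
loo_compare(loo_full, loo(fit_reduced))
```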
Looking good. The left plot shows no pattern, and neither do the middle and right plots. Like I said, these metrics only make sense when comparing models. For in-model assessment, stick with the chains, and look at the distributions of your prior, likelihood, and posterior, and especially the changes between them.

The posterior distributions look good and stable, but when you compare predicted values to observed values, it is clear that the posterior draws from the model do not even come close to overlapping the observations. There is no problem in that, besides freaking out some people into thinking your model is wrong. It could be that your model is correct, but that the latest dataset was sampled under a completely different mechanism or from a completely different situation. The excitement!

Posterior draws for each of the species for the response, sepal.length.
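The species-level comparison of posterior draws against the observed sepal lengths can be sketched with bayesplot's grouped posterior predictive checks, again reusing the hypothetical fit_bayes:

```r
library(bayesplot)

# Posterior predictive draws, grouped by species and compared with the data
yrep <- posterior_predict(fit_bayes, draws = 200)
ppc_violin_grouped(y     = iris$Sepal.Length,
                   yrep  = yrep,
                   group = iris$Species)
```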
Now it's time to get serious and throw in some priors. Now, the informative stuff: real priors that have an effect and that say, "I know my evidence". Here, I mathematically tell the model that my prior belief is that there is no link between sepal.length and either petal.length or petal.width. For sepal.width I have no idea (which is nonsense, but still), and I believe there are different effects for versicolor and virginica compared to setosa.
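One way to encode those beliefs is via coefficient priors in stan_glm(); the locations and scales below are placeholders chosen to illustrate the idea, not the values from the post (which, like the package used for this second model, are in its code appendix):

```r
# Informative priors, one entry per coefficient, in the order they enter the model:
#   Petal.Length, Petal.Width -> centred on 0 with a tight scale ("no link")
#   Sepal.Width               -> centred on 0 with a wide scale  ("no idea")
#   versicolor, virginica     -> centred away from 0 (different from setosa)
# These numbers are illustrative placeholders, not the post's actual priors.
my_priors <- normal(location  = c(0, 0, 0, 1, 1.5),
                    scale     = c(0.1, 0.1, 2.5, 1, 1),
                    autoscale = FALSE)

fit_informed <- stan_glm(Sepal.Length ~ Petal.Length + Petal.Width + Sepal.Width + Species,
                         data   = iris,
                         family = gaussian(),
                         prior  = my_priors,
                         seed   = 2024)

prior_summary(fit_informed)   # the priors actually used, next to rstanarm's defaults
```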
Prior as defined by the model, and priors as defined by me. Never use the model priors; bring your own!
And the posterior results from the model. Chains looking good. The draws look good. And the beautiful Stan code.
Error distributions to the left and the calibration plot. Once again, deviations are NOT bad.
More sampling checks. It seems the Gaussian distribution for the response is the correct one to use.
And the conditional distributions for each of the variables of interest.
The prediction plots look quite unwieldy. A bit too much if you ask me.
Various distribution plots for each of the Species coming from the posterior draws. Perhaps overkill to show them all, but take your pick. As long as the sampling shows no chaotic developments, the plots look good.
Calibration plots, and more calibration plots. Not all are useful, but you can make them.
And the posterior, for each of the Species and predictors included.
And, last but not least, the posterior distributions of the differences between the Species for sepal.length.
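That last plot, the posterior distribution of the species differences, can be sketched directly from the posterior draws; the column names below assume rstanarm's default dummy coding against setosa and the hypothetical fit_informed from above:

```r
# Posterior differences between species for expected sepal.length,
# computed straight from the draws of the dummy-coded coefficients
draws <- as.data.frame(fit_informed)

versicolor_vs_setosa    <- draws[["Speciesversicolor"]]
virginica_vs_setosa     <- draws[["Speciesvirginica"]]
virginica_vs_versicolor <- virginica_vs_setosa - versicolor_vs_setosa

quantile(virginica_vs_versicolor, probs = c(0.025, 0.5, 0.975))
hist(virginica_vs_versicolor, breaks = 40,
     main = "Posterior difference in sepal.length: virginica vs. versicolor")
```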
So, this is one way to use Bayesian analysis on the famous iris dataset. The code is at the bottom; if you are interested, just copy, paste, and run it all. There is more code at the bottom than I highlighted above, and I invite you to make your own.
Let me know if something is amiss!