Can LLMs Truly Think Outside the Box?

Author(s): Max Shap

Originally published on Towards AI.

Image by author

I was surprised to hear from a few friends that some tech companies still include puzzles in their interview process — or even dedicate a full 45-minute session — asking candidates to solve problems like this:

Two people had to get to the top of a hill. There was only one motorcycle, and only one person could ride it. Fortunately, both of them made it to the top. How?

In the 2010s, such interview questions were popular among big tech companies, particularly Microsoft and Google. These puzzles, often called brainteasers, were designed to test a candidate’s lateral thinking — or “thinking outside the box.” This approach involves creative, divergent thinking, which requires looking at a problem from new perspectives and challenging assumptions. It’s often associated with the right hemisphere of the brain.

However, Google later admitted that brainteasers were ineffective [2] for hiring, as they found little correlation between performance on these puzzles and job success. As a result, Google — and many other companies — dropped brainteasers from their hiring processes.

Let’s set aside why some companies might revisit riddles for candidate assessments today (you can read my thoughts on this in the Conclusions section). Instead, let’s focus on the fun part: How do cutting-edge AI models perform on brainteaser questions that demand not only strong reasoning skills but also a touch of creative thinking?

Methodology

I tested two frontier models from OpenAI for this experiment: gpt-4o-mini and gpt-4o.

Side note: running the o1 API was way too expensive 😂

Each model was tested under the same conditions as a human candidate in an interview:

  1. It was asked to solve the riddle.
  2. It was allowed to output intermediate thoughts to demonstrate the thinking process.

Putting this together, the following system prompt was used:

system_prompt = """
You are an expert in solving riddles.
You will be provided with a riddle wrapped in the tags: <riddle>riddle text</riddle>.

Your task is to provide an answer to the riddle.

If you find it helpful, you may output your intermediate thoughts to aid in finding the answer. These should be wrapped in the tags <thinking>your thinking process</thinking>. However, this is optional.
You must conclude your response with the final answer wrapped in the tags <answer>your answer</answer>.
If you are unsure of the answer, respond with <answer>I have no answer</answer>.
Let’s begin solving riddles.
"""

Example riddle:

Riddle: Jessica is telling her friends this story and asks them to guess if it's the truth or a lie: 
"There was a man sitting in a house at night that had no lights on at all.
There was no lamp, no candle, and no other source of light.
Yet, he sat in the house and read his book happily."
Her friends say she's lying, but Jessica corrects them and says
she's telling the truth.
Jessica's story is true.
But how?

Prediction by the gpt-4o model:

<thinking>We need to consider all possible ways a person can read without traditional sources of light. 
The key point of the riddle is "no lights on at all" which rules out any man-made lighting sources.
This leads us to think about natural ways a person could read in the absence of artificial light.
The most plausible natural source of light that would allow someone to read is sunlight or moonlight.
</thinking>

<thinking>Since the scenario happens "at night,"
sunlight is not possible, leaving moonlight.
But even without moonlight, there is one other potential explanation:
the man could be blind and reading Braille.
This explanation fits as it doesn't depend on any light source at all.
</thinking>

<answer>The man is blind and reading a book in Braille.</answer>
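
For reference, a prediction like the one above can be obtained with a few lines of Python. This is only a minimal sketch, assuming the official openai client and an OPENAI_API_KEY in the environment; the actual code used for the experiments is in the repository [4].

import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def solve_riddle(riddle: str, model: str = "gpt-4o") -> str:
    """Send one riddle to the model and return the text inside the <answer> tags."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": f"<riddle>{riddle}</riddle>"},
        ],
    )
    text = response.choices[0].message.content
    match = re.search(r"<answer>(.*?)</answer>", text, re.DOTALL)
    return match.group(1).strip() if match else "I have no answer"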

After the model generated a response, I needed a way to evaluate the correctness of the results. Directly comparing the predicted answers with the reference answers wasn’t feasible, as the model might generate semantically correct answers phrased differently. To address this, I used another LLM, specifically the gpt-4o model, as a judge to validate the predictions.

The validator was given relaxed criteria for correctness: if a predicted answer made sense, it was marked as correct even when it didn’t match the reference answer exactly.

The following system prompt was used for this purpose:

You are an expert in validating answers to riddles.

You will be provided with the following:

A riddle wrapped in the tags: <riddle>riddle text</riddle>.
A reference answer wrapped in the tags: <reference_answer>text</reference_answer>.
A predicted answer wrapped in the tags: <predicted_answer>text</predicted_answer>.
Your task is to determine whether the predicted answer matches the reference answer.

Focus on whether the meaning of the predicted answer aligns with the reference answer, ignoring any typos.
The reference answer may also include an explanation, usually in a separate sentence. If the predicted answer contains reasoning that differs from the reference reasoning but the predicted answer itself is correct, you should still consider the riddle as solved correctly.
If you strongly believe the predicted answer is valid and can be treated as correct (even if it is completely different from the reference answer), you may decide that the riddle is solved correctly.
You may output intermediate thoughts to help you reach a decision. These should be wrapped in the tags <thoughts></thoughts>.

Finally, return your verdict wrapped in the tags <verdict>your verdict</verdict>.
Your verdict should be either True (for matching answers) or False (if the answers do not match).

Example puzzle along with predictions from two models and the verdict from the validator:

The predicted answers differ from the reference answers, yet they fit the question perfectly. The validator catches this and marks the predictions as correct.

Finally, I calculated the accuracy.
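
Putting the two prompts together, the evaluation loop might look roughly like this. Again, this is a sketch rather than the exact code from [4]: judge_prompt stands for the validator prompt above, solve_riddle is the hypothetical helper sketched earlier, and the "question"/"answer" field names are illustrative.

def judge_answer(riddle: str, reference: str, predicted: str) -> bool:
    """Ask gpt-4o whether the predicted answer matches the reference answer."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": judge_prompt},  # the validator prompt above
            {
                "role": "user",
                "content": (
                    f"<riddle>{riddle}</riddle>\n"
                    f"<reference_answer>{reference}</reference_answer>\n"
                    f"<predicted_answer>{predicted}</predicted_answer>"
                ),
            },
        ],
    )
    verdict = re.search(r"<verdict>(.*?)</verdict>", response.choices[0].message.content, re.DOTALL)
    return verdict is not None and verdict.group(1).strip().lower() == "true"

def accuracy(riddles: list[dict], model: str = "gpt-4o") -> float:
    """Fraction of riddles whose predicted answer the judge accepts."""
    correct = sum(
        judge_answer(r["question"], r["answer"], solve_riddle(r["question"], model=model))
        for r in riddles
    )
    return correct / len(riddles)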

In total, I spent around $15 on running predictions and validations.

To summarize, the approach was as follows:

  1. Present each riddle individually to gpt-4o-mini and gpt-4o, prompting them to think step by step and solve it.
  2. Use gpt-4o as a judge (since it is more powerful than the mini variant). Provide the text of the riddle, along with the correct answer and the generated response, and ask it to evaluate whether the generated answer is semantically close to the reference answer.
  3. Calculate the accuracy.

Data

For my experiments, I used the carefully curated dataset created by the paper’s authors [3]. Each puzzle in the dataset is designed to evaluate a broad range of human intelligence skills, including strategy development, planning, visual-spatial thinking, creativity, and memory.

To build this dataset, the authors first collected thousands of puzzles from public resources. They then applied filtering, deduplication, and grammar correction, followed by human verification to ensure the puzzles retained their original meaning. Each puzzle includes an answer, and some samples also provide reasoning to explain the solution. Finally, the authors augmented the puzzles with two key transformations:

  • Semantic Reconstruction: Rephrasing the original question while keeping the answer unchanged.
  • Context Reconstruction: Maintaining the misleading commonsense premise but changing both the question and the answer to fit a new situational context.
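
To make the three variants concrete, a single record in such a dataset might look roughly like the sketch below. The field names and comments are my own illustration, not the exact schema used in [3].

# Hypothetical structure of one dataset entry with its two augmentations.
sample = {
    "original_question": "...",    # the original brain teaser
    "original_answer": "...",
    "semantic_question": "...",    # rephrased question, same answer as the original
    "semantic_answer": "...",      # identical to original_answer
    "context_question": "...",     # new situation with the same misleading premise
    "context_answer": "...",       # new answer that fits the new context
}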

These augmentations were crucial for evaluating the model’s lateral thinking abilities rather than its memorization skills. (As we know, LLMs are trained on vast amounts of internet data, so some of these puzzles might have appeared during their pretraining.)

Performance on semantically reconstructed puzzles reflects how well the model understands the puzzle’s meaning. In contrast, performance on contextually reconstructed puzzles reveals the model’s reasoning capabilities.

Examples of an original puzzle, a semantic reconstruction, and a context reconstruction:

The final dataset contains 1,100 high-quality brain teasers. Check out the paper for more details.

Results

The overall accuracy of the models is shown in the table below:

Overall, the mini variant performs about 20% worse than the full model, suggesting that it is probably not the best choice for reasoning tasks.

Performance of the model based on the Original puzzle, Semantic reconstruction, and Context reconstruction:

gpt-4o’s accuracy is quite high, reaching 84% on the original puzzles. However, it drops significantly, by about 10 percentage points, on Semantic Reconstruction riddles, where the question is rephrased with different words but keeps the same meaning. This may point to two issues: (1) sensitivity to surface wording, even in advanced models, and (2) a degree of randomness in the reasoning process.

More notably, the performance gap is even larger, nearly 20%, on Context Reconstruction puzzles. These puzzles present entirely new scenarios, requiring the model to rely on reasoning abilities rather than memorization.

While the results are still reasonable and could undoubtedly be improved in various ways (see the Next Steps section), the model occasionally fails even on fairly simple riddles.

Next steps

The results I achieved can certainly be improved in several ways. Below, I list options worth trying to improve the model’s reasoning performance.

Ensembling for Improved Accuracy. One straightforward improvement is to solve the same riddle multiple times in parallel (e.g., N times) and vote for the most frequent answer. This ensembling technique is a standard approach and can typically boost performance metrics by 5–7%.
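
As a rough sketch, reusing the hypothetical solve_riddle helper from earlier, majority voting could look like this. In practice, free-text answers rarely match verbatim, so some normalization, or the judge model itself, would be needed to group equivalent answers.

from collections import Counter

def solve_with_voting(riddle: str, n: int = 5, model: str = "gpt-4o") -> str:
    """Solve the same riddle n times and return the most frequent answer."""
    answers = [solve_riddle(riddle, model=model) for _ in range(n)]
    # Light normalization so trivially different phrasings of the same answer can match.
    normalized = [a.lower().strip().rstrip(".") for a in answers]
    winner, _ = Counter(normalized).most_common(1)[0]
    return winner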

Addressing Performance on Contextual Reconstruction. As observed, performance drops significantly on contextually reconstructed puzzles compared to the original ones. This likely occurs because the model has memorized many original puzzles during pretraining on internet data, relying less on genuine reasoning. To improve, we need a model specifically optimized for reasoning tasks — such as the o1 family or even o3 models. However, I didn’t explore this route due to the high costs and long response times associated with these models. If anyone is interested in reproducing these experiments with more powerful models, the reproducible code is available at [4].

Simulating a Real-World Interview Scenario. It would also be intriguing to test the model in a simulated interview-like setting. In this setup:

  • The model acts as the candidate, generating an initial response to a riddle.
  • Another model serves as the interviewer, validating the candidate’s response.
  • If the answer is incorrect, the candidate model can revise its response using feedback from the interviewer.

This approach mimics a real brainteaser interview, where candidates think step by step, validate their ideas with the interviewer and adjust their reasoning as needed. It would be fascinating to analyze how much assistance from the validator (who knows the correct answer) is required to guide the generator model to the right solution. Additionally, we could explore whether this iterative process converges within a reasonable number of steps.
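
A minimal sketch of such a loop, again reusing the hypothetical helpers above and assuming the interviewer model is given the reference answer, might look like this:

def interview_loop(riddle: str, reference: str, max_rounds: int = 3) -> tuple[str, int]:
    """Candidate proposes answers; an interviewer model that knows the answer gives hints."""
    feedback = ""
    for round_idx in range(1, max_rounds + 1):
        # Append the interviewer's feedback to the riddle text on later rounds.
        prompt = riddle if not feedback else f"{riddle}\nInterviewer feedback: {feedback}"
        answer = solve_riddle(prompt)
        if judge_answer(riddle, reference, answer):
            return answer, round_idx  # solved within this many rounds
        # Ask the interviewer for a hint that does not reveal the answer outright.
        hint = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {
                    "role": "system",
                    "content": "You are an interviewer who knows the correct answer to a riddle. "
                               "Give the candidate a short hint without revealing the answer.",
                },
                {
                    "role": "user",
                    "content": f"Riddle: {riddle}\nCorrect answer: {reference}\nCandidate's answer: {answer}",
                },
            ],
        )
        feedback = hint.choices[0].message.content
    return answer, max_rounds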

Conclusions

In this short article, I examined the reasoning capabilities of the gpt-4o model family on brainteasers using a high-quality puzzle dataset from [3]. While the model demonstrated solid performance overall, achieving 84% accuracy, its performance dropped significantly — to 65% — on puzzles that are unlikely to be publicly available on the internet and, therefore, were probably not part of the model’s training data.

This performance gap highlights significant room for improvement in the reasoning skills of this model family.

I speculate that this gap might explain why some tech companies still include brainteaser-style questions in their interview processes. Since models struggle with these types of problems, they may help reduce the influence of generative AI on interviews. That said, this is purely my personal opinion and isn’t based on any factual data.

I didn’t discuss the performance of the o1 model in this article, which would be an excellent area for further exploration. Are there any volunteers willing to spend $100+ on such experiments and endure the long wait for results, given the model’s internal reasoning process? I’d be curious to see how o1 performs. The code is available at [4], so feel free to try it out; perhaps it will excel.

I’m also intrigued by the idea of testing a simulation scenario where two models interact: one generates answers or reasoning, while the other validates and suggests improvements. This setup could mimic real-life interviewer-candidate communication. Could this iterative process boost results? It’s an exciting question to explore.

That’s all for this article — thank you for reading! Let’s connect in future discussions. Be sure to subscribe and connect with me on LinkedIn: https://www.linkedin.com/in/maxshapp/

See you in the next episode!

References

[1] De Bono, E. (1970). Lateral Thinking. New York.

[2] Hacker News thread discussing Google’s decision to drop brainteaser interview questions.

[3] Jiang, Y., et al. (2023). BRAINTEASER: Lateral Thinking Puzzles for Large Language Models. arXiv. https://arxiv.org/pdf/2310.05057

[4] Link to GitHub repo with notebook.
