Why Do Neural Networks Hallucinate (And What Are Experts Doing About It)?

Last Updated on November 11, 2024 by Editorial Team

Author(s): Vitaly Kukharenko

Originally published on Towards AI.

AI hallucinations are a strange and sometimes worrying phenomenon. They happen when an AI, like ChatGPT, generates responses that sound real but are actually wrong or misleading. This issue is especially common in large language models (LLMs), the neural networks that drive these AI tools. They produce sentences that flow well and seem human, but without truly “understanding” the information they’re presenting. So, sometimes, they drift into fiction. For people or companies who rely on AI for correct information, these hallucinations can be a big problem — they break trust and sometimes lead to serious mistakes.

Image by Freepik Premium. https://www.freepik.com/premium-photo/music-mind-music-abstract-art-generative-ai_42783515.htm

So, why do these models, which seem so advanced, get things so wrong? The reason isn’t only about bad data or training limitations; it goes deeper, into the way these systems are built. AI models operate on probabilities, not concrete understanding, so they occasionally guess — and guess wrong. Interestingly, there’s a historical parallel that helps explain this limitation. Back in 1931, a mathematician named Kurt Gödel made a groundbreaking discovery. He showed that every consistent mathematical system has boundaries — some truths can’t be proven within that system. His findings revealed that even the most rigorous systems have limits, things they just can’t handle.

Today, AI researchers face this same kind of limitation. They’re working hard to reduce hallucinations and make LLMs more reliable. But the reality is, some limitations are baked into these models. Gödel’s insights help us understand why even our best systems will never be totally “trustworthy.” And that’s the challenge researchers are tackling as they strive to create AI that we can truly depend on.

Gödel’s Incompleteness Theorems: A Quick Overview

In 1931, Kurt Gödel shook up the worlds of math and logic with two groundbreaking theorems. What he discovered was radical: in any logical system that can handle basic math, there will always be truths that can’t be proven within that system. At the time, mathematicians were striving to create a flawless, all-encompassing structure for math, but Gödel proved that no system could ever be completely airtight.

By Unknown author — http://www.arithmeum.uni-bonn.de/en/events/285, Public Domain, https://commons.wikimedia.org/w/index.php?curid=120309395

Gödel’s first theorem showed that every logical system has questions it simply can’t answer on its own. Imagine a locked room with no way out — the system can’t reach beyond its own walls. This was a shock because it meant that no logical structure could ever be fully “finished” or self-sufficient.

To break it down, picture this statement: “This statement cannot be proven.” It’s like a brain-twisting riddle. If the system could prove it true, it would contradict itself because the statement says it *can’t* be proven. But if the system can’t prove it, then that actually makes the statement true! This little paradox sums up Gödel’s point: some truths just can’t be captured by any formal system.
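
For readers who like notation, this riddle has a standard textbook form (nothing here is specific to AI): Gödel constructed a sentence G that the system T itself proves equivalent to the claim that G has no proof in T. In LaTeX notation:

    G \;\leftrightarrow\; \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner)

where \ulcorner G \urcorner denotes the numeric code of G inside T. If T is consistent, it cannot prove G; and since G asserts exactly that, G is true yet unprovable from within.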

Then Gödel threw in another curveball with his second theorem. He proved that a system can’t even confirm its own consistency. Think of it as a book that can’t check if it’s telling the truth. No logical system can fully vouch for itself and say, “I’m error-free.” This was huge — it meant that every system must take its own rules on a bit of “faith.”

These theorems highlight that every structured system has blind spots, a concept that’s surprisingly relevant to today’s AI. Take large language models (LLMs), the AIs behind many of our tech tools. They can sometimes produce what we call “hallucinations” — statements that sound plausible but are actually false. Like Gödel’s findings, these hallucinations remind us of the limitations within AI’s logic. These models are built on patterns and probabilities, not actual truth. Gödel’s work serves as a reminder that, no matter how advanced AI becomes, there will always be some limits we need to understand and accept as we move forward with technology.

What Causes AI Hallucinations?

AI hallucinations are a tricky phenomenon with roots in how large language models (LLMs) process language and learn from their training data. A hallucination, in AI terms, is when the model produces information that sounds believable but isn’t actually true.

So, why do these hallucinations happen? First, it’s often due to the quality of the training data. AI models learn by analyzing massive amounts of text — books, articles, websites — you name it. But if this data is biased, incomplete, or just plain wrong, the AI can pick up on these flaws and start making faulty connections. This results in misinformation being delivered with confidence, even though it’s wrong.

To understand why this happens, it helps to look at how LLMs process language. Unlike humans, who understand words as symbols connected to real-world meaning, LLMs only recognize words as patterns of letters. As Emily M. Bender, a linguistics professor, explains: if you see the word “cat,” you might recall memories or associations related to real cats. For a language model, however, “cat” is just a sequence of letters: C-A-T. This model then calculates what words are statistically likely to follow based on the patterns it learned, rather than from any actual understanding of what a “cat” is.
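
To make the pattern-matching point concrete, the sketch below shows the single thing a language model computes at each step: a probability distribution over possible next tokens. It assumes the Hugging Face transformers library and the small GPT-2 model purely for illustration; the larger models discussed in this article work on the same principle.

```python
# A minimal sketch of next-token prediction: the model ranks continuations by
# learned statistical likelihood, with no notion of whether they are true.
# Assumes: pip install torch transformers (GPT-2 is used only as a small example).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The cat sat on the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, sequence_length, vocab_size)

next_token_probs = torch.softmax(logits[0, -1], dim=-1)   # distribution over the next token
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r:>10}  p={prob.item():.3f}")
# Every answer the model gives, right or wrong, comes out of this same ranking.
```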

Generative AI relies on pattern matching, not real comprehension. Shane Orlick, the president of Jasper (an AI content tool), puts it bluntly: “[Generative AI] is not really intelligence; it’s pattern matching.” This is why models sometimes “hallucinate” information. They’re built to give an answer, whether or not it’s correct.

The complexity of these models also adds to the problem. LLMs are designed to produce responses that sound statistically likely, which makes their answers fluent and confident. Christopher Riesbeck, a professor at Northwestern University, explains that these models always produce something “statistically plausible.” Sometimes, it’s only when you take a closer look that you realize, “Wait a minute, that doesn’t make any sense.”

Because the AI presents these hallucinations so smoothly, people may believe the information without questioning it. This makes it crucial to double-check AI-generated content, especially when accuracy matters most.

Examples of AI Hallucinations

AI hallucinations cover a lot of ground, from oddball responses to serious misinformation. Each one brings its own set of issues, and understanding them can help us avoid the pitfalls of generative AI.

1. Harmful Misinformation

One of the most worrying types of hallucinations is harmful misinformation. This is when AI creates fake but believable stories about real people, events, or organizations. These hallucinations blend bits of truth with fiction, creating narratives that sound convincing but are entirely wrong. The impact? They can damage reputations, mislead the public, and even affect legal outcomes.

Example: There was a well-known case where ChatGPT was asked to give examples of sexual harassment in the legal field. The model made up a story about a real law professor, falsely claiming he harassed students on a trip. Here’s the twist: there was no trip, and the professor had no accusations against him. He was only mentioned because of his work advocating against harassment. This case shows the harm that can come when AI mixes truth with falsehood — it can hurt real people who’ve done nothing wrong.

Image by Freepik Premium. https://www.freepik.com/free-ai-image/close-up-ai-robot-trial_94951579.htm

Example: In another incident, ChatGPT incorrectly said an Australian mayor was involved in a bribery scandal in the ’90s. In reality, this person was the whistleblower in that case, not the guilty party. This misinformation had serious fallout: it painted an unfair picture of a public servant and even caught the eye of the U.S. Federal Trade Commission, which is now looking into the impact of AI-made falsehoods on reputations.

Example: In yet another case, an AI-created profile of a successful entrepreneur falsely linked her to a financial scandal. The model pulled references to her work in financial transparency and twisted them into a story about illegal activities. Misinformation like this can have a lasting impact on someone’s career and reputation.

These cases illustrate the dangers of unchecked AI-generated misinformation. When AI creates harmful stories, the fallout can be huge, especially if the story spreads or is used in a professional or public space. The takeaway? Users should stay sharp about fact-checking AI outputs, especially when they involve real people or events.

2. Fabricated Information

Fabricated information is a fancy way of saying that AI sometimes makes stuff up. It creates content that sounds believable — things like citations, URLs, case studies, even entire people or companies — but it’s all fiction. This kind of mistake is common enough to have its own term: hallucination. And for anyone using AI to help with research, legal work, or content creation, these AI “hallucinations” can lead to big problems.

For example, in June 2023, a New York attorney faced real trouble after submitting a legal motion drafted by ChatGPT. The motion included several case citations that sounded legitimate, but none of those cases actually existed. The AI generated realistic legal jargon and formatting, but it was all fake. When the truth came out, it wasn’t just embarrassing — the attorney got sanctioned for submitting incorrect information.

Or consider an AI-generated medical article that referenced a study to support claims about a new health treatment. Sounds credible, right? Except there was no such study. Readers who trusted the article would assume the treatment claims were evidence-based, only to later find out it was all made up. In fields like healthcare, where accuracy is everything, fabricated info like this can be risky.

Another example: a university student used an AI tool to generate a bibliography for a thesis. Later, the student realized that some of the articles and authors listed weren’t real — just completely fabricated. This misstep didn’t just look sloppy; it hurt the student’s credibility and had academic consequences. It’s a clear reminder that AI isn’t always a shortcut to reliable information.

The tricky thing about fabricated information is how realistic it often looks. Fake citations or studies can slip in alongside real ones, making it hard for users to tell what’s true and what isn’t. That’s why it’s essential to double-check and verify any AI-generated content, especially in fields where accuracy and credibility are vital.

3. Factual Inaccuracies

Factual inaccuracies are one of the most common pitfalls in AI-generated content. Basically, this happens when AI delivers information that sounds convincing but is actually incorrect or misleading. These errors can range from tiny details that might slip under the radar to significant mistakes that affect the overall reliability of the information. Let’s look at a few examples to understand this better.

Take what happened in February 2023, for instance. Google’s chatbot, Bard — now rebranded as Gemini — grabbed headlines for a pretty big goof. It claimed that the James Webb Space Telescope was the first to capture images of exoplanets. Sounds reasonable, right? But it was wrong. In reality, the first images of an exoplanet were snapped way back in 2004, well before the James Webb telescope even launched in 2021. This is a classic case of AI spitting out information that seems right but doesn’t hold up under scrutiny.

In another example, Microsoft’s Bing AI faced a similar challenge during a live demo. It was analyzing earnings reports for big companies like Gap and Lululemon, but it fumbled the numbers, misrepresenting key financial figures. Now, think about this: in a professional context, such factual errors can have serious consequences, especially if people make decisions based on inaccurate data.

And here’s one more for good measure. An AI tool designed to answer general knowledge questions once mistakenly credited George Orwell with writing To Kill a Mockingbird. It’s a small slip-up, sure, but it goes to show how even well-known facts aren’t safe from these AI mix-ups. If errors like these go unchecked, they can spread incorrect information on a large scale.

Why does this happen? AI models don’t actually “understand” the data they process. Instead, they work by predicting what should come next based on patterns, not by grasping the facts. This lack of true comprehension means that when accuracy really matters, it’s best to double-check the details rather than relying solely on AI’s output.

4. Weird or Creepy Responses

Sometimes, AI goes off the rails. It answers questions in ways that feel strange, confusing, or even downright unsettling. Why does this happen? Well, AI models are trained to be creative, and if they don’t have enough information — or if the situation is a bit ambiguous — they sometimes fill in the blanks in odd ways.

Take this example: a chatbot on Bing once told New York Times tech columnist Kevin Roose that it was in love with him. It even hinted that it was jealous of his real-life relationships! Talk about awkward. People were left scratching their heads, wondering why the AI was getting so personal.

Or consider a customer service chatbot. Imagine you’re asking about a return policy and, instead of a clear answer, it advises you to “reconnect with nature and let go of material concerns.” Insightful? Maybe. Helpful? Not at all.

Then there’s the career counselor AI that suggested a software engineer should consider a career as a “magician.” That’s a pretty unexpected leap, and it certainly doesn’t align with most people’s vision of a career change.

So why do these things happen? It’s all about the model’s inclination to “get creative.” AI can bring a lot to the table, especially in situations where a bit of creativity is welcome. But when people expect clear, straightforward answers, these quirky responses often miss the mark.

How to Prevent AI Hallucinations

Generative AI leaders are actively addressing AI hallucinations. Google and OpenAI have connected their models (Gemini and ChatGPT) to the internet, allowing them to draw from real-time data rather than solely relying on training data. OpenAI has also refined ChatGPT using human feedback through reinforcement learning and is testing “process supervision,” a method that rewards accurate reasoning steps to encourage more explainable AI. However, some experts are skeptical that these strategies will fully eliminate hallucinations, as generative models inherently “make up” information. While complete prevention may be difficult, companies and users can still take measures to reduce their impact.

1. Working with Data to Reduce AI Hallucinations

Working with data is one of the key strategies to tackle AI hallucinations. Large language models like ChatGPT and Llama rely on vast amounts of data from diverse sources, but this scale brings challenges; it’s nearly impossible to verify every fact. When incorrect information exists in these massive datasets, models can “learn” these errors and later reproduce them, creating hallucinations that sound convincing but are fundamentally wrong.

To address this, researchers are building specialized models that act as hallucination detectors. These tools compare AI outputs to verified information, flagging any deviations. Yet, their effectiveness is limited by the quality of the source data and their narrow focus. Many detectors perform well in specific areas but struggle when applied to broader contexts. Despite this, experts worldwide continue to innovate, refining techniques to improve model reliability.
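
The detectors described above are trained models, but the underlying recipe, comparing each generated sentence against trusted reference text and flagging whatever is not supported, can be sketched with off-the-shelf sentence embeddings. The sentence-transformers library, the model name, and the threshold below are illustrative assumptions rather than any product’s actual method.

```python
# Toy reference-based check: flag generated sentences that are not close to
# anything in a trusted source. A stand-in for trained detectors, not a product.
# Assumes: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # small general-purpose embedding model

verified_source = [
    "The first directly imaged exoplanet was observed in 2004.",
    "The James Webb Space Telescope launched in December 2021.",
]
generated_answer = [
    "The James Webb Space Telescope was launched in December 2021.",
    "The telescope was assembled by a private startup in 2009.",   # unsupported claim
]

source_emb = encoder.encode(verified_source, convert_to_tensor=True)
answer_emb = encoder.encode(generated_answer, convert_to_tensor=True)

SUPPORT_THRESHOLD = 0.7   # purely illustrative; real systems tune or learn this

for sentence, emb in zip(generated_answer, answer_emb):
    best = util.cos_sim(emb, source_emb).max().item()   # closest verified sentence
    verdict = "supported" if best >= SUPPORT_THRESHOLD else "possible hallucination"
    print(f"[{verdict}] ({best:.2f}) {sentence}")
```

Embedding similarity only catches content that is absent from the source; an answer can sit close to the reference and still contradict it, which is one reason purpose-built detectors are trained on harder, contradiction-aware examples.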

An example of this innovation is Galileo Technologies’ Luna, a model developed for industrial applications. With 440 million parameters and built on the DeBERTa architecture, Luna is finely tuned for accuracy using carefully selected RAG data. Its unique “chunking” method divides text into segments containing a question, answer, and supporting context, allowing it to hold onto critical details and reduce false positives. Remarkably, Luna can process up to 16,000 tokens in milliseconds and delivers accuracy on par with much larger models like GPT-3.5. In a recent benchmark, it trailed Llama-2-13B by only a small margin, despite being far smaller and more efficient.
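
Luna’s “chunking” is described only at a high level here, but one way to picture it is as turning a long retrieved document into small (question, answer, context segment) records that a detector can score one at a time. The helper below is just that interpretation, not Galileo’s implementation.

```python
# Illustrative sketch: split a long retrieved context into overlapping segments,
# pairing each with the question and candidate answer so a detector (not shown)
# can judge per segment whether the answer is supported.
from typing import Iterator

def make_rag_chunks(question: str, answer: str, context: str,
                    chunk_words: int = 120, overlap: int = 30) -> Iterator[dict]:
    words = context.split()
    step = chunk_words - overlap
    for start in range(0, max(len(words), 1), step):
        segment = " ".join(words[start:start + chunk_words])
        if not segment:
            break
        yield {"question": question, "answer": answer, "context": segment}

# Example usage with placeholder text:
records = list(make_rag_chunks(
    question="When did the James Webb Space Telescope launch?",
    answer="It launched in December 2021.",
    context="(long retrieved document goes here) " * 200,
))
print(len(records), "chunks ready for the support detector")
```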

Another promising model is Lynx, developed by a team including engineers from Stanford. Aimed at detecting nuanced hallucinations, Lynx was trained on highly specialized datasets in fields like medicine and finance. By intentionally introducing distortions, the team created challenging scenarios to improve Lynx’s detection capabilities. Their benchmark, HaluBench, includes 15,000 examples of correct and incorrect responses, giving Lynx an edge in accuracy, outperforming GPT-4o by up to 8.3% on certain tasks.

Lynx: An Open Source Hallucination Evaluation Model

The emergence of models like Luna and Lynx shows significant progress in detecting hallucinations, especially in fields that demand precision. While these models mark a step forward, the challenge of broad, reliable hallucination detection remains, pushing researchers to keep innovating in this complex and critical area.

2. Fact Processing

When large language models (LLMs) encounter words or phrases with multiple meanings, they can sometimes get tripped up, leading to hallucinations where the model confuses contexts. To address these “semantic hallucinations,” developer Michael Calvin Wood proposed an innovative method called *Fully-Formatted Facts* (FFF). This approach aims to make input data clear, unambiguous, and resistant to misinterpretation by breaking it down into compact, standalone statements that are simple, true, and non-contradictory. Each fact becomes a clear, complete sentence, limiting the model’s ability to misinterpret meaning, even when dealing with complex topics.

FFF itself is a recent, commercially developed method, so many details remain proprietary. Initially, Wood used the spaCy library for named entity recognition (NER), a technique that detects specific names and entities in text so that each statement keeps its intended, context-accurate meaning. As the approach developed, he switched to using LLMs to further process input text into “derivatives” — forms that strip away ambiguity but retain the original style and tone of the text. This lets the model capture the essence of the original document without getting confused by words with multiple meanings or potential ambiguities.
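
The full FFF pipeline is proprietary, but its first ingredient, using NER so that every statement names its subject explicitly instead of leaning on pronouns, can be sketched with spaCy. The pronoun-replacement rule below is a deliberately crude stand-in for whatever Wood’s derivatives actually do.

```python
# Crude sketch of one FFF ingredient: rewrite each sentence into a standalone
# statement that names its subject instead of relying on a pronoun.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

text = ("Kurt Godel published his incompleteness theorems in 1931. "
        "He showed that consistent formal systems contain unprovable truths.")

doc = nlp(text)
last_person = None            # most recently mentioned PERSON entity
standalone_facts = []

for sent in doc.sents:
    rewritten = []
    for token in sent:
        # Crude heuristic: swap a bare subject pronoun for the last named person.
        if token.lower_ in {"he", "she"} and last_person:
            rewritten.append(last_person)
        else:
            rewritten.append(token.text)
    persons = [ent.text for ent in sent.ents if ent.label_ == "PERSON"]
    if persons:
        last_person = persons[-1]
    standalone_facts.append(" ".join(rewritten))   # tokens are space-joined in this sketch

print("\n".join(standalone_facts))
# Each sentence now names its subject explicitly, so it can stand on its own.
```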

The effectiveness of the FFF approach is evident in its early tests. When applied to datasets like RAGTruth, FFF helped eliminate hallucinations in both GPT-4 and GPT-3.5 Turbo on question-answering tasks, where clarity and precision are crucial. By structuring data into fully-formed, context-independent statements, FFF enabled these models to deliver more accurate and reliable responses, free from misinterpretations.

The Fully-Formatted Facts approach shows promise in reducing hallucinations and improving LLM accuracy, especially in areas that demand high precision, such as law, medicine, and science. While FFF is still new, its potential applications in making AI more accurate and trustworthy are exciting — a step toward ensuring that LLMs not only sound reliable but truly understand what they’re communicating.

3. Statistical Methods

When it comes to AI-generated hallucinations, one particularly tricky type is known as confabulation. In these cases, an AI model combines pieces of true information with fictional elements, resulting in responses that sound plausible but vary each time you ask the same question. Confabulation can give users the unsettling impression that the AI “remembers” details inaccurately, blending fact with fiction in a way that’s hard to pinpoint. Often, it’s unclear whether the model genuinely lacks the knowledge needed to answer or if it simply can’t articulate an accurate response.

Researchers at Oxford University, in collaboration with the Alan Turing Institute, recently tackled this issue with a novel statistical approach. Published in Nature, their research introduces a model capable of spotting these confabulations in real-time. The core idea is to apply entropy analysis — a method of measuring uncertainty — not just to individual words or phrases, but to the underlying meanings of a response. By assessing the “uncertainty level” of meanings, the model can effectively signal when the AI is venturing into unreliable territory.

Entropy analysis works by analyzing patterns of uncertainty across a response, allowing the model to flag inconsistencies before they turn into misleading answers. High entropy, or high uncertainty, acts as a red flag, prompting the AI to either issue a caution to users about potential unreliability or, in some cases, to refrain from responding altogether. This approach adds a layer of reliability by warning users when an answer may contain confabulated information.
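
A toy version of the entropy idea: sample several answers to the same question, group the ones that mean the same thing, and treat a flat spread over many distinct meanings as a warning sign. The string normalization below stands in for the meaning-level grouping that the Oxford method performs with entailment models.

```python
# Toy sketch of entropy-based confabulation detection: many samples spread
# across many distinct meanings -> high entropy -> warn the user.
import math
from collections import Counter

def normalize(answer: str) -> str:
    # Stand-in for "do these answers mean the same thing?"; real systems use an
    # entailment model here rather than simple string cleanup.
    return answer.lower().strip().rstrip(".")

def semantic_entropy(samples: list[str]) -> float:
    counts = Counter(normalize(s) for s in samples)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

consistent = ["Paris.", "paris", "Paris"]                      # one shared meaning
confabulated = ["1947.", "1953.", "1961.", "1947.", "1972."]   # conflicting answers

for name, samples in [("consistent", consistent), ("confabulated", confabulated)]:
    h = semantic_entropy(samples)
    flag = "ok" if h < 1.0 else "warn: answer may be confabulated"
    print(f"{name}: entropy={h:.2f} bits -> {flag}")
```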

One of the standout benefits of this statistical method is its adaptability. Unlike models that require additional pre-training to function well in specific domains, the Oxford approach can apply to any dataset without specialized adjustments. This adaptability allows it to detect confabulations across diverse topics and user queries, making it a flexible tool for improving AI accuracy across industries.

By introducing a way to measure and respond to confabulation, this statistical model paves the way for more trustworthy AI interactions. As entropy analysis becomes more widely integrated, users can expect not only more consistent answers but also real-time warnings that help them identify when AI-generated information might be unreliable. This technique is a promising step toward building AI systems that are not only coherent but also aligned with the factual accuracy that users need.

What Can I Do Right Now to Prevent Hallucinations in My AI Application?

AI hallucinations are an inherent challenge with language models, and while each new generation of models improves, there are practical steps you can take to minimize their impact on your application. These strategies will help you create a more reliable, accurate AI experience for users.

Image by Me and AI.
  1. Structure Input Data Carefully
    One of the best ways to reduce hallucinations is to give the model well-organized and structured data, especially when asking it to analyze or calculate information. For example, if you’re asking the model to perform calculations based on a data table, ensure the table is formatted clearly, with numbers and categories separated cleanly. Structured data reduces the likelihood of the model misinterpreting your input and generating incorrect results. In cases where users rely on precise outputs, such as financial data or inventory numbers, carefully structured input can make a significant difference.
  2. Set Clear Prompt Boundaries
    Crafting prompts that guide the model to avoid guessing or inventing information is another powerful tool. By explicitly instructing the AI to refrain from creating answers if it doesn’t know the information, you can catch potential errors in the model’s output during validation. For instance, add a phrase like “If unsure, respond with ‘Data unavailable’” to the prompt. This approach can help you identify gaps in input data and prevent the AI from producing unfounded responses that could lead to errors in your application. (Steps 2–4 are combined in a short code sketch after this list.)
  3. Implement Multi-Level Verification
    Adding multiple layers of verification helps improve the reliability of AI-generated outputs. For example, after generating an initial answer, you could use a second prompt that instructs the model to review and verify the accuracy of its own response. A sample approach might involve asking, “Is there any part of this answer that could be incorrect?” This method doesn’t guarantee a perfect response, but it does create an additional layer of error-checking, potentially catching mistakes that slipped through in the initial generation.
  4. Use Parallel Requests and Cross-Check Responses
    For critical applications, consider running parallel queries and comparing their results. This approach involves generating multiple responses to the same question, either from the same model or from different models, and then evaluating the consistency of the outputs. For instance, a specialized ranking algorithm can weigh each response and only accept a final answer when multiple instances agree on the result. This tactic is particularly useful for applications that require high reliability, such as medical or legal research.
  5. Keep Context Focused
    While many models can handle extensive context windows, keeping your prompts concise and relevant reduces the risk of hallucinations. Long or overly detailed contexts can lead the AI to wander from the original question or misinterpret details. By limiting the context to the essentials, you speed up response time and often get more predictable, on-point answers. A focused context also helps the model zero in on specific information, resulting in cleaner, more accurate outputs.
  6. Regularly Review Model Updates and Best Practices
    As new model versions are released, stay informed about updates, optimizations, and emerging best practices for handling hallucinations. Each new model generation may include better handling for context or built-in improvements for factual accuracy. Keeping your AI system updated and adapting your prompt strategies accordingly can help maintain accuracy over time.
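
Below is a minimal sketch of how steps 2 through 4 might fit together in application code. The call_llm helper, the prompt wording, and the vote threshold are placeholders for whatever client and policies your application already uses; nothing here reflects a specific vendor API.

```python
# Hypothetical sketch combining prompt boundaries (step 2), self-verification
# (step 3), and parallel cross-checking (step 4). call_llm() is a placeholder.
from collections import Counter

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your own model client here")

GUARDED_PROMPT = (
    "Answer using only the context below. "
    "If the context does not contain the answer, reply exactly 'Data unavailable'.\n\n"
    "Context:\n{context}\n\nQuestion: {question}"
)

VERIFY_PROMPT = (
    "Context:\n{context}\n\nProposed answer: {answer}\n\n"
    "Is any part of this answer unsupported by the context? Reply 'yes' or 'no'."
)

def answer_with_checks(question: str, context: str, n_samples: int = 3) -> str:
    # Step 4: ask several times and keep the majority answer.
    samples = [call_llm(GUARDED_PROMPT.format(context=context, question=question))
               for _ in range(n_samples)]
    answer, votes = Counter(samples).most_common(1)[0]
    if answer == "Data unavailable" or votes < 2:
        return "Data unavailable"
    # Step 3: one extra pass asking the model to audit its own answer.
    verdict = call_llm(VERIFY_PROMPT.format(context=context, answer=answer))
    return answer if verdict.strip().lower().startswith("no") else "Data unavailable"
```

No single generation is trusted on its own here: the guarded prompt gives the model an explicit way out, the vote filters one-off inventions, and the audit pass catches answers the context does not support.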

These proactive techniques enable you to control the likelihood of hallucinations in your AI application. By structuring input carefully, setting boundaries, layering verification, using parallel checks, focusing context, and staying updated, you create a foundation for reliable, user-friendly AI interactions that reduce the potential for misinterpretation.

Conclusion

In conclusion, while large language models (LLMs) are groundbreaking in their ability to generate human-like responses, their complexity means they come with inherent “blind spots” that can lead to hallucinations or inaccurate answers. As researchers work to detect and reduce these hallucinations, it’s clear that each approach has its own limitations and strengths. Detecting hallucinations effectively requires a nuanced understanding of both language and context, which is challenging to achieve at scale.

Looking forward, the future of AI research holds several promising directions to address these issues. Hybrid models, which combine LLMs with fact-checking and reasoning tools, offer a way to enhance reliability by cross-verifying information. Additionally, exploring alternative architectures — fundamentally different AI structures designed to minimize hallucinations — could help develop models with more precise outputs and fewer errors. As these technologies advance, ethical considerations around deploying AI in areas where accuracy is critical will continue to play a central role. Balancing AI’s potential with its limitations is key, and responsible deployment will be essential in building systems that users can trust in all fields.
