
Do Large Language Models Have Minds Like Ours?

Last Updated on July 17, 2023 by Editorial Team

Author(s): Vincent Carchidi

Originally published on Towards AI.

Source: Image by Possessed Photography on Unsplash.


Intellectual spats between generative linguists and machine learning researchers have neglected the most interesting components of human language use.


By Vincent J. Carchidi

Do large language models (LLMs) use language creatively? Much has been written recently about whether LLMs generate text sufficiently novel to be considered “creative” or whether they merely synthesize human-generated content without a distinctive contribution of their own. This is one dimension of a highly complex debate unfolding over the nature of both LLMs and human intelligence.

This saga has seen contributions from thinkers in a diversity of disciplines, including computer science, robotics, cognitive science, philosophy, and even national security. A notable flashpoint is linguist Noam Chomsky’s fiery critique of ChatGPT and LLMs in The New York Times. This controversial piece illuminates stark divides between scientific approaches to the nature of the human mind, natural and artificial intelligence (AI), and how engineering makes use (or doesn’t) of these notions.

Chomsky’s NYT piece spurred tremendous debate on this subject, as he highlighted his belief that “Intelligence consists not only of creative conjectures but also of creative criticism.” The discourse that has sprung up in the wake of this and other pieces surrounds familiar arguments about the utility of generative linguistics, the role of cognitive science in AI, and even broader matters such as emergent theory-of-mind capabilities in LLMs.

I find myself frustrated and baffled. This is good because otherwise I may not have written this article. But the reasons are not stellar: Chomsky’s rigid communication style has prevented him from directly and explicitly leveraging some of the fascinating features of his own linguistic work to assess LLMs’ capabilities. Conversely, machine learning researchers have so thoroughly indulged in the euphoria of the field’s recent (and real) advancements that they frequently lack the will to ask whether human cognition is really as straightforward as it seems.

I attempt to remedy this here. Where Chomsky’s approach to the mind and the generative tradition broadly have been brought into AI, the focus has been intensely on familiar arguments like the poverty of the stimulus and the innateness of linguistic knowledge or principles. I instead highlight what is known in the rationalist tradition in philosophy and cognitive science as the “creative aspect of language use,” or CALU.

CALU, referring to the stimulus-free, unbounded, and appropriate and coherent ways in which humans use language, offers a three-pronged test for the existence of a mind similar to our own. It is inextricably connected to human intellectual nature and our means of creativity.

Somehow, despite several AI-induced intellectual spasms lately, the only sustained discussion of CALU and LLMs/AI appears to be in a Machine Learning Street Talk video on Noam Chomsky (Disclaimer: I have no affiliation with MLST). Relatedly, Mohamad Aboufoul alludes to Chomsky’s views on determinism and free will.

Whether LLMs are creative in the sense that human beings, upon reflection, consider themselves to be is one of the most important questions interested individuals can ask at the current moment. An understanding of “true” or “genuine” creativity informs a diversity of views related to human nature, the significance of human effort and output, AI ethics, the nature of current and possible AI systems, and the contours of human-machine interaction in the near future. Creativity is not everything, but if one is interested in AI, then one ought to know where one stands on it.

Through this lens, we ought to assess the linguistic creativity — and the question of whether we are interacting with minds like our own — of LLMs. We begin with a breakdown of CALU and its relevance for AI, followed by an application of its three components to LLMs like ChatGPT, and conclude with some thoughts on the significance of an AI system reproducing CALU for humans.

Table of Contents

  • What Is the Creative Aspect of Language Use?
  • Why Does CALU Even Matter for AI?
  • Do LLMs Exhibit CALU?
  • What Would the Artificial Reproduction of CALU Mean for Humans?

What Is the Creative Aspect of Language Use?

CALU takes what is vigorously debated in AI and linguistics — the syntactic and semantic novelty of human language use — and situates it in a broader, though more subtle, perspective. It is an observation and description of how language is used by human individuals; CALU is not a theory or even an explanatory framework of how language is put to use by humans in concrete settings. (This mirrors the competence-performance distinction often employed in the cognitive sciences, but this should not distract the reader.)

CALU is inextricably bound up with human thought. This creative use of language is, in fact, ordinary — it is the ability, as Chomsky puts it, to “form new statements which express new thoughts and which are appropriate to new situations.” Philosopher James McGilvray notes that this idea is traceable back to Descartes, who believed that, taken together, the three components of CALU are “a test of having a mind ‘as we do.’” CALU, in this sense, is not intelligence per se, but a fundamental feature of human cognition, shaping the intellectual character of the species.

The ability to form new linguistic expressions in a manner that is causally detached from the circumstances of their use and transmit them to others who find them intelligible and complementary to their own thoughts underwrites the most mundane and the richest of human creations. “This,” Charles Kreidler writes, “is just what happens when the architect envisions a building not yet erected, the composer puts together a concerto that is still to be played, a writer devises a story about imaginary people doing imaginary things…”

Creative language use is thus broken down as follows (drawn from McGilvray’s description here):

· Stimulus Freedom: The use of a particular linguistic expression cannot be causally traced back to any external or internal circumstance. “Language use might be prompted by but is not causally tied to and determined by current external or internal circumstance.”

· Unbounded: There is no limit on the number or kinds of sentences that are produced either in thought or in speech, including in any specific circumstance. Human linguistic production is not only novel but innovative.

· Appropriate and Coherent to Circumstance: Despite the unbounded and stimulus-free nature of language use, it is nonetheless appropriate for any given circumstance, fictional or otherwise. Uncaused remarks are produced without limit yet remain appropriate to the eliciting circumstance.

Critically, these three uses of “vocabulary items and syntactic rules,” as linguist Mark Baker points out, must be simultaneous. Language use would not be creative if we only generated an unbounded set of thoughts or speech; it would not be creative if we thought or spoke in a stimulus-free fashion but incoherently and with bounded limits; and it would not be creative to simply utter a few simple words that are appropriate to a situation but not unbounded and stimulus free. Only together do these features of language use make it creative. Only together do they indicate the presence of a mind like our own.

Much more can be said, but consider this point before we proceed to language use by LLMs: while one may draw conclusions about the relationship between semantics and syntax, and a host of overlapping cognitive, psychological, and social matters, from CALU, one does not need to be a generative linguist to recognize that CALU exists.

Why Does CALU Even Matter for AI?

CALU offers a set of criteria by which the existence of a mind can be determined. This ability is remarkable. As I have written elsewhere, it is frankly “ironic that our ordinary use of language possesses a quality so remarkable but that so few of us are prepared to acknowledge it.” I depart from the respectable and understandable view articulated here by computer scientist Pedro Domingos that human creativity is given too much credit, owing more to a simple cutting-and-pasting process than some high-in-the-sky slice of humanity.

Domingos, interestingly, highlights his own experience as a musician as anecdotal evidence that human creativity is a rather mechanistic process (presumably, to say nothing of its enjoyment). I imagine that Domingos’ use of his personal intuitions in this regard is not uncommon in assessments of AI systems like LLMs. We readily believe we understand ourselves and what it means for humans to be creative. We naturally believe we can transfer this understanding to LLMs at will. But, as the need to expound on anthropomorphizing shows, our understanding of human intelligence is frequently deficient upon closer examination. Accusations of goalpost shifting in laying out criteria for what is considered “true” human-like intelligence charitably reflect this realization that we all lack an understanding of human intellectual nature (rather than the uncharitable suggestion that one’s opponents are acting in bad faith).

Indeed, Chomsky’s own approach to the study of language and mind is couched in an oft-neglected philosophy that rejects the use of commonsense intuitions and concepts in scientific inquiry. He frequently likens the proper study of the mind to the development of physics, highlighting the latter’s centuries-long difficulties with commonsense notions of motion and causality. A certain skepticism of simply accepting what is right before our eyes pervades generativist writing. It is in this context that CALU emerges as a phenomenon visible only to those willing to take a sufficiently refined lens to the problem of human language use — recognizing the “crucial if obscure difference” embedded in the observation that “discourse is not a series of random utterances but fits the situation that evokes it but does not cause it.”

Large Language Models may present the most challenging example of human-like language use by non-humans in the species’ history. I am innately resistant to hype and exaggeration in AI, but I know of no comparable example in the history of human invention that exhibits as human-like a use of syntactic structures as LLMs like ChatGPT do.

Indeed, perhaps surprisingly to some today, automating linguistic creativity has been an intermittent fixation of rationalists and generativists of diverse stripes, beginning with Descartes. Each of the works on CALU cited in this piece thus far alludes to the inability of machines to replicate stimulus-free, unbounded, and appropriate linguistic thoughts and expressions.

Do LLMs Exhibit CALU?

The question before us is this: Do Large Language Models reproduce CALU? An affirmative answer implies that certain LLMs possess minds sufficiently like ours; a negative answer implies that they do not.

To answer our question, we consider each criterion in detail:

· Stimulus Freedom: LLMs are engaged through prompts. Human users input strings of human language, and the program returns a response. Claims regarding LLMs’ intellectual capabilities hinge on these programs responding as requested by human users in a direct and appeasing manner.

LLMs’ responses can be tied to an identifiable stimulus. Even the internal message tags that LLMs like Bing AI use (inclusive of the assistant’s “inner monologue”) are inextricably linked to the user’s input. OpenAI reports experimenting with GPT-4’s autonomous self-replicating capabilities (with unfortunate media framing), which we may consider an interesting, if indirect, attempt at reproducing CALU. But GPT-4 was ultimately ineffective in this context, even with some success at tricking a TaskRabbit user. Nothing about the ‘simulated’ means by which GPT-4 was tested here suggests its output was stimulus-free. [Judgment: Stimulus-Constrained.]

· Unbounded: It appears that LLMs, including GPT-3.5 (ChatGPT), GPT-4 (ChatGPT Plus, Bing AI), and Bard, among others, are capable of producing an unlimited number and variety of sentences for any given context. This is a magnificent achievement. It is also the intense focus of ongoing debates in linguistics as to what this means for Chomsky’s approach to syntax and the generative school broadly. For our purposes, whether LLMs do this by “knowing” or “understanding” the abstract rules of human grammar or by statistically settling on a too-perfect mimicry of them (if there is a difference here) is not directly relevant. The fact is that their syntactic output is unbounded.

Just as important, however, is that this output is strictly “verbal” — there is not yet sufficient reason to believe any kind of linguistic thought is occurring. In this same vein, furthermore, LLMs produce novel linguistic outputs but do not appear to be innovative in the free yet constrained sense that human beings are. (For example, no LLM has yet written this paper on CALU and LLMs, and my attempts to achieve this through prompts of various kinds have returned inaccurate and/or middling results.) The syntactic combinations LLMs produce are novel and limitless but not particularly innovative. Rather than advancing discourse, they seem to excel at capturing what already exists through limitless linguistic expressions (perhaps this is why the significance of LLM-powered chatbots in popular discourse is sometimes downgraded from autonomous superintelligences to helpful collaborators and finally to occasionally useful apps). [Judgment: Syntactically unbounded, semantically bounded.]

· Appropriate and Coherent to Circumstance: On the surface, it seems that LLMs produce linguistic expressions that are coherent and appropriate to the circumstances of their use. This is difficult to probe not only because LLMs are frequently built with guardrails that restrict their outputs (“As an AI language model, I do not…”), but also because it is difficult to know exactly what counts as appropriate. An accepted condition is whether one’s interlocutor judges the responses to be appropriate. As Chomsky puts it, language use is “recognized as appropriate by other participants…who might have reacted in similar ways and whose thoughts, evoked by this discourse, correspond to those of the speaker.” Even here, however, because of our tendency to anthropomorphize, we impose coherence onto LLM-powered chatbots’ answers even when there may be none (we do the same with people, too, but we do not deny that their thoughts can and do correspond with ours).

I must be anecdotal here, but with the knowledge that I am not alone in this experience: when I interact with conversational AIs, I have never felt as though there was a mutual correspondence of thought occurring between prompt and response. Even creative prompts that yield interesting results are interesting in the same way that happening upon a unique Wikipedia page is interesting. Wide-ranging, rich conversations in which my human interlocutor’s responses flick from subject to subject with mutually intelligible relevance do not happen, in my experience, with chatbots. Even correct answers to queries have an air of mechanical appeasement, not correspondence with my own thoughts. For example, ChatGPT, Bing, and Bard each return responses that seem appropriate to the topic of CALU and its relationship to AI, but on even moderately close analysis, fail to produce linguistic content of an appropriate nature over an extended conversation. Note that such coherence, though still difficult to pin down, is more precise than one prominent study’s use of the term, which seemingly equates coherence with grammaticality and semantics at times. [Judgment: Undetermined, leaning towards frequently inappropriate to circumstances.]
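The stimulus-freedom criterion above can be made concrete with a minimal sketch. The function below is a hypothetical stand-in, not any real model or API: it mimics the structure of sampled decoding, where the output is a deterministic function of the prompt plus the sampling seed. Fixing both fully determines the response, illustrating the sense in which an LLM’s output is causally traceable to its eliciting stimulus, unlike the stimulus-free language use CALU describes.

```python
import hashlib

def toy_llm(prompt: str, seed: int = 0) -> str:
    """Hypothetical stand-in for sampled LLM decoding: the output is a
    pure function of (prompt, seed), i.e., of its stimulus plus fixed
    randomness. No real model's behavior is claimed here."""
    digest = hashlib.sha256(f"{seed}:{prompt}".encode()).hexdigest()
    canned = [
        "Certainly! Here is a response.",
        "As an AI language model, I can help with that.",
        "That is an interesting question.",
    ]
    # Pick a canned reply deterministically from the hashed stimulus.
    return canned[int(digest, 16) % len(canned)]

# An identical stimulus and seed yield an identical response: the output
# can always be traced back to the prompt that elicited it.
assert toy_llm("Do LLMs have minds?", seed=42) == toy_llm("Do LLMs have minds?", seed=42)
```

Real LLMs add temperature-driven sampling on top of this picture, but the randomness is injected by the decoding procedure, not by anything resembling the uncaused-yet-appropriate character of human language use.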

Overall Judgment: Large Language Models do not reproduce CALU. They thus fail, on these terms, to prove they possess minds like our own.

What Would the Artificial Reproduction of CALU Mean for Humans?

If the term “artificial general intelligence,” or AGI, is meant to describe an AI system that possesses intellectual capabilities comparable to that of humans, then CALU must be relevant to identifying its existence. On this test alone, LLMs are neither minds like ours nor AGI.

It is strange, however, that this concept has not been made more explicit by either generativists or their detractors. It is a phenomenon whose mere existence depends only on an acceptance of readily observable and describable features of human language use that do not cohere exclusively with generative theories. Generativists’ own reluctance to carry commonsense intuitions into scientific inquiry has something to offer here.

CALU is central to human intellectual nature and will thus continue to be central to our assessments of future AI systems. We evidently consider the matter of creativity an intimate one, yet we have often settled for imprecision and passion in our assessments of it. Much of this, I assume, rests with either the euphoria or the fear that future AI systems could match or exceed our own creative efforts. Much of this, in turn, may rest on convictions individuals hold about human nature.

This thought process is a mistake. It is an understandable mistake, but one owing to the wildly disconnected and overhyped intersection of AI research and a litany of human arts and sciences. If CALU were to be reproduced by an AI system, this might be considered an achievement so momentous that current discourse tilting between euphoria and doom simply evades, not captures, its significance. A language model that actually exhibits CALU would be no more a threat to my significance than the existence of human writers better than myself (and with apologies to Eliezer Yudkowsky, I have no desire to take over the world).

A better way forward is to temporarily calm our passions in the service of bridging divides between scientific and engineering approaches to the mind. Generative linguistics, as noted, is a notable flashpoint in this intersection of worlds, especially with Steven Piantadosi’s LLM-driven critique of Chomsky’s approach to language. But it is a shame to see rich stocks of wisdom on both sides become oversimplified. To echo computer scientist Walid Saba’s sentiments here, we should stare advancements in AI in the face while remaining humble about the complexity and utter sophistication of the human mind.

References:

[1] M. Baker, The Creative Aspect of Language Use and Nonbiological Nativism (2008), Oxford University Press

[2] V.J. Carchidi, Do submarines swim? Methodological dualism and anthropomorphizing AlphaGo (2022), AI & Society

[3] N. Chomsky, Language and Problems of Knowledge (1988), MIT

[4] N. Chomsky, Cartesian Linguistics (2009), Cambridge University Press

[5] N. Chomsky, The Mysteries of Nature: How Deeply Hidden? (2009), The Journal of Philosophy

[6] K. Duggar, T. Scarfe and W. Saba, #78 — Prof. NOAM CHOMSKY (Special Edition) [Video] (2022), Machine Learning Street Talk

[7] C.W. Kreidler, Introducing English Semantics (1998), Routledge

[8] J. McGilvray, Chomsky on the Creative Aspect of Language Use and Its Implications for Lexical Semantic Studies (2011), Cambridge University Press

[9] J. McGilvray, Cognitive Science: What Should It Be? (2017), Cambridge University Press


Published via Towards AI


',*/ ]; var replaceText = { '': '', '': '', '
': '
' + ctaLinks + '
', }; Object.keys(replaceText).forEach((txtorig) => { //txtorig is the key in replacetext object const txtnew = replaceText[txtorig]; //txtnew is the value of the key in replacetext object let entryFooter = document.querySelector('article .entry-footer'); if (document.querySelectorAll('.single-post').length > 0) { //console.log('Article found.'); const text = entryFooter.innerHTML; entryFooter.innerHTML = text.replace(txtorig, txtnew); } else { // console.log('Article not found.'); //removing comment 09/04/24 } }); var css = document.createElement('style'); css.type = 'text/css'; css.innerHTML = '.post-tags { display:none !important } .article-cta a { font-size: 18px; }'; document.body.appendChild(css); //Extra //This function adds some accessibility needs to the site. function addAlly() { // In this function JQuery is replaced with vanilla javascript functions const imgCont = document.querySelector('.uw-imgcont'); imgCont.setAttribute('aria-label', 'AI news, latest developments'); imgCont.title = 'AI news, latest developments'; imgCont.rel = 'noopener'; document.querySelector('.page-mobile-menu-logo a').title = 'Towards AI Home'; document.querySelector('a.social-link').rel = 'noopener'; document.querySelector('a.uw-text').rel = 'noopener'; document.querySelector('a.uw-w-branding').rel = 'noopener'; document.querySelector('.blog h2.heading').innerHTML = 'Publication'; const popupSearch = document.querySelector$('a.btn-open-popup-search'); popupSearch.setAttribute('role', 'button'); popupSearch.title = 'Search'; const searchClose = document.querySelector('a.popup-search-close'); searchClose.setAttribute('role', 'button'); searchClose.title = 'Close search page'; // document // .querySelector('a.btn-open-popup-search') // .setAttribute( // 'href', // 'https://medium.com/towards-artificial-intelligence/search' // ); } // Add external attributes to 302 sticky and editorial links function extLink() { // Sticky 302 links, this fuction opens the link we send to Medium on 
a new tab and adds a "noopener" rel to them var stickyLinks = document.querySelectorAll('.grid-item.sticky a'); for (var i = 0; i < stickyLinks.length; i++) { /* stickyLinks[i].setAttribute('target', '_blank'); stickyLinks[i].setAttribute('rel', 'noopener'); */ } // Editorial 302 links, same here var editLinks = document.querySelectorAll( '.grid-item.category-editorial a' ); for (var i = 0; i < editLinks.length; i++) { editLinks[i].setAttribute('target', '_blank'); editLinks[i].setAttribute('rel', 'noopener'); } } // Add current year to copyright notices document.getElementById( 'js-current-year' ).textContent = new Date().getFullYear(); // Call functions after page load extLink(); //addAlly(); setTimeout(function() { //addAlly(); //ideally we should only need to run it once ↑ }, 5000); }; function closeCookieDialog (){ document.getElementById("cookie-consent").style.display = "none"; return false; } setTimeout ( function () { closeCookieDialog(); }, 15000); console.log(`%c 🚀🚀🚀 ███ █████ ███████ █████████ ███████████ █████████████ ███████████████ ███████ ███████ ███████ ┌───────────────────────────────────────────────────────────────────┐ │ │ │ Towards AI is looking for contributors! │ │ Join us in creating awesome AI content. 
│ │ Let's build the future of AI together → │ │ https://towardsai.net/contribute │ │ │ └───────────────────────────────────────────────────────────────────┘ `, `background: ; color: #00adff; font-size: large`); //Remove latest category across site document.querySelectorAll('a[rel="category tag"]').forEach(function(el) { if (el.textContent.trim() === 'Latest') { // Remove the two consecutive spaces (  ) if (el.nextSibling && el.nextSibling.nodeValue.includes('\u00A0\u00A0')) { el.nextSibling.nodeValue = ''; // Remove the spaces } el.style.display = 'none'; // Hide the element } }); // Add cross-domain measurement, anonymize IPs 'use strict'; //var ga = gtag; ga('config', 'G-9D3HKKFV1Q', 'auto', { /*'allowLinker': true,*/ 'anonymize_ip': true/*, 'linker': { 'domains': [ 'medium.com/towards-artificial-intelligence', 'datasets.towardsai.net', 'rss.towardsai.net', 'feed.towardsai.net', 'contribute.towardsai.net', 'members.towardsai.net', 'pub.towardsai.net', 'news.towardsai.net' ] } */ }); ga('send', 'pageview'); -->