Managing an AI developer: Lessons Learned from SMOL AI — Part 1

Last Updated on June 28, 2023 by Editorial Team

Author(s): Meir Kanevskiy

Originally published on Towards AI.

Source: Image by DALL-E

One of the most interesting ramifications of the recent breakthroughs, specifically in large language models, is the potential for building automated agents that use these models to carry out entire work projects. The theoretical capabilities here are vast. Sometimes, when communicating with a chat-based model like ChatGPT, a simple follow-up prompt such as “improve your answer” or “make your answer more accurate” can significantly improve the response to the initial query. However, building such automated agents raises an old problem in a new form: effective management and responsibility. How do you manage such an agent? Managing real human beings is no simple task; it has spawned thousands of pages of literature, as well as popular and polished practices and methodologies built on decades of experience. Can these practices be applied to development performed by AI agents? What factors should be considered? What metrics should be assigned to projects delivered by AI agents? While we cannot fully answer these questions, let’s consider a specific case and see what we can learn from it.

The task

In various situations, there is a need to label different entities: configurations, datasets, models, color schemes, or any other meaningful group of similar items in a project. It would be helpful to quickly assign recognizable names to these entities: recognizable to the human eye (unlike a UUID4) and plentiful (more options than there are recallable colors in a rainbow). Frequent users of, e.g., the docker CLI or wandb might have already recognized the pattern. Running Docker containers are automatically labeled with rather funny and, due to their absurdity, easily discernible names like heuristic_einstein or musing_babbage. If we pause right here, a human being reading this article probably needs to read no further to state, execute, and deliver the project we’re about to hand over to our AI agent. Our human perception is an amazingly complex thing, drawing on a whole life of experience full of semantics and abstractions we take for granted. We analyze, assume, and extrapolate our observations at once, without even thinking of the grand baggage of experience that stands behind those abilities. However, when approaching a language model, one has to be very judicious and humble with regard to one’s understanding of how a model actually thinks. For our experiment, we will use the brilliant and rapidly evolving SMOL AI developer, described by its own readme as:

Human-centric & Coherent Whole Program Synthesis aka your own personal junior developer

Initial problem statement

So, let’s say we want to write a reusable program (not just a snippet) that generates a random name following a naming scheme similar to Docker’s. According to SMOL Dev’s readme, an initial prompt needs to be provided along with an API key for a supported model (in this case, GPT-4). The result is then evaluated, and the run is repeated if necessary.

So let’s write our initial prompt. Since, luckily, we don’t have to rely on our own creativity here, let’s make the naming scheme a little more nuanced:

Please write a naming scheme generator function 
that generates a random name to the likes of running docker containers,
consisting of an adjective and a noun,
where adjective is an emotional description, e.g. dramatic,
and noun is an abstract entity like manifold.
It has to contain up to 7 adjective options for every letter
and up to 7 nouns for every letter.

Initial result

Having installed the necessary Python requirements, running SMOL Dev on a prompt is as easy as:

export OPENAI_API_KEY=***
python main_no_modal.py ./prompts/prompt.md

Scope

Smol dev’s working cycle took several minutes and produced the following repo:

├── adjectives.py
├── naming_scheme_generator.py
├── nouns.py
└── shared_dependencies.md

The main function is quite straightforward and, arguably, the random-letter one-liner is even elegant:

import random
from adjectives import adjectives
from nouns import nouns

def get_random_element(arr):
    return random.choice(arr)

def generate_random_name():
    random_letter = chr(97 + random.randint(0, 25))
    adjective = get_random_element(adjectives[random_letter])
    noun = get_random_element(nouns[random_letter])
    return f"{adjective}-{noun}"

if __name__ == "__main__":
    print(generate_random_name())
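To try the result, one can also import the generated function directly. Below is a hypothetical usage sketch (run from the generated repo's root); as we'll see shortly, it does not always run cleanly:

from naming_scheme_generator import generate_random_name

# Print a few samples; as discussed below, this can raise an error
# for most letters due to an incomplete data file.
for _ in range(3):
    print(generate_random_name())  # e.g. "dramatic-manifold"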

Two elements of the generated file draw attention, as they were not explicitly specified by the prompt:

  1. The file is executable from the command line, with a designated __main__ branch.
  2. Random element choice is abstracted into its own function.

Reasoning

It’s hard to determine the exact motivation behind introducing these elements into the solution, but had a human done it, the __main__ clause would likely be dictated by the desire to easily turn the standalone function described in the prompt into a ready-to-use CLI tool. This simple improvement allows the function to be run in the shell and chained with other commands, as in the hypothetical session below. A suitable term for this could be "MMVP" (Minimal Minimal Viable Product): an add-on to the prompt resolution that requires minimal additional effort to make it actually usable.
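For instance, in a hypothetical shell session (the output name is illustrative):

$ python naming_scheme_generator.py
dramatic-manifold
$ mkdir "run_$(python naming_scheme_generator.py)"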

Abstracting the random choice function, on the other hand, is a more obvious addition that eases expected further modifications.

Functionality

Correct integration, including importing the corresponding iterable and randomly indexing it with (almost, as we’ll shortly see) correct length constraints, will not surprise users familiar with GPT’s brilliance. Now let’s examine the components that provide the actual functionality: adjectives.py and nouns.py.

nouns.py fits our prompt well, with slight deviations from the requested semantic scope, which, in all honesty, was vague enough to allow them:

nouns = {
    'a': ['abyss', 'angel', 'artifact', 'anomaly', 'algorithm', 'atmosphere', 'antenna'],
    'b': ['beacon', 'bubble', 'boundary', 'balance', 'butterfly', 'breeze', 'blossom'],
    'c': ['cosmos', 'catalyst', 'crystal', 'conundrum', 'cipher', 'cascade', 'crescendo'],
    'd': ['dimension', 'dynamo', 'dream', 'duality', 'dawn', 'dusk', 'divergence'],
    'e': ['echo', 'eternity', 'enigma', 'essence', 'evolution', 'eclipse', 'equilibrium'],
    'f': ['frequency', 'flux', 'fountain', 'fusion', 'fable', 'fantasy', 'form'],
    'g': ['galaxy', 'gravity', 'glimmer', 'glow', 'genesis', 'garden', 'groove'],
    'h': ['harmony', 'horizon', 'haze', 'hallucination', 'hologram', 'hypnosis', 'haven'],
    'i': ['infinity', 'illusion', 'insight', 'imagination', 'intuition', 'inception', 'impulse'],
    'j': ['juxtaposition', 'jubilation', 'jigsaw', 'journey', 'jolt', 'junction', 'jazz'],
    'k': ['kaleidoscope', 'karma', 'key', 'kinetic', 'kismet', 'kinesis', 'knight'],
    'l': ['labyrinth', 'light', 'luminescence', 'levitation', 'landscape', 'legacy', 'loop'],
    'm': ['manifold', 'matrix', 'mystery', 'mirage', 'metamorphosis', 'momentum', 'myth'],
    'n': ['nebula', 'nexus', 'nocturne', 'novelty', 'nirvana', 'numen', 'nucleus'],
    'o': ['oasis', 'odyssey', 'oracle', 'oscillation', 'origin', 'overture', 'oblivion'],
    'p': ['paradox', 'portal', 'pulse', 'pattern', 'phantom', 'prophecy', 'paragon'],
    'q': ['quasar', 'quantum', 'quintessence', 'quiver', 'quandary', 'quicksilver', 'quest'],
    'r': ['resonance', 'reverie', 'rhythm', 'radiance', 'reflection', 'rift', 'revelation'],
    's': ['symphony', 'spectrum', 'shadow', 'silhouette', 'solstice', 'serendipity', 'sanctuary'],
    't': ['transcendence', 'tesseract', 'twilight', 'tide', 'tempo', 'trance', 'threshold'],
    'u': ['universe', 'utopia', 'unity', 'umbra', 'uplift', 'ultra', 'undulation'],
    'v': ['vortex', 'vision', 'vibration', 'voyage', 'veil', 'vivid', 'vanguard'],
    'w': ['wavelength', 'whisper', 'wonder', 'wisdom', 'web', 'wind', 'wave'],
    'x': ['xenon', 'xanadu', 'x-ray', 'xenolith', 'xylem', 'xenogenesis', 'xylophone'],
    'y': ['yonder', 'yearning', 'yugen', 'yin', 'yang', 'yawn', 'yesteryear'],
    'z': ['zenith', 'zephyr', 'zone', 'zodiac', 'zigzag', 'zombie', 'zeppelin']
}

adjectives.py, however, reveals an interesting surprise:

adjectives = {
    'a': ['angry', 'anxious', 'amused', 'adorable', 'amazing', 'ambitious', 'artistic'],
    'b': ['brave', 'bored', 'bitter', 'bold', 'bashful', 'bewildered', 'benevolent'],
    # ...
    'z': ['zealous', 'zany', 'zestful', 'zen', 'zippy', 'zombie-like', 'zigzag']
}

This dictionary is obviously unusable in our case: it raises an unhandled KeyError whenever a letter other than ‘a’, ‘b’, or ‘z’ is chosen.
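A minimal reproduction of the failure (with the lists abbreviated for brevity):

import random

# Only 'a', 'b', and 'z' are populated, as in the generated file
adjectives = {
    'a': ['angry', 'anxious'],
    'b': ['brave', 'bored'],
    'z': ['zealous', 'zany'],
}

random_letter = chr(97 + random.randint(0, 25))
# Raises KeyError for 23 of the 26 possible letters
adjective = random.choice(adjectives[random_letter])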

Reasoning

Had a human made it, such a mistake would be classic evidence of a lack of testing:

  1. A functional block was implemented with an incomplete placeholder.
  2. Sufficient testing was not performed.
  3. The placeholder was left in the final solution.

The interesting deviation in our case is the involvement of two actors: the SMOL AI agent and the actual content supplier, the language model. As is evident from the length of the supplied lists, this dictionary was intended as a finalized answer to the query for the adjectives component. However, language models sometimes omit repetitive code and provide an initial example without completing it. In a chat use case, this often takes the form of an answer containing general directions for performing a task instead of actual code, typically when the prompt does not explicitly emphasize providing code and reads more like a general question on the matter.

Even less “human” in this case is the mismatch between the attention to detail in the main file described above and this incompleteness being overlooked.

Improvement

Requirements

Taking the above considerations into account, let’s engineer a more advanced prompt. This time, we’ll describe our needs in a formal manner, closer to how an actual project might be specified, while adding some useful degrees of freedom:

This project is a naming scheme generator.
It has to be subject to the following tech stack:
1. Programming language: wherever possible, but not limited to python
It has to be subject to the following specification:
1. Having an importable and callable method in python that returns a random name
2. Said name has to be a pair of an adjective and a noun starting with the same letter to the likes of docker container naming convention
3. Adjective and noun have to be of specific theme. Default themes should emotional description, e.g. dramatic for adjective and an abstract entity like manifold for noun.
4. Apart from the default themes, theme pair should be customizable by providing a json with list of available options per each letter of the English alphabet. Providing the json can be constrained to a certain folder inside the project structure.
5. it has to have tests, evaluating extensiveness of fulfilling the above specifications
A few notes on these requirements:

  1. Stating the tech stack: it was only by chance that the first solution came out in Python. We never explicitly asked for it, and the referenced Docker naming scheme is, expectedly, written in Go.
  2. Specifying the use case: we wouldn’t want to leave the usability of our solution to the model’s judgment alone. As we’ve seen, it can be reduced to an MMVP, as described above.
  3. Static data parametrization: although the referenced Docker naming scheme is hardcoded, one generally wants to avoid such an approach, especially for the central functionality of the project.
  4. Testing: as we’ve seen, while the model added CLI usability of its own accord, it did not provide any unit tests for the solution. This had serious ramifications, resulting in a completely broken key component (the adjectives list).

Improved result

This time the run took 11 minutes and produced a much more elaborate repo structure:

├── naming_scheme_generator
│   ├── generator.py
│   ├── __init__.py
│   └── themes
│       ├── custom_theme_example.json
│       ├── default_theme.json
│       └── __init__.py
├── shared_dependencies.md
└── tests
    ├── __init__.py
    ├── test_generator.py
    └── test_themes.py

This structure is sound and well-aligned with the requirements above. The word registry has been moved to a single JSON file, correctly covering all the letters in the following format. Notice that the model settled on the minimal plural number of two words per letter, since we did not specify how many options each letter should have:

"A": {
"adjectives": ["anxious", "amazing"],
"nouns": ["artifact", "ambience"]
}

generator.py showed another interesting change. Note that the improvised "MMVP" in the form of a __main__ branch for CLI usage is now absent, as we explicitly stated that the usage should be an "importable and callable method in Python":

import random
from typing import Dict, List
from .themes import default_theme

def generate_name(theme_adjective: Dict[str, List[str]], theme_noun: Dict[str, List[str]]) -> str:
    letter = random.choice(list(theme_adjective.keys()))
    adjective = random.choice(theme_adjective[letter])
    noun = random.choice(theme_noun[letter])
    return f"{adjective}_{noun}"

def load_custom_theme(file_path: str) -> Dict[str, List[str]]:
    with open(file_path, "r") as file:
        custom_theme = json.load(file)
    return custom_theme

def generate_name_with_theme(theme: str = "default") -> str:
    if theme == "default":
        theme_adjective = default_theme["adjectives"]
        theme_noun = default_theme["nouns"]
    else:
        custom_theme = load_custom_theme(theme)
        theme_adjective = custom_theme["adjectives"]
        theme_noun = custom_theme["nouns"]

    return generate_name(theme_adjective, theme_noun)

Multiple problems become apparent here:

  • Import errors: the json library import is missing.
  • The .themes folder (leaving aside the matter of taste in using relative imports) does not contain any Python-importable default_theme. Instead, there is a JSON file meant to be read from Python code. The same error is present in the __init__.py of the themes folder:

    from .default_theme import default_theme
    from .custom_theme_example import custom_theme_example

  • The theme dictionary, if read as-is from the JSON file, has a structure opposite to the one implied by the code: the theme itself doesn’t have “adjectives” and “nouns” keys. Its keys are the letters of the alphabet, each nesting its own adjectives and nouns. (A sketch of the combined fixes follows this list.)
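For concreteness, here is a minimal sketch of how those fixes could look combined. This is our correction under the assumptions above, not SMOL Dev's output, and the helper names are ours:

import json
import os
import random
from typing import Dict, List

# Locate the themes folder relative to this file (mirrors the generated layout)
THEMES_DIR = os.path.join(os.path.dirname(__file__), "themes")

def load_theme(file_path: str) -> Dict[str, Dict[str, List[str]]]:
    # Read the theme as JSON data instead of importing it as a Python module
    with open(file_path, "r") as file:
        return json.load(file)

def generate_name(theme: Dict[str, Dict[str, List[str]]]) -> str:
    # The JSON is keyed by letter; "adjectives" and "nouns" are nested per letter
    letter = random.choice(list(theme.keys()))
    adjective = random.choice(theme[letter]["adjectives"])
    noun = random.choice(theme[letter]["nouns"])
    return f"{adjective}_{noun}"

def generate_name_with_theme(theme: str = "default") -> str:
    path = os.path.join(THEMES_DIR, "default_theme.json") if theme == "default" else theme
    return generate_name(load_theme(path))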

For the reasons above, the generated tests fail at the import stage and cannot be executed. Their usage of the imported functions is otherwise consistent, and their extensiveness is genuinely impressive (though note that they split names on a space, while the generator joins adjective and noun with an underscore, so the assertions would need a further fix even once the imports are repaired):

import unittest
from naming_scheme_generator.generator import generate_name, load_custom_theme
from naming_scheme_generator.themes import default_theme

class TestGenerator(unittest.TestCase):

    def test_generate_name_default_theme(self):
        name = generate_name(default_theme.adjectives, default_theme.nouns)
        self.assertIsNotNone(name)
        self.assertTrue(isinstance(name, str))
        self.assertEqual(len(name.split(" ")), 2)
        self.assertEqual(name.split(" ")[0][0], name.split(" ")[1][0])

    def test_generate_name_custom_theme(self):
        custom_theme = load_custom_theme("naming_scheme_generator/themes/custom_theme_example.json")
        name = generate_name(custom_theme.adjectives, custom_theme.nouns)
        self.assertIsNotNone(name)
        self.assertTrue(isinstance(name, str))
        self.assertEqual(len(name.split(" ")), 2)
        self.assertEqual(name.split(" ")[0][0], name.split(" ")[1][0])

    def test_load_custom_theme(self):
        custom_theme = load_custom_theme("naming_scheme_generator/themes/custom_theme_example.json")
        self.assertIsNotNone(custom_theme)
        self.assertTrue(hasattr(custom_theme, "adjectives"))
        self.assertTrue(hasattr(custom_theme, "nouns"))

if __name__ == '__main__':
    unittest.main()
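With the fixes above applied (and assuming the load_theme/generate_name signatures from our earlier sketch), the first test could be adapted along these lines. Again, this is a hypothetical correction, not generated output:

import unittest
from naming_scheme_generator.generator import generate_name, load_theme

class TestGeneratorFixed(unittest.TestCase):

    def test_generate_name_default_theme(self):
        theme = load_theme("naming_scheme_generator/themes/default_theme.json")
        name = generate_name(theme)
        # The generator joins with an underscore, not a space
        adjective, noun = name.split("_")
        self.assertEqual(adjective[0].lower(), noun[0].lower())

if __name__ == '__main__':
    unittest.main()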

Intermediate conclusions

With all that said, and considering the very limited scope of our “test project,” which hardly allows for a component breakdown, one could argue that:

  • In the initial case, the “core functionality” of the solution was faulty. Even though the utility code was correct, a human supervisor would have had to come up with the many missing words; the additional effort here is of a creative nature.
  • In the improved case, the necessary fixes are technical: missing imports need to be corrected, the default-theme workflow needs to be changed from a Pythonic import to a JSON read (mirroring the already correct custom-theme path), and the nesting levels in the word dictionary need to be swapped.
  • In the improved case, the project infrastructure is correct: theme customization is separated from the main generator code, and extensive tests are correctly organized in a distinct folder.

The shortcomings of the second solution are much easier to correct and could be delegated to a junior developer, while the shortcomings of the first would, in a more complex project, correspond to a major core-algorithm issue requiring a more senior and qualified human supervisor.

Of course, with all of the above, it’s worth emphasizing that what the SMOL AI contributors have achieved in such a short time is fascinating. The impressiveness of the latest developments in large language models makes it easy to theorize about automating them; bringing such a solution to an actually usable implementation is a different class of achievement.

In Part 2 we’ll take a look at further iterations and see if SMOL Dev can actually become that junior developer and improve its own results.

Thank you for reading!
