

Quick Start Robotics and Reinforcement Learning with MuJoCo

Author(s): Yasin Yousif

Originally published on Towards AI.

A starter tutorial covering MuJoCo's basic structure, capabilities, and main workflow

Images rendered from the XML sources in the menagerie repo, under the BSD-3-Clause license for Trossen and the Apache license for Franka and Apptronik

MuJoCo is a physics simulator for robotics research, developed by Google DeepMind and written in C++ with a Python API. The advantage of using MuJoCo lies in its many readily implemented models with full dynamic and physical properties, such as friction, inertia, and elasticity. This realism allows researchers to rigorously test reinforcement learning algorithms in simulation before deployment, mitigating the risks of real-world applications. Simulating exact replicas of robot manipulators is particularly valuable, as it enables training in a safe virtual environment followed by a seamless transition to production. Notable examples include models of popular platforms like ALOHA, FRANKA, and KUKA, readily available for MuJoCo.

Table of Contents:

  • Overview
  • MJCF Format
  • The Task
  • Continuous Proximal Policy Optimization
  • Training Results
  • Conclusion

Overview

Beyond the core MuJoCo library (installable via pip install mujoco), two invaluable packages enhance its capabilities: dm_control (https://github.com/google-deepmind/dm_control) and mujoco_menagerie (https://github.com/google-deepmind/mujoco_menagerie).

mujoco_menagerie offers a wealth of open-source robot models in .xml format, simplifying the simulation of complex systems. These models encompass diverse designs, as illustrated in the image above.
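
As a quick illustration, loading one of these models takes a single call. Here is a minimal sketch, assuming the menagerie repository has been cloned locally (the exact path to the Franka model below is illustrative):

import mujoco

# Assumes mujoco_menagerie has been cloned next to this script;
# the path to the Franka Panda model is illustrative.
model = mujoco.MjModel.from_xml_path(
    'mujoco_menagerie/franka_emika_panda/panda.xml')
data = mujoco.MjData(model)
mujoco.mj_step(model, data)  # advance the simulation by one timestep
print(data.qpos)             # generalized positions of the arm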

dm_control (also installable with pip: pip install dm_control, shipping its own version of MuJoCo) provides a very useful code base for building reinforcement learning pipelines from MuJoCo models, wrapping them as environment classes with suitable .step() and .reward() methods. Ready-made pipelines are available via its suite subpackage and are intended to serve as benchmarks on which different proposed reinforcement learning methods can be evaluated and compared; they are therefore best left unaltered when used for that purpose.

These benchmarks can be shown by running the following:

# Control Suite
from dm_control import suite

for domain, task in suite.BENCHMARKING:
    print(f'{domain:<20} {task}')

which will list the benchmark domains and their tasks (cartpole, cheetah, walker, and so on).
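
Each domain-task pair can be loaded as a ready-to-step environment. A minimal sketch of a random-action rollout using the standard suite API (the cartpole/swingup choice is just an example):

import numpy as np
from dm_control import suite

env = suite.load(domain_name='cartpole', task_name='swingup')
spec = env.action_spec()

time_step = env.reset()
while not time_step.last():
    # sample a random action within the allowed bounds
    action = np.random.uniform(spec.minimum, spec.maximum, size=spec.shape)
    time_step = env.step(action)
print(time_step.reward)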

Additionally, dm_control allows manipulating the MJCF models of the entities from within the running script through its PyMJCF subpackage. The user therefore doesn't need to edit the XML files to add new joints or replicate a certain structure, for example.
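
A minimal sketch of that workflow, assuming the car.xml model introduced below is in the working directory (the added marker site is purely illustrative):

from dm_control import mjcf

# Load an existing MJCF model and modify it in Python,
# without touching the XML file itself.
mjcf_model = mjcf.from_path('./car.xml')
car = mjcf_model.find('body', 'car')
car.add('site', name='marker', type='sphere', size=[0.01])  # illustrative addition
print(mjcf_model.to_xml_string())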

MJCF: MuJoCo XML Configuration File Format

MJCF (the MuJoCo XML Configuration File format) is the physics modeling language used to represent bodies and joints in MuJoCo. A basic understanding of this format is needed to get up and running with MuJoCo. Further explanations and details are available in the MuJoCo documentation as well as its tutorial notebook.

To show a working example of an MJCF file, we will review the car.xml source code available in the MuJoCo GitHub repository. It exhibits a three-wheeled toy vehicle with two lights (a top light and a front light) and two main degrees of freedom (DoFs): forward-backward and left-right movement.

Taking a look at the first part of the code, we note that the model is always enclosed between <mujoco>...</mujoco> tags. We also note the <compiler> tag, which allows setting compilation options (here, autolimits="true" lets joint ranges imply limits).

<mujoco>
  <compiler autolimits="true"/>

Next, as some objects in the model may need their own customized textures and geometric shapes beyond standard ones such as spheres and boxes, the <texture>, <material>, and <mesh> tags can be utilized as follows. Note in the <mesh> tag that the exact point coordinates are provided in the vertex option, where each row represents a point on the surface.

<asset>
  <texture name="grid" type="2d" builtin="checker" width="512" height="512" rgb1=".1 .2 .3" rgb2=".2 .3 .4"/>
  <material name="grid" texture="grid" texrepeat="1 1" texuniform="true" reflectance=".2"/>
  <mesh name="chasis" scale=".01 .006 .0015"
    vertex="  9   2   0
            -10  10  10
              9  -2   0
             10   3 -10
             10  -3 -10
             -8  10 -10
            -10 -10  10
             -8 -10 -10
             -5   0  20"/>
</asset>

The <default> tag is helpful for setting default values for certain classes, like the wheel class, which will always have a certain shape, size, and color (defined with type, size, and rgba respectively):

<default>
  <joint damping=".03" actuatorfrcrange="-0.5 0.5"/>
  <default class="wheel">
    <geom type="cylinder" size=".03 .01" rgba=".5 .5 1 1"/>
  </default>
  <default class="decor">
    <site type="box" rgba=".5 1 .5 1"/>
  </default>
</default>

The first body in a MuJoCo model is always the <worldbody>, with index 0, acting as the parent of all other bodies in the model. Since we have only one car, its only child body is the car.

Within each body we can define its children: other bodies, geometries, joints, or lights, with the <body>, <geom>, <joint>, and <light> tags respectively.

This is shown in the next snippet, where we note the name, class, and pos options among others, which define the unique name, the class defined in <default>, and the initial position of the parent tag respectively.

<worldbody>
  <geom type="plane" size="3 3 .01" material="grid"/>
  <body name="car" pos="0 0 .03">
    <freejoint/>
    <light name="top light" pos="0 0 2" mode="trackcom" diffuse=".4 .4 .4"/>
    <geom name="chasis" type="mesh" mesh="chasis" rgba="0 .8 0 1"/>
    <geom name="front wheel" pos=".08 0 -.015" type="sphere" size=".015" condim="1" priority="1"/>
    <light name="front light" pos=".1 0 .02" dir="2 0 -1" diffuse="1 1 1"/>
    <body name="left wheel" pos="-.07 .06 0" zaxis="0 1 0">
      <joint name="left"/>
      <geom class="wheel"/>
      <site class="decor" size=".006 .025 .012"/>
      <site class="decor" size=".025 .006 .012"/>
    </body>
    <body name="right wheel" pos="-.07 -.06 0" zaxis="0 1 0">
      <joint name="right"/>
      <geom class="wheel"/>
      <site class="decor" size=".006 .025 .012"/>
      <site class="decor" size=".025 .006 .012"/>
    </body>
  </body>
</worldbody>

As the car can move in any direction, including jumping and flipping with respect to the ground plane, it gets a <freejoint/> with 6 DoFs, while each driven wheel (left and right) gets one rotational DoF along the axis defined by the zaxis="0 1 0" option, i.e., the y-axis.
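
To make the DoF bookkeeping concrete, here is a small sketch (a sanity check added for this write-up, not part of the original tutorial) that prints the model's coordinate counts:

import mujoco

model = mujoco.MjModel.from_xml_path('./car.xml')
# The free joint contributes 7 position coordinates (3D position + unit
# quaternion) and 6 velocity coordinates; each wheel hinge adds one of each.
print(model.nq)  # 9 = 7 (free joint) + 2 (wheel hinges)
print(model.nv)  # 8 = 6 (free joint) + 2 (wheel hinges)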

The active control handles in MuJoCo are defined in two steps: the <tendon> tag groups joints together via <fixed> elements, and the <actuator> tag then defines the name and control range of each <motor>, as in the following code.

<tendon>
  <fixed name="forward">
    <joint joint="left" coef=".5"/>
    <joint joint="right" coef=".5"/>
  </fixed>
  <fixed name="turn">
    <joint joint="left" coef="-.5"/>
    <joint joint="right" coef=".5"/>
  </fixed>
</tendon>
<actuator>
  <motor name="forward" tendon="forward" ctrlrange="-1 1"/>
  <motor name="turn" tendon="turn" ctrlrange="-1 1"/>
</actuator>

This system of tendons conveniently controls the car: the "forward" tendon drives the linear movement, applying a displacement coefficient of 0.5 to both wheels, while the "turn" tendon applies opposite-sign coefficients to the two wheels, which physically makes the car turn.

The degree of displacement is set by the two defined motors: each motor's command is multiplied by the coef values of its tendon's joints, as sketched below.
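
A minimal illustration of how the two motor commands combine into per-wheel commands (the helper function is hypothetical, written only to mirror the coefficients above):

def wheel_torques(forward, turn):
    # Each tendon multiplies its motor command by the joint coefficients:
    # "forward" drives both wheels with coef +0.5,
    # "turn" drives them with opposite signs (-0.5 for left, +0.5 for right).
    left = 0.5 * forward - 0.5 * turn
    right = 0.5 * forward + 0.5 * turn
    return left, right

print(wheel_torques(1.0, 0.0))  # (0.5, 0.5): drive straight ahead
print(wheel_torques(0.0, 1.0))  # (-0.5, 0.5): turn in place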

Lastly, the <sensor> tag defines which readings the model exposes; here, jointactuatorfrc reports the actuator force transmitted at each wheel joint.

<sensor>
  <jointactuatorfrc name="right" joint="right"/>
  <jointactuatorfrc name="left" joint="left"/>
</sensor>
</mujoco>

The Task

To train and run a reinforcement learning agent that controls the car, we must set a clear purpose for the intended behavior. For example, we may aim to make the car follow a circular path or drive towards a fixed but unknown position.

For this example, we will define a reward so that the car drives from its initial position A = (0, 0, 0) towards B = (-1, 4, 0). This point is somewhat to the left of the car, so it has to turn as well as drive in a straight line, as shown below.

Made by author

For this task, we define a reward function based on the Euclidean distance between the current position of the car and the target position. We choose the exponent of the negative distance, np.exp(-np.linalg.norm(A - B)), so that the reward values always lie in the range [0, 1].
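
As a quick sanity check of this shaping (the numbers below are just the two endpoints of the task):

import numpy as np

A = np.array([0.0, 0.0])   # start position (x, y)
B = np.array([-1.0, 4.0])  # target position

def reward(pos, target=B):
    return np.exp(-np.linalg.norm(target - pos))

print(reward(A))  # ~0.016: distance sqrt(17) = ~4.12 at the start
print(reward(B))  # 1.0: maximum reward when the target is reached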

Continuous Proximal Policy Optimization

As we noted in the XML file, the range of the actuator values is continuous, from -1 to 1. This means the action space is continuous too; therefore, the training algorithm should be able to handle such scenarios.

Algorithms like DQN will therefore not be suitable, as they only apply to discrete action spaces. Actor-critic methods like PPO, however, can still be used to train models with a continuous action space.

The PPO code used for this task is based on CleanRL's single-file implementation of continuous PPO, with some modified parameters and with the environment replaced by our newly written wrapper around the previous MuJoCo model.

Practically, we train for 2e6 steps, with 2500 steps per episode. As the default timestep in MuJoCo is 2 ms, 2500 steps translate to 5 seconds.

It is worth noting that the discrete PPO update formulas are the same in the continuous case, except for the type of the output distribution of the policy model: a categorical distribution (Categorical) in the discrete case, and a Gaussian (Normal), or any other continuous distribution, in the continuous case.
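
A minimal sketch of that one difference, in PyTorch (the network sizes and the state-independent log-std are assumptions, loosely following CleanRL's style rather than its exact code):

import torch
import torch.nn as nn
from torch.distributions import Normal

class GaussianActor(nn.Module):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.mean = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))
        # state-independent log standard deviation, learned during training
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, obs):
        dist = Normal(self.mean(obs), self.log_std.exp())
        action = dist.sample()
        # sum over action dimensions to get a single log-probability
        return action, dist.log_prob(action).sum(-1)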

Next, we show the environment used for stepping and simulating the MuJoCo model, which the PPO training program will use.

Training Environment

As we will be using the main MuJoCo package (not dm_control), we need the following imports:

import mujoco
import mujoco.viewer
import numpy as np
import time
import torch

We then define the __init__ method of the environment class, in which:

  1. The XML model file is loaded with mujoco.MjModel.from_xml_path(), which yields the model structure containing the geometries and constants such as the timestep and gravity in model.opt.
  2. The data structure is created from the model with data = mujoco.MjData(model). In this structure, the current state values (such as the generalized velocity data.qvel, the generalized position data.qpos, and the actuator values data.ctrl) can be read and set.
  3. The duration is 5 seconds, which can be mapped to wall-clock time by delaying execution by a specific amount, as the simulation usually runs much faster than real time. For example, 5 seconds may be simulated in 0.5 seconds.
  4. Rendering: if the render variable is set to True, a viewer GUI is initialized with mujoco.viewer.launch_passive(model, data). The passive mode is needed so that the GUI doesn't block code execution. The GUI is updated to the most recent values in data when viewer.sync() is called, and it should be closed with viewer.close().
class Cars():
    def __init__(self, max_steps=3*500, seed=0, render=False):
        self.model = mujoco.MjModel.from_xml_path('./car.xml')
        self.data = mujoco.MjData(self.model)
        self.duration = int(max_steps // 500)  # 500 steps per simulated second
        self.single_action_space = (2,)
        self.single_observation_space = (13,)
        self.viewer = None
        self.reset()
        if render:
            self.viewer = mujoco.viewer.launch_passive(self.model, self.data)

In the reset() method, the data structure is reset to the original model state using mujoco.mj_resetData.

Here we can choose the shape of the state we will use to solve our problem. As the task is only about moving in 2D, we need the current Cartesian position of the car, data.body('car').xpos, in addition to its orientation data.body('car').xquat; lastly, the velocity data.body('car').cvel may also be helpful to judge whether to accelerate or decelerate.

Note that data.body() or data.geom() allows named access to these objects as defined in the XML file, or access by index number, where 0 always indicates the worldbody.

def reset(self):
    mujoco.mj_resetData(self.model, self.data)
    self.episodic_return = 0
    state = np.hstack((self.data.body('car').xpos[:3],
                       self.data.body('car').cvel,
                       self.data.body('car').xquat))
    if self.viewer is not None:
        self.viewer.close()
        self.viewer = mujoco.viewer.launch_passive(self.model, self.data)
    return state

As our task is to reach the point [-1, 4], our reward could be as simple as the negative distance between the current position and the destination. However, taking exp(-distance) seems more suitable since it restricts the reward values to the range [0, 1], which can lead to better learning stability.

As mentioned previously, all we have to do to synchronize changes to the viewer window is to invoke self.viewer.sync().

def reward(self, state, action):
    car_dist = np.linalg.norm(np.array([-1, 4]) - state[:2])
    return np.exp(-car_dist)

def render(self):
    if self.viewer.is_running():
        self.viewer.sync()

def close(self):
    self.viewer.close()

In the step() routine, the actual model is updated: first by setting the current action for the forward and turning movements in data.ctrl. Note that the action is transformed with np.tanh(), whose output range is [-1, 1]. This allows the policy network to be trained on the full range (-inf, inf) for its output vector, which is easier to represent, as small values may otherwise get rounded during training.

We additionally keep count of the episodic return and handle the terminal case by resetting the environment.

def step(self, action):
    self.data.ctrl = np.tanh(action)
    mujoco.mj_step(self.model, self.data)
    state = np.hstack((self.data.body('car').xpos[:3],
                       self.data.body('car').cvel,
                       self.data.body('car').xquat))
    reward = self.reward(state, np.tanh(action))
    self.episodic_return += reward
    done = False
    info = {}
    if self.data.time >= self.duration:
        done = True
        info.update({'episode': {'r': self.episodic_return, 'l': self.data.time}})
        info.update({'terminal_observation': state.copy()})
        state = self.reset()
    return state, reward, done, info

This finishes the main environment class for the car model; it is not that complicated or hard to write. That said, dm_control already provides customized environments and pipelines with various tools, ready to be used for training RL agents. That is an extensive topic left for exploration in future posts.

Training Results

After training the PPO program with the previous environment and using a suitable agent network, we record the following training curve for the episodic return.

Made by author

We can see that the model is clearly learning, albeit slowly. There you have it: your first simulated and controlled RL agent with MuJoCo.

However, we still need to see it in action: does the robot really move towards the point [-1, 4]? To do so, we run the following testing program with the render variable set to True.

def main():
    duration = 5
    env = Cars(max_steps=duration*500, render=True)
    # 2000000 is the number of training iterations
    policy = torch.load(f'ppo_agent_cars_{2000000}_mlp.pth')
    state = env.reset()
    start = time.time()
    while time.time() - start < duration:
        with torch.no_grad():
            action = policy.actor(torch.Tensor(state).to('cuda')).cpu().numpy()[:2]
        state, reward, done, info = env.step(action)
        if done:
            break
        time.sleep(0.003)
        env.render()
    env.close()

After initializing the environment and loading the trained model with PyTorch, we get the initial state by resetting the environment. Inside the while loop, we alternate between inferring the action from the actor model and stepping the environment, rendering each frame with env.render().

If we ran the program without any delay, we would get a very fast simulation that may be impossible to follow, and depending on the while condition, the episode could repeat many times before the program finishes.

To avoid that, we delay execution by a small amount with time.sleep(). The episode may still repeat a few times (before duration seconds have passed), but it will be observable.
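
For a closer match to real time, a common alternative (a sketch, not part of the original code) is to pace each step by the physics timestep instead of a fixed sleep, replacing the body of the while loop in main() above:

# Pace each step by the model timestep so that one simulated
# second takes roughly one wall-clock second.
step_start = time.time()
with torch.no_grad():
    action = policy.actor(torch.Tensor(state).to('cuda')).cpu().numpy()[:2]
state, reward, done, info = env.step(action)
env.render()
leftover = env.model.opt.timestep - (time.time() - step_start)
if leftover > 0:
    time.sleep(leftover)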

In my case, this code shows the car moving exactly as in the image above (in The Task section). But since the speed is limited and the episode length is only 5 seconds, the simulation ends before reaching the point [-1, 4]: reaching it in that time is physically impossible, no matter how long the model is trained.

Conclusion

While this tutorial merely scratches the surface of MuJoCo’s vast API capabilities, it equips you with the foundational knowledge to embark on your robotic simulation journey. MuJoCo’s C++ foundation enables lightning-fast performance, making it ideal for training intricate robots of diverse configurations.

This versatility positions MuJoCo as a valuable tool in both research and industry:

  • Research: Researchers can rigorously test and compare novel reinforcement learning algorithms within challenging, realistic scenarios without the logistical complexities and costs of physical prototyping.
  • Industry: Manufacturers can thoroughly evaluate robot designs and models in environments mirroring real-world conditions, ensuring optimal performance before deployment.

This Reinforcement and Imitation Learning series will delve deeper into specific, popular algorithms, exploring their intricacies and applications. Subscribe or follow along to stay informed and explore the full potential of these powerful techniques!


Published via Towards AI
