
Inventory Optimization with Data Science: Hands-On Tutorial with Python

Last Updated on November 5, 2023 by Editorial Team

Author(s): Peyman Kor

Originally published on Towards AI.

Intro

Effective inventory management is important for businesses across various industries. In the previous blog, we explored how to model the process and how to frame it with the Markov Process.

The illustration below shows how to model the probabilistic movement between states:

Transition probabilities of Markov Process — Image source: Author

In this blog, we go one step further and think about how to frame inventory optimization with the Markov Reward Process (MRP). Here, we have the additional term “Reward,” meaning we need to decide how to assign a reward to each transition of the process.

This blog will have the following structure:

  • Markov Reward Process (MRP)
  • Reward Modeling Bike Shop Inventory Optimization
  • Hands-On Python Coding
  • Bellman Equation
  • Putting Everything Together
  • Key Summary

By the end of the blog, you will have learned:

  • How to frame the Markov Reward Process (MRP) for Bike Shop Inventory Optimization.
  • How to write hands-on code to model an MRP in Python.

Markov Reward Process:

A Markov Reward Process (MRP for short) is a Markov Process in which the sequence of (State, Reward) tuples satisfies the Markov property:
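In standard notation, this property reads (a sketch of the usual form; the author's exact notation in the accompanying figure may differ):

$$\Pr\big[(R_{t+1}, S_{t+1}) \mid S_t, S_{t-1}, \ldots, S_0\big] \;=\; \Pr\big[(R_{t+1}, S_{t+1}) \mid S_t\big]$$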

Here, R_{t+1} is the reward induced when we move from state S_{t} to S_{t+1}.

An MRP needs a new Python data structure: one that keeps track not only of the state evolution and its probabilities, but also of the rewards.

To show what an MRP data structure looks like in Python, here is an example called MarkovRewProcess_Exa:

MarkovRewProcess_Exa = {"Current State A": {("NextS1fromA", "RewardS1fromA"): "PNextS1fromA",
                                            ("NextS2fromA", "RewardS2fromA"): "PNextS2fromA"},

                        "Current State B": {("NextS1fromB", "RewardS1fromB"): "PNextS1fromB",
                                            ("NextS2fromB", "RewardS2fromB"): "PNextS2fromB"}}

**MarkovRewProcess_Exa** is a dictionary whose keys are the current states, S_t. Each move from a state to a new state carries some reward, and each transition from S_t to a (S_{t+1}, R_{t+1}) tuple has a specific probability.

Here you can see that when you are at S_t = “Current State A” and you move to the next (state, reward) pair (“NextS1fromA”, “RewardS1fromA”), the probability of this move is **PNextS1fromA**:

MarkovRewProcess_Exa["Current State A"][("NextS1fromA", "RewardS1fromA")]
Code output — Image Source: Author

We can also iterate over all possible (new state, reward) pairs to get an overview of how the process evolves:

for (state, value) in MarkovRewProcess_Exa.items():
    print("The Current state is: {}".format(state))
    for (next_state, reward), trans_prob in value.items():
        print("The ({} with {}) \nwith Probability of: {}"
              .format(next_state, reward, trans_prob))
Code Output — Image Source: Author

Reward Modeling Bike Shop Inventory Optimization

Now, at this stage, we need to assign a reward to the Markov Process of the bike shop discussed in the previous article. That is: given that you are at state S_t, how much “cost/reward” accrues when you move to state S_{t+1}?

In the example of Bike Shop Inventory Optimization, the cost is composed of two parts:

– 1) When we hold items in inventory overnight, we incur an overnight holding cost. Imagine holding items in inventory is costly (maintenance or other fees).

– 2) The second cost is the missed-customer cost. This happens when, for example, you have three bicycles in your inventory but five customers arrive; in that case, you miss two customers.

In other words, the reward in the bike shop example is defined as:

Defining Reward: Image Source: Author

However, we cannot write the equation in a single form, as we need to distinguish between two cases:

Case I) The number of available items in inventory meets the “demand”

In this case, there is no “missed customer” cost, so we are dealing only with the overnight holding cost. We assume this holding cost per bicycle (item) equals h. We can write:

Reward Modeling Case I— Image Source: Author
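In plain notation, the Case I reward is simply the negative holding cost on the α items kept in inventory overnight:

$$R_{t+1} = -h\,\alpha$$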

Note that states are “recorded” each day at 8 PM. For example, today is Monday, and at 8 PM you record that you have two bicycles in inventory (α = 2) and one bicycle on order that arrives tomorrow morning at 6 AM (β = 1). If tomorrow's demand is i = 1, the reward equals the holding cost:

-hα = -h × 2

Case II) The number of available items in inventory is lower than the “demand”

This is the case when demand i is equal to or greater than all the bicycles you have in the early morning, α + β. The cost is then -hα plus the missed-customer cost, where the unit cost per missed customer is p.

Reward Modeling Case II — Image Source: Author
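In plain notation, and consistent with the reward computed in the code later in this post, the Case II reward can be sketched as the holding cost plus p times the expected number of missed customers (here i denotes the Poisson-distributed demand):

$$R_{t+1} = -h\,\alpha \;-\; p\,\mathbb{E}\big[\max\big(i - (\alpha + \beta),\, 0\big)\big]$$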

The right-hand side of the above equation needs some explanation. The term p is simply the unit cost per missed customer, and the remaining term,

The second part of Reward Modeling Case II — Image Source: Author

is the expected number of missed customers; multiplying it by p gives the expected missed-customer cost.
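To make this term concrete, here is a small, self-contained sketch (the variable names are illustrative, not taken from the article's code) that checks the closed form used later in the reward calculation, λ·P(D ≥ n) - n·P(D ≥ n+1), against a direct computation of E[max(D - n, 0)] for Poisson demand:

from scipy.stats import poisson

lam, n = 2.0, 3  # illustrative demand rate and total available bicycles (alpha + beta)

# Closed form used in the generator function below: lambda * P(D >= n) - n * P(D >= n + 1)
closed_form = lam * (1 - poisson.cdf(n - 1, lam)) - n * (1 - poisson.cdf(n, lam))

# Direct expectation E[max(D - n, 0)], truncating the infinite sum at a large demand value
direct = sum(max(d - n, 0) * poisson.pmf(d, lam) for d in range(200))

print(closed_form, direct)  # both print ~0.218 for these parameters; multiplying by p gives the cost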

Hands-On Python Coding:

Below, we write a function that generates the Markov Reward Process dictionary. The final dictionary structure for the Markov Reward Process is as follows:

from typing import Dict, Tuple

MRP_dict: Dict[Tuple[int, int], Dict[Tuple[Tuple[int, int], float], float]] = {}

Here is a brief explanation and annotation of the final data structure:

MRP_dict is the final data structure of the MRP process. Image Source: Author
from typing import Dict, Tuple
from scipy.stats import poisson

def generate_Markov_Rew_Process_Dict(user_capacity: int, user_poisson_lambda: int,
                                      holding_cost: int, missedcostumer_cost: int):
    """
    Generate a Markov Reward Process Dictionary based on the provided parameters.

    Args:
        user_capacity (int): The capacity of the inventory.
        user_poisson_lambda (int): The Poisson lambda parameter of the demand.
        holding_cost (int): The unit cost of holding an item in inventory overnight.
        missedcostumer_cost (int): The unit cost of a missed customer.

    Returns:
        Dict[Tuple[int, int], Dict[Tuple[Tuple[int, int], float], float]]: The Markov Reward Process Dictionary.
    """

    MRP_dict: Dict[Tuple[int, int], Dict[Tuple[Tuple[int, int], float], float]] = {}

    for alpha in range(user_capacity + 1):
        for beta in range(user_capacity + 1 - alpha):

            state = (alpha, beta)
            init_inv = alpha + beta
            beta1 = user_capacity - init_inv

            base_reward = -alpha * holding_cost

            for demand in range(init_inv + 1):
                if demand <= (init_inv - 1):
                    # Case I: demand is met, only the holding cost applies
                    transition_prob = poisson.pmf(demand, user_poisson_lambda)

                    if state in MRP_dict:
                        MRP_dict[state][((init_inv - demand, beta1), base_reward)] = transition_prob
                    else:
                        MRP_dict[state] = {((init_inv - demand, beta1), base_reward): transition_prob}
                else:
                    # Case II: probability of not meeting the demand
                    transition_prob = 1 - poisson.cdf(init_inv - 1, user_poisson_lambda)
                    transition_prob2 = 1 - poisson.cdf(init_inv, user_poisson_lambda)

                    # Holding cost plus the expected missed-customer cost
                    reward = base_reward - missedcostumer_cost * ((user_poisson_lambda * transition_prob) -
                                                                  init_inv * transition_prob2)

                    if state in MRP_dict:
                        MRP_dict[state][((0, beta1), reward)] = transition_prob
                    else:
                        MRP_dict[state] = {((0, beta1), reward): transition_prob}

    return MRP_dict

With this function in hand, we can pass in the inputs (same as in the previous article) and get the Markov Reward Process dictionary.

# Example usage:
inv_capacity_val = 2
poisson_lambda_val = 2.0
holding_cost_val = 1
missedcostumer_cost_val = 10

MRP_dict = generate_Markov_Rew_Process_Dict(
    user_capacity=inv_capacity_val,
    user_poisson_lambda=poisson_lambda_val,
    holding_cost=holding_cost_val,
    missedcostumer_cost=missedcostumer_cost_val)

We can print out this dictionary to see how the data has been stored:

for (state, value) in MRP_dict.items():
    print("The Current state is: {}".format(state))
    for (next_state, reward), trans_prob in value.items():
        print("The (State, Reward): ({}, {:.2f}) occurs with Probability of: {:.2f}"
              .format(next_state, reward, trans_prob))

Bellman Equation:

The main idea of the Markov Reward Process is to figure out the expected total return of each state if we continue the process for a long time under a specific policy. The “State Value Function,” which maps each state to its expected total return, can be defined as:

Bellman Equation for Value Function of Each State — Image Source: Author
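Written out (a sketch of the standard forms; the author's notation in the figure may differ slightly), the two equations are the definition of the value function and its recursive Bellman form:

$$V(s) \;=\; \mathbb{E}\Big[R_{t+1} + \gamma R_{t+2} + \gamma^{2} R_{t+3} + \cdots \,\Big|\, S_t = s\Big]$$

$$V(s) \;=\; \mathcal{R}(s) \;+\; \gamma \sum_{s'} \Pr\big(s' \mid s\big)\, V(s')$$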

Some explanation of the second equation is needed. V(s) is the expected total return of following a specific policy from state s, and it is composed of two parts: the reward we expect to get right away (the immediate reward), plus the discounted, probability-weighted average of the values V(s') of the states we could move to next.

In other words, this equation is recursive.

Annotation of Bellman Equation for State Value Function — Image Source: Author

The goal of the rest of this blog is to find the state value function V(s), which requires two steps:

– Part I) Find the Expected Immediate Reward for all states

– Part II) Build the Transition Probability Matrix

Part I) Find the Expected Immediate Reward

The function below finds R(s) for all states. R(s) is the expected immediate reward of each state in the inventory problem.
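In formula form (a sketch of what the function computes from the MRP dictionary):

$$\mathcal{R}(s) \;=\; \sum_{(s',\, r)} \Pr\big[(s', r) \mid s\big]\cdot r$$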

def calculate_expected_immediate_rewards(MarkovRewProcessDict):
    """
    Calculate the expected immediate rewards for each state in a Markov Reward Process.

    Args:
        MarkovRewProcessDict (dict): Markov Reward Process Dictionary.

    Returns:
        dict: Dictionary with states as keys and their expected immediate rewards as values.
    """

    E_immediate_R = {}  # Initialize a dictionary to store expected immediate rewards

    for from_state, value in MarkovRewProcessDict.items():
        # Each key is a (next_state, reward) tuple, so reward[1] is the reward value
        expected_reward = sum(reward[1] * prob for (reward, prob) in value.items())
        E_immediate_R[from_state] = expected_reward

    return E_immediate_R

# Example usage:
# E_immediate_R = calculate_expected_immediate_rewards(MarkovRewProcessDict)

Calling the function:

R_exp_imm = calculate_expected_immediate_rewards(MRP_dict)
R_exp_imm

Part II) Build the Transition Probability Matrix

The transition probability matrix is like a table where the *states* label both the rows and the columns, and each cell shows the likelihood of moving from one state to another.

We can initialize this matrix with:

import pandas as pd

states = list(MRP_dict.keys())

df_trans_prob = pd.DataFrame(0.0, columns=states, index=states)
df_trans_prob

The above initializes the matrix. Now we can start filling it in, as below:

import numpy as np
import pandas as pd

def create_transition_probability_matrix(MarkovRewProcessDict):
    """
    Create a transition probability matrix from a Markov Reward Process Dictionary.

    Args:
        MarkovRewProcessDict (dict): Markov Reward Process Dictionary.

    Returns:
        pd.DataFrame: Transition probability matrix.
    """

    states = list(MarkovRewProcessDict.keys())
    num_states = len(states)

    # Initialize an empty matrix with zeros
    trans_prob = np.zeros((num_states, num_states))
    df_trans_prob = pd.DataFrame(trans_prob, columns=states, index=states)

    for i, from_state in enumerate(states):
        for j, to_state in enumerate(states):
            for (new_state, reward) in MarkovRewProcessDict.get(from_state, {}):
                if new_state == to_state:
                    probability = MarkovRewProcessDict[from_state].get((new_state, reward), 0.0)
                    df_trans_prob.iloc[i, j] = probability

    return df_trans_prob

Calling the function:

df_trans_prob = create_transition_probability_matrix(MRP_dict)
df_trans_prob

Let me explain what this matrix means. For example, consider the third row, which corresponds to state S_t = (0, 2); the transition probabilities out of this state can be written as:
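As a quick sanity check, we can pull that row out of the DataFrame directly (a small sketch; using iloc[[2]] assumes the states were inserted in the order (0,0), (0,1), (0,2), ..., which is the order the generator above produces them):

# Transition probabilities out of state (0, 2): the third row of the matrix
print(df_trans_prob.iloc[[2]])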

Now, having the transition probability matrix and the expected immediate rewards of the states, we can calculate the value function of each state using the vector-matrix form of the Bellman equation, as below:
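In vector-matrix form (a sketch of the standard result, where V and R stack the state values and expected immediate rewards and P is the transition probability matrix):

$$\mathbf{V} \;=\; \mathbf{R} \;+\; \gamma\,\mathbf{P}\,\mathbf{V} \quad\Longrightarrow\quad \mathbf{V} \;=\; \big(\mathbf{I} - \gamma\,\mathbf{P}\big)^{-1}\,\mathbf{R}$$

Rather than forming the inverse explicitly, the function below solves the equivalent linear system (I - γP)V = R with np.linalg.solve, which is the numerically preferred approach.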

import numpy as np
import pandas as pd

def calculate_state_value_function(trans_prob_mat, expected_immediate_rew, gamma):
    """
    Calculate the state value function using the transition probability matrix and expected immediate rewards.

    Args:
        trans_prob_mat (pd.DataFrame): Transition probability matrix.
        expected_immediate_rew (dict): Dictionary with states as keys and expected immediate rewards as values.
        gamma (float): Discount factor.

    Returns:
        pd.DataFrame: DataFrame with 'Expected Immediate Reward' and 'Value Function' columns indexed by states.
    """

    states = list(expected_immediate_rew.keys())

    R_exp = np.array(list(expected_immediate_rew.values()))

    # Calculate the value function vector by solving (I - gamma * P) V = R
    val_func_vec = np.linalg.solve(np.eye(len(R_exp)) - gamma * trans_prob_mat, R_exp)

    # Create a DataFrame with 'Expected Immediate Reward' and 'Value Function' columns
    MarkRevData = pd.DataFrame({'Expected Immediate Reward': R_exp, 'Value Function': val_func_vec}, index=states)

    return MarkRevData

# Example usage:
# MarkRevData = calculate_state_value_function(df_trans_prob, E_immediate_R, gamma)

Calling this function:

calculate_state_value_function(trans_prob_mat=df_trans_prob, expected_immediate_rew=R_exp_imm, gamma=0.9)

Putting Everything Together:

One way is to put all the code together in a MarkovRewardProcess class and call its methods. I have put the full code (fewer than 100 lines) here:

from typing import Dict, Tuple
from scipy.stats import poisson
import numpy as np
import pandas as pd

class MarkovRewardProcess:
    def __init__(self):
        self.MRP_dict = {}

    def generate_Markov_Rew_Process_Dict(self, user_capacity: int, user_poisson_lambda: int,
                                          holding_cost: int, missedcostumer_cost: int):
        self.MRP_dict = {}  # Initialize the Markov Reward Process Dictionary

        for alpha in range(user_capacity + 1):
            for beta in range(user_capacity + 1 - alpha):

                state = (alpha, beta)
                init_inv = alpha + beta
                beta1 = user_capacity - init_inv

                base_reward = -alpha * holding_cost

                for demand in range(init_inv + 1):
                    if demand <= (init_inv - 1):
                        transition_prob = poisson.pmf(demand, user_poisson_lambda)

                        if state in self.MRP_dict:
                            self.MRP_dict[state][((init_inv - demand, beta1), base_reward)] = transition_prob
                        else:
                            self.MRP_dict[state] = {((init_inv - demand, beta1), base_reward): transition_prob}
                    else:
                        transition_prob = 1 - poisson.cdf(init_inv - 1, user_poisson_lambda)
                        transition_prob2 = 1 - poisson.cdf(init_inv, user_poisson_lambda)
                        reward = base_reward - missedcostumer_cost * ((user_poisson_lambda * transition_prob) -
                                                                      init_inv * transition_prob2)
                        if state in self.MRP_dict:
                            self.MRP_dict[state][((0, beta1), reward)] = transition_prob
                        else:
                            self.MRP_dict[state] = {((0, beta1), reward): transition_prob}

    def calculate_expected_immediate_rewards(self):
        E_immediate_R = {}
        for from_state, value in self.MRP_dict.items():
            expected_reward = sum(reward[1] * prob for (reward, prob) in value.items())
            E_immediate_R[from_state] = expected_reward
        return E_immediate_R

    def create_transition_probability_matrix(self):
        states = list(self.MRP_dict.keys())
        num_states = len(states)
        trans_prob = np.zeros((num_states, num_states))
        df_trans_prob = pd.DataFrame(trans_prob, columns=states, index=states)

        for i, from_state in enumerate(states):
            for j, to_state in enumerate(states):
                for (new_state, reward) in self.MRP_dict.get(from_state, {}):
                    if new_state == to_state:
                        probability = self.MRP_dict[from_state].get((new_state, reward), 0.0)
                        df_trans_prob.iloc[i, j] = probability
        return df_trans_prob

    def calculate_state_value_function(self, trans_prob_mat, expected_immediate_rew, gamma):
        states = list(expected_immediate_rew.keys())
        R_exp = np.array(list(expected_immediate_rew.values()))
        val_func_vec = np.linalg.solve(np.eye(len(R_exp)) - gamma * trans_prob_mat, R_exp)
        MarkRevData = pd.DataFrame({'Expected Immediate Reward': R_exp, 'Value Function': val_func_vec}, index=states)
        return MarkRevData


mrp = MarkovRewardProcess()

# Generate the Markov Reward Process Dictionary
user_capacity = 2
user_poisson_lambda = 2
holding_cost = 1
missedcostumer_cost = 10

mrp.generate_Markov_Rew_Process_Dict(user_capacity, user_poisson_lambda, holding_cost, missedcostumer_cost)

E_immediate_R = mrp.calculate_expected_immediate_rewards()
trans_prob_mat = mrp.create_transition_probability_matrix()

gamma = 0.9  # Replace with your desired discount factor
MRP_Data = mrp.calculate_state_value_function(trans_prob_mat, E_immediate_R, gamma)

print(MRP_Data)
Code output using writing Python Class — Image Source: Author

Key Summary:

– In this blog, the Markov Reward Process was introduced, and the “reward” of the process was defined for the inventory optimization example.

– A new Python data structure, “MRP_dict,” was introduced; it tracks the current state, the reward, and the probability of each next (state, reward) pair.

– The Markov Reward Process aims to find a State Value Function, i.e., the “Expected Total Return” of every state. This was achieved using calculate_state_value_function in the Python code.

[1] You can read about this example in more depth in “Foundations of Reinforcement Learning with Applications in Finance.” However, I have rewritten the Python code in this blog to make it easier to understand.

Thanks for reading so far!

I hope this article has provided an easy-to-understand tutorial on how to do inventory optimization with Python.

If you think this article helped you learn more about inventory optimization and the Markov Process, please give it a 👏 and follow!


Published via Towards AI
