
A Great Overview of Machine Learning Operations and How the MLFlow Project Made It Easy (Step By Step)

Last Updated on July 17, 2023 by Editorial Team

Author(s): Ashbab khan

Originally published on Towards AI.

Photo by Ramón Salinero on Unsplash

Development is a fundamental concept, whether we are talking about human evolution from hunter-gatherers to the modern humans we see today. The same idea applies to technologies such as Artificial Intelligence.

In the past, our focus was on how to use Machine Learning to solve a problem, and a single person handled almost everything. Today the work is divided: due to advances in the technology, we now need a Data Scientist, a Machine Learning Engineer, and a Data Engineer.

Today, getting a model into production is a challenging task, and we need to understand how Machine Learning Operations skills make it possible. In this article, we will get familiar with MLOps and MLflow. Let's get started.

What is MLOps in simple terms?

MLOps, in simple terms, is the set of operational skills for getting your model into production, where it can be retrained and effectively tracked.

For example, we can see which experiment produced the highest-accuracy or lowest-accuracy model based on the summary metrics we upload, and we can easily see which hyperparameters were used in the highest- and lowest-scoring models.

We can also easily trace who created an experiment and when, what the output artifact of each component is (for example, a preprocessing component generating preprocessed data), and what summary metrics were produced, such as accuracy or roc_score.
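As a toy illustration of that kind of tracking (the run records and numbers below are invented, not real experiment results), picking the best and worst runs by a summary metric is just a max/min over the logged records:

```python
# Invented run records, shaped like what a tracking tool might store:
# author, hyperparameters, and a summary metric per experiment run.
runs = [
    {"author": "alice", "params": {"n_estimators": 100, "max_depth": 5},  "accuracy": 0.91},
    {"author": "bob",   "params": {"n_estimators": 300, "max_depth": 10}, "accuracy": 0.95},
    {"author": "alice", "params": {"n_estimators": 50,  "max_depth": 3},  "accuracy": 0.88},
]

# Compare experiments by their summary metric and inspect the
# hyperparameters the best and worst runs used.
best = max(runs, key=lambda run: run["accuracy"])
worst = min(runs, key=lambda run: run["accuracy"])
```

Tracking tools give you this comparison in a dashboard, but the underlying idea is exactly this: every run records who ran it, with which hyperparameters, and what metrics came out.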

How an end-to-end Machine Learning pipeline in MLOps works

Developing an ML model for production is a challenging task. It requires many skills beyond building the model itself, because a model is useless if it can't make it to production.

Machine Learning Operations engineering is one of those skills; it helps us do this tough job in less time and with fewer resources.
According to experts, almost 50 to 90% of models never make it into production, and that's the scariest part.

Let's look at a simple ETL pipeline.
In the image, the blue boxes are components, and the orange ones are artifacts.

Source: Image by the author ( ashbab khan )

A component is a group of program files that performs some task, whereas an artifact is a file or directory produced by a component.

For example, in the image above we have six components, from download data to evaluation, and every component has at least three important files. After a component finishes running, it generates some files.

For example, the download component takes the URL of the data and gives us a raw data file, which we upload as an artifact to a platform such as Weights and Biases.

The same process happens in the preprocessing component: it takes the previously created raw data as input, performs preprocessing, generates new (preprocessed) data, and saves it back to Weights and Biases. The same thing happens with the other components.

One thing you might notice is that we are generating a lot of data, because every component in the pipeline needs different data. Our data must therefore be stored in a location that teammates anywhere can access.

As we discussed earlier, an artifact is a file or a directory of files generated by the different components, but most importantly, artifacts can be versioned. Whenever we make a change to a component, or new data is added at the source, running the component again creates a new version of the artifact.
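The versioning idea can be sketched with a toy, in-memory artifact store. This is only a mental model, not the Weights and Biases API; the class and method names here are made up for illustration:

```python
# Toy artifact store: logging the same artifact name again creates a new
# version, and an alias (like "prod") points at one specific version.
# Conceptual stand-in only, not the Weights and Biases API.

class ArtifactStore:
    def __init__(self):
        self.versions = {}   # name -> list of contents; list index = version number
        self.aliases = {}    # (name, alias) -> version index

    def log(self, name, contents):
        """Log an artifact; re-logging the same name creates a new version."""
        self.versions.setdefault(name, []).append(contents)
        return f"{name}:v{len(self.versions[name]) - 1}"

    def set_alias(self, name, alias, version):
        """Mark a specific version, e.g. alias 'prod' for production."""
        self.aliases[(name, alias)] = version

    def fetch(self, ref):
        """Fetch by 'name:vN' or by 'name:alias'."""
        name, tag = ref.split(":")
        if tag.startswith("v") and tag[1:].isdigit():
            return self.versions[name][int(tag[1:])]
        return self.versions[name][self.aliases[(name, tag)]]

store = ArtifactStore()
store.log("sample.csv", "raw rows")        # -> "sample.csv:v0"
store.log("sample.csv", "raw + new rows")  # -> "sample.csv:v1"
store.set_alias("sample.csv", "prod", 1)   # mark v1 as the production version
```

Real tools behave the same way: each new upload of `sample.csv` becomes v0, v1, v2, and so on, and consumers can ask for a fixed version or for whatever version currently carries a given alias.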

As we see in the image below, sample.csv is our artifact, and below it are the versions of that artifact.

Various tools and packages are used to store and fetch these artifacts. One of them is Weights and Biases; others include Neptune, Comet, Paperspace, etc.

“Runs and artifacts: the four runs have the Job Type tag finished, which means each run completed its work and produced its output. Clicking on a run and then on the Overview tab (the first tab) shows its details; the Artifacts section (the last tab) shows the artifacts it produced, which is where we see our first artifact, the raw data.” Image source: Image by author

Let's take a brief look at how a tool like Weights and Biases (also known as wandb) is used in MLOps:

  • It is used to store and fetch artifacts.
  • We can store many configurations in it, like the hyperparameters used in the model, and also store images like feature importance plots, ROC curves, or correlation matrices.
  • Every time we fetch or store artifacts, or in simple terms, whenever we initialize wandb, we create a run. A run is an experiment that stores our configuration file, images, and basic info such as who created the run and when, which input artifacts it used, and which output artifacts it produced. In the image above, we see our runs and artifacts.
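Concretely, with the wandb package, a run that stores a config and uploads an artifact might look like the following sketch. The project name, job type, config values, and file path are all made up for illustration, and actually running it requires `pip install wandb` plus a logged-in account, which is why the import is kept inside the function:

```python
def log_raw_data(csv_path="raw_data.csv", project="mlops-demo"):
    """Sketch: create a W&B run, store a config, and upload an artifact.

    Assumes `pip install wandb` and `wandb login` have been done;
    the project name, job type, and path here are illustrative only.
    """
    import wandb  # imported here so the sketch can be read without wandb installed

    # Initializing wandb creates a run; the config dict is stored with it.
    run = wandb.init(project=project,
                     job_type="download_data",
                     config={"source_url": "https://example.com/data.csv"})

    # Describe the output artifact: name, type, and description
    # (the same three parameters the MLproject file passes later on).
    artifact = wandb.Artifact("raw_data",
                              type="raw_data",
                              description="Raw data downloaded from the source URL")
    artifact.add_file(csv_path)

    run.log_artifact(artifact)  # each upload creates a new version (v0, v1, ...)
    run.finish()
```

Note how the run records everything the article lists: who created it and when (from the account), the configuration, the input file, and the output artifact.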

Look at the ETL pipeline image again. It has two components:

  • download data
  • preprocessing

When we run the ETL pipeline, it passes the URL to our download data component, which generates a file called raw_data (an artifact) that we store in Weights and Biases.

Then, in the preprocessing component, we fetch this raw_data file from Weights and Biases, preprocess the data, store it as a new file, clean_data.csv, and upload it to Weights and Biases as an artifact. This is how we store and fetch different files and directories from anywhere using Weights and Biases.
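The two-step flow can be sketched end to end with the local filesystem standing in for Weights and Biases as the artifact store. The file names mirror the article; the cleaning rule (dropping rows with empty cells) is just an example:

```python
import csv
import os
import tempfile

# A temp directory stands in for Weights and Biases as the artifact store.
STORE = tempfile.mkdtemp()

def download_data(rows):
    """Component 1: 'download' raw data and store it as the raw_data artifact."""
    path = os.path.join(STORE, "raw_data.csv")
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows)
    return path

def preprocessing():
    """Component 2: fetch raw_data, drop rows with empty cells, store clean_data.csv."""
    with open(os.path.join(STORE, "raw_data.csv"), newline="") as f:
        clean = [row for row in csv.reader(f) if all(cell.strip() for cell in row)]
    path = os.path.join(STORE, "clean_data.csv")
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(clean)
    return path

# Chain the components: the second one only sees the first one's artifact.
download_data([["a", "1"], ["b", ""], ["c", "3"]])
clean_path = preprocessing()
```

The key design point is that the components never pass data to each other directly; each one reads and writes named artifacts in a shared store, which is what lets teammates anywhere re-run or extend the pipeline.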

Looking at the image of the end-to-end ML pipeline, we see many components, such as download_data, preprocessing, checking_data, etc. This pipeline works the same way as the ETL one: download_data generates an artifact, that artifact is used by the preprocessing component, and so on.

In the end, the random_forest component generates our model, which is an inference artifact ready to predict the target value for unseen observations.

Now our inference artifact is ready for production, and as we discussed earlier, artifacts can be versioned. Let's say we generated two versions, v0 and v1, and decided to use v1 for production; we then mark this artifact as prod, meaning ready for production.

Then we upload our project to GitHub so that other developers have access to the code. The rest of the process, integrating the model with the software, is handled by software engineers and DevOps engineers.

“We marked v1 with the prod tag because this version is ready for production, and we can change this at any time.” Image source: Image by author

Problems due to model drift and how to solve them

As we all know, a model needs up-to-date data for the best performance; otherwise it will suffer from model drift.

It is very important to continuously update the data and re-train our model to avoid model drift. So the question is how we do this in MLOps, and the answer is simple.

Let's say our source is updated with new data and we want to re-train our model on it. We simply run our pipeline again; this runs all the components again, with the new data passed into the download data component, and so on.

When our inference pipeline runs again with the newly added data, it generates a new version of the inference artifact, v2. We then unmark v1 from prod and mark the latest version, v2, as prod, and that's it: our new inference artifact is ready for production again.

This is not a simple task, though: we have to run many experiments and create many versions of the artifact, and the best-performing one is marked for production so that anyone can easily identify which version of the artifact will be used in production.

What is MLflow, and how is it used in MLOps to run our end-to-end machine learning pipeline in production?

MLflow is a framework used to run our machine learning pipeline as an effective workflow using simple commands:

mlflow run .
mlflow run src/download_data

Through MLflow we can chain all the components we saw previously so that they execute in the right order.
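In code, this chaining is typically done from a top-level script that launches each component project in order via mlflow.run(). The sketch below assumes each component lives in src/<step>/ with its own MLproject file; the step names and the parameter are illustrative, and the mlflow import is kept inside the function since actually running it requires MLflow and real project folders:

```python
import os

def run_pipeline(root, steps=("download_data", "preprocessing")):
    """Sketch: chain pipeline components by launching each MLflow project in order.

    Assumes a layout of <root>/src/<step>/MLproject; the step names and
    the parameters dict here are illustrative only.
    """
    import mlflow  # requires `pip install mlflow` to actually run

    for step in steps:
        mlflow.run(
            os.path.join(root, "src", step),  # folder containing the step's MLproject file
            "main",                           # entry point defined in that MLproject file
            parameters={"artifact_name": f"{step}_output"},
        )
```

Because each step reads its input artifact from the store and writes its output artifact back, running the steps in order is all the orchestration the pipeline needs.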

“Three important files of an MLflow project”. Source: Image by author

An MLflow project is divided into three parts:

  • Environment (conda.yml)
  • Code
  • Project definition (MLproject)

Let’s understand them one by one

Environment: the first part is the environment file, which lists the packages used in the code, such as pandas, MLflow, and seaborn; these are the packages required in a data science project. Below is our environment file.

We see that components is the name of this environment, and then we have the dependencies section. These are the packages downloaded from the two channels: conda looks first in conda-forge, and if it can't find a package there, it looks in the defaults channel to download and install it. We can also download and install packages from pip, as we do for wandb.

Important note

If we are downloading a specific version of a package from a channel, as we do below with mlflow=2.0.1, we use a single = sign; if we are installing a specific version from pip, we use ==, as we do with wandb==0.13.9.

name: components
channels:
  - conda-forge
  - defaults
dependencies:
  - mlflow=2.0.1
  - python=3.10.9
  - hydra-core
  - pip
  - pip:
      - wandb==0.13.9

Code: the code is the actual logic, i.e., our program files.

Project definition: MLflow doesn't run the code file directly; it first runs the project definition file, which is the MLproject file. In this file we describe how the code file is run and which arguments it needs.

Below we see our MLproject file. The first thing to understand is that conda_env points to the name of our environment file from the environment section above; this runs our conda.yml file (the environment file shown above is saved as conda.yml).

MLflow installs the necessary packages from the channels and from pip, then reads the parameters section, and finally the command section runs our Python file, run.py, with the arguments it needs.

This is what the MLproject file looks like:

name: download_file
conda_env: conda.yml

entry_points:
  main:
    parameters:
      artifact_name:
        description: Name for the output artifact
        type: str

      artifact_type:
        description: Type of the output artifact. This will be used to categorize the artifact in the W&B interface
        type: str

      artifact_description:
        description: A brief description of the output artifact
        type: str

    command: >-
      python run.py --artifact_name {artifact_name}
      --artifact_type {artifact_type}
      --artifact_description {artifact_description}

So what are the parameters above, and where do they get their values from?

Let's answer the first question: what are these parameters?

To create an artifact, we need the following three parameters, and since we store and fetch artifacts in the code file, we pass these parameters from the MLproject file to the code file to avoid hard-coding:

  • artifact name
  • artifact type
  • artifact description

Passing values to a Python file via the command line

To receive an argument from the command line (or from an MLproject file) in a Python file, we use a package called argparse. Below is a demonstration of how argparse works.

Below is the code we need to add to the __main__ section of the Python file to receive arguments from the command line. In our case, we receive three arguments from the MLproject file in our code file:

  1. --artifact_name
  2. --artifact_type
  3. --artifact_description

Code file

import argparse

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Download URL to a local destination")

    parser.add_argument("--artifact_name",
                        type=str,
                        help="Name for the output artifact",
                        required=True)

    parser.add_argument("--artifact_type",
                        type=str,
                        help="Output artifact type.",
                        required=True)

    parser.add_argument("--artifact_description",
                        type=str,
                        help="A brief description of this artifact",
                        required=True)

    args = parser.parse_args()

Look at the MLproject file again: those are the same three arguments we pass to run.py in the command section, and that's how we receive them in Python.
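You can check the parser without going through MLflow by handing parse_args an explicit argument list, the same list MLflow would build from the command section (the values here are just examples):

```python
import argparse

parser = argparse.ArgumentParser(description="Download URL to a local destination")
parser.add_argument("--artifact_name", type=str, required=True)
parser.add_argument("--artifact_type", type=str, required=True)
parser.add_argument("--artifact_description", type=str, required=True)

# Simulate the command line that MLflow builds from the MLproject command section.
args = parser.parse_args([
    "--artifact_name", "raw_data.csv",
    "--artifact_type", "raw_data",
    "--artifact_description", "Raw data from the source URL",
])
```

Passing a list to parse_args instead of letting it read sys.argv is a handy way to test the argument handling of a component in isolation.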

Our second question is: where does the MLproject file get its values from?

As we discussed earlier, whenever we run an MLflow project, the first file to run is the MLproject file, and we launch it from the terminal with mlflow run {file_path}. That command is enough when we pass nothing to the MLproject file, but in our case the file needs three parameters, so we use a slightly different command.

The code below is a practical demonstration of how we pass values to the MLproject file through the command line.

mlflow run . -P artifact_name=clean_data.csv \
-P artifact_type=clean_data \
-P artifact_description="Clean and preprocessed data"

Here, to pass values to the MLproject parameters through MLflow, we use the -P flag followed by the parameter name, as in the code above.

So that's how MLflow runs the components, and how these three files (code, environment, and MLproject) work together. Thank you for reading this article.

You can connect with me on LinkedIn.

If you have any questions about this post, feel free to ask in the comments section.


Published via Towards AI
