
Google Colab 101 Tutorial with Python — Tips, Tricks, and FAQ 

Last Updated on October 21, 2021 by Editorial Team

Author(s): Saniya Parveez, Roberto Iriondo
Laptop displaying Google Colab by Google, image is a derivative from original by Bongkarn Thanyakij on Pexels.
Source: Derivative from original by Bongkarn Thanyakij on Pexels

An in-depth tutorial on how to use Google Colab with Python, along with Colab’s tips, tricks, and FAQ

This tutorial’s code is available on GitHub, and its full implementation is also available on Google Colab.

Table of Contents

  1. Introduction
  2. Why Use Google Colab?
  3. Start Google Colab
  4. Uploading a Notebook from Github
  5. Uploading Data from Kaggle
  6. Read Files from Google Drive
  7. Setting up Hardware Accelerator GPU for Runtime
  8. Clone a GitHub Repository to Google Drive
  9. Colab Magic
  10. Plotting
  11. TPU (Tensor Processing Unit) in Google Colab
  12. Conclusion

Introduction

Google Colab, a project from Google Research, is a free, Jupyter-based environment that allows us to create Jupyter [programming] notebooks to write and execute Python [1] (along with Python-based third-party tools and machine learning frameworks such as Pandas, PyTorch, TensorFlow, Keras, Monk, OpenCV, and others) in a web browser.

A programming notebook is a document-style interface to a shell or kernel in which we can write and execute code. The data required for processing in Google Colab can be mounted from Google Drive or imported from any source on the internet. Project Jupyter is an open-source software organization that develops and supports Jupyter notebooks for interactive computing [4].

Google Colab requires no configuration to get started and provides free access to GPUs. One of the main functionalities of Google Colab is that it allows anyone to share live code, mathematical equations, data visualizations, data processing (cleaning and transformation), numerical simulations, machine learning models, and many other projects with others.

Why Use Google Colab?

Google Colab has unique and critical features:

  • It provides a free Jupyter notebook environment.
  • It comes with pre-installed packages.
  • It is hosted entirely on Google Cloud.
  • Users do not need to set up servers or workstations.
  • Notebooks save automatically to a user’s Google Drive.
  • It provides browser-based Jupyter notebooks.
  • It is completely free of cost and offers GPU and TPU power (unless you need more resources and decide to go pro with Colab Pro).
  • It supports Python versions 2 and 3 (however, Google suggests migrating important notebooks to Python 3 [2] [5]).
  • It provides two hardware accelerators:
    1. GPU (Graphical Processing Unit).
    2. TPU (Tensor Processing Unit).

Start Google Colab

Python code can be executed directly in the web browser by using Colab. We can launch it with the URL below [1]:

https://colab.research.google.com/

The launch window opens with a popup offering many features:

Figure 1: Screenshot of Google Colab’s start page.

It provides options to create a notebook as well as to upload and select from different sources such as:

  • GitHub
  • Google Drive
  • Local computer

Uploading a Notebook from GitHub

Python code can be uploaded directly from GitHub by using a project’s URL or by searching for an organization or user. The steps below highlight how to upload a project using a GitHub URL:

  • Launch Google Colab.
  • Select the GitHub tab from the popup box.
Figure 2: Screenshot of Google Colab’s upload code using a Github URL.
  • Enter the GitHub project’s URL and search it to fetch the code.
Figure 3: Screenshot showing how to upload a Github repository with Google Colab.
  • It will upload the complete code with one click to the Google Colab notebook.
Figure 4: Screenshot showcasing the uploaded Github repository using a URL.

Similarly, code can be uploaded directly from Google Drive by filtering saved notebooks by name, owner, or last modified date.

Figure 5: Screenshot showing how to upload a notebook directly from Google Drive to Google Colab.
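Colab also accepts direct URLs for notebooks hosted on GitHub, following a fixed path scheme. A minimal sketch of building such a link (the branch name and notebook path below are hypothetical placeholders):

```python
# Colab opens GitHub-hosted notebooks at:
#   https://colab.research.google.com/github/<user>/<repo>/blob/<branch>/<path>
# The branch and notebook path here are made-up placeholders.
user = "saniyaparveez"
repo = "youtube_video_type_prediction"
branch = "master"
path = "notebook.ipynb"

colab_url = f"https://colab.research.google.com/github/{user}/{repo}/blob/{branch}/{path}"
print(colab_url)
```

Pasting such a URL into the browser opens the notebook directly, without going through the popup.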

Uploading Data from Kaggle

Data from Kaggle can be uploaded directly into Colab for processing. An API token from Kaggle is required to accomplish the data import.

Steps to generate an API token from Kaggle

  • Open Kaggle
  • Go to “My Account”
  • Scroll down to the “API” section
Figure 6: Screenshot of Kaggle’s website showing the API section.
  • Click on “Expire API Token” to remove the previous token if required.
  • Click on “Create New API Token.” It will generate a new token and download a JSON file named “kaggle.json”.
  • The “kaggle.json” file contains the username and key like:
Figure 7: Screenshot of IDE showing test API key for Kaggle.

Steps to upload data from Kaggle

Save the “kaggle.json” file on your local computer.

Install the Kaggle package

!pip install -q kaggle

Import packages:

from google.colab import files

Upload the local file “kaggle.json”

files.upload()
Figure 8: Screenshot of the output of the uploaded file “kaggle.json”
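The Kaggle command-line tool looks for the token at ~/.kaggle/kaggle.json rather than in the notebook’s working directory, so it is usually safest to copy the uploaded file there and restrict its permissions. A minimal sketch:

```python
import os
import shutil

# Copy the uploaded token to the path the Kaggle CLI reads it from,
# then restrict its permissions (the CLI warns about world-readable tokens).
kaggle_dir = os.path.expanduser("~/.kaggle")
os.makedirs(kaggle_dir, exist_ok=True)
if os.path.exists("kaggle.json"):  # produced by files.upload() above
    shutil.copy("kaggle.json", os.path.join(kaggle_dir, "kaggle.json"))
    os.chmod(os.path.join(kaggle_dir, "kaggle.json"), 0o600)
```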

Check if the Colab notebook connects with Kaggle correctly.

!kaggle datasets list
Figure 9: Screenshot showing a dataset list from Kaggle.

Download any competition’s data from Kaggle (e.g., the “predict future sales” competition):

!kaggle competitions download -c competitive-data-science-predict-future-sales
Figure 10: Screenshot showing the output of data downloads from Kaggle.

The data from Kaggle will be downloaded and made available in Colab, like:

Figure 11: Screenshot of Google Colab showing the uploaded data from Kaggle.
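Kaggle competition downloads arrive as zip archives; a small helper to extract one into a data directory (the archive name below is assumed from the download step above):

```python
import zipfile

def extract_archive(archive_path, dest="data"):
    """Unzip a downloaded Kaggle archive into dest and return its member names."""
    with zipfile.ZipFile(archive_path) as zf:
        zf.extractall(dest)
        return zf.namelist()

# In Colab, after the download above:
# extract_archive("competitive-data-science-predict-future-sales.zip")
```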

Read Files from Google Drive

Google Colab provides functionality to read data from Google Drive, too.

Import packages

import glob
import pandas as pd
from google.colab import drive

Mount Google Drive

drive.mount('/gdrive')

This will ask for a Google Drive authorization code.

Figure 12: Screenshot showing Google Colab asking you to insert Google Drive’s authorization code.

Input box for the authorization code

Click on the link and generate the authorization code.

Read a CSV file from the drive.

file_path = glob.glob("/gdrive/My Drive/***.csv")
for file in file_path:
    df = pd.read_csv(file)
    print(df)
Figure 13: Output from the test CSV file on Google Drive.
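The loop above prints each file separately; if the CSVs share a schema, a common follow-up is to combine them into a single DataFrame. A minimal sketch (the Drive path is the one used above):

```python
import glob

import pandas as pd

def load_csvs(pattern):
    """Read every CSV matching a glob pattern into one DataFrame."""
    frames = [pd.read_csv(f) for f in sorted(glob.glob(pattern))]
    return pd.concat(frames, ignore_index=True)

# In Colab, after mounting the drive:
# df = load_csvs("/gdrive/My Drive/*.csv")
```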

Setting up Hardware Accelerator GPU for Runtime

Google Colab provides a free cloud service with a GPU hardware accelerator. High-configuration GPU machines are very costly, yet they are required in machine learning and deep learning to process multiple computations simultaneously.

Nvidia GPU card, image from Nana Dua on Pexels.
Source: Nana Dua on Pexels

Why are GPUs required in Machine Learning or Deep Learning?

Nowadays, GPUs dominate machine learning and deep learning due to their optimized capability for compute-intensive workloads and streaming memory models.

GPUs give outstanding performance through parallelism and can launch millions of threads in one call. They perform substantially better than CPUs on these workloads, even though GPUs may have a lower clock speed and lack the many-core management features of a CPU.

Setup Hardware Accelerator GPU in Colab

Steps to setup GPU:

  • Go to Runtime → Change runtime type.
  • Select “GPU” from the popup
Figure 14: Screenshot of GPU’s accelerator selection.

Checking details about the GPU in Colab.

Import important packages

import tensorflow as tf
from tensorflow.python.client import device_lib

Check the GPU accelerator

tf.test.gpu_device_name()
Figure 15: Screenshot of GPU’s accelerator in Google Colab.

Check the hardware used for the GPU.

device_lib.list_local_devices()
Figure 16: Screenshot showcasing the details about the GPU in our session.

Code Example Using a GPU

Check the number of available GPUs without selecting a GPU in Runtime; keep it set to “None.”

Figure 17: Hardware accelerator set to none.
import tensorflow as tf
no_of_gpu = len(tf.config.experimental.list_physical_devices('GPU'))
print("Total GPUS: ", no_of_gpu)
Figure 18: Hardware accelerator is None, so the GPU’s value is 0.

Set the hardware accelerator in Runtime to GPU.

Figure 19: Screenshot of hardware accelerator set to GPU.
import tensorflow as tf
no_of_gpu = len(tf.config.experimental.list_physical_devices('GPU'))
print("Total GPUS: ", no_of_gpu)
Figure 20: Hardware accelerator is GPU so the value of GPU is 1.

Multiply Tensors on GPU:

try:
    with tf.device('/device:GPU:0'):
        tensor1 = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
        tensor2 = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
        result = tf.matmul(tensor1, tensor2)
        print(result)
except RuntimeError as exception:
    print(exception)
Figure 21: Tensor multiplication result.

Clone a GitHub Repository to Google Drive

GitHub repository code can be cloned and stored in your Google Drive.

Steps to clone the GitHub repository to Google Drive:

  • Mount Google Drive.
from google.colab import drive
drive.mount('/content/gdrive')
Figure 22: Google Drive successfully mounted in Google Colab.
  • Enter Google Drive and create a directory named “project.”
%cd gdrive/My Drive/
!mkdir project
%cd project/
Figure 23: Entering the project directory.
  • Clone the repository, i.e.:
!git clone https://github.com/saniyaparveez/youtube_video_type_prediction.git
Figure 24: Screenshot showing how to clone a Github repository.
  • Check the cloned project.
!ls
Figure 25: Cloned project.
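A quick programmatic check that the clone succeeded is to look for the repository’s .git directory; the path in the comment below is assumed from the mount and clone steps above:

```python
import os

def repo_cloned(repo_path):
    """Return True if repo_path contains a Git repository."""
    return os.path.isdir(os.path.join(repo_path, ".git"))

# Path assumed from the mount and clone steps above:
# repo_cloned("/content/gdrive/My Drive/project/youtube_video_type_prediction")
```

Because the clone lives on Drive, it persists across Colab sessions.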

Colab Magic

Colab offers some amazing tricks: it provides multiple magic commands that perform quick operations in shorthand. These commands are used with a % prefix.

List All Magic Commands

%lsmagic
Figure 26: List of all of Google Colab’s magic commands.

List Local Directories

%ldir
Figure 27: List of local directories.

Get Notebook History

%history

CPU Time

%time
Figure 28: CPU and wall time.

How long has the system been running?

!uptime
Figure 29: Displaying system uptime.

Display available and used memory

!free -h
print("-"*100)
Figure 30: Display available and used memory.

Display the CPU specification

!lscpu
print("-"*70)
Figure 31: Display CPU specification.

List all running VM processes.

%%sh
echo "List all running VM processes."
ps -ef
echo "Done"
Figure 32: Display all VM running processes.

Embed HTML Text

%%html
<marquee>Towards AI is a great publication platform</marquee>

Design HTML Form

#@title Personal Details
#@markdown Information.
Name = 'Peter' #@param {type: "string"}
Age = 25  #@param {type: "slider", min: 1, max: 100}
zip = 1234  #@param {type: "number"}
Date = '2020-01-26'  #@param {type: "date"}
Gender = "Male"  #@param ['Male', 'Female', 'Other']
#@markdown ---
print("Submitting the form")
print(Name, Age, zip, Date, Gender)
print("Submitted")
Figure 33: Generating a form in Google Colab.
Figure 34: Cell execution output.

Plotting

Google Colab can be used for data visualization, as well. The following code and graph show a plot containing more than one polynomial, Y = X³+X²+X [3].

import numpy as np
import matplotlib.pyplot as plt

x = np.arange(-10, 10)
y = np.power(x, 3)
y1 = np.power(x, 3) + np.power(x, 2) + x
plt.scatter(x, y1, c="red")
plt.scatter(x, y)
Figure 35: A graph showing more than one polynomial.

The following code is used to graph a heat map.

import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
length = 10
data = 5 + np.random.randn(length, length)
data += np.arange(length)
data += np.reshape(np.arange(length), (length, 1))
sns.heatmap(data)
plt.show()
Figure 36: Heatmap.

TPU (Tensor Processing Unit) in Google Colab

We use Tensor Processing Units (TPUs) to accelerate a TensorFlow graph. They are AI accelerator application-specific integrated circuits (ASICs) specially designed for neural network machine learning. Google developed this processing unit.

TPUs offer a remarkable configuration of floating-point performance: each TPU packs up to 180 teraflops of floating-point performance and 64 GB of high-bandwidth memory on a single board. A teraflop is a measure of a computer’s speed: one trillion floating-point operations per second.
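As a quick sanity check on those numbers (the workload size below is made up purely for illustration):

```python
# One teraflop = 1e12 floating-point operations per second (FLOPS),
# so a 180-teraflop TPU board peaks at 1.8e14 FLOPS.
TERAFLOP = 10**12
tpu_peak_flops = 180 * TERAFLOP

# At peak throughput, a hypothetical workload of 3.6e15 operations takes:
seconds = 3.6e15 / tpu_peak_flops
print(seconds)  # 20.0
```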

Setup TPU in Colab

Steps to setup a TPU in Google Colab:

  • Runtime menu → Change runtime type
Figure 37: Selection of TPU hardware accelerator.

Check Running on TPU Hardware Accelerator

It requires the TensorFlow package. The code below checks whether Colab is set to a TPU accelerator:

import tensorflow as tf
try:
    tpu = tf.distribute.cluster_resolver.TPUClusterResolver()  
    print('Running on TPU ', tpu.cluster_spec().as_dict()['worker'])
except ValueError:
    print('Exception')
Figure 38: Running on TPU hardware accelerator.

If the TPU is not set, then this code will encounter an error.

Conclusion

Google Colab, or Colaboratory, from Google Research, is a Jupyter notebook environment for executing Python-based code to build machine learning or deep learning models.

It is completely free (unless you would like to go pro) and provides GPU and TPU hardware accelerators. It is effortless to use and share thanks to its zero-configuration requirement.

It allows combining executable code and rich text in a single document, along with images, HTML, LaTeX, and others. Vital machine learning libraries like TensorFlow come already installed, so it is perfect for building machine learning and deep learning models. Colab is outstanding for developing neural networks.

We can achieve parallelism and multi-threaded execution by using the GPU-based hardware accelerator. We can share Google Colab notebooks publicly as tutorial notebooks. Inserting HTML tags and styling text builds an attractive and meaningful notebook for tutorials, and interleaving text with code is remarkably helpful for explaining code flow and logic.

Data scientists and machine learning practitioners can harness the full power of Python’s libraries to analyze and visualize data, and Google Colab can import data directly from Kaggle and upload code from GitHub.


DISCLAIMER: The views expressed in this article are those of the author(s) and do not represent the views of Carnegie Mellon University nor other companies (directly or indirectly) associated with the author(s). These writings do not intend to be final products, yet rather a reflection of current thinking and being a catalyst for discussion and improvement.

All images are from the author(s) unless stated otherwise.

Published via Towards AI

Resources

Google Colab implementation.

GitHub repository.

References

[1] Google Colab, https://colab.research.google.com/

[2] Python 2 Deprecation, Google Colab, Google, https://research.google.com/colaboratory/faq.html#python-2-deprecation

[3] Machine Learning Algorithms for Beginners with Code Examples in Python, Pratik Shukla, Roberto Iriondo, Towards AI, https://towardsai.net/p/machine-learning/machine-learning-algorithms-for-beginners-with-python-code-examples-ml-19c6afd60daa

[4] Project Jupyter, https://jupyter.org/

[5] Google Colab, FAQ, https://research.google.com/colaboratory/faq.html

a new tab and adds a "noopener" rel to them var stickyLinks = document.querySelectorAll('.grid-item.sticky a'); for (var i = 0; i < stickyLinks.length; i++) { /* stickyLinks[i].setAttribute('target', '_blank'); stickyLinks[i].setAttribute('rel', 'noopener'); */ } // Editorial 302 links, same here var editLinks = document.querySelectorAll( '.grid-item.category-editorial a' ); for (var i = 0; i < editLinks.length; i++) { editLinks[i].setAttribute('target', '_blank'); editLinks[i].setAttribute('rel', 'noopener'); } } // Add current year to copyright notices document.getElementById( 'js-current-year' ).textContent = new Date().getFullYear(); // Call functions after page load extLink(); //addAlly(); setTimeout(function() { //addAlly(); //ideally we should only need to run it once ↑ }, 5000); }; function closeCookieDialog (){ document.getElementById("cookie-consent").style.display = "none"; return false; } setTimeout ( function () { closeCookieDialog(); }, 15000); console.log(`%c 🚀🚀🚀 ███ █████ ███████ █████████ ███████████ █████████████ ███████████████ ███████ ███████ ███████ ┌───────────────────────────────────────────────────────────────────┐ │ │ │ Towards AI is looking for contributors! │ │ Join us in creating awesome AI content. 
│ │ Let's build the future of AI together → │ │ https://towardsai.net/contribute │ │ │ └───────────────────────────────────────────────────────────────────┘ `, `background: ; color: #00adff; font-size: large`); //Remove latest category across site document.querySelectorAll('a[rel="category tag"]').forEach(function(el) { if (el.textContent.trim() === 'Latest') { // Remove the two consecutive spaces (  ) if (el.nextSibling && el.nextSibling.nodeValue.includes('\u00A0\u00A0')) { el.nextSibling.nodeValue = ''; // Remove the spaces } el.style.display = 'none'; // Hide the element } }); // Add cross-domain measurement, anonymize IPs 'use strict'; //var ga = gtag; ga('config', 'G-9D3HKKFV1Q', 'auto', { /*'allowLinker': true,*/ 'anonymize_ip': true/*, 'linker': { 'domains': [ 'medium.com/towards-artificial-intelligence', 'datasets.towardsai.net', 'rss.towardsai.net', 'feed.towardsai.net', 'contribute.towardsai.net', 'members.towardsai.net', 'pub.towardsai.net', 'news.towardsai.net' ] } */ }); ga('send', 'pageview'); -->