Face Off: Practical Face-Swapping with Machine Learning

Author(s): Aliaksei Mikhailiuk

Originally published on Towards AI.

Image generated with SDXL

I first laid my hands on face-swapping at Snap, working on a face-swap lens. Diving deep into the subject, I realized that there is a myriad of technological branches addressing various parts of the problem.

While very impressive, face-swap technology still requires a huge amount of manual work to be made indistinguishable from real videos. It also requires substantial computational resources to achieve high-quality results, which makes it hard for current methods to operate in real time at scale.

Below, I try to answer three questions: What matters for a high-quality face-swap method? What has been done in the past? And what should you do in practice?

While writing this article, I also found a paper stack of face-swap papers; you might want to check it out!

What matters?

Generally, face-swapping methods fall into two categories: 3D-based methods, which first reconstruct 3D models of both faces and then find a transformation from one face to the other; and machine-learning-based methods, which, given two photos, learn an end-to-end mapping to a face that carries the identity of one image while preserving the attributes of the other.

In this article, I will focus on machine learning methods. 3D methods, despite their initial momentum (mainly because we lacked the resources and knowledge to efficiently train larger neural networks), suffer from lower quality and lower processing speeds.

Going over the papers, I have noticed that the proposed solutions generally try to improve one of three key aspects: quality, identity preservation, and attribute preservation. Typically, these aspects are addressed by inventing or modifying identity and attribute losses, finding ways to efficiently inject identity information into the attribute encoders, augmenting datasets with diverse images, and introducing architectural tweaks to better transfer features from the attribute image.

Key areas of focus for face-swapping methods. Image by the Author.

Dataset considerations

One aspect that each of the methods listed below acknowledges is the need for a diverse dataset of high-quality images of human faces. Ideally, a dataset for training face-swap methods would contain a versatile mix of:

  1. Facial expressions: raised eyebrows, open/closed mouth, open/closed/squinted eyes, and various head poses.
  2. Facial occlusions: glasses, hair bangs, beards, head covers (these can be pre-generated or sampled from datasets, for example: EgoHands, GTEA Hand2K, ShapeNet).
  3. Diversity attributes: skin colours, disabilities, gender, age.

Typical datasets used for training and evaluation are CelebA-HQ, CelebV-HQ, VoxCeleb2, FaceForensics++, and FFHQ. These datasets contain a large number of real images; however, they suffer from a limited range of facial expressions and occlusions, and, perhaps unsurprisingly, they feature a large number of celebrities, who look fairly different from a real population sample. However, with various data augmentations, for example pasting objects into the image to simulate occlusions, or using synthetic datasets built by re-sampling a pre-trained face generator with different pitch, yaw, and expression settings, these datasets can be brought to the required quality.
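
To make the augmentation idea concrete, here is a minimal occlusion-pasting sketch in NumPy. The occluder patch and its alpha mask are assumed inputs (for example, crops from a hands dataset); this illustrates the general idea rather than the pipeline of any specific paper.

```python
import numpy as np

def paste_occluder(face: np.ndarray, occluder: np.ndarray,
                   alpha: np.ndarray, top: int, left: int) -> np.ndarray:
    """Alpha-blend an occluder patch (e.g., a hand crop) onto a face image.

    face:     (H, W, 3) uint8 face image
    occluder: (h, w, 3) uint8 occluder patch
    alpha:    (h, w) float mask in [0, 1], 1 where the occluder is opaque
    """
    out = face.astype(np.float32).copy()
    h, w = occluder.shape[:2]
    region = out[top:top + h, left:left + w]  # view into `out`
    a = alpha[..., None]                      # broadcast over channels
    region[:] = a * occluder.astype(np.float32) + (1.0 - a) * region
    return out.astype(np.uint8)

# Toy usage: paste a random 40x40 patch onto a random 128x128 "face".
face = np.random.randint(0, 255, (128, 128, 3), dtype=np.uint8)
occ = np.random.randint(0, 255, (40, 40, 3), dtype=np.uint8)
mask = np.ones((40, 40), dtype=np.float32)
augmented = paste_occluder(face, occ, mask, top=60, left=44)
```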

What has been done?

First, we need to agree on terminology. Face-swap methods typically have two inputs: the face from which the identity is taken and the face into which the identity is pasted. The literature typically calls these the source and target images, but the terms are used interchangeably depending on the article. To avoid any confusion, I will call the image from which we take the identity the identity image, and the one from which we extract attributes the attribute image.

Without further ado, let's jump into the history and present of face swapping!

DeepFaceLab: Integrated, flexible and extensible face-swapping framework, 2018, code

DF and LIAE Variants of the model. Image by Petrov I.

DeepFaceLab is perhaps the most well-known face-swapping method. Back in 2018, it managed to build an easy-to-use pipeline and a community around it by encouraging data and checkpoint sharing.

The method comes with a big limitation: it can only be tuned for a specific pair of faces and doesn't generalize beyond them. The method has two variations, DF and LIAE. In the DF variation, the identity and attribute mask+image share the same encoder and interconnected (inter) layers and have their own decoders. The LIAE structure was proposed to tackle the lighting-transfer problem: the two inputs share the encoder but have separate interconnected layers, followed by a shared decoder that takes concatenated features from InterAB and InterB for the A and B images, respectively.

The method uses several losses: weighted-mask SSIM (with extra weight on the eyes), DSSIM, MSE, and the difference between the features of the identity and attribute images after the inter layers, to make sure that the information about one image is disentangled from the other.
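
For illustration, here is a hedged sketch of how such a weighted objective could be assembled in PyTorch; the simplified (global, non-windowed) DSSIM, the feature term, and the weights are stand-ins rather than DeepFaceLab's actual implementation.

```python
import torch
import torch.nn.functional as F

def dssim(x: torch.Tensor, y: torch.Tensor, c1=0.01**2, c2=0.03**2):
    """Simplified (global, non-windowed) structural dissimilarity.
    Real implementations use a sliding Gaussian window; this is a sketch."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    ssim = ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))
    return (1 - ssim) / 2

def total_loss(pred, target, feat_id, feat_attr,
               w_pix=10.0, w_struct=1.0, w_feat=1.0):
    """Illustrative combination: pixel MSE, structural dissimilarity, and
    the inter-feature term from the text. Weights are not official values."""
    return (w_pix * F.mse_loss(pred, target)
            + w_struct * dssim(pred, target)
            + w_feat * F.mse_loss(feat_id, feat_attr))

# Toy usage with random stand-ins for images and inter-layer features.
pred, target = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
loss = total_loss(pred, target, torch.randn(1, 128), torch.randn(1, 128))
```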

FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping, 2019, code

FaceShifter Pipeline. Image by Li L.

This work is perhaps the first successful method that didn't require fine-tuning for a specific source-target face pair.

The faces are first aligned, and the identity is extracted with an identity-encoding network. The attribute image is fed through a UNet-like decoder that extracts multi-resolution features. The identity vector and attribute features are then normalized and blended together.

The method has four losses: identity, adversarial, reconstruction (applied when the identity and attribute images are the same), and an attribute-preservation loss, which penalizes the difference between the attribute-encoder embeddings of the generated and attribute images. On top of the method runs a refinement network that ensures occluded regions of the attribute image are preserved; without it, parts of the occluded face might exhibit ghost artifacts.

SimSwap: An Efficient Framework For High Fidelity Face Swapping, 2021, code

Image by Chen R.

The main idea is to extract the identity feature vector with a face recognition network (ArcFace) and modulate it into the encoded attribute features (using AdaIN).

The loss then consists of an identity loss (cosine distance between the identity vector of the identity image and that of the generated image), a reconstruction loss (L1 between images, used only when the identity and attribute images share the same identity), an adversarial loss, and a weak feature-matching loss (L1 between the last several discriminator layers for ground truth vs. generated).
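
Here is a minimal sketch of this AdaIN-style identity injection and the cosine identity loss, assuming an ArcFace-style 512-dimensional identity vector; the layer sizes are illustrative, and the real SimSwap applies such modulation in several blocks of the network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IdentityAdaIN(nn.Module):
    """Instance-normalize the attribute features, then scale/shift them
    with parameters predicted from the identity vector (a hedged sketch)."""
    def __init__(self, channels: int, id_dim: int = 512):
        super().__init__()
        self.to_gamma = nn.Linear(id_dim, channels)
        self.to_beta = nn.Linear(id_dim, channels)

    def forward(self, feats: torch.Tensor, id_vec: torch.Tensor) -> torch.Tensor:
        normed = F.instance_norm(feats)                  # (N, C, H, W)
        gamma = self.to_gamma(id_vec)[:, :, None, None]  # (N, C, 1, 1)
        beta = self.to_beta(id_vec)[:, :, None, None]
        return gamma * normed + beta

def identity_loss(id_src: torch.Tensor, id_gen: torch.Tensor) -> torch.Tensor:
    """Cosine identity loss between identity-image and generated embeddings."""
    return 1 - F.cosine_similarity(id_src, id_gen, dim=1).mean()

# Toy usage with random tensors standing in for real features/embeddings.
adain = IdentityAdaIN(channels=64)
feats = torch.randn(2, 64, 32, 32)   # encoded attribute features
id_vec = torch.randn(2, 512)         # ArcFace-style identity embedding
modulated = adain(feats, id_vec)
loss = identity_loss(id_vec, id_vec + 0.1 * torch.randn(2, 512))
```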

One Shot Face Swapping on Megapixels, 2021, code

Image by Zhu Y.

While not the first of its kind (the work builds on a paper exploring face swapping through latent-space manipulation), the method attains superior results by taking a different angle on the problem than other methods: it exchanges the identity via non-linear latent-space manipulation of StyleGAN2 through GAN inversion. Each module in the paper can be trained separately, which makes fine-tuning on large-scale images feasible under constrained GPU resources.

The model uses reconstruction, LPIPS (for quality preservation), identity, and landmark losses.

Smooth-Swap: A Simple Enhancement for Face-Swapping with Smoothness, 2021

Image by Kim J.

There are two interesting parts to this paper. The first is that the architecture is much simpler than in earlier methods: instead of intricate ways of merging features from the identity and attribute images, the model directly injects identity embeddings into an end-to-end architecture.

The second interesting part is an additional contrastive loss term that smooths the identity embedding space so that gradients propagate better and learning is faster. The remaining losses are the adversarial, identity, and reconstruction losses (the latter applied when the identity is the same in the identity and attribute images).
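
As a rough illustration, an InfoNCE-style contrastive term over identity embeddings could look like the sketch below; the exact formulation in Smooth-Swap may differ in its details.

```python
import torch
import torch.nn.functional as F

def identity_contrastive_loss(anchor, positive, negatives, tau: float = 0.07):
    """Pull embeddings of the same person together and push different
    identities apart (InfoNCE-style sketch).

    anchor, positive: (N, D); negatives: (N, K, D). All L2-normalized here.
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos = (anchor * positive).sum(-1, keepdim=True) / tau       # (N, 1)
    neg = torch.einsum('nd,nkd->nk', anchor, negatives) / tau   # (N, K)
    logits = torch.cat([pos, neg], dim=1)
    target = torch.zeros(anchor.size(0), dtype=torch.long)      # positive at index 0
    return F.cross_entropy(logits, target)

loss = identity_contrastive_loss(torch.randn(8, 512),
                                 torch.randn(8, 512),
                                 torch.randn(8, 16, 512))
```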

A new face swap method for image and video domains: a technical report (GHOST), 2022, code

Image by Chesakov D.

The method presents a simplified version of the FaceShifter architecture. The first, and perhaps most important, improvement is that it does not require two models to produce high-quality images. Second, instead of generating the whole image from scratch, it pastes the generated face into the attribute image based on an insertion mask. The mask is shrunk or expanded depending on the size of the face and has blurred edges to make the transition between the pasted and generated regions smooth.
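
A minimal sketch of the soft-mask pasting step; the insertion mask and blur strength are assumed inputs here, and the real method additionally rescales the mask with the size of the face.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def paste_face(source: np.ndarray, generated: np.ndarray,
               mask: np.ndarray, sigma: float = 5.0) -> np.ndarray:
    """Blend a generated face into a frame with a softened insertion mask.
    Blurring the binary mask smooths the seam between pasted and generated
    regions, as described above.

    source, generated: (H, W, 3) float arrays in [0, 1]
    mask:              (H, W) binary insertion mask
    """
    soft = gaussian_filter(mask.astype(np.float32), sigma=sigma)[..., None]
    return soft * generated + (1.0 - soft) * source

# Toy usage with a square stand-in for the face region.
frame = np.random.rand(256, 256, 3)
gen = np.random.rand(256, 256, 3)
mask = np.zeros((256, 256))
mask[64:192, 64:192] = 1.0
composite = paste_face(frame, gen, mask)
```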

The method also presents several modifications to the loss functions. First, it relaxes the constraint on the reconstruction loss: instead of requiring the identity and attribute inputs to be the same image, it only requires that they share the same identity. Second, it adds a new eye loss, since the eyes appear to be important for the visual perception of human identity.

MobileFaceSwap: A Lightweight Framework for Video Face Swapping, 2022, code

Image by Xu Z.

The work proposes a lightweight face-swap method that is meant to be mobile-friendly and focuses on architectural tweaks that keep it small: for example, it uses a UNet-like architecture with standard convolutions replaced by depth-wise and point-wise convolutions, making it lighter and faster.

To inject identity into the inference model, the authors employ another network that predicts weights for the depth-wise parts and modulation parameters for the point-wise parts (because the latter are heavier). The identity network thus modifies the weights of the main network, and the main network is then used for inference. The ID network uses identity vectors from ArcFace.
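
For reference, here is the generic depthwise-separable building block in PyTorch; in MobileFaceSwap, the depthwise kernels would additionally be predicted by the ID network, which is omitted in this sketch.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """A per-channel (depthwise) spatial filter followed by a 1x1
    (pointwise) channel mixer: the standard replacement for a full
    convolution when parameter count and FLOPs matter."""
    def __init__(self, in_ch: int, out_ch: int, kernel: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel,
                                   padding=kernel // 2,
                                   groups=in_ch)  # one filter per channel
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

block = DepthwiseSeparableConv(32, 64)
y = block(torch.randn(1, 32, 64, 64))  # -> (1, 64, 64, 64)
```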

The work proposes to train the final lightweight network using a teacher model, along with GAN and ArcFace-based ID losses.

Learning Disentangled Representation for One-shot Progressive Face Swapping, 2022, code

Image by Qi L.

Unlike other methods, this work focuses specifically on feature disentanglement: once independent representations are available, they can be mixed and matched to transfer identities from one image to another. The method simultaneously performs two face swaps and two reconstructions from the identity and attribute images.

The work trains separate encoders for identity and attributes, as well as a fusion module that employs landmarks and a facial mask to inject the identity vector; the mask and landmarks serve as direct guidance on where to insert identity information.

The model employs four losses: reconstruction, identity, adversarial, and attribute-transfer losses.

Migrating Face Swap to Mobile Devices: A lightweight Framework and A Supervised Training Solution, 2022, code

Image by Yu H.

The work follows the track of building practical face-swapping models for mobile. However, the most interesting part of the paper is its data augmentation procedure.

In this work, the authors take two pictures with the same identity and transform one of the face images with ageing or fattening effects. They then train the method to paste the unchanged image into the transformed one; since the untransformed original is still available, it provides the ground truth for the face-swap transformation.

The model has six losses: three multi-scale discriminator losses (the decoder is split into three chunks of layers, and each chunk, from deep to shallow, is forced to output a valid reconstruction of the face-swapped result at its resolution), plus VGG, ID, and pixel-wise losses.

High-resolution Face Swapping via Latent Semantics Disentanglement, 2022, code

Image by Xu Y.

The work continues the line of One Shot Face Swapping on Megapixels, again manipulating the W+ latent space; however, it has many more stages.

The model first constructs a side-output swapped face using a StyleGAN generator, blending the structural attributes of the attribute and identity faces in the latent space while reusing the appearance attributes of the attribute image. To further transfer the background of the attribute face, the model uses an encoder to generate multi-resolution features from the attribute image and blends them with the corresponding features from the upsampling blocks of the StyleGAN generator. The blended features are fed into a decoder to synthesize the final swapped face image.
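
To give a flavour of latent-space blending, here is a hedged sketch of a hard structure/appearance split in StyleGAN2's W+ space; the cut-off index is illustrative, and the actual paper learns a considerably more involved blending than this.

```python
import torch

def blend_wplus(w_id: torch.Tensor, w_attr: torch.Tensor,
                structure_layers: int = 7) -> torch.Tensor:
    """Coarse (early) W+ layers roughly control geometry/structure, while
    fine (late) layers control appearance; this sketch takes structure
    from the identity code and appearance from the attribute code.

    w_id, w_attr: (N, L, 512) W+ codes for identity and attribute faces.
    """
    w_swap = w_attr.clone()
    w_swap[:, :structure_layers] = w_id[:, :structure_layers]
    return w_swap

w_id = torch.randn(1, 18, 512)      # 18 layers for a 1024px StyleGAN2
w_attr = torch.randn(1, 18, 512)
w_swap = blend_wplus(w_id, w_attr)  # would be fed to the StyleGAN2 generator
```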

The model relies on adversarial, landmark-alignment, identity, reconstruction, and style-transfer losses.

Region-Aware Face Swapping, 2022

Image by Xu C.

The model has two branches for encoding ID features: a local branch (based on facial features, for example, lips, nose, brows, and eyes) and a global branch (to encode global identity-relevant cues, for example, wrinkles).

To inject local facial features, the model uses transformers in the local branch instead of AdaIN, since too much irrelevant information gets entangled in the ID feature and the transformer architecture handles this better. The decoder is StyleGAN2, and, interestingly, instead of using a blending mask pre-computed from landmarks, the method predicts the mask in a separate branch.

The total loss is fairly straightforward: reconstruction (L2), perceptual, and ID losses.

FastSwap: A Lightweight One-Stage Framework for Real-Time Face Swapping, 2023, code.

Image by Yoo S.

The model consists of three modules: 1) an Identity Encoder, which extracts the identity feature and provides skip connections to the generator; 2) a Pose Network, which extracts the pose from the attribute image and decodes a spatial pose feature; and 3) a Decoder with a TAN block, which integrates the features from 1) and 2) in an adaptive fashion.

The model is trained in a self-supervised manner: both the identity and attribute images have the same identity. During training, both images have their colours distorted, forcing the model to put more effort into identifying the right transformation, guided by an additional "attribute image" that provides the lighting and skin-colour reference.

Loss functions include — reconstruction (L2), perceptual, adversarial, ID, and pose losses.

Diffusion based face-swapping

The methods described above are CNN-based models that combine several loss functions and are trained in a GAN-like manner. However, in the past two years, diffusion models have grown rapidly in popularity due to their superior quality. Despite the differences in the intrinsic mechanisms of image generation, the overall principle of building a diffusion-based face-swapping method remains the same: encode the attribute information and guide the diffusion process with ID vectors and landmarks.
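
As a toy illustration of this guidance pattern, the sketch below conditions a denoising step on an ID embedding and a landmark map; all module names and sizes are hypothetical, and real diffusion face-swap models are far larger and typically use cross-attention for conditioning.

```python
import torch
import torch.nn as nn

class ConditionedDenoiser(nn.Module):
    """Toy conditioning pattern: the denoiser sees the noisy image plus
    guidance (ID embedding broadcast as a feature map, landmark heatmap)
    at every diffusion step. Purely illustrative, not a real model."""
    def __init__(self, id_dim: int = 512, emb_dim: int = 64):
        super().__init__()
        self.id_proj = nn.Linear(id_dim, emb_dim)
        self.time_emb = nn.Embedding(1000, emb_dim)
        # Input: 3 noisy-image channels + 1 landmark-heatmap channel + cond.
        self.net = nn.Conv2d(4 + emb_dim, 3, kernel_size=3, padding=1)

    def forward(self, x_t, t, id_vec, landmarks):
        cond = self.id_proj(id_vec) + self.time_emb(t)   # (N, emb_dim)
        cond_map = cond[:, :, None, None].expand(-1, -1, *x_t.shape[2:])
        inp = torch.cat([x_t, landmarks, cond_map], dim=1)
        return self.net(inp)  # predicted noise

model = ConditionedDenoiser()
eps = model(torch.randn(2, 3, 64, 64), torch.randint(0, 1000, (2,)),
            torch.randn(2, 512), torch.rand(2, 1, 64, 64))
```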

Evaluation

Each of the three aspects mentioned above (quality, identity preservation, and attribute preservation) needs to be measured, and typically each is measured separately. The following approaches can be used to evaluate a method (a small metric sketch follows the list):

  • Identity preservation: run a face recognition network (e.g., FaceNet) and compare the identity embeddings of the identity and generated images with cosine similarity.
  • Artefacts: realism metrics, e.g., FID; however, these are not fine-tuned for human faces.
  • Attribute preservation: compare facial expression embeddings, and eye landmarks for gaze location.
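
A minimal sketch of the identity-preservation metric, assuming a hypothetical embed function that wraps a face recognition model and returns (N, D) embeddings:

```python
import torch
import torch.nn.functional as F

def identity_retention(embed, identity_imgs, swapped_imgs) -> float:
    """Mean cosine similarity between identity embeddings of the identity
    images and the corresponding swapped outputs; higher means better
    identity preservation. `embed` is an assumed face-recognition wrapper."""
    with torch.no_grad():
        e_src = F.normalize(embed(identity_imgs), dim=1)
        e_gen = F.normalize(embed(swapped_imgs), dim=1)
    return F.cosine_similarity(e_src, e_gen, dim=1).mean().item()

# Toy usage with a stand-in embedder (not a real face-recognition model).
embed = lambda imgs: imgs.flatten(1)[:, :128]
score = identity_retention(embed, torch.randn(4, 3, 112, 112),
                           torch.randn(4, 3, 112, 112))
```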

Although these metrics can provide some insight into the performance of face-swapping methods, running subjective studies is instrumental for drawing reliable conclusions about the models. Ideally, the method should be tested on videos; photos are limited in their ability to exhibit artefacts, for example temporal inconsistency or flickering, that would otherwise be noticeable in videos.

What to do in practice?

Once you are ready to design your own face-swapping app, the first step is to check whether you can use an available solution. Depending on your needs, one of the existing solutions may work as-is; if there are tight computational constraints (for example, porting your face-swap model to mobile), you could use the following recipe (a minimal distillation sketch follows the list):

  • Prepare a dataset with diverse identity and attribute images;
  • Generate swapped identity images with the best method from the literature;
  • Distill the result into a model that fits your computational requirements via supervised training on the paired inputs and outputs (swapped-identity references).
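
A minimal distillation sketch under these assumptions: the best method's outputs are pre-generated offline and serve as supervised targets for a lightweight student; the toy student and the plain L1 objective are placeholders, and perceptual or ID terms are usually added in practice.

```python
import torch
import torch.nn.functional as F

class ToyStudent(torch.nn.Module):
    """Stand-in student; a real one would be a small UNet-style generator."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Conv2d(6, 3, kernel_size=3, padding=1)

    def forward(self, id_img, attr_img):
        return self.net(torch.cat([id_img, attr_img], dim=1))

def distill_step(student, optimizer, id_img, attr_img, teacher_out):
    """One supervised step: the teacher's swapped image is the ground truth."""
    optimizer.zero_grad()
    pred = student(id_img, attr_img)
    loss = F.l1_loss(pred, teacher_out)
    loss.backward()
    optimizer.step()
    return loss.item()

student = ToyStudent()
opt = torch.optim.Adam(student.parameters(), lr=1e-4)
loss = distill_step(student, opt, torch.randn(2, 3, 64, 64),
                    torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64))
```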

Liked the author? Stay connected!

Have I missed anything? Do not hesitate to leave a note, comment or message me directly on LinkedIn or Twitter!

Deep Video Inpainting

Removing unwanted objects from videos with deep neural networks. Problem set up and state-of-the-art review.

towardsdatascience.com

Perceptual Losses for Deep Image Restoration

From mean squared error to GANs — what makes a good perceptual loss function?

towardsdatascience.com

On the edge — deploying deep learning applications on mobile

Techniques on striking the efficiency-accuracy trade-off for deep neural networks on constrained devices

towardsdatascience.com


Published via Towards AI
