
Deep Learning Applied to Physics and Fluids

Last Updated on November 6, 2023 by Editorial Team

Author(s): Eduardo Vitalbrasil

Originally published on Towards AI.

Numerical simulations have been used for years to understand the behavior of physical systems: how fluids interact with a structure, how a geometry deforms under stress, or even how heat distributes under given heating conditions. Applied in domains as diverse as aerospace, automotive, and energy, these calculations make it possible to dimension prototypes and ensure safe processes without having to build anything. Nevertheless, they can be computationally expensive and take many hours, days, or even weeks. That’s where Machine Learning, and specifically Deep Learning, shines, cutting the processing time to mere minutes!

Computational Fluid Dynamics simulations

A common numerical simulation describes a physical system by solving a set of Partial Differential Equations (PDEs), which typically have the form:

uₜ + 𝒩[u; λ] = 0,  x ∈ Ω, t ∈ [0, T]

𝒩 represents the differential operator over the domain Ω ⊂ ℝᵈ, with boundary ∂Ω and parameters λ. The solution u(x, t) of the system depends on the spatial coordinates and time, with subscripts denoting partial derivatives. The set of equations can be solved by discretizing the physical domain into small parts (finite elements or finite volumes) to obtain a linearized system. This approach finds particular application in fluid dynamics.

In fluid dynamics, the system is represented mainly by the Navier-Stokes equations, a set of laws with no general analytical solution that describe the behavior of every fluid based on mass and momentum balances. In a simpler 2D incompressible form, they can be written as:

∂u/∂x + ∂v/∂y = 0
ρ(∂u/∂t + u ∂u/∂x + v ∂u/∂y) = −∂p/∂x + μ(∂²u/∂x² + ∂²u/∂y²)
ρ(∂v/∂t + u ∂v/∂x + v ∂v/∂y) = −∂p/∂y + μ(∂²v/∂x² + ∂²v/∂y²)

where u is the velocity along the x-axis, v is the velocity along the y-axis, p is the pressure, ρ is the density, and μ is the dynamic viscosity.

Computational Fluid Dynamics (CFD) simulations consist of solving the discretized, linearized system together with its boundary conditions, such as the pressure and velocity at the limits of the domain, typically by iterative multigrid methods. Direct methods are impractical for real-world applications: for a 3D Cartesian, equally spaced grid with i points per dimension (i³ elements), inverting the matrix reaches 𝒪(i⁷) complexity.

Even with efficient solvers running in an HPC parallel environment, the computational cost of such operations can stretch to many hours and become a bottleneck in a dynamic engineering process. The solution? As we see more and more often: AI!

Surrogate models

When an input-output relationship is present, AI arises as a candidate to model it. This scenario aligns perfectly with CFD, where the geometric setup, parametrized as a grid and its elements, together with the boundary conditions, can be linked to the output: the physical fields (pressure, velocity, etc.) at each point of the grid. The resulting models act as meshless solvers that can replace traditional simulators at a lower computational cost.

In general terms, we want to learn the mapping between the PDE parameters (x, t, λ), with x ∈ ℝⁿ, and the solution u(x, t) ∈ ℝⁿ. In other words, we aim to find the predictive function F: (x, t, λ) → u(x, t), where t is sometimes held constant (steady-state analysis). With that in mind, we can imagine different ways to do it.

Simplified models

The easiest approach to model the relationship is to simplify it by reducing the dimensionality of the data. This can be applied both to the input and to the output. For instance, instead of using the full coordinates of the grid points, we can represent the previously described geometry with a reduced set of k parameters, k ∈ ℕ with k < n. A gear, for example, has a number of teeth, a pitch radius, a width, etc.

For the output, a viable choice is a global performance metric s(k), such as the forces acting on the prototype or the drag and lift coefficients.

The upside of these simplifications is that they allow us to apply more basic and faster AI models. Even a linear or polynomial regression could learn the function f: k → s(k) without major issues, as the sketch below illustrates.
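To make this concrete, here is a minimal sketch of such a surrogate in NumPy; the reduced parameters k and the target s(k) are synthetic stand-ins (not from a real solver), and a drag coefficient is just one example of what s could be.

import numpy as np

# Hypothetical reduced design parameters k (e.g., teeth count, pitch radius,
# width), normalized, for m = 200 past simulations; s is a synthetic stand-in
# for a global metric such as a drag coefficient.
rng = np.random.default_rng(0)
k = rng.uniform(0.0, 1.0, size=(200, 3))
s = 0.3 * k[:, 0] + k[:, 1] ** 2 - 0.1 * k[:, 2] + rng.normal(0.0, 0.01, 200)

# Quadratic feature expansion + least squares: a basic surrogate f: k -> s(k)
X = np.hstack([np.ones((200, 1)), k, k ** 2])
coef, *_ = np.linalg.lstsq(X, s, rcond=None)
print('train RMSE:', np.sqrt(np.mean((X @ coef - s) ** 2)))

Once fitted, evaluating the surrogate is a single matrix product, which is what makes this family of models so cheap compared to a full simulation.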

The downside is that reducing the dimensionality this way incurs an intrinsic loss of information, and the models become less generalizable when facing data outside the design space.

Volumetric models

Instead of reducing the dimensionality of the data, we can opt to work with the original volumetric grid (x ∈ ℝⁿ), which introduces greater complexity and requires Deep Learning techniques.

When dealing with an unstructured grid, a common approach is to interpolate it onto a uniform structured mesh. Essential features like freestream velocity and pressure can be embedded in each voxel, enabling corresponding predictions. Regions devoid of fluid are represented as null values, thereby encoding the geometry. A rough sketch of this interpolation step follows.
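The sketch below uses SciPy's griddata; the arrays are synthetic stand-ins for one simulation's unstructured output.

import numpy as np
from scipy.interpolate import griddata

# Synthetic stand-ins: unstructured node coordinates and a scalar field
points = np.random.rand(1000, 2)                              # (N, 2) node positions
values = np.sin(4 * points[:, 0]) * np.cos(4 * points[:, 1])  # e.g., a pressure field

# Uniform structured grid covering the domain
xs, ys = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
grid = griddata(points, values, (xs, ys), method='linear', fill_value=0.0)
# grid is now a 64x64 'image'; fill_value marks regions without fluid as null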

This voxel-based representation facilitates the use of a technique commonly employed in image recognition tasks: convolution. Convolutional Neural Networks (CNNs) can thus extract local and global features via their filtering approach. Varying scales of feature extraction can be achieved by integrating different stages, leading to increasingly intricate models, including U-Nets and autoencoders/decoders.

Instead of transforming the unordered data into the ordered shape of voxels/pictures, another solution is to encode the coordinates explicitly. This permits describing the data in tabular format, where each point is associated with the corresponding simulation/example index. As we are going to see in the next section, this is what PyTorch Geometric does!

While it is theoretically plausible to train and apply a model to each point independently, this approach normally fails to capture the inter-node relationships critical for determining local information. Enter Graph Neural Networks, another class of models tailored to address this limitation.

Geometric models

In CFD we are often interested in determining the physical fields not in the volume but on the surface. This prompts a reconsideration of model-building strategies. While the voxelization method outlined earlier has shown promising results, applying it to the sparse point cloud that describes the geometry requires creating a uniform structured grid. This becomes impractical for intricate geometries, as it leads either to unnecessary computational cost, from encompassing irrelevant information around the shape, or to a loss of valuable information, from a grid that is too coarse.

A more efficient solution is offered by the field of Geometric Deep Learning, best known for its success in object recognition and semantic segmentation. This approach is closely linked to Graph Neural Networks and directly treats a Point Cloud, an unordered set of points, describing the data as a table of coordinates and indexes, as mentioned in the last section. Luckily, that’s exactly what we have with an unstructured mesh!

To describe the data in such a format, we can use PyTorch Geometric, an extension of the state-of-the-art PyTorch framework tailored for GNNs and geometric models. It also contains a series of graph datasets we can use, like AirfRANS⁴, a dataset of RANS simulations over 2D NACA airfoils. Let’s briefly explore how this translates into actual code:

from torch_geometric.datasets import AirfRANS
from matplotlib import pyplot as plt

dataset = AirfRANS(root='/tmp/AirfRANS', task='full', train=True)  # Downloads on first use
example = dataset[0]  # One simulation, stored as a point cloud

fig, ax = plt.subplots(figsize=(12, 5))
ax.set_aspect('equal', adjustable='datalim')
# Scatter the point cloud using coordinates (x, y), colored by the pressure field
im = ax.scatter(*example.pos.T, s=0.5, c=example.y[:, 2])
fig.colorbar(im, ax=ax)
plt.show()

With a grasp of the dataset’s nature, the question arises: how can we train models on it? We transform it into the shape described above with a little help from PyTorch Geometric. The DataLoader lets us loop through the dataset in mini-batches thanks to the batch attribute, a vector that maps each node to its respective graph in the batch. This proves crucial for employing aggregation functions on each simulation, as we’ll delve into shortly.

from torch_geometric.loader import DataLoader

# We are not differentiating train and test datasets here, only inspecting the data
loader = DataLoader(dataset, batch_size=2)  # Batches of 2 simulations
for data in loader:  # Loops through the dataset, returning the batches
    print(data)
    print(data.batch)
    break
>>> DataBatch(x=[351974, 5], y=[351974, 4], pos=[351974, 2], surf=[351974], name=[2], batch=[351974], ptr=[3])
>>> tensor([0, 0, 0, ..., 1, 1, 1])

Having established the data description, what types of models are suitable? PyTorch Geometric offers a range of pre-built neural layers and operations that can be very useful. Perhaps the most traditional one is PointNet¹. A pioneer of the field, this architecture, proposed in 2017, introduced two major advantages that remain important to this day.

The key to PointNet’s success lies in its pooling operation. By condensing the information of an entire simulation into a single vector, it captures global insights and enables the transition from multiple points to a single global value. This effectively addresses, for example, the issue of handling a variable number of points without changing the model’s dimensions. Formally, we consider the dataset 𝒟 = {Dᵢ | i ∈ ℕ, 1 ≤ i ≤ m}, m being the total number of simulations. For each simulation Dᵢ we define the set of points X = {Xⱼ(pⱼ, fⱼ) | j ∈ ℕ, 1 ≤ j ≤ n}, with the concatenated feature vector (pⱼ, fⱼ) ∈ ℝʰ, h ∈ ℕ. Pooling then performs the following transformation (an element-wise maximum, in PointNet’s case):

pool: {X₁, …, Xₙ} ↦ g = max₁≤ⱼ≤ₙ Xⱼ,  g ∈ ℝʰ

This is necessary for classification tasks, where the Point Cloud must be reduced to a single value or vector. Another advantage of pooling is infusing global information into each point, which is useful in segmentation tasks. The global information vector can be concatenated to the feature vector of each point, increasing its dimension. In this way, the model applied to each point has not only its local information but also a general idea of the entire example. Formally, considering z as the total number of points in the unstructured dataset, the series of operations the PointNet architecture performs can be described, in simplified form, as:

g = maxⱼ MLP₁(Xⱼ),  uⱼ = MLP₂([MLP₁(Xⱼ), g]),  1 ≤ j ≤ z

where the same MLP is applied to each point/row and the columns represent the features.
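As an illustration, here is a minimal sketch of this shared-MLP-plus-pooling pattern using PyTorch Geometric’s global_max_pool; the layer sizes are arbitrary and the module is deliberately simplified, not the original PointNet implementation.

import torch
from torch import nn
from torch_geometric.nn import global_max_pool

class MiniPointNet(nn.Module):
    # Simplified PointNet-style segmentation head: a shared per-point MLP,
    # global max pooling, then a per-point prediction from [local, global].
    def __init__(self, in_dim=5, hidden=64, out_dim=4):
        super().__init__()
        self.mlp1 = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden))
        self.mlp2 = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, out_dim))

    def forward(self, x, batch):
        h = self.mlp1(x)                     # same MLP applied to each point
        g = global_max_pool(h, batch)        # one global vector per simulation
        h = torch.cat([h, g[batch]], dim=1)  # concatenate global info to each point
        return self.mlp2(h)                  # per-point field predictions

With the AirfRANS batch above, MiniPointNet()(data.x, data.batch) maps the 5 input features to the 4 target fields, for any number of points per simulation.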

More complex models were proposed subsequently. PointNet++², for example, introduced the idea of applying PointNet at different hierarchy levels. This approach not only concatenates global features but also incorporates vectors representing smaller-scale phenomena. This proves particularly valuable when tackling CFD problems. For instance, when simulating an entire aircraft, distinct behaviors arise across different geometrical components: wings exhibit different phenomena than turbines. Even at a finer scale, variations exist between the back and front of a wing, leading to characteristic pressure distributions.

Graph Neural Networks generalize the aforementioned concepts by depicting data as vertices and edges, reflecting the relationships between nodes. They are analogous to CNNs in that they aggregate local features but, instead of requiring a regular grid, they can be applied directly to an unordered set of points.

A typical GNN encodes each node as a vertex (vⱼ = Xⱼ) with its coordinates as features. The edges represent the connectivity between nodes and encode, for example, the distance between a node and its neighbors (or more complex quantities), e.g., E = {(vᵢ, vⱼ) | ‖pᵢ − pⱼ‖₂ < r}, where p are the coordinates and vᵢ, vⱼ are vertices within a distance r of each other.

Once this graph is computed, it undergoes iterative updates to propagate local information, generating non-linearity and increasingly complex embeddings. The more update steps are performed, the more each node acquires information from farther reaches of the graph. Different graph architectures can be built by defining the update function, the number of updates, the way the graph is initially built, how the neighbors are computed, and many other possible intermediate procedures (for instance, residual connections or even an attention mechanism). A simple updating procedure employs aggregating functions as follows:

vⱼᵏ⁺¹ = f(vⱼᵏ, aggᵢ∈N(j) g(vᵢᵏ, eᵢⱼ))

where agg is the aggregating function over a vertex’s neighborhood N(j), f is the non-linear function performing the update, and g is the function updating the edges, like the Euclidean distance we defined above.

A generalizable model should also have the inductive property of being invariant to rotation, translation, and permutation. A classic graph architecture is GraphSAGE³. Its light computational cost allows it to train efficiently on geometric data with great success. It instantiates the formula above by specifying f as a simple concatenation followed by multiplication with a weight matrix Wᵏ. Suggested aggregators, in turn, are mean or max pooling, while neighbors are uniformly sampled at a fixed size from the whole set. A short sketch of these building blocks follows.
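Sketched with PyTorch Geometric’s building blocks (radius_graph requires the optional torch-cluster package; the radius, feature sizes, and random data here are illustrative):

import torch
from torch_geometric.nn import SAGEConv, radius_graph

# Synthetic stand-in for one graph: 100 points in 2D with 5 features each
pos = torch.rand(100, 2)
x = torch.rand(100, 5)
batch = torch.zeros(100, dtype=torch.long)  # all points belong to one graph

# Connect vertices within distance r, as in E = {(vi, vj) | ||pi - pj|| < r}
edge_index = radius_graph(pos, r=0.1, batch=batch)

# One GraphSAGE update: aggregate neighbors, concatenate, multiply by W^k
conv = SAGEConv(in_channels=5, out_channels=64, aggr='max')
h = conv(x, edge_index)  # updated node embeddings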

Beyond the simpler models described here, more complex and modern graph architectures have been developed in recent years and remain a promising field of research.

Representing the physics

Now that we have identified suitable models, the question arises: how can we effectively incorporate the unique characteristics of physical systems? One commonly used approach is to enforce physical laws through soft constraints or penalization. Specifically, Physics-Informed Neural Networks (PINNs) implement this strategy by penalizing the model with the residual of the PDEs, thereby encouraging adherence to them. In CFD, those are the Navier-Stokes equations. The partial derivatives in each equation can be computed from the model’s prediction by automatic differentiation (AD) using the Deep Learning framework. Similarly, both Dirichlet and Neumann boundary conditions can be incorporated with the same methodology. Consequently, this approach leads to a refined format for the loss function:

L(θ) = LG(Gθ(X), u) + Lᵣₑₛ(Gθ(X)) + LB(Gθ(Xb), B(xb))

where LG and LB represent, respectively, the loss on the data and on the boundary conditions, each a supervised function of two arguments; Lᵣₑₛ is the residual function of the PDE; Xb and xb are the inputs of the model and of the PDE at boundary-condition points; B retrieves the boundary-condition values themselves; and Gθ is the forward function of the network with parameters θ.

Consequently, the model learns to respect both the pressure and velocity balances, and generalizes better to unseen data. A minimal sketch of the residual computation follows.
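The sketch below computes the continuity (mass conservation) residual alone with automatic differentiation; model stands for any network mapping (x, y) to (u, v, p), and the names are illustrative, not a reference implementation.

import torch

def continuity_residual(model, xy):
    # Residual of mass conservation, du/dx + dv/dy, for 2D incompressible flow
    xy = xy.clone().requires_grad_(True)
    uvp = model(xy)                  # the network predicts (u, v, p) per point
    u, v = uvp[:, 0], uvp[:, 1]
    du = torch.autograd.grad(u.sum(), xy, create_graph=True)[0]
    dv = torch.autograd.grad(v.sum(), xy, create_graph=True)[0]
    return du[:, 0] + dv[:, 1]       # should vanish where mass is conserved

# Physics term of the loss: mean squared residual at collocation points, e.g.
# loss_res = continuity_residual(net, collocation_points).pow(2).mean()

The momentum equations follow the same pattern, with second derivatives obtained by calling torch.autograd.grad again on the first derivatives.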

The loss expression highlights that PINNs can be used in a supervised fashion, being both data- and physics-driven. Nevertheless, the LG term is optional, and the network can be trained in an unsupervised manner, at the cost of a much larger space of solutions. As initial predictions are inherently random, optimizing the network proves challenging, and identifying an appropriate search objective remains an open question in Machine Learning. Various solutions can be imagined, from alternative loss functions and optimization strategies to enforcing the boundary conditions as hard constraints, where a component of the network must directly satisfy the specified values.

High-frequency signals

Geometric Deep Learning models face a known challenge: slow convergence and a struggle to learn high-frequency functions. This phenomenon is particularly prominent in fluid dynamics, especially when dealing with turbulent flows. To address this concern, the field of Implicit Neural Representations offers effective strategies.

These transformations are rooted in the fact that deep networks are biased toward learning lower-frequency functions; such techniques therefore improve the representation of higher-frequency ones. An example is the expansion by Fourier features:

γ(x) = [a₁cos(2πb₁ᵀx), a₁sin(2πb₁ᵀx), …, aₘcos(2πbₘᵀx), aₘsin(2πbₘᵀx)]ᵀ

Employing this technique with neural networks facilitates learning high-frequency functions within low-dimensional problem domains like geometric contexts. Adjusting the frequency parameters bⱼ allows manipulating the spectrum of frequencies the model can grasp.

In practical terms, superior outcomes are achieved by setting aⱼ = 1 and drawing the bⱼ from a random distribution, usually Gaussian. Fine-tuning can then be performed on the standard deviation of this distribution: a broader distribution expedites convergence of high-frequency components, leading to improved results (notably in image-related tasks, yielding higher definition), while an excessively wide one introduces artifacts into the output (a noisy image), presenting a trade-off between underfitting and overfitting.
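A minimal sketch of such a mapping (the Gaussian matrix B plays the role of the bⱼ above, with aⱼ = 1; sigma is the tunable standard deviation):

import torch

class FourierFeatures(torch.nn.Module):
    # gamma(x) = [cos(2*pi*x B), sin(2*pi*x B)], with B ~ N(0, sigma^2) fixed
    def __init__(self, in_dim=2, n_features=64, sigma=10.0):
        super().__init__()
        self.register_buffer('B', sigma * torch.randn(in_dim, n_features))

    def forward(self, x):
        proj = 2 * torch.pi * x @ self.B
        return torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1)

Prepended to a plain MLP, the network then sees 2 * n_features inputs instead of the raw low-dimensional coordinates.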

A more modern approach uses Sinusoidal Representation Networks (SIRENs), which propose periodic activation functions of the form:

φ(x) = sin(Wx + b)

Beyond improving the high-frequency representation, SIRENs also have well-behaved derivatives: the derivative of a SIREN is itself a SIREN, just as the derivative of the sine is the cosine, a phase-shifted sine. Not all common activation functions possess this quality; for instance, ReLU has a discontinuous derivative and a zero second derivative everywhere. Some other functions do present this capability, such as Softplus, Tanh, or ELU; however, their derivatives can be poorly behaved and fail to represent the fine details sought.

This makes them well-suited to inverse problems, like the PDEs we are so interested in. Furthermore, SIRENs have been shown to converge faster than comparable architectures.

To achieve the intended outcomes, an appropriate initialization scheme is essential. It preserves the distribution of activations throughout the network, so that the final output does not depend on the number of layers. The solution is a uniform initialization of the form:

wᵢ ∼ 𝒰(−√(6/n), √(6/n))

so that the input to each unit is normally distributed with a standard deviation of 1, n being the layer’s input dimension. Moreover, the first layer of the network should span multiple periods over [−1, 1], which can be achieved using ω₀ = 30 in sin(ω₀ · Wx + b); this value should be adapted to the frequency of the modeled function and the number of observations. The limitation of using a single frequency was later addressed by the so-called modulated SIRENs.
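Here is a sketch of one such layer with this initialization; the is_first flag switches to the wider first-layer scheme, and the dimensions are illustrative.

import math
import torch
from torch import nn

class SineLayer(nn.Module):
    # SIREN layer: sin(omega0 * (W x + b)) with the uniform initialization
    def __init__(self, in_dim, out_dim, omega0=30.0, is_first=False):
        super().__init__()
        self.omega0 = omega0
        self.linear = nn.Linear(in_dim, out_dim)
        with torch.no_grad():
            # First layer spans multiple periods over the input range; hidden
            # layers use sqrt(6/n) / omega0 to keep pre-activations standard normal
            bound = 1.0 / in_dim if is_first else math.sqrt(6.0 / in_dim) / omega0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega0 * self.linear(x))

A full SIREN is then a stack, e.g. SineLayer(2, 256, is_first=True), SineLayer(256, 256), ..., followed by a final nn.Linear.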

Conclusion

Hopefully, you now have a clearer understanding of how Deep Learning can provide surrogate models for numerical simulations, even when dealing with unstructured and noisy data. The network’s ability to generalize can be improved through various techniques, ranging from incorporating the physics of the underlying partial differential equations (PDEs) to using Implicit Neural Representations, among others we haven’t had a chance to explore. This dynamic research field is poised to expand significantly in the coming years as it becomes more reliable. Despite the name, this approach doesn’t seek to replace numerical simulations. Instead, it offers a quicker alternative that leverages the simulations themselves and even experimental data. If we can combine fluid dynamics simulations, physical equations, and Deep Learning, why restrict ourselves to just one of them?

References

[1]: PointNet

C. R. Qi, H. Su, K. Mo, L. J. Guibas, PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation, arXiv:1612.00593 (Apr. 2017). URL: http://arxiv.org/abs/1612.00593

[2]: PointNet++

C. R. Qi, L. Yi, H. Su, L. J. Guibas, PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space, arXiv:1706.02413 (Jun. 2017). URL: http://arxiv.org/abs/1706.02413

[3]: GraphSage

W. L. Hamilton, R. Ying, J. Leskovec, Inductive Representation Learning on Large Graphs, arXiv:1706.02216 (Sep. 2018). URL: http://arxiv.org/abs/1706.02216

[4]: AirfRANS

F. Bonnet, A. J. Mazari, P. Cinnella, P. Gallinari, AirfRANS: High Fidelity Computational Fluid Dynamics Dataset for Approximating Reynolds-Averaged Navier-Stokes Solutions, arXiv:2212.07564 (2023). URL: http://arxiv.org/abs/2212.07564

[5]: Point-GNN

W. Shi, R. R. Rajkumar, Point-GNN: Graph Neural Network for 3D Object Detection in a Point Cloud, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 1708-1716.

[6]: Scientific Machine Learning through Physics-Informed Neural Networks: Where we are and What’s next

S. Cuomo, V. S. Di Cola, F. Giampaolo, Scientific Machine Learning Through Physics-Informed Neural Networks: Where We Are and What's Next, J. Sci. Comput. 92, 88 (2022). URL: https://doi.org/10.1007/s10915-022-01939-z

[7]: Learning differentiable solvers for systems with hard constraints

G. Negiar, M. W. Mahoney, A. S. Krishnapriyan, Learning Differentiable Solvers for Systems with Hard Constraints, arXiv:2207.08675 (2022). URL: http://arxiv.org/abs/2207.08675

│ │ Let's build the future of AI together → │ │ https://towardsai.net/contribute │ │ │ └───────────────────────────────────────────────────────────────────┘ `, `background: ; color: #00adff; font-size: large`); //Remove latest category across site document.querySelectorAll('a[rel="category tag"]').forEach(function(el) { if (el.textContent.trim() === 'Latest') { // Remove the two consecutive spaces (  ) if (el.nextSibling && el.nextSibling.nodeValue.includes('\u00A0\u00A0')) { el.nextSibling.nodeValue = ''; // Remove the spaces } el.style.display = 'none'; // Hide the element } }); // Add cross-domain measurement, anonymize IPs 'use strict'; //var ga = gtag; ga('config', 'G-9D3HKKFV1Q', 'auto', { /*'allowLinker': true,*/ 'anonymize_ip': true/*, 'linker': { 'domains': [ 'medium.com/towards-artificial-intelligence', 'datasets.towardsai.net', 'rss.towardsai.net', 'feed.towardsai.net', 'contribute.towardsai.net', 'members.towardsai.net', 'pub.towardsai.net', 'news.towardsai.net' ] } */ }); ga('send', 'pageview'); -->