
Knowledge Graph QA using Gemini and NebulaGraph Lite

Last Updated on March 25, 2024 by Editorial Team

Author(s): Rajesh K

Originally published on Towards AI.

Graph databases and knowledge graphs are among the most widely adopted solutions for managing data represented as graphs, consisting of nodes (entities) and edges (relationships). A graph database stores information using a graph-based data model, enabling querying and traversal of the graph structure through specialized graph query languages.

Knowledge graphs extend the capabilities of graph databases by incorporating mechanisms to infer and derive new knowledge from the existing graph data. This added expressiveness allows for more advanced data analysis and extraction of insights across the interconnected data points within the graph.

This article gives a brief introduction to knowledge graphs, followed by a walkthrough of generating a knowledge graph using LlamaIndex and nebulagraph-lite.

What is a Knowledge Graph?

A Knowledge Graph serves as a graphical portrayal of interconnected ideas, items, and their relationships, depicted as a network. It includes real-world entities like objects, people, places, and situations. At its core, a Knowledge Graph usually relies on a graph database, which is specifically crafted to manage data by storing discrete pieces of information and the connections between them.

The core components of a knowledge graph are:

Entities are real-world objects or concepts: people, places, events, and abstract ideas. In graph form, entities are represented as the nodes of the graph.

Examples:

  • Persons: Barack Obama, Serena Williams
  • Places: New York City, the Great Pyramids
  • Events: World War II, the 2008 financial crisis
  • Abstract concepts: democracy, gravity

Relationships describe how entities are connected and how they interact with one another. They are represented as edges connecting the corresponding nodes in the knowledge graph. An edge can be unidirectional or bidirectional, depending on the nature of the relationship, as sketched below.
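At their simplest, these two components can be captured in code as a list of (subject, relationship, object) triples. The snippet below is purely illustrative and not tied to any particular library:

# Illustrative only: entities become nodes, relationships become directed edges.
triples = [
    ("Barack Obama", "born_in", "Honolulu"),
    ("Serena Williams", "plays", "tennis"),
    ("World War II", "ended_in", "1945"),
]
entities = {s for s, _, _ in triples} | {o for _, _, o in triples}
edges = [(s, o, {"relationship": r}) for s, r, o in triples]
print(f"{len(entities)} entities, {len(edges)} relationships")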

Categories of Knowledge Graphs

Knowledge graphs have the ability to:

  • Effectively manage and visualize heterogeneous information: data with varied structure can be handled within a unified framework, enabling a clear and insightful representation.
  • Integrate new data sources: knowledge graphs are inherently flexible enough to incorporate data from novel sources, fostering continuous expansion of the knowledge base.
  • Comprehend and depict relationships across any data store: they can discover and represent the interconnections among entities residing in various data repositories, enabling a holistic understanding of the underlying relationships.

The two main graph categories are:

  • RDF (Resource Description Framework) Triple Stores: this class focuses on storing and managing data structured according to the RDF framework, which uses triples (subject, predicate, object) to represent knowledge.
  • Labeled Property Graphs: this class focuses on graphs in which nodes and edges are enriched with informative labels and properties, offering a more expressive and nuanced representation of the data.

RDF (Resource Description Framework) Graphs

RDF (Resource Description Framework) graphs are a way to represent data on the web as a network-like structure. They are essentially a collection of statements built from subjects, predicates, and objects.

Imagine a sentence like “Paris is the capital of France”. In an RDF graph, “Paris” would be the subject, “is the capital of” the predicate, and “France” the object. Together, these three elements form a single “triple” that represents a fact. An RDF graph can contain many such triples, building a web of interconnected facts.

The RDF triple store constitutes a standardized data model for knowledge representation. Within this model, every element is assigned a unique identifier using Uniform Resource Identifiers (URIs), guaranteeing machine-readable identity for subjects, predicates, and objects. RDF triple stores also support a standardized query language, SPARQL, for retrieving data from the store. Because both the data representation and the querying are standardized, an RDF triple store is interoperable with any other knowledge graph that adheres to the RDF framework.

The referenced figure depicts individuals (round nodes) in a social network. Directed links represent friendships between them. In addition, a hasIncome property is attached to each individual (dark-rimmed nodes), and diamond-shaped nodes indicate that additional data (triples) exists in the network.
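To make the subject–predicate–object structure concrete, here is a minimal sketch using the rdflib Python library (our choice for illustration; the demo later in this article uses different tooling):

from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()

# One triple encoding the fact "Paris is the capital of France"
g.add((EX.Paris, EX.isCapitalOf, EX.France))

# SPARQL retrieves facts back out of the store
query = 'SELECT ?s WHERE { ?s <http://example.org/isCapitalOf> <http://example.org/France> }'
for row in g.query(query):
    print(row.s)  # -> http://example.org/Paris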

Pros of RDF graphs:

  • Interoperability: RDF is a W3C standard, which means that different systems can understand and exchange data stored in RDF graphs. This makes it a great choice for sharing data across platforms and applications.
  • Standardization: thanks to the standardized format, RDF graphs come with a standard query language, SPARQL. This makes it easier to explore and analyze the data stored in the graph.
  • Reasoning and Inference: RDF graphs can leverage ontologies (think of them as formal descriptions of concepts) to reason about the data. This allows the system to infer new facts that are not explicitly stated in the graph.
  • Flexibility: RDF graphs can represent a wide variety of data types and relationships. This makes them suitable for modeling complex domains and integrating data from different sources.

Cons of RDF graphs:

  • Complexity for Deep Searches: traversing large RDF graphs for deep searches can be computationally expensive. This can slow down queries that need to explore many connections.
  • Strict Structure: RDF data is stored as triples (subject, predicate, object). This can be less flexible than other graph models that allow properties on entities or relationships themselves.
  • Steeper Learning Curve: understanding and working with RDF requires a good grasp of the underlying standards and the SPARQL query language. This can pose a challenge for new users.

Labeled Property Graphs (LPGs)

Labeled property graphs (LPGs) are a type of graph database model used to represent information as interconnected entities and their relationships. Here is a breakdown of their building blocks:

  • Nodes: these represent individual entities in the real world. Each node has a unique identifier and can be assigned one or more labels to indicate its type (e.g., “Person”, “Product”).
  • Properties: nodes can have key-value pairs associated with them, which store additional data about the entity. These fields allow richer descriptions of the elements in the graph.
  • Edges: these represent connections between nodes and capture the relationships between entities. Like nodes, edges can be labeled in a variety of ways (e.g., “knows”, “buys”) and can also have their own properties, as in the sketch after this list.
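As a rough, database-agnostic sketch (all names and fields here are invented for illustration), an LPG's nodes and edges can be modeled like this:

# Nodes: a unique id, one or more labels, and key-value properties
alice = {"id": "n1", "labels": ["Person"], "properties": {"name": "Alice", "age": 34}}
book = {"id": "n2", "labels": ["Product"], "properties": {"title": "Graph Databases"}}

# Edges: typed and directed, and able to carry their own properties
buys = {"type": "BUYS", "from": "n1", "to": "n2", "properties": {"date": "2024-03-01"}}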

Key characteristics of LPGs:

  • Rich data structure: The ability to have properties on both nodes and edges allows for denser and more informative data representation compared to other models like RDF.
  • Efficient storage and querying: The LPG structure often leads to efficient storage mechanisms and faster traversal of connections within the graph for queries.
  • Flexibility: LPGs are flexible due to the lack of a predefined schema. This allows for modeling diverse data relationships.

RDF vs. Property Graphs

Property Graphs QA with LLM

Property graphs and Large Language Models (LLMs) are powerful tools that can be used together to gain new insights from data. Here’s how they can work together:

Data Augmentation:

  • LLMs can be used to generate text descriptions for nodes and edges in a property graph. This can enrich the data and make it easier for other tools or users to understand the relationships.
  • LLMs can also be used to generate new nodes and edges based on existing data in the graph. This can be helpful for tasks like anomaly detection or fraud prediction.

Querying and Exploration:

  • LLMs can be used to create natural language interfaces for querying property graphs. This allows users to ask questions about the data in a more intuitive way than using a traditional graph query language.
  • LLMs can also be used to summarize the results of graph queries and generate explanations for the findings.

Reasoning and Inference:

  • LLMs can be used to perform reasoning tasks over property graphs. This could involve inferring new relationships between nodes based on existing data or identifying inconsistencies in the graph.

Below is a stepwise demonstration of building a knowledge graph with LlamaIndex's KnowledgeGraphIndex and NebulaGraph Lite, using the Google Gemini LLM in Google Colab.

Generate API Key for Gemini

Head over to https://aistudio.google.com/app/prompts/new_chat and generate a new API key.

Loading the PDF document

# Create a working directory and download the source PDF (an arXiv survey on graph anomaly detection)
! mkdir ad
! curl https://arxiv.org/pdf/2106.07178.pdf --output AD1.pdf
! mv *.pdf ad/
! pip install -q transformers

%pip install llama_index pyvis Ipython langchain pypdf llama-index-llms-huggingface llama-index-embeddings-langchain llama-index-embeddings-huggingface
%pip install --upgrade --quiet llama-index-llms-gemini google-generativeai
%pip install --upgrade --quiet llama-index-graph-stores-nebula nebulagraph-lite

Setting the Google API key (read from a Colab secret)

import os

from google.colab import userdata
GOOGLE_API_KEY = userdata.get('GOOGLE_API_KEY')
os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY

Importing the required modules and libraries

import logging
import sys

logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))
from llama_index.core import (
    ServiceContext,
    KnowledgeGraphIndex,
)
from llama_index.core import SimpleDirectoryReader
from llama_index.core.storage.storage_context import StorageContext
from pyvis.network import Network

from llama_index.llms.huggingface import HuggingFaceLLM

Check the supported Gemini models. Here we will use the Gemini 1.0 Pro model.

import google.generativeai as genai


# List the models that support content generation
for m in genai.list_models():
    if "generateContent" in m.supported_generation_methods:
        print(m.name)
        print(m)
from llama_index.llms.gemini import Gemini

llm = Gemini(model="models/gemini-1.0-pro-latest")
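As an optional smoke test (not part of the original walkthrough), you can confirm the LLM responds before wiring it into the index:

# Optional: verify the API key and model work end to end
print(llm.complete("In one sentence, what is a knowledge graph?"))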

Import the BGE embedding

from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.core import ServiceContext


embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
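Optionally (again an extra sanity check, not part of the original flow), embed a test string to confirm the model downloads and loads correctly; bge-small-en-v1.5 produces 384-dimensional vectors:

# Optional: embed a sample string and inspect the vector dimension
vec = embed_model.get_text_embedding("knowledge graphs connect entities")
print(len(vec))  # expected: 384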

Loading the contents of the ad directory

documents = SimpleDirectoryReader("/content/ad").load_data()
print(len(documents))

Starting the NebulaGraph Lite instance locally

from nebulagraph_lite import nebulagraph_let

# Bootstraps a local NebulaGraph instance inside the notebook runtime
# (this can take several minutes on Colab)
n = nebulagraph_let(debug=False)
n.start()

Setting up a graph space named “nebula_ad” and its schema in the Nebula store

# The %ngql magic comes from the ipython-ngql extension (our assumption: it is not preinstalled)
%pip install --quiet ipython-ngql
%load_ext ngql
%ngql --address 127.0.0.1 --port 9669 --user root --password nebula

# Create the graph space:
%ngql CREATE SPACE nebula_ad(vid_type=FIXED_STRING(256), partition_num=1, replica_factor=1)
import time

print("Waiting...")

# Give the new space ~10 seconds to come online before defining its schema
time.sleep(10)

%ngql USE nebula_ad;
%ngql CREATE TAG entity(name string);
%ngql CREATE EDGE relationship(relationship string);
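As a quick optional check that the schema exists, standard nGQL statements can list the tags and edge types:

%ngql SHOW TAGS;
%ngql SHOW EDGES;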

Loading the document data to the graph store

import os

os.environ["NEBULA_USER"] = "root"
os.environ["NEBULA_PASSWORD"] = "nebula"  # default is "nebula"
os.environ["NEBULA_ADDRESS"] = "127.0.0.1:9669"  # assumes NebulaGraph is running locally

space_name = "nebula_ad"
edge_types, rel_prop_names = ["relationship"], ["relationship"]  # defaults; can be omitted when starting from an empty KG
tags = ["entity"]  # default; can be omitted when starting from an empty KG

from llama_index.core import StorageContext
from llama_index.graph_stores.nebula import NebulaGraphStore

graph_store = NebulaGraphStore(
    space_name=space_name,
    edge_types=edge_types,
    rel_prop_names=rel_prop_names,
    tags=tags,
)
storage_context = StorageContext.from_defaults(graph_store=graph_store)
from llama_index.core import Settings

Settings.llm = llm
Settings.embed_model = embed_model
Settings.chunk_size = 512

Building the index and writing the node and relationship data to the graph store


# NOTE: can take a while!
index = KnowledgeGraphIndex.from_documents(
    documents,
    storage_context=storage_context,
    max_triplets_per_chunk=10,
    space_name=space_name,
    edge_types=edge_types,
    rel_prop_names=rel_prop_names,
    tags=tags,
    include_embeddings=True,
)
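The pyvis Network class was imported earlier but never used; as an optional extra (a sketch assuming KnowledgeGraphIndex's get_networkx_graph helper, which requires networkx), you can render the extracted triplets interactively:

# Optional: visualize the extracted graph with pyvis (imported above)
g = index.get_networkx_graph()
net = Network(notebook=True, directed=True)
net.from_nx(g)
net.show("kg.html")  # writes and displays an interactive HTML view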

Checking the inserted graph data in the Nebula store

# Sample a few relationships with an openCypher-style MATCH query
%ngql USE nebula_ad;
%ngql MATCH ()-[e]->() RETURN e LIMIT 10


Now query the indexed data

query_engine = index.as_query_engine()
from IPython.display import display, Markdown

response = query_engine.query(
    "Tell me about Anomaly?",
)
display(Markdown(f"<b>{response}</b>"))

Anomalies, also known as outliers, exceptions, peculiarities, rarities, novelties, etc., in different application fields, refer to abnormal objects that are significantly different from the standard, normal, or expected
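The default engine above works out of the box, but because the index was built with include_embeddings=True, retrieval can optionally be switched to hybrid (keyword plus embedding) mode, following the kwargs used in the LlamaIndex knowledge graph examples:

# Optional: hybrid retrieval over the KG, per the LlamaIndex KG examples
query_engine = index.as_query_engine(
    include_text=True,
    response_mode="tree_summarize",
    embedding_mode="hybrid",
    similarity_top_k=5,
)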

response = query_engine.query(
    "What are graph anomalies?",
)
display(Markdown(f"<b>{response}</b>"))

Graph anomalies can be defined as structural anomalies.

Conclusion

These simple knowledge graphs demonstrably capture intricate relationships between entities, which enables significantly more precise, diverse, and complex querying and reasoning. They can also be extended to complex RDF-based ontology graphs. In future articles, we will dig deeper into how knowledge graphs can be used for RAG and LLM fine-tuning.

Happy Graphing!

References

https://link.springer.com/chapter/10.1007/978-3-642-21295-6_13

https://github.com/nebula-contrib/nebulagraph-lite

https://docs.llamaindex.ai/en/stable/
