OpenAI’s O3 Mini

Last Updated on February 5, 2025 by Editorial Team

Author(s): Naveen Krishnan

Originally published on Towards AI.

1. Introduction

In this blog, we take a close look at OpenAI's O3‑mini, a lightweight but powerful reasoning model that makes advanced reasoning and natural language processing more accessible and cost‑effective. O3‑mini is the latest evolution in OpenAI's series of cost‑efficient reasoning models. It is designed to provide high performance for math, coding, and science applications while minimizing latency and expense. Unlike earlier iterations such as O1‑mini, O3‑mini introduces new capabilities such as customizable reasoning effort levels (low, medium, high), structured outputs, and integrated function calling, which together enable developers to fine‑tune responses for their specific needs.

Key highlights of O3‑mini include:

  • Enhanced STEM Reasoning: Exceptional performance in scientific computations, mathematics, and coding.
  • Customizable Reasoning Effort: Developers can choose among reasoning levels to balance speed and accuracy.
  • Improved Latency and Cost Efficiency: Optimized architecture leads to faster responses and lower operational costs.
  • Seamless Integration with Azure AI Foundry: Leveraging Azure’s secure and scalable infrastructure, O3‑mini can be deployed in diverse environments and integrated into AI agents.

2. Overview of OpenAI’s O3‑mini Model

In this section, we examine the inner workings of O3‑mini, its key features, and its benefits compared to earlier models and other competitors in the market.

2.1. Key Features and Capabilities

O3‑mini offers an array of features that make it uniquely suited for technical applications:

  • Function Calling: Allows the model to execute custom functions based on the context provided. This is particularly useful for interactive applications where dynamic behavior is required.
  • Structured Outputs: Supports generating well‑defined outputs (e.g., JSON, CSV), which simplifies data handling and further processing.
  • Customizable Reasoning Effort: Developers can set the reasoning level to “low,” “medium,” or “high” depending on the complexity of the task. For instance, a quick lookup might use low effort, whereas complex mathematical problem solving might require high effort.
  • Streaming Responses: Reduces latency by delivering parts of the response as they are generated. This is critical for real‑time applications like chatbots.
  • Optimized for STEM: Enhanced capabilities in mathematical computations, scientific reasoning, and code generation set it apart from general‑purpose models.

These features are integrated into the O3‑mini architecture to deliver high‑quality results while ensuring efficiency and scalability.
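To make the reasoning‑effort option concrete, below is a minimal sketch, assuming the pre‑1.0 openai SDK configuration described later in Section 4.1 (that SDK forwards extra parameters such as reasoning_effort in the request body); the deployment name here is a placeholder:

import openai

# Minimal sketch: select a reasoning effort level per request.
# Assumes openai.api_type, api_key, api_base, and api_version are
# already configured for Azure (see Section 4.1).
def ask(prompt, effort="medium"):
    response = openai.ChatCompletion.create(
        deployment_id="o3-mini-deployment",  # placeholder deployment name
        messages=[{"role": "user", "content": prompt}],
        reasoning_effort=effort,  # "low", "medium", or "high"
    )
    return response["choices"][0]["message"]["content"]

# A quick lookup can run at low effort; a hard proof merits high effort.
print(ask("In what year was Python first released?", effort="low"))
print(ask("Prove that the square root of 2 is irrational.", effort="high"))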

2.2. Optimizations for STEM Reasoning

One primary focus of O3‑mini is its superior performance on STEM-related tasks. Here are some of the optimizations that contribute to its strength in these domains:

  • Mathematical Computations: The model has been trained extensively on datasets that include complex mathematical problems and coding challenges, leading to higher accuracy in computations and algorithmic reasoning.
  • Coding Capabilities: Whether it’s generating code snippets, debugging, or suggesting improvements, O3‑mini has shown marked improvements over its predecessors.
  • Scientific Reasoning: In areas such as physics, chemistry, and biology, the model can process and understand technical literature to provide accurate and contextually relevant responses.

Benchmarks and internal evaluations (see Section 5) have shown that O3‑mini, even at medium reasoning effort, can match or exceed the performance of older models like O1‑mini on rigorous tests such as AIME (American Invitational Mathematics Examination) problems and GPQA (Graduate-Level Google-Proof Q&A) questions.

2.3. Cost Efficiency and Latency Improvements

In addition to its reasoning capabilities, O3‑mini has been engineered to offer significant cost savings and lower latency:

  • Cost Efficiency: By optimizing the model size and inference process, OpenAI has reduced per‑token pricing by up to 95% compared to larger models. This makes O3‑mini a very attractive option for applications that require high volumes of processing without incurring exorbitant costs.
  • Lower Latency: Optimizations in the model’s architecture and inference pipeline have resulted in a reduced time to first token. Early adopters have reported a reduction in latency of over 25% when switching from O1‑mini to O3‑mini. This is crucial for interactive applications like chatbots and real‑time data processing systems.

Together, these factors make O3‑mini an excellent choice for developers looking to deploy advanced reasoning solutions at scale while managing costs effectively.

3. Azure AI Foundry: Your Gateway to O3‑mini

Azure AI Foundry is an integrated platform that brings together powerful AI models, tools, and infrastructure. In this section, we discuss how to set up and use Azure AI Foundry to deploy and interact with O3‑mini.

3.1. Setting Up an Azure AI Foundry Account

Before you can begin using O3‑mini, you must have an Azure account and set up an Azure AI Foundry resource. Here’s a step‑by‑step guide to getting started:

  1. Create an Azure Account:
    If you haven’t already, sign up for an Azure account to take advantage of free credits and a broad suite of services.
  2. Access Azure AI Foundry:
    Navigate to the Azure AI Foundry portal. You can do this by logging into the Azure portal and selecting the AI Foundry resource from the available services.
  3. Create a New Project:
    In the AI Foundry portal, click on “+ Create project” and follow the prompts to set up a new project. You’ll need to provide a unique project name and select a hub or workspace.
  4. Deploy Your Model:
    Within the project, navigate to the “Models + endpoints” section. Here, you can deploy the O3‑mini model by selecting “+ Deploy model” and choosing the O3‑mini option. Follow the on‑screen instructions to complete deployment.
  5. Verify Deployment:
    Once deployed, you will see O3‑mini listed alongside other models. You can test the model in the portal’s playground before integrating it into your application.

3.2. Retrieving Your API Key and Endpoint

To connect to O3‑mini from your code, you need your API key and endpoint URL. These credentials are available in the Azure AI Foundry portal:

  1. Locate Your Deployment:
    In the “Models + endpoints” section, click on your O3‑mini deployment. On the model details page, you will find the “Keys & Endpoint” section.
  2. Copy the Credentials:
    Copy your API key (you may have two keys available for rotation) and the endpoint URL. The endpoint typically looks like:
    https://<your-resource-name>.openai.azure.com/
  3. Store Credentials Securely:
    It is best practice to store these credentials in environment variables or a secure vault (such as Azure Key Vault) rather than hard‑coding them in your application.

For example, on a Windows system you might set them as follows:

setx AZURE_OPENAI_API_KEY "your_api_key_here"
setx AZURE_OPENAI_ENDPOINT "https://your-resource-name.openai.azure.com/"
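
On Linux or macOS, the equivalent for the current shell session is:

export AZURE_OPENAI_API_KEY="your_api_key_here"
export AZURE_OPENAI_ENDPOINT="https://your-resource-name.openai.azure.com/"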

Alternatively, you can store them in a .env file (see Section 4.1 for a full listing) and use the python-dotenv package to load them in your Python application.

3.3. Navigating the Azure AI Foundry Portal

The Azure AI Foundry portal offers a rich set of tools and dashboards to manage your AI models:

  • Dashboard Overview: Get an overview of your deployed models, usage statistics, and health metrics.
  • Deployment Details: Review detailed logs, error messages, and performance metrics for each deployment.
  • Playground: Experiment with O3‑mini in a sandbox environment, test various prompts, and see immediate outputs.
  • Resource Management: Easily rotate keys, configure endpoint settings, and set usage quotas to optimize cost and performance.

Familiarizing yourself with these tools will help you monitor and optimize your deployments as your application scales.

4. Connecting to O3‑mini Using the Azure AI Foundry API

Once you have your API key and endpoint, the next step is to integrate O3‑mini into your application. In this section, we provide a detailed walkthrough of connecting to O3‑mini using Python.

4.1. API Key–Based Authentication

Authentication with Azure AI Foundry is performed via API keys. The key must be included in every request to ensure secure access to your deployed model. The basic steps are as follows:

Install the Required Libraries:
Ensure you have the openai and python-dotenv Python packages installed:

pip install openai python-dotenv

Set Up Environment Variables:
Create a .env file in your project directory with the following content:

AZURE_OPENAI_API_KEY=your_api_key_here
AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
AZURE_OPENAI_MODEL_NAME=o3-mini
AZURE_OPENAI_DEPLOYMENT_NAME=o3-mini-deployment
AZURE_OPENAI_API_VERSION=2024-02-01

Load Environment Variables in Your Code:
Use the python-dotenv package to load these variables:

from dotenv import load_dotenv
import os

load_dotenv()

api_key = os.getenv("AZURE_OPENAI_API_KEY")
endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
model_name = os.getenv("AZURE_OPENAI_MODEL_NAME")
deployment_name = os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME")
api_version = os.getenv("AZURE_OPENAI_API_VERSION")

4.2. Step‑by‑Step Code Walkthrough

Below is a sample Python script that connects to the O3‑mini model using Azure AI Foundry and retrieves a response. This example demonstrates how to send a prompt and print the resulting output:

import os
import openai
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Retrieve environment variables
AZURE_OPENAI_API_KEY = os.getenv("AZURE_OPENAI_API_KEY")
AZURE_OPENAI_ENDPOINT = os.getenv("AZURE_OPENAI_ENDPOINT")
AZURE_OPENAI_MODEL_NAME = os.getenv("AZURE_OPENAI_MODEL_NAME")
AZURE_OPENAI_DEPLOYMENT_NAME = os.getenv("AZURE_OPENAI_DEPLOYMENT_NAME")
AZURE_OPENAI_API_VERSION = os.getenv("AZURE_OPENAI_API_VERSION")

# Configure the OpenAI library for Azure
openai.api_type = "azure"
openai.api_key = AZURE_OPENAI_API_KEY
openai.api_base = AZURE_OPENAI_ENDPOINT
openai.api_version = AZURE_OPENAI_API_VERSION

# Define a function to get a response from O3-mini
def get_o3mini_response(prompt, max_tokens=150):
    try:
        response = openai.ChatCompletion.create(
            deployment_id=AZURE_OPENAI_DEPLOYMENT_NAME,  # Deployment name from Azure AI Foundry
            model=AZURE_OPENAI_MODEL_NAME,
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": prompt}
            ],
            max_tokens=max_tokens,
            temperature=0.7,  # Note: some reasoning-model deployments reject sampling parameters; remove this if the API errors
            stream=False  # Set to True if you want streaming responses
        )
        return response["choices"][0]["message"]["content"].strip()
    except Exception as e:
        print("Error obtaining response:", e)
        return None

# Example usage
if __name__ == "__main__":
    user_prompt = "Explain the significance of the Pythagorean theorem in modern mathematics."
    output = get_o3mini_response(user_prompt)
    if output:
        print("O3-mini Response:")
        print(output)

Explanation of the Code:

  • Library Configuration: We configure the openai library to use Azure’s API by setting api_type, api_key, api_base, and api_version.
  • Deployment ID: The deployment_id parameter ensures that our request targets the correct deployment of O3‑mini on Azure AI Foundry.
  • Message Sequence: The sample sends a system message (“You are a helpful assistant.”) followed by a user message containing the prompt.
  • Error Handling: If an exception occurs, it prints an error message.

This sample is a baseline for more complex integrations and can be extended with additional error handling, logging, and asynchronous support as needed.

A more robust version of the code might include these elements to ensure reliability in production environments.
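For instance, here is a minimal streaming sketch building on the script above; with stream=True, the pre‑1.0 SDK returns an iterator of chunks whose delta fields carry incremental content (the exact chunk shape can vary by API version, so treat this as an illustration):

def stream_o3mini_response(prompt):
    try:
        response = openai.ChatCompletion.create(
            deployment_id=AZURE_OPENAI_DEPLOYMENT_NAME,
            model=AZURE_OPENAI_MODEL_NAME,
            messages=[{"role": "user", "content": prompt}],
            stream=True  # Deliver the response incrementally
        )
        for chunk in response:
            # Azure may emit chunks with an empty choices list; guard against that
            if chunk["choices"]:
                delta = chunk["choices"][0].get("delta", {})
                if "content" in delta:
                    print(delta["content"], end="", flush=True)
        print()
    except Exception as e:
        print("Error streaming response:", e)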

5. Performance Benchmarks and Comparisons

When comparing O3‑mini to models like GPT‑4 Turbo, Gemini, and Mistral, several factors come into play:

  • Reasoning Accuracy: Evaluations using standardized tests (e.g., AIME, GPQA) indicate that O3‑mini, even with medium reasoning effort, can match or exceed older models such as O1‑mini.
  • Latency: Benchmarks show that O3‑mini responds 20%–25% faster on average than comparable models. The improved architecture reduces the time to first token by over 2 seconds in many scenarios.
  • Cost Efficiency: With per‑token pricing reduced by up to 95% compared to larger models, O3‑mini is significantly more cost‑effective for high‑volume applications.
  • Scalability: Due to its lightweight design, O3‑mini scales horizontally very well on cloud infrastructure, particularly within Azure AI Foundry.

A simplified comparison table summarizing these key metrics appears in the original post; the values shown are approximate and derived from internal benchmarks and independent testing.

6. Fine‑Tuning and Customization

One of the most powerful aspects of O3‑mini is its ability to be fine‑tuned and customized to meet specific needs. Fine‑tuning involves adapting the model on a custom dataset so that it performs better on tasks specific to your application. Here’s how to approach fine‑tuning:

  • Data Collection: Assemble a high‑quality dataset that reflects the domain or tasks you want the model to excel in.
  • Preprocessing: Clean and tokenize your data. Ensure that it meets the model’s input requirements (e.g., token limits).
  • Training Configuration: Use parameters such as batch size, learning rate, and number of epochs to control the training process.
  • Validation: Continuously evaluate the fine‑tuned model on a validation set to monitor improvements and avoid overfitting.
  • Deployment: Once satisfied with the performance, deploy the fine‑tuned model via Azure AI Foundry.

Azure AI Foundry provides an integrated interface for fine‑tuning, allowing you to experiment with different configurations easily.
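
As an illustration of the data-preparation step, chat-style fine‑tuning data is typically uploaded as JSONL, one training example per line; the snippet below writes a tiny, hypothetical dataset in that format (check the Foundry documentation for the exact schema and for which models currently support fine‑tuning):

import json

# Hypothetical domain examples (illustrative only)
examples = [
    {"messages": [
        {"role": "system", "content": "You are a chemistry tutor."},
        {"role": "user", "content": "What is Avogadro's number?"},
        {"role": "assistant", "content": "Approximately 6.022 x 10^23 per mole."}
    ]}
]

# Write one JSON object per line (JSONL), the usual fine-tuning upload format
with open("training_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")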

Custom Function Calling and Structured Outputs

O3‑mini supports advanced features that allow for customization:

  • Custom Function Calling: Define functions that the model can invoke based on user inputs. This is particularly useful in scenarios where dynamic behavior is required.
  • Structured Outputs: Specify output formats (e.g., JSON) to streamline integration with other systems. This makes the model’s outputs easier to parse and use in downstream applications.

For example, you can instruct O3‑mini to generate a structured JSON response summarizing a user query, which can then be parsed and used to trigger specific workflows.
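As a concrete sketch of both features together, the request below declares a hypothetical get_weather function using the pre‑1.0 SDK's legacy functions field (newer API versions expose the same idea through a tools parameter); when the model opts to call it, the arguments come back as structured JSON:

import json
import openai

# Hypothetical function schema the model may choose to call
functions = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"]
    }
}]

response = openai.ChatCompletion.create(
    deployment_id="o3-mini-deployment",  # placeholder deployment name
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    functions=functions
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model returned structured JSON arguments instead of free text
    args = json.loads(message["function_call"]["arguments"])
    print("Call get_weather with:", args)  # e.g. {"city": "Paris"}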

7. Conclusion

In this extensive guide, we have covered:

  • An Introduction to O3‑mini: What it is and why it matters.
  • Model Capabilities: Detailed analysis of O3‑mini’s features, STEM optimizations, cost efficiency, and latency improvements.
  • Azure AI Foundry Integration: Step‑by‑step instructions on setting up your Azure account, retrieving API keys, and navigating the portal.
  • API Integration: Sample code and best practices for connecting to O3‑mini, handling errors, and securing your credentials.
  • Performance Benchmarks: Comparative evaluations with other reasoning models and analysis of efficiency metrics.
  • Customization: Discussion on customization options available through fine‑tuning.

OpenAI’s O3‑mini is not just another model — it represents a significant advancement in making sophisticated reasoning both accessible and affordable. Whether you are a startup looking to scale your application, an enterprise aiming for efficiency, or a developer eager to explore the frontier of AI, O3‑mini offers a versatile solution that fits a wide array of needs.

Happy coding and innovating!

Thank You!

Thanks for taking the time to read my story! If you enjoyed it and found it valuable, please consider giving it a clap (or 50!) to show your support. Your claps help others discover this content and motivate me to keep creating more.

Also, don’t forget to follow me for more insights and updates on AI. Your support means a lot and helps me continue sharing valuable content with you. Thank you!


Published via Towards AI
