Adversarial Machine Learning: Defense Strategies
Author(s): Michał Oleszak Originally published on Towards AI. Know thine enemy and protect your machine learning systems. The growing prevalence of ML models in business-critical applications results in an increased incentive for malicious actors to attack the models for their benefit. Developing …
Demystifying the Black Box: Advanced Techniques in Interpretable AI with SHAP and LIME
Author(s): saeed garmsiri Originally published on Towards AI. Photo by Andrea De Santis on Unsplash Demystifying the Black Box: Advanced Techniques in Interpretable AI with SHAP and LIME Hey ML Engs out there! Ever felt like you’ve created a brilliant machine learning …
#32 Understanding AdaBoost From Its Original 1997 Paper
Author(s): Towards AI Editorial Team Originally published on Towards AI. Good morning, AI enthusiasts! This week, I have a very exciting announcement to make. I have partnered with O’Reilly to create two specific “shortcut” video series on LLMs and GenAI research. The …
Fine-Tuning LLMs with Synthetic Data for High-Quality Content Generation
Author(s): Vin Busquet Originally published on Towards AI. Evaluation data analysis featured in this article. (Photo of the author’s monitor) Table of Contents · Table of Contents· The POC Trek Begins· Fine-Tuning VS RAG ∘ What is fine-tuning? ∘ So, what is …
Quantization: Post Training Quantization, Quantization Error, and Quantization Aware Training
Author(s): JAIGANESAN Originally published on Towards AI. Photo by Jason Leung on Unsplash Most of us have used open-source Large Language Models, VLMs, and multi-modal models in our systems, Colab, or Kaggle notebooks. You might have noticed that most of the time we …
GraphRAG Is the Logical Step From RAG — So Why the Sudden Hype?
Author(s): Daniel Voyce Originally published on Towards AI. Photo by Steve Johnson on Unsplash It seems like everyone in the Generative AI / LLM world is currently talking about GraphRAG as the successor to RAG (Retrieval-Augmented Generation). But is it …
In-Depth Understanding of Vector Search for RAG and Generative AI Applications
Author(s): Talib Originally published on Towards AI. I will start by describing why we need vector search for RAG and how vectors and vector databases work, and then focus on Azure AI Search. You might have used large language models …
How Nvidia trained Nemotron, better agents, and more #31
Author(s): Towards AI Editorial Team Originally published on Towards AI. Good morning, AI enthusiasts! We are excited to announce that ‘Building LLMs for Production’ is now also available to readers across the globe on the O’Reilly learning platform. But that’s not all. …
TAI #107: What do enterprise customers need from LLMs?
Author(s): Towards AI Editorial Team Originally published on Towards AI. What happened this week in AI by Louie This week saw hints of progress in multi-modal LLMs outside of OpenAI and Google, with SenseNova 5o from SenseTime and Kyutai unveiling …
Meta’s Chameleon, RAG with Autoencoder-Transformed Embeddings, and more #30
Author(s): Towards AI Editorial Team Originally published on Towards AI. Good morning, AI enthusiasts! This week we are diving into some interesting discussions on transformers, BERT, and RAG, along with some interesting collaboration opportunities for building a bot, a productivity app, and …