

#55 Want To Create a Standout Portfolio Project With the Latest Models?
Author(s): Towards AI Editorial Team

Originally published on Towards AI.

Good Morning, AI Enthusiasts! This week, we’ve got a lineup of hands-on tutorials perfect for enhancing your portfolio projects. If you haven’t already checked it out, we’ve also launched an extremely in-depth course to help you land a 6-figure job as an LLM developer. You can take a look at the free preview here.

Plus, this week’s issue is packed with exciting collaboration opportunities, valuable resources, and discussions. Enjoy the read 🙂

What’s AI Weekly

Fine-tuning is not a competitor to RAG; it can complement it. By training the model on domain-specific data, the retriever finds more relevant documents and the generator gives more accurate answers, since both understand the topic better. This cuts down on errors, making responses clearer and more precise. There are various techniques for fine-tuning your model efficiently. This week, let’s dive into some of these techniques and understand how they can enhance your RAG pipeline. Read the complete article here or watch the video on YouTube!
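To make the retrieval half of that pipeline concrete, here is a minimal toy sketch of embedding-based retrieval in pure Python. The document names and vectors are invented for illustration; in a real pipeline the vectors come from an embedding model, and fine-tuning that model on your domain is precisely what pulls relevant documents' vectors closer to your queries.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical document embeddings (a real embedder outputs hundreds of dims).
docs = {
    "fine-tuning guide": [0.9, 0.1, 0.2],
    "cooking recipes":   [0.1, 0.8, 0.3],
    "rag pipelines":     [0.8, 0.2, 0.6],
}

def retrieve(query_vec, k=2):
    """Return the k document names most similar to the query embedding."""
    ranked = sorted(docs.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

query = [0.85, 0.15, 0.5]          # hypothetical query embedding
top = retrieve(query)               # → ['rag pipelines', 'fine-tuning guide']
```

The generator would then answer using only the retrieved passages; a fine-tuned embedder shifts these vectors so that on-topic documents rank higher for your users' actual queries.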

— Louis-François Bouchard, Towards AI Co-founder & Head of Community

Learn AI Together Community section!

AI poll of the week!

I won’t disagree with that; Towards AI is all about making AI accessible. But the rules of learning that apply to AI, machine learning, and NLP don’t always apply to LLMs, especially if you are building something or looking for a high-paying job. Knowing what is NOT possible with LLMs is just as important as knowing what is possible. And with the pace of LLM development, you don’t have time to reinvent the wheel; that’s where I personally rely on curated and tested paid courses. I would love to hear your thoughts on this and understand what your barriers are to paying for a course. Share it all in the thread, and let’s talk!

Collaboration Opportunities

The Learn AI Together Discord community is flooded with collaboration opportunities. If you are excited to dive into applied AI, want a study partner, or even want to find a partner for your passion project, join the collaboration channel! Keep an eye on this section, too — we share cool opportunities every week!

1. Urfavalm is developing an AI-based mobile app to help people with disabilities and is looking for one or two developers with experience in mobile app development and NLP or computer vision. If this sounds relevant and interesting to you, reach out in the thread!

2. Lufofz__ is looking for 1–2 engineers to work on a multi-agentic project that is getting some traction. If you want to know more about this, connect in the thread!

3. Shubhamgaur. is looking to collaborate with someone on an ML-based project — deep learning, PyTorch. If this sounds exciting, contact them in the thread!

Meme of the week!

Meme shared by bigbuxchungus

TAI Curated section

Article of the week

Decoding Latent Variables: Comparing Bayesian, EM, and VAE Approaches By Shenggang Li

The article presents a comprehensive analysis of three statistical methods for uncovering hidden patterns in data: Expectation Maximization (EM), Bayesian Estimation, and Variational Autoencoders (VAEs). It demonstrates how each method handles incomplete data and reveals underlying patterns using A/B testing scenarios in marketing campaigns. The study shows how EM iteratively refines missing information, Bayesian estimation incorporates prior knowledge with new data for confident results, and VAEs use neural networks to generate new possibilities and simulate outcomes. Through practical code implementations, mathematical intuition, and real-world examples, this article compares these methods’ strengths and limitations. It particularly highlights VAEs’ potential in A/B testing for handling missing data, simulating outcomes, and quantifying uncertainty, offering insights for future applications in marketing, healthcare, and recommendation systems.
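As a taste of the EM idea the article covers, here is a self-contained toy sketch: two coins with unknown biases, where the coin used in each flipping session is the hidden (latent) variable. The flip counts and starting guesses are invented for illustration; this is not the article's code.

```python
# Toy EM: two biased coins with unknown head probabilities. Each session
# used one coin, but we never observe which -- that choice is the latent variable.
sessions = [(9, 1), (8, 2), (4, 6), (3, 7), (7, 3)]  # (heads, tails) per session

def em(theta_a, theta_b, iters=20):
    for _ in range(iters):
        # E-step: soft-assign each session to coin A or B via likelihood ratios.
        ha = ta = hb = tb = 0.0   # expected head/tail counts per coin
        for h, t in sessions:
            la = theta_a ** h * (1 - theta_a) ** t
            lb = theta_b ** h * (1 - theta_b) ** t
            wa = la / (la + lb)   # responsibility of coin A for this session
            ha += wa * h; ta += wa * t
            hb += (1 - wa) * h; tb += (1 - wa) * t
        # M-step: re-estimate each bias from its expected counts.
        theta_a = ha / (ha + ta)
        theta_b = hb / (hb + tb)
    return theta_a, theta_b

theta_a, theta_b = em(0.6, 0.5)  # converges near 0.8 (coin A) and 0.35 (coin B)
```

The same E-step/M-step loop generalizes to mixtures of Gaussians and, with neural encoders and a variational bound, to the VAEs the article compares.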

Our must-read articles

1. Unveiling LLM-Enhanced Search Technologies By Florian June

This article examines the evolution of search technology, contrasting traditional keyword-based search with the emerging field of LLM-enhanced search. Traditional search returns lists of links, requiring users to synthesize information. LLM-enhanced search leverages Large Language Models (LLMs), Retrieval-Augmented Generation (RAG), and agent technologies to provide direct, concise answers through conversational interfaces. It explores several architectures, including Search with Lepton and MindSearch, highlighting their distinct workflows and components. Key features discussed include result reranking (using methods like OpenPerplex and custom logic) and knowledge graph integration for complex, multi-hop queries. The role of PDF parsing and text chunking is also addressed, showcasing LangChain and PyPDF2 implementations. While acknowledging limitations such as performance on location-based queries and the need for more sophisticated RAG modules, it concludes that LLM-enhanced search represents a promising, albeit nascent, area with substantial potential for future development.
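To illustrate the text-chunking step the summary mentions, here is a simple sliding-window splitter in plain Python. It is a simplified stand-in for LangChain's text splitters (which split on separators recursively), not the article's actual implementation; the sizes are arbitrary.

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into fixed-size character windows that overlap, so a
    sentence cut at a boundary still appears whole in the next chunk."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        piece = text[start:start + chunk_size]
        if piece:
            chunks.append(piece)
        if start + chunk_size >= len(text):
            break   # last window already reached the end of the text
    return chunks

document = "abcdef" * 100          # stand-in for extracted PDF text
chunks = chunk_text(document)      # 4 chunks; consecutive chunks share 50 chars
```

Each chunk is then embedded and indexed; the overlap trades a little index size for not losing answers that straddle a chunk boundary.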

2. Gemini 2.0 Flash + Local Multimodal RAG + Context-aware Python Project: Easy AI/Chat for your Docs By Gao Dalie (ι«˜ι”ηƒˆ)

This article details creating a local, multimodal RAG-powered chatbot using Gemini 2.0 Flash. It highlights Gemini 2.0 Flash’s speed and multimodal capabilities (handling images, audio, video) surpassing its predecessor. The chatbot leverages a knowledge graph built from uploaded PDFs processed via PyMuPDF and Pillow to create images and embeddings. A dual-agent approach uses Gemini to summarize and analyze PDF content, identifying relevant passages using embedding similarity. The system, built with Python and Langchain, provides a user-friendly interface (via Streamlit) for querying and receiving answers, demonstrating efficient handling of complex data.
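The "identify relevant passages by similarity" step can be sketched with a crude lexical stand-in. The system described compares embedding vectors; this toy version uses Jaccard word overlap instead, and the passages are invented — it only shows the ranking pattern, not the article's method.

```python
def jaccard(a, b):
    """Word-overlap similarity between two strings (0 to 1)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

# Hypothetical passage summaries produced by the summarizing agent.
passages = {
    "p1": "Gemini summarizes the PDF report",
    "p2": "The recipe uses fresh basil",
}

def most_relevant(query):
    """Pick the passage whose summary best overlaps the user's query."""
    return max(passages, key=lambda pid: jaccard(query, passages[pid]))

best = most_relevant("summarize the PDF")   # → 'p1'
```

Swapping `jaccard` for cosine similarity over embedding vectors gives the dual-agent flow the article describes: one agent summarizes, similarity selects, and the second agent answers from the selected passage.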

3. Building a Knowledge Graph from Unstructured Text Data: A Step-by-Step Guide By ANSHUL SHIVHARE

This article presents a step-by-step guide to constructing a knowledge graph from unstructured text data. It leverages OLLAMA for local LLM deployment, enhancing performance and privacy. The process begins by loading and chunking text documents, then using a custom function (graphPrompt) and the Zephyr LLM to extract entities and relationships. These are organized into a DataFrame, and contextual proximity edges are added to improve graph connectivity. The resulting data is used to create a NetworkX graph, which undergoes community detection for visualization. Finally, PyVis generates an HTML representation of the knowledge graph, showcasing nodes, relationships, and community structures.
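The graph-building and contextual-proximity steps can be sketched without external libraries. This toy version uses an adjacency dict in place of NetworkX and connected components as a simplistic stand-in for community detection; the triples and chunks are invented, and the LLM extraction step is assumed to have already run.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical (entity, relation, entity) triples extracted by the LLM.
triples = [
    ("Alice", "works_at", "AcmeCorp"),
    ("AcmeCorp", "based_in", "Berlin"),
    ("Bob", "knows", "Alice"),
    ("Python", "used_for", "ML"),
]

graph = defaultdict(set)
for subj, _, obj in triples:
    graph[subj].add(obj)
    graph[obj].add(subj)

# Contextual-proximity edges: also link entities that co-occur in a chunk.
chunks = [["Alice", "Bob"], ["Python", "ML"]]
for chunk in chunks:
    for a, b in combinations(chunk, 2):
        graph[a].add(b)
        graph[b].add(a)

def components(g):
    """Connected components -- a simplistic stand-in for community detection."""
    seen, comps = set(), []
    for node in list(g):
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            comp.add(n)
            stack.extend(g.get(n, ()))
        comps.append(comp)
    return comps

communities = components(graph)   # two groups: the people/places and Python/ML
```

In the article's pipeline, NetworkX's community-detection algorithms would replace `components`, and PyVis would render the colored groups as interactive HTML.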

4. How to Summarize, Analyze, and Query Videos with Qwen2-VL Multimodal AI By Isuru Lakshan Ekanayaka

This article explains how to use the Qwen2-VL multimodal AI model to analyze videos. It details setting up a suitable environment (locally or using Google Colab), installing necessary packages, and loading the Qwen2-VL model. The core process begins with structuring prompts that combine video input (specifying filename, maximum pixel resolution, and frames per second — FPS — to control processing load) with textual questions or requests for summaries. The article then details how to process this video data and prepare it for the model. It explains how to use Qwen2-VL to generate text outputs, including comprehensive video summaries or answers to specific questions posed within the prompts, along with techniques for optimizing performance and troubleshooting common issues like CUDA out-of-memory errors. Finally, it explores real-world applications across various sectors and offers best practices for effective video analysis with Qwen2-VL.

If you are interested in publishing with Towards AI, check our guidelines and sign up. We will publish your work to our network if it meets our editorial policies and standards.

Join thousands of data leaders on the AI newsletter. Join over 80,000 subscribers and keep up to date with the latest developments in AI, from research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.

Published via Towards AI
