


This AI newsletter is all you need #49

Last Updated on July 25, 2023 by Editorial Team

Author(s): Towards AI Editorial Team

Originally published on Towards AI.

What happened this week in AI by Louie

New AI model releases and announcements continued at pace this week, though step-change progress in capabilities feels slower than earlier this year. AI remained prominent in mainstream news, including a focus on Nvidia and the accelerating rollout of GPUs in data centers for LLM inference and training. AI risks and regulation were in the news again with yet another open letter, this time signed by most leading AI CEOs and researchers, including Sam Altman of OpenAI, Demis Hassabis of DeepMind, and Geoffrey Hinton, warning of the need to manage AI risks. The statement, released by the Center for AI Safety, was simple and to the point: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” We agree. While other risks from AI, such as inaccuracies (including hallucinations appearing in court via legal research!), still feel closer to hand, increasing the probability of positive outcomes from AI and reducing the risk of negative ones should now be at the forefront of policymakers’ minds.

– Louie Peters, Towards AI Co-founder and CEO

Hottest News

1. Introducing speech-to-text, text-to-speech, and more for 1,100+ languages

Meta’s Massively Multilingual Speech (MMS) project combines self-supervised learning with wav2vec 2.0 and a new dataset to build models that recognize and generate speech in more than 1,100 languages. The models outperform existing systems, with lower character error rates and far broader language coverage.
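For readers who want to try it, the released checkpoints are usable through Hugging Face transformers. Below is a minimal sketch under stated assumptions: the facebook/mms-1b-all checkpoint with its per-language adapters, and an audio_array you supply yourself as a 16 kHz mono waveform.

```python
# A minimal sketch of MMS speech recognition via Hugging Face transformers.
# Assumes the facebook/mms-1b-all checkpoint; audio_array is a 16 kHz mono
# waveform (1-D float array) that you load yourself.
import torch
from transformers import Wav2Vec2ForCTC, AutoProcessor

processor = AutoProcessor.from_pretrained("facebook/mms-1b-all")
model = Wav2Vec2ForCTC.from_pretrained("facebook/mms-1b-all")

def transcribe(audio_array, lang="fra"):
    processor.tokenizer.set_target_lang(lang)  # switch tokenizer vocabulary
    model.load_adapter(lang)                   # load the per-language adapter weights
    inputs = processor(audio_array, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return processor.decode(torch.argmax(logits, dim=-1)[0])
```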

2. Intel Announces Aurora genAI, Generative AI Model With 1 Trillion Parameters

Intel has announced the Aurora genAI model, with 1 trillion parameters. The model will be trained specifically on scientific texts and structured scientific data, with a focus on cancer research, systems biology, cosmology, polymer chemistry, materials science, and climate science, and it will be powered by the Aurora supercomputer.

3. Google is starting to let users into its AI search experiment

Google’s Search Labs is now open for experimentation, placing AI-generated summaries at the top of search results. The summaries work across multiple languages and could alter the business model of the web and reshape SEO.

4. Sam Altman shares his optimistic view of our AI future

Last week, OpenAI CEO Sam Altman engaged in discussions with heads of governments and startup communities in Europe regarding AI regulation and beyond. Altman highlighted the current significance of AI, emphasizing its proficiency in various domains and its proven ability to enhance productivity across diverse job sectors.

5. New superbug-killing antibiotic discovered using AI

Researchers have made a groundbreaking discovery by utilizing AI to identify a compound that effectively eliminates Acinetobacter baumannii while displaying minimal signs of resistance. This achievement showcases the immense potential of AI in identifying prospective antibiotics and accelerating the pace of new treatment discoveries.

5-minute reads/videos to keep you learning

1. Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA

Hugging Face introduced support for QLoRA, an efficient fine-tuning approach that reduces memory usage enough to fine-tune a 65B-parameter model on a single 48GB GPU while preserving full 16-bit fine-tuning task performance. QLoRA backpropagates gradients through a frozen, 4-bit-quantized pretrained language model into Low-Rank Adapters (LoRA).
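As a concrete illustration of that pattern, here is a minimal sketch of 4-bit loading plus LoRA adapters using the transformers, peft, and bitsandbytes libraries; the small model name and LoRA hyperparameters are illustrative choices, not values from the post.

```python
# A minimal sketch of QLoRA-style 4-bit loading with LoRA adapters.
# Assumes transformers, peft, and bitsandbytes are installed; the model
# name and hyperparameters are illustrative, not the post's settings.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4 data type from the QLoRA paper
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 while weights stay 4-bit
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m", quantization_config=bnb_config, device_map="auto"
)

lora_config = LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"
)
model = get_peft_model(model, lora_config)  # gradients flow only into the LoRA adapters
model.print_trainable_parameters()          # adapters are a tiny fraction of total weights
```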

2. How To Finetune GPT Like Large Language Models on a Custom Dataset

Lit-Parrot is a nanoGPT-based tool developed by Lightning AI that offers clean and optimized LLM implementations for fine-tuning on custom data. It includes prompt adaptation, LLM approximation, and LLM cascade for cost reduction. Lightning AI offers step-by-step guidance on installation, dataset preparation, and model fine-tuning. Check out the article to learn more.

3. The hard stuff no one talks about building on LLMs

Language models are fantastic new tools with high potential. However, they suffer from a set of challenging issues that make them hard to deploy in production. This post discusses prompt reliability, monitoring, and more, giving a glimpse of the systems you’d need to build to deploy language models in your applications.
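As one concrete example of the scaffolding this implies, here is a hedged sketch of output validation with retries; call_llm is a hypothetical stand-in for whatever model client you use, and the JSON check is just one of many possible validators.

```python
# A minimal sketch of one reliability pattern the post alludes to: validating
# LLM output against an expected format and retrying on failure. call_llm is
# a hypothetical stand-in for your model client, not a real library call.
import json
import time

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # hypothetical: wrap your actual model API here

def reliable_json_completion(prompt: str, retries: int = 3) -> dict:
    for attempt in range(retries):
        raw = call_llm(prompt)
        try:
            return json.loads(raw)        # validate: the output must parse as JSON
        except json.JSONDecodeError:
            time.sleep(2 ** attempt)      # back off, then retry the same prompt
    raise RuntimeError("LLM failed to return valid JSON after retries")
```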

4. Transform Any Image with a Single Movement of Your Mouse: DragGAN Explained

The new DragGAN approach allows users to edit images by simply dragging points from A to B, revolutionizing how we interact with image editing. The AI realistically adapts the entire image, modifying the object’s position, pose, shape, expressions, and other frame elements. This video tutorial explains how it works.

5. How to use ChatGPT with your Google Drive in 30 lines of Python

This tutorial explains how to build a Python app powered by GPT and GDrive. This combination is made possible by LangChain’s Google Drive document loader and serves as a stellar foundation for an infinite number of great apps.
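A rough sketch of the pattern, under stated assumptions: the langchain and openai packages are installed, OPENAI_API_KEY is set, Google Drive API credentials are configured the way LangChain’s loader expects, and the folder ID is a hypothetical placeholder.

```python
# A minimal sketch of the GPT-over-Drive pattern, assuming langchain and
# openai are installed, OPENAI_API_KEY is set, and Google Drive credentials
# are configured; "your-folder-id" is a hypothetical placeholder.
from langchain.document_loaders import GoogleDriveLoader
from langchain.indexes import VectorstoreIndexCreator

loader = GoogleDriveLoader(folder_id="your-folder-id")    # hypothetical folder ID
index = VectorstoreIndexCreator().from_loaders([loader])  # embed docs into a vector store

print(index.query("Summarize the project notes."))        # answer questions over your files
```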

Papers & Repositories

1. LIMA: Less Is More for Alignment

LIMA is a 65B-parameter LLaMA model fine-tuned on only 1,000 curated prompts and responses, without reinforcement learning or human preference modeling. Its responses were equivalent or preferred to GPT-4’s in 43% of cases, highlighting the importance of pretraining relative to large-scale instruction tuning and reinforcement learning.

2. Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training

Sophia (Second-order Clipped Stochastic Optimization) is a simple, scalable second-order optimizer that uses a lightweight estimate of the diagonal Hessian as the pre-conditioner. It achieves the same validation pre-training loss with 50% fewer steps than previous optimizers and can be easily integrated into existing training pipelines.
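In rough pseudocode terms, the update looks like the sketch below. This is a hedged reading of the method with illustrative hyperparameter values; it assumes the diagonal Hessian estimate h is refreshed periodically by an estimator such as the paper’s Gauss-Newton-Bartlett procedure.

```python
# A rough sketch of a Sophia-style parameter update. h is a running estimate
# of the diagonal Hessian (refreshed every k steps in the paper); lr, beta1,
# and rho are illustrative values, not the paper's tuned settings.
import torch

def sophia_update(param, grad, m, h, lr=1e-4, beta1=0.96, rho=0.04, eps=1e-12):
    m.mul_(beta1).add_(grad, alpha=1 - beta1)             # EMA of gradients
    ratio = m / torch.clamp(rho * h, min=eps)             # precondition by diagonal Hessian
    param.add_(torch.clamp(ratio, -1.0, 1.0), alpha=-lr)  # element-wise clipping bounds the step
```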

3. The False Promise of Imitating Proprietary LLMs

Researchers have found that fine-tuning a weaker language model to imitate a proprietary LLM can show initial improvements, but the gains are restricted to tasks well supported by the imitation data and do not close the gap on unsupported tasks. The paper fine-tunes a series of language models that imitate ChatGPT and then evaluates them with crowd raters and canonical NLP benchmarks.

4. LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond

A recent paper delves into the accuracy of large language models (LLMs) in detecting factual inconsistencies. While certain LLMs exhibited strong performance, most struggled with complex formulations, indicating issues with current benchmarks. To address this, the researchers developed the new SummEdits benchmark, which evaluates LLMs’ ability to identify factual inconsistencies.

5. Reasoning with Language Model is Planning with World Model

The paper introduces a novel framework called Reasoning via Planning (RAP), which reimagines the LLM as both a world model and a reasoning agent. It incorporates a well-structured planning algorithm (based on Monte Carlo Tree Search) for strategic exploration within the expansive reasoning space.
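To make the idea concrete, here is a compact, hedged sketch of MCTS-style search over reasoning steps in the spirit of RAP, not the authors’ implementation; propose_steps and estimate_reward are hypothetical stand-ins for the LLM acting as reasoning agent and world model, respectively.

```python
# A compact sketch of MCTS-style exploration over reasoning chains, in the
# spirit of RAP. propose_steps and estimate_reward are hypothetical stand-ins
# for the LLM as reasoning agent and world model; propose_steps is assumed to
# return at least one candidate step for the root state.
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

def ucb(node, c=1.4):
    if node.visits == 0:
        return float("inf")  # always try unvisited children first
    return node.value / node.visits + c * math.sqrt(math.log(node.parent.visits) / node.visits)

def mcts(root, propose_steps, estimate_reward, iters=100):
    for _ in range(iters):
        node = root
        while node.children:                        # selection: descend by UCB score
            node = max(node.children, key=ucb)
        for step in propose_steps(node.state):      # expansion: LLM proposes next steps
            node.children.append(Node(node.state + [step], parent=node))
        leaf = random.choice(node.children) if node.children else node
        reward = estimate_reward(leaf.state)        # simulation: world model scores the path
        while leaf:                                 # backpropagation up to the root
            leaf.visits += 1
            leaf.value += reward
            leaf = leaf.parent
    return max(root.children, key=lambda n: n.visits).state  # most-visited reasoning chain
```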

Enjoy these papers and news summaries? Get a daily recap in your inbox!

The Learn AI Together Community section!

Weekly AI Podcast

In this week’s episode of the “What’s AI” podcast, Louis Bouchard talks with Yotam Azriel, co-founder of TensorLeap, who shares his journey, insights, and vision for the future of Explainable AI. Discover how passion, curiosity, and focus can accomplish wonders! Tune in on YouTube, Spotify, or Apple Podcasts and expand your knowledge of Explainable AI.

Meme of the week!

Meme shared by NEON#8052

Featured Community post from the Discord

Elvitronic#5445 presents an innovative approach to developing AI models that mimic human-like cognitive abilities. The Roadmapping Cognitive Development in AI through Human Experience Data Analysis project aims to investigate and model the cognitive development process in artificial intelligence, guided by the fundamental principles of human cognitive development. They propose constructing tests that engage as few brain areas as possible, isolating basic skills so that transformer-based agents can be trained to master each skill individually. If you wish to collaborate, reach out in the thread here and support a fellow community member by sharing your feedback!

AI poll of the week!

Join the discussion on Discord.

TAI Curated section

Article of the week

Holy Cow! Introducing DragGAN by Dr. Mandar Karhade, MD. PhD.

Synthesizing visual content that meets users’ needs often requires flexible and precise control. Existing approaches gain controllability of GANs via manually annotated training data or a prior 3D model, which often lack flexibility, precision, and generality. This work studies a powerful way of controlling GANs: “dragging” any points of the image to precisely reach target points in a user-interactive manner.

Our must-read articles

From Pixels to Artificial Perception by Ali Moezzi

Policy Gradient Algorithm’s Mathematics Explained with PyTorch Implementation by Ebrahim Pichka

FineTuning Local Large Language Models on Your Data Using LangChain by Serop Baghdadlian

If you are interested in publishing with Towards AI, check our guidelines and sign up. We will publish your work to our network if it meets our editorial policies and standards.

Job offers

Senior Manager, Instrument Software @Deepcell (Menlo Park, CA, USA/Hybrid)

Senior Data Engineer @Jupiter (Bangalore, India)

Software Engineer (Infrastructure) @X1 (Remote)

Full Stack Software Engineer - CBM+ @Redhorse (Remote)

Senior Software Engineer @H1 (Remote)

Engineering Manager, Machine Learning - NLP @BenchSci (Remote)

Interested in sharing a job opportunity here? Contact [email protected].

If you are preparing your next machine learning interview, don’t hesitate to check out our leading interview preparation website, confetti!

Join over 80,000 data leaders and subscribers to our AI newsletter and keep up to date with the latest developments in AI, from research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.

Published via Towards AI
