
This AI newsletter is all you need #65

Last Updated on November 5, 2023 by Editorial Team

Author(s): Towards AI Editorial Team

Originally published on Towards AI.

What happened this week in AI by Louie

This week in AI, we had developments in AI regulation from the Capitol: tech leaders such as Elon Musk and Mark Zuckerberg joined over 60 senators to chat about AI, and guess what? They all agree: it’s high time for some ground rules. Elon Musk even said the meeting “may go down in history as very important to the future of civilization.” The wheels of government might not be turning super fast on this, but the conversation is heating up about the need for Uncle Sam to step in with regulation.

In an exciting development this week, OpenAI and Google are reportedly neck-and-neck in a race to release the next generation of LLMs, known as multimodal models. These AI systems have the unique ability to process both text and images seamlessly, promising to revolutionize everything from web design to data analysis. While Google has already previewed its upcoming Gemini multimodal model to some third parties, OpenAI is not far behind and aims to beat Google to public launch with multimodal capabilities. We are excited to experiment with powerful multimodal models as they become available and expect this to release a new wave of capabilities and applications in the AI landscape.

– Louie Peters, Towards AI Co-founder and CEO

Towards AI x FlowGPT: Prompt Hackathon

We are excited to announce our partnership with FlowGPT, which is hosting a Prompt Hackathon from 15th September to 14th October. Join their Discord community and explore the prompt hackathon.

They are offering rewards of over $15,000 in cash, and this event is sponsored by Google! Additionally, FlowGPT will be hosting some thrilling AI/NLP-related events this month and the next.

We are also collaborating with FlowGPT to organize one of our Learn AI Together Discord community workshops, where Ruiqi Zhong will provide insights into AI Alignment. Sign up for the event and learn more.

Hottest News

1. Stable Audio

London-based startup Stability AI, renowned for its AI model Stable Diffusion, has introduced Stable Audio, an AI model capable of generating high-quality commercial music with greater control over synthesized audio.

2. Google Nears Release of AI Software Gemini, The Information Reports

Google is nearing the release of its conversational AI software Gemini, an advanced language model intended to compete with OpenAI’s GPT-4. It is currently in early testing and offers a range of functionalities, including chatbots, text summarization, and code-writing assistance.

3. Microsoft Releases Prompt Flow

Microsoft has introduced Prompt Flow, a development suite for LLM-based apps. It offers a range of functionalities, including creating executable workflows, debugging and iterating flows, assessing flow quality and performance with larger datasets, integrating testing and evaluation into CI/CD systems, and easily deploying flows to selected serving platforms or app code bases.

4. IBM Releases MoE LLMs

IBM has recently released MoE (mixture-of-experts) LLMs, including models with 4B and 8B parameters. Because only a subset of experts is activated per input, these models offer computational efficiency comparable to much smaller dense models. They have been trained on a large dataset and employ the ModuleFormer architecture.
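For readers new to the approach, here is a toy top-k routed MoE layer in PyTorch. This is an illustrative sketch of the general MoE idea under our own naming, not IBM’s ModuleFormer implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoE(nn.Module):
    """Toy top-k mixture-of-experts layer (illustrative; not ModuleFormer)."""

    def __init__(self, dim: int, num_experts: int = 8, k: int = 2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, dim)
        scores, idx = self.router(x).topk(self.k, dim=-1)  # top-k experts per token
        weights = F.softmax(scores, dim=-1)                # normalize the k scores
        out = torch.zeros_like(x)
        # Only the selected experts run for each token (sparse activation)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Usage: 16 tokens of width 64; compute scales with k, not num_experts
layer = ToyMoE(dim=64)
y = layer(torch.randn(16, 64))
```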

5. Pulitzer Prize Winner and Others Sue OpenAI

Pulitzer Prize-winning US novelist Michael Chabon and several other writers have filed a proposed class action accusing OpenAI of copyright infringement, alleging it pulled their work into the datasets used to train the models behind ChatGPT. OpenAI argues that its large language models are protected by “fair use,” igniting discussions on AI and copyright law in the field.

Five 5-minute reads/videos to keep you learning

1. LLM Training: RLHF and Its Alternatives.

This article breaks down RLHF step by step to serve as a reference for understanding its central idea and importance. It also presents five alternative approaches, with corresponding research papers, such as Constitutional AI, The Wisdom of Hindsight, and Direct Preference Optimization.
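Of the alternatives mentioned, Direct Preference Optimization (DPO) reduces to a particularly compact loss. Below is a minimal PyTorch sketch of that loss, assuming you have already computed per-response log-probabilities under the policy and a frozen reference model; the function and variable names are ours, not from the article.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss; inputs are summed log-probs of each response, shape (batch,)."""
    # How much more likely each response is under the policy vs. the reference
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # Push the chosen response's log-ratio above the rejected one's
    logits = beta * (chosen_logratio - rejected_logratio)
    return -F.logsigmoid(logits).mean()
```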

2. New AI Usage Data Shows Who’s Using AI and Uncovers a Population of ‘Super-Users’.

Salesforce has released its Generative AI Snapshot Research, “The AI Divide,” a survey of more than 4,000 people across the United States, UK, Australia, and India. It indicates that nearly half of respondents use generative AI, with a third using it daily. Younger generations, particularly Gen Z and Millennials, are the “super users” of generative AI.

3. Overview of Natively Supported Quantization Schemes in 🤗 Transformers.

Quantization schemes natively supported in 🤗 Transformers, such as bitsandbytes and auto-GPTQ, offer methods for running large models on smaller devices. This article provides a clear overview of the pros and cons of each supported quantization scheme to help you decide which one to opt for.
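As a quick taste of what this looks like in practice, here is a minimal sketch of loading a model in 4-bit with the bitsandbytes backend; the model name is just an example, and exact defaults can vary between transformers versions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization via the bitsandbytes backend (requires a CUDA GPU)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_id = "facebook/opt-1.3b"  # example model; any causal LM on the Hub works
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place layers across available devices automatically
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```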

4. Why Open Source AI Will Win.

Open source is likely to have a more significant impact on the future of LLMs and image models than the broader public believes. This article presents the current arguments against open source and its limitations, then delves into its future and importance.

5. Validating Large Language Model Outputs.

LLMs are powerful but can produce inconsistent results, so validating their outputs is essential for reliable and accurate applications. This article discusses LLM output validation and provides examples of how to implement it using an open-source package called Guardrails AI.
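The article builds on Guardrails AI; as a library-agnostic sketch of the same idea, you can validate an LLM’s structured output against a schema and retry on failure. Everything below (the Answer schema and the call_llm helper) is hypothetical and for illustration only.

```python
import json
from typing import Optional

from pydantic import BaseModel, ValidationError

class Answer(BaseModel):
    # Hypothetical schema the LLM is asked to fill in as JSON
    summary: str
    confidence: float

def parse_and_validate(raw: str) -> Optional[Answer]:
    """Return a validated Answer, or None if the output fails validation."""
    try:
        return Answer(**json.loads(raw))
    except (json.JSONDecodeError, ValidationError):
        return None

# Usage: retry until the model's output passes validation
# for _ in range(3):
#     result = parse_and_validate(call_llm(prompt))  # call_llm is hypothetical
#     if result is not None:
#         break
```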

Papers & Repositories

1. From Sparse to Dense: GPT-4 Summarization With Chain of Density Prompting

A recent study introduced the “Chain of Density” (CoD) prompting technique, which generates dense summaries using GPT-4. By iteratively adding important entities without increasing the summary’s length, CoD produces summaries that are more abstractive and show less lead bias than those from a standard prompt.
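The CoD prompt is straightforward to approximate in code. Below is a paraphrased sketch of the iterative instruction (not the paper’s verbatim prompt); the template text and helper function are illustrative.

```python
COD_PROMPT = """Article: {article}

You will generate increasingly entity-dense summaries of the article above.
Repeat the following two steps {steps} times:
Step 1. Identify 1-3 informative entities from the article that are missing
        from the previously generated summary.
Step 2. Write a new, denser summary of identical length that covers every
        entity from the previous summary plus the newly identified ones.
Answer with a list of all {steps} summaries."""

def build_cod_prompt(article: str, steps: int = 5) -> str:
    # Fill in the article text and the number of densification rounds
    return COD_PROMPT.format(article=article, steps=steps)
```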

2. Large Language Models for Compiler Optimization

This paper introduces a 7B-parameter transformer model trained from scratch to optimize LLVM assembly for code size. The model surpasses baselines and exhibits exceptional code reasoning capabilities, resulting in a 3% reduction in instruction counts. It generates compilable code 91% of the time and perfectly emulates the compiler’s output 70% of the time.

3. When Less Is More: Investigating Data Pruning for Pretraining LLMs at Scale

In this work, researchers take a wider view and explore scalable estimates of data quality that can be used to systematically prune pretraining corpora. They find that simple perplexity-based ranking outperforms more complex, computationally expensive scoring techniques for pruning pretraining data for language models.
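To make the idea concrete, here is a minimal sketch of perplexity-based pruning using GPT-2 as a stand-in scoring model; the keep-the-lower-half rule is illustrative (which perplexity band to retain is a design choice the paper studies), and the setup differs from the paper’s.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small reference model used only to score candidate documents
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    # With labels provided, the model returns the mean cross-entropy loss
    loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

docs = ["first candidate document ...", "second candidate document ..."]
scores = [perplexity(d) for d in docs]

# Illustrative rule: keep the lower-perplexity half of the corpus
threshold = sorted(scores)[len(scores) // 2]
kept = [d for d, s in zip(docs, scores) if s <= threshold]
```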

4. NExT-GPT: Any-to-Any Multimodal LLM

NExT-GPT is an end-to-end general-purpose any-to-any MM-LLM system. It can process and generate content in various modalities such as text, images, videos, and audio. It achieves this by utilizing already-trained encoders and decoders, with minimal parameter tuning required.

5. Clinical Text Summarization: Adapting Large Language Models Can Outperform Human Experts

This work employs domain adaptation methods on eight LLMs, covering six datasets and four distinct summarization tasks: radiology reports, patient questions, progress notes, and doctor-patient dialogue. This research is the first to demonstrate that LLMs outperform humans in multiple clinical summarization tasks.

Enjoy these papers and news summaries? Get a daily recap in your inbox!

The Learn AI Together Community section!

Weekly AI Podcast

In this week’s episode of the “What’s AI” podcast, Louis Bouchard interviews Petar Veličković, a research scientist at DeepMind. They discuss his academic background and his journey from competitive programming to machine learning. Petar also shares insights on the value of a Ph.D., emphasizing its role as an entry ticket into research and the opportunity it provides to build connections and adaptability. He highlights the evolving landscape of AI research, where diverse backgrounds and contributions are essential. Overall, the interview offers valuable perspectives on academia, industry, and the importance of curiosity in driving impactful research. Listen to the full episode on Spotify or Apple Podcasts!

Upcoming Community Events

The Learn AI Together Discord community hosts weekly AI seminars to help the community learn from industry experts, ask questions, and get a deeper insight into the latest research in AI. Join us for free, interactive video sessions hosted live on Discord weekly by attending our upcoming events.

  1. Explaining AI Alignment

In this webinar, hosted on the server as part of the Prompt Hackathon series of events, Ruiqi Zhong will give a talk on AI Alignment. Learn more about Ruiqi and AI alignment before the talk in his post “Explaining AI Alignment as an NLPer and Why I Am Working on It.”

Join the event here!

Date & Time: 28th September 2023, 12:00 pm EST

Add our Google calendar to see all our free AI events!

Meme of the week!

Meme shared by rucha8062

Featured Community post from the Discord

Penguin is working on a website that aims to make it easier to discover recent research papers in AI, ML, NLP, Computer Vision, and Robotics. This website is a valuable resource for AI enthusiasts and professionals who wish to stay updated on the latest research in the field. Check it out here and support a fellow community member! Share your feedback and join the conversation.

AI poll of the week!

Join the discussion on Discord.

TAI Curated section

Article of the week

Top Important Computer Vision Papers for the Week from 4/9 to 10/9 by Youssef Hosni

This article will provide a comprehensive overview of the most significant papers published in the first week of September 2023, highlighting the latest research and advancements in computer vision. Whether you’re a researcher, practitioner, or enthusiast, this article will provide valuable insights into the state-of-the-art techniques and tools in computer vision.

Our must-read articles

Towards 3D Deep Learning: Artificial Neural Networks with Python by Florent Poux, Ph.D.

PyTorch LSTM – Shapes of Input, Hidden State, Cell State, And Output by Sujeeth Kumaravel

Walkthrough of Graph Attention Network (GAT) with Visualized Implementation by David Winer

If you are interested in publishing with Towards AI, check our guidelines and sign up. We will publish your work to our network if it meets our editorial policies and standards.

Job offers

Senior Deep Learning Algorithm Engineer @NVIDIA (Santa Clara, CA, USA)

Senior Software Engineer β€” Python Backend @Teramind (Remote)

Staff Deep Learning NLP Engineer @H1 (Remote)

Sr. Machine Learning Researcher @Casetext (Remote)

Machine Learning Success Manager @Snorkel AI (Remote)

Software Engineer @Sonera (Berkeley, CA, USA)

Interested in sharing a job opportunity here? Contact [email protected].

If you are preparing for your next machine learning interview, don’t hesitate to check out our leading interview preparation website, Confetti!

https://www.confetti.ai/

Join thousands of data leaders on the AI newsletter. Join over 80,000 subscribers and keep up to date with the latest developments in AI, from research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.

Published via Towards AI
