


This AI newsletter is all you need #51

Last Updated on July 25, 2023 by Editorial Team

Author(s): Towards AI Editorial Team

Originally published on Towards AI.

What happened this week in AI by Louie

While the focus lately has been on language foundation models, we are also excited to see AI used to discover new science and to optimize algorithms. DeepMind recently published a paper introducing AlphaDev, a model that sped up a sorting algorithm by up to 70% for small inputs, in part by identifying a redundant move instruction in the sorting routine. The approach builds on the AlphaZero reinforcement learning model, framing algorithm discovery as a game in which the agent iteratively searches for more efficient instruction sequences.

There has been ongoing debate about the usefulness of this approach. First, it is important to note that the pipeline is designed specifically to find a better sorting algorithm, so a new training run must be started from scratch for each new problem. In addition, a professor from the University of Wisconsin attempted to replicate the improvement by passing the compiled assembly code to GPT-4 and prompting it to find an optimization, with no reinforcement learning involved. While AlphaDev itself is limited in scope and has some shortcomings, we are optimistic about the potential for these methods to open new doors in the future.

– Louie Peters, Towards AI Co-founder and CEO

Hottest News

1. AlphaDev Discovers Faster Sorting Algorithms

AlphaDev, an AI system utilizing reinforcement learning, has successfully developed faster sorting algorithms for data organization. The system accomplishes this by starting from scratch and employing reinforcement learning to select computer assembly instructions. The new algorithms are up to 70% faster for shorter sequences and are integrated into the LLVM libc++ standard library.
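
To make this search framing concrete, here is a deliberately tiny toy in Python (our own illustration, not DeepMind’s code): instead of a learned agent, it brute-forces over compare-and-swap “programs” and reports the shortest one that correctly sorts every permutation of three elements. AlphaDev plays essentially the same game over real assembly instructions, with AlphaZero-style reinforcement learning in place of exhaustive search.

```python
# Toy illustration of "algorithm discovery as search" (not DeepMind's code):
# exhaustively find the shortest compare-and-swap network that sorts every
# permutation of n = 3 elements.
from itertools import combinations, permutations, product

def apply_network(network, values):
    vals = list(values)
    for i, j in network:
        if vals[i] > vals[j]:
            vals[i], vals[j] = vals[j], vals[i]
    return vals

def is_correct(network, n):
    return all(apply_network(network, p) == sorted(p) for p in permutations(range(n)))

n = 3
pairs = list(combinations(range(n), 2))   # all possible compare-and-swap "instructions"
best = None
for length in range(1, 6):                # try ever-longer programs
    for network in product(pairs, repeat=length):
        if is_correct(network, n):
            best = network
            break
    if best:
        break

print(f"Shortest sorting network found: {best} ({len(best)} compare-and-swaps)")
```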

2. Building the Data Framework for LLMs

LlamaIndex has raised $8.5M in seed funding and has built a toolkit for seamlessly integrating user data with LLMs. This integration enables the development of knowledge-intensive LLM apps, including search engines, chatbots, and analytics helpers. The project has gained remarkable traction, with 16K stars on GitHub, 20K Twitter followers, and 200K monthly downloads.
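
As a rough illustration of the workflow LlamaIndex targets, the sketch below indexes a folder of local documents and queries it with an LLM. It assumes a local data/ directory and an OpenAI API key in the environment, and exact class names vary between LlamaIndex releases, so treat it as a minimal sketch rather than the canonical API.

```python
# Minimal sketch of a LlamaIndex-style "data framework" workflow.
# Assumes: `pip install llama-index`, OPENAI_API_KEY set in the environment,
# and a local `data/` folder with a few text files. Class names may differ
# slightly across LlamaIndex versions.
from llama_index import SimpleDirectoryReader, VectorStoreIndex

# Load the user's documents and build a vector index over them.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Ask a question grounded in the indexed documents.
query_engine = index.as_query_engine()
response = query_engine.query("Summarize the key points in these documents.")
print(response)
```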

3. Japan Goes All In: Copyright Doesn’t Apply To AI Training

Japan has made a significant announcement stating that it will no longer enforce copyrights on data utilized for AI training. This decision aims to facilitate unrestricted AI research and foster healthy competition with Western counterparts. The policy permits AI to utilize any data “regardless of whether it is for non-profit or commercial purposes, whether it is an act other than reproduction, or whether it is content obtained from illegal sites or otherwise.”

4. RedPajama 7B is Now Available

The newly introduced RedPajama-INCITE models, specifically designed for few-shot tasks, demonstrate superior performance compared to similar models on HELM benchmarks. The project thoroughly analyzed the disparities with previous models and incorporated valuable feedback from the community. These models are now accessible to AI professionals under the Apache 2.0 license.
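
For readers who want to try the models, loading them with Hugging Face Transformers is straightforward. A minimal sketch is below; the checkpoint id is our assumption of the instruct variant's name on the Hub, so double-check it before running.

```python
# Minimal sketch: loading a RedPajama-INCITE model with Hugging Face Transformers.
# The checkpoint id below is assumed; verify the exact name on the Hugging Face Hub.
# Requires `pip install transformers accelerate` and a GPU for float16 inference.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "togethercomputer/RedPajama-INCITE-7B-Instruct"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Q: What is a large language model?\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```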

5. Bard Is Getting Better at Logic and Reasoning

Google has successfully combined the capabilities of advanced language models and traditional code to enhance Bard’s reasoning and math abilities. By employing this innovative method of implicit code execution, Bard’s accuracy has been significantly improved, achieving a remarkable boost of 30%.

Five 5-minute reads/videos to keep you learning

1. 🤗 Open LLM Leaderboard

The Hugging Face Open LLM Leaderboard is a valuable resource for tracking and comparing LLMs and chatbot models. Researchers submit their Transformers models for automated evaluation on a GPU cluster, making it easy to monitor progress across the field. The leaderboard evaluates models on multiple tasks covering science questions, commonsense inference, multitask accuracy, and truthful answers.

2. GPT Best Practices by OpenAI

This guide on GPT best practices delves into strategies and tactics for leveraging GPTs effectively. It highlights the significance of providing context and specific details to enhance the quality of results produced by GPTs. The guide also proposes tactics such as breaking down complex tasks into manageable components and measuring performance as a means to optimize GPT usage.
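
As a small sketch of two of those tactics, providing context and splitting a task into steps, the snippet below first extracts facts from a document and then summarizes using only those facts. It uses the openai Python package's ChatCompletion API as it existed in mid-2023; the model name and prompts are assumptions chosen for illustration.

```python
# Sketch of two GPT best-practice tactics: supply explicit context and break a
# complex task into steps. Uses the pre-1.0 openai package (mid-2023 API);
# reads OPENAI_API_KEY from the environment. Model name is an assumption.
import openai

document = "..."  # the text you want analyzed

# Step 1: extract the factual claims first.
facts = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You extract factual claims from documents as a bullet list."},
        {"role": "user", "content": f"Document:\n{document}\n\nList the factual claims."},
    ],
)["choices"][0]["message"]["content"]

# Step 2: summarize using only the extracted facts as context.
summary = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You write three-sentence summaries using only the provided facts."},
        {"role": "user", "content": f"Facts:\n{facts}\n\nWrite the summary."},
    ],
)["choices"][0]["message"]["content"]

print(summary)
```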

3. Are AI Startups Too Easy to Copy?

AI startups encounter formidable competition, and investors express concerns about their capacity to distinguish themselves within a highly saturated market. In this article, venture capitalists emphasize the importance of network effects and proprietary datasets when considering investments in AI startups that gain quick traction.

4. Is AI Killing the Stock Industry? A Data Perspective

This article takes a data-driven look at several questions: what the future of the stock image industry might be, whether photographers and stock contributors should consider quitting, and whether one should fully commit to producing AI-generated images.

5. Why AI Will Save the World

The article explores the potential of AI to revolutionize various fields. While there are concerns regarding its negative impact, the benefits of AI outweigh the risks when it is developed ethically and safely. AI can enhance human intelligence and contribute to better outcomes in all domains of activity.

Papers & Repositories

1. Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding

Video-LLaMA is a new language model designed for video understanding. It is built upon BLIP-2 and MiniGPT-4, incorporating two key components: the Vision-Language component and the Audio-Language component. Video-LLaMA enhances video accessibility by assisting in automated captioning, search, and navigation.

2. Fine-Grained Human Feedback Gives Better Rewards for Language Model Training

This paper introduces Fine-Grained RLHF as a way to improve the output quality of language models. It provides detailed, localized rewards that give explicit training signals and allow the language model to be tailored to specific requirements, and it outperforms training on traditional holistic feedback.
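
As a toy illustration of the idea (not the paper's code), the snippet below contrasts a single holistic reward with a reward assembled from per-sentence scores along several axes; the reward types and weights are assumptions made for this example.

```python
# Toy illustration (not the paper's implementation) of holistic vs. fine-grained
# rewards for a three-sentence model output. Reward types and weights are
# assumptions made for this example.

def holistic_reward(overall_score: float) -> float:
    # One scalar for the whole generation: a sparse, hard-to-attribute signal.
    return overall_score

def fine_grained_reward(span_scores, weights=(1.0, 0.5, 0.3)) -> float:
    """span_scores: one (factuality, relevance, fluency) tuple per sentence."""
    w_fact, w_rel, w_flu = weights
    # Credit and blame are assigned where each error occurs, giving the policy
    # a denser training signal than a single end-of-sequence reward.
    return sum(w_fact * f + w_rel * r + w_flu * fl for f, r, fl in span_scores)

# Example: the second sentence contains an unsupported claim (factuality = -1).
spans = [(1.0, 1.0, 1.0), (-1.0, 1.0, 1.0), (1.0, 0.5, 1.0)]
print(holistic_reward(0.5))        # whole-output score from a preference model
print(fine_grained_reward(spans))  # localized scores aggregated per sentence
```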

3. Orca: Progressive Learning from Complex Explanation Traces of GPT-4

This research introduces Orca, a 13-billion parameter model that learns to imitate the reasoning process of large foundation models (LFMs). It enhances the capabilities of smaller AI models through imitation learning and surpasses comparable models on complex reasoning benchmarks. It also demonstrates impressive performance on professional and academic exams such as the LSAT, GMAT, SAT, and GRE.

4. Simple and Controllable Music Generation

This paper introduces MusicGen, a single language model (LM) that operates over several streams of compressed, discrete music representations, i.e., tokens. Conditioned on textual descriptions or melodic features, it generates high-quality music while providing control over the output, and it uses an unsupervised melody conditioning technique to follow specific harmonic and melodic structures.
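
MusicGen ships with Meta's audiocraft library, and text-conditioned generation looks roughly like the sketch below; checkpoint names and the exact API may differ between audiocraft releases, so treat this as indicative rather than definitive.

```python
# Rough sketch of text-conditioned generation with MusicGen via audiocraft
# (`pip install audiocraft`). Checkpoint names and the exact API may vary
# between audiocraft releases.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("small")   # smallest text-to-music checkpoint
model.set_generation_params(duration=8)    # seconds of audio to generate

descriptions = ["lo-fi hip hop beat with warm piano", "upbeat acoustic folk"]
wavs = model.generate(descriptions)        # one waveform tensor per prompt

for i, wav in enumerate(wavs):
    # audio_write normalizes loudness and saves a .wav next to the script.
    audio_write(f"musicgen_sample_{i}", wav.cpu(), model.sample_rate, strategy="loudness")
```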

5. Tracking Everything Everywhere All at Once

This paper introduces OmniMotion, a novel test-time optimization method for estimating dense and long-range motion from a video sequence. OmniMotion surpasses traditional optical flow and particle video tracking methods in terms of motion estimation. It employs a globally consistent motion representation to guarantee precise tracking, estimate complete motion trajectories for every pixel, and model camera and object motion.

Enjoy these papers and news summaries? Get a daily recap in your inbox!

The Learn AI Together Community section!

Weekly AI Podcast

This week’s episode of the “What’s AI” podcast features Felix Tao, CEO of Mindverse AI, who spent years as a researcher at Facebook and Alibaba working mainly on language applications and AI. In this interview, Felix shares valuable insights into the evolution of AI, the advancements in large language models, and the delicate balance between research and practical applications. Tune in on YouTube, Spotify, or Apple Podcasts!

Meme of the week!

Meme shared by dimkiriakos#2286

Featured Community post from the Discord

MattDev#8623 has developed an open-source autonomous AI agent framework called SuperAGI, designed with a focus on developers. This framework empowers developers to construct, manage, and deploy effective autonomous agents efficiently. SuperAGI offers a range of features, including the ability to define agent clusters, fine-tune agent trajectories, monitor agent performance, and manage resources. Check it out on GitHub and support a fellow community member. Share your feedback, feature requests, and integration requests in the thread here!

AI poll of the week!

Join the discussion on Discord.

TAI Curated section

Article of the week

Optimizing Object Avoidance With Genetic Algorithm in Python by Kong You Liow

This article demonstrates the principles of the genetic algorithm on a 2-dimensional obstacle avoidance problem. A genetic algorithm is a metaheuristic that leverages the principles of natural selection and genetic inheritance to uncover near-optimal or optimal solutions. The discussion focuses on the algorithm itself, specifically its fundamental operators: selection, crossover, and mutation.
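
For readers who have not seen these operators in code, here is a compact, generic sketch (our own, not the article's implementation): the “genome” is a 2-D point, and the fitness function rewards proximity to a goal while heavily penalizing points inside a circular obstacle, a toy stand-in for the article's obstacle-avoidance setting.

```python
# Generic genetic-algorithm sketch showing selection, crossover, and mutation.
# Toy setup: evolve a 2-D point toward a goal while avoiding a circular obstacle.
import random

GOAL = (10.0, 10.0)
OBSTACLE, OBSTACLE_R = (5.0, 5.0), 2.0

def fitness(p):
    dist_goal = ((p[0] - GOAL[0]) ** 2 + (p[1] - GOAL[1]) ** 2) ** 0.5
    dist_obst = ((p[0] - OBSTACLE[0]) ** 2 + (p[1] - OBSTACLE[1]) ** 2) ** 0.5
    penalty = 100.0 if dist_obst < OBSTACLE_R else 0.0  # heavy penalty inside the obstacle
    return -(dist_goal + penalty)                        # higher is better

def select(pop, k=3):
    # Tournament selection: keep the fittest of k randomly chosen individuals.
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # Blend crossover: the child's coordinates average the parents'.
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

def mutate(p, rate=0.2, scale=1.0):
    # With some probability, perturb each coordinate with Gaussian noise.
    return tuple(c + random.gauss(0, scale) if random.random() < rate else c for c in p)

population = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(50)]
for generation in range(100):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(len(population))]

best = max(population, key=fitness)
print(f"best point: {best}, fitness: {fitness(best):.2f}")
```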

Our must-read articles

Making Models Smart: GPT-4 and Scikit-Learn by Ulrik Thyge Pedersen

Unlimiformer: Long-Range Transformers with Unlimited Length Input by Reza Yazdanfar

Computer Vision and Its Application in Facial Recognition and Object Classification by Raman Rounak

If you want to publish with Towards AI, check our guidelines and sign up. We will publish your work to our network if it meets our editorial policies and standards.

Job offers

Deep Learning Engineer in Artificial Intelligence Start-up @Gemmo (Remote)

Deep Learning Engineer @EnsoData (Remote)

Senior Machine Learning Engineer / Senior Python Developer @LoopMe (Ukraine)

Machine Learning Engineer @Teramind (Remote)

Mid/Sr Machine Learning Engineer @Zipdev (Remote)

Data Scientist (Machine Learning) @Qualitest (Remote)

Artificial Intelligence Engineer @Inbox Business Technologies (Remote/Freelance)

Interested in sharing a job opportunity here? Contact [email protected].

If you are preparing your next machine learning interview, don’t hesitate to check out our leading interview preparation website, Confetti AI!

https://www.confetti.ai/

Join thousands of data leaders on the AI newsletter. Join over 80,000 subscribers and keep up to date with the latest developments in AI, from research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.

Published via Towards AI
