This AI newsletter is all you need #52
Last Updated on June 20, 2023 by Editorial Team
Towards AI: Free in-depth practical Generative AI course announcement!
We're excited to announce our collaboration with Activeloop and the Intel Disruptor Initiative in creating Gen AI 360: The Foundational Model Certification. The inaugural course of the three-part series focuses on LangChain and Deep Lake, a vector database for all AI data in production. With over 50 lessons, including 10 practical projects across 8 modules and 5 full days of study time, the course offers a deep dive into AI technology, guiding you through creating innovative, production-grade AI tools and solutions.
Enroll today for free at learn.activeloop.ai.
The curriculum centers on LangChain and Deep Lake, essential tools for working with Large Language Models (LLMs), and covers a broad range of topics, from the basics of LLMs to building automated sales or customer support agents with LangChain and Deep Lake.
The course is offered free of charge, and we invite you to join our Discord Channel and the Activeloop Slack community for any queries. Our primary goal is to educate and upskill our community of over 385,000 AI developers and assist them in driving AI adoption in their respective fields.
What happened this week in AI by Louie
Are we on the right track to achieve Artificial General Intelligence (AGI)? This is a discussion that's popping up all over the place now that ChatGPT and Large Language Models (LLMs) are taking over. First, what does AGI mean? It refers to the development of AI systems that possess the ability to understand, learn, and perform tasks across a wide range of domains with the same level of proficiency as a human being.
Meta's AI chief scientist, Yann LeCun, believes we have a long way to go! He pointed out that current LLMs focus primarily on language and lack emotions, creativity, sentience, and consciousness, which leads him to argue that these models are not even as smart as dogs. Furthermore, the Allen Institute for AI and three universities have collaborated on a project to investigate the constraints and limitations of Transformers, the current cutting-edge NLP architecture. Their research showed that these models learn to solve complex problems through step-by-step pattern matching, which suggests they do not develop genuine problem-solving skills.
While it's true that achieving Artificial General Intelligence (AGI) may still be far off, it is essential to acknowledge the limitations of current technologies, especially during this time of heightened excitement. Identifying and addressing current limitations can pave the way for future advancements!
Hottest News
OpenAI's GPT-4 language model can now leverage external tools to accomplish tasks, marking a significant improvement over previous versions of GPT. This development has two major implications: first, it significantly enhances the power of GPT models; second, it replaces some functionality of open-source libraries built for the same purpose.
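Conceptually, tool use works by having the model emit a structured call (typically JSON with a function name and arguments) that the application executes locally and whose result is fed back to the model. A minimal sketch of that dispatch step, using hypothetical tool names rather than OpenAI's actual API surface:

```python
import json

# Hypothetical local tools the model may call; the names and argument
# schemas here are illustrative, not part of any real API.
TOOLS = {
    "get_weather": lambda city: f"22C and sunny in {city}",
    "add": lambda a, b: a + b,
}

def dispatch(tool_call_json: str):
    """Route a model-emitted tool call (JSON with 'name' and
    'arguments') to the matching local function and return its result."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# Example: the model decided to call add(a=2, b=3).
result = dispatch('{"name": "add", "arguments": {"a": 2, "b": 3}}')
```

In a real integration, the result would be appended to the conversation so the model can compose its final answer.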
2. AMD reveals new AI chip to challenge Nvidia's dominance
AMD announced last Tuesday that its highly advanced GPU for artificial intelligence, the MI300X, is set to begin shipping to select customers later this year. This announcement presents the strongest challenge to Nvidia, which currently holds a dominant position in the AI chip market.
3. LLMs aren't even as smart as dogs, says Meta's AI chief scientist
According to Yann LeCun, Meta's AI chief scientist, LLMs are not even as smart as dogs. He argues that LLMs lack true intelligence as they cannot understand, interact with, or comprehend reality; their output is solely based on language training. LeCun asserts that genuine intelligence goes beyond language and highlights that most human knowledge has little to do with language.
4. OpenAI CEO Sam Altman Asks China to Help in AI Regulation
OpenAI CEO Sam Altman has extended an invitation to China, seeking collaboration in the development of "guardrails" for the AI sector in response to mounting concerns. China is anticipated to have a draft of its AI regulations ready for review this year, as the country takes steps to manage the proliferation of AI systems inspired by ChatGPT.
Signal is one of the platforms opposing the bill introduced by the UK government, which includes provisions to scan users' messages for harmful content, among other things. The president of the not-for-profit messaging app argues that existential warnings about AI help big tech companies solidify their power, and she discusses why the online safety bill may prove unworkable.
Five 5-minute reads/videos to keep you learning
The article explores techniques for accelerating the training and inference of large language models (LLMs) while using a large context window of up to 100K input tokens. These techniques include ALiBi positional embeddings, sparse attention, FlashAttention, multi-query attention, conditional computation, and 80 GB A100 GPUs.
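As one example from that list, ALiBi replaces learned positional embeddings with a static, head-specific linear penalty added to attention scores before the softmax. A minimal pure-Python sketch of the bias computation, following the geometric slope scheme from the ALiBi paper:

```python
def alibi_slopes(num_heads):
    """Head-specific slopes: the geometric sequence 2^(-8/n), 2^(-16/n),
    ... from the ALiBi paper. For 8 heads this is 1/2, 1/4, ..., 1/256."""
    return [2.0 ** (-8.0 * (h + 1) / num_heads) for h in range(num_heads)]

def alibi_bias(num_heads, seq_len):
    """bias[h][i][j] = slope_h * (j - i): a linear penalty on the
    attention score of query i attending to key j, growing with
    distance (added to q.k scores before the softmax)."""
    slopes = alibi_slopes(num_heads)
    return [[[m * (j - i) for j in range(seq_len)]
             for i in range(seq_len)]
            for m in slopes]
```

Because the bias depends only on relative distance, the model can extrapolate to sequence lengths longer than those seen in training, which is exactly what makes it attractive for large context windows.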
2. The New Language Model Stack
The adoption of language model APIs is giving rise to a new technology stack. This article examines data gathered from a survey conducted across 33 companies within the Sequoia network, aiming to gain insights into the applications being developed and the stacks being utilized. Nearly all of the surveyed companies utilize OpenAIβs GPT and consider a retrieval system to be a crucial component of their stack.
3. A Machine Learning Engineer's Guide to the AI Act
The AI Act marks a significant milestone in the regulatory framework for AI, indicating notable forthcoming changes for machine learning engineers. The enactment of the EU AI Act is expected in early 2024, with full enforcement set for 2026. This article provides essential information that AI/ML teams should be aware of regarding this newly regulated future.
4. Active learning clearly explained
Active learning enables the optimization of dataset annotation and facilitates the training of the best possible model with minimal training data. This tutorial offers an introduction to active learning, delving into its practical application through an innovative tool developed by Encord.
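The core loop of active learning is simple: train on the labeled set, score the unlabeled pool, and send the examples the model is least confident about to annotators. A minimal sketch of uncertainty sampling on a toy pool (illustrative only, not Encord's tool):

```python
def uncertainty_sample(probs, k):
    """Return the indices of the k pool examples with the lowest
    top-class probability -- the ones the current model is least
    sure about, and hence the most informative to label next."""
    confidence = [(max(p), i) for i, p in enumerate(probs)]
    return [i for _, i in sorted(confidence)[:k]]

# Toy pool of 4 unlabeled examples with binary classifier probabilities.
probs = [[0.90, 0.10],
         [0.55, 0.45],   # near the decision boundary
         [0.70, 0.30],
         [0.99, 0.01]]
picked = uncertainty_sample(probs, 2)  # these go to annotators next
```

After the selected examples are labeled, the model is retrained and the loop repeats, typically reaching target accuracy with far fewer annotations than random sampling.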
5. Using ChatGPT for Translation Effectively
ChatGPT has showcased its remarkable accuracy in handling translation tasks. In this post, you will learn how to utilize ChatGPT prompts to explore its translation capabilities. Specifically, you will discover how to translate a poem from English to Swedish, convert Julia code to Python, and enhance translation results.
Papers & Repositories
This paper presents a case study on the prevalence of LLM usage among crowd workers. By employing a combination of keystroke detection and synthetic text classification, the study estimates that 33–46% of crowd workers utilized LLMs while completing the assigned tasks.
2. Demystifying GPT Self-Repair for Code Generation
This paper examines the self-repair capabilities of GPT-3.5 and GPT-4 on the challenging APPS dataset, which comprises diverse coding challenges. The study finds that only GPT-4 demonstrates effective self-repair and identifies the feedback stage as a bottleneck: having GPT-4 provide feedback on programs generated by GPT-3.5, or having expert human programmers provide feedback on programs generated by GPT-4, leads to substantial performance improvements.
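The self-repair setup the paper studies can be sketched as a generate-test-feedback loop. In this toy version the "model" calls are stand-in Python functions rather than LLMs, so only the control flow is real:

```python
def run_tests(program, tests):
    """Run a candidate program on (input, expected) pairs; return None
    on success, or an error string that serves as repair feedback."""
    for inp, expected in tests:
        try:
            out = program(inp)
        except Exception as e:
            return f"crashed on {inp!r}: {e}"
        if out != expected:
            return f"wrong answer on {inp!r}: got {out!r}, want {expected!r}"
    return None

def self_repair(generate, repair, tests, max_rounds=3):
    """Generate a program, test it, and hand any failure feedback to
    the repair step (both would be LLM calls in the real setting)."""
    program = generate()
    for _ in range(max_rounds):
        feedback = run_tests(program, tests)
        if feedback is None:
            return program
        program = repair(program, feedback)
    return program

# Toy run: the first candidate mishandles negatives; "repair" fixes it.
tests = [(3, 3), (-3, 3)]
repaired = self_repair(lambda: (lambda x: x),
                       lambda prog, fb: (lambda x: abs(x)),
                       tests)
```

The paper's bottleneck finding maps onto the `repair` step: performance hinges on how informative the feedback passed into it is.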
3. Exploring the MIT Mathematics and EECS Curriculum Using Large Language Models
The paper assesses the performance of large language models in meeting the graduation requirements for Mathematics and EECS majors at MIT. The curated dataset comprises 4,550 questions and solutions from problem sets, midterms, and finals across all relevant courses. GPT-3.5 accomplishes one-third of the MIT curriculum, while prompt-engineered GPT-4 achieves a perfect solve rate on a test set excluding image-based questions.
GPT Engineer is another instance of the "AutoGPT" line of work. This tool is designed for programming applications: users specify the desired output, the AI seeks clarification if needed, and it then generates an entire codebase based on the given prompt.
5. Segment Any Point Cloud Sequences by Distilling Vision Foundation Models
This work introduces Seal, a novel framework that leverages vision foundation models (VFMs) to segment diverse automotive point cloud sequences. Seal exhibits three appealing properties: scalability, consistency, and generalizability. Moreover, it demonstrates substantial performance improvements over existing methods across 20 different few-shot fine-tuning tasks.
Enjoy these papers and news summaries? Get a daily recap in your inbox!
The Learn AI Together Community section!
Weekly AI Podcast
In this week's episode of the "What's AI" podcast, Louis Bouchard interviews Luis Serrano, an AI scientist, educator, and author known for his popular YouTube channel, Serrano.Academy, and the bestselling book "Grokking Machine Learning." Luis takes us on a journey through the realm of LLMs, discussing their capabilities, the significance of AI education, and much more. If you are curious about the latest advancements in large language models and want to explore the world of AI education, tune in to the episode on YouTube, Spotify, or Apple Podcasts.
Meme of the week!
Meme shared by Rucha#8062
Featured Community post from the Discord
Craenius has just shared their latest project, NeuralDistance, which uses monocular vision to estimate distances and detect objects with remarkable accuracy. The project applies the YOLOv3 (You Only Look Once) object detection algorithm to find objects in images or videos and estimate their distance from the camera, and it includes a neural network model for distance estimation based on specific object annotations. Take a look at it on GitHub and support a fellow community member. Share your feedback and questions in the thread here.
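For context, the classical baseline for monocular distance estimation is the pinhole camera model, which relates an object's known real-world height to its apparent height in pixels; a learned model like NeuralDistance's can refine beyond this, but the geometry is a useful reference. All values below are illustrative:

```python
def estimate_distance(focal_px: float, real_height_m: float,
                      bbox_height_px: float) -> float:
    """Pinhole-camera distance estimate: an object of known physical
    height appears smaller in the image the farther away it is.
    distance = focal_length_in_pixels * real_height / pixel_height."""
    return focal_px * real_height_m / bbox_height_px

# Example: a 700 px focal length and a 1.75 m pedestrian whose
# detection bounding box spans 175 px -> roughly 7 m away.
d = estimate_distance(700.0, 1.75, 175.0)
```

The weakness of this baseline, and the motivation for learned approaches, is that it assumes the object's true height is known and the bounding box is tight.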
AI poll of the week!
Join the discussion on Discord.
TAI Curated section
Article of the week
How AI is Used to Combat Social Engineering Attacks – Part 2 by John Adeojo
Social engineering attacks have emerged as the preferred form of cyber attack for criminals seeking to gain access to finances and data. This series of articles explores various AI approaches for detecting social engineering attacks. In this particular installment, we delve deeper into more advanced AI strategies, considering the potential of deep learning and generative AI approaches in combating phishing and other forms of social engineering attacks.
Our must-read articles
All About Time Series Pitfalls by Shrashti Singhal
Bagging vs. Boosting: The Power of Ensemble Methods in Machine Learning by Thomas A Dorfer
If you are interested in publishing with Towards AI, check our guidelines and sign up. We will publish your work to our network if it meets our editorial policies and standards.
Job offers
- Senior Software Engineer @Algolia (Remote)
- Senior Full Stack Engineer @ClosedLoop (Remote)
- Machine Learning Intern @Bayut (Dubai, UAE)
- Data Scientist @Zeal Group (London, UK)
- Software Engineer – AI Focus @Pathway (Remote)
- Data Scientist @Obviously AI (Remote)
- NLP Engineer @Moveo AI (Athens, Greece)
Interested in sharing a job opportunity here? Contact [email protected].
If you are preparing for your next machine learning interview, don't hesitate to check out our leading interview preparation website, confetti!
This AI newsletter is all you need #52 was originally published in Towards AI on Medium, where people are continuing the conversation by highlighting and responding to this story.