This AI newsletter is all you need #34
Last Updated on February 15, 2023 by Editorial Team
What happened this week in AI by Louis
This week was rather chaotic in the world of large language models (LLMs) and “Generative AI”, as large tech companies scrambled to showcase their technology in the wake of ChatGPT’s success. Microsoft announced an AI-powered version of the Bing search engine that incorporates OpenAI’s ChatGPT technology into its Edge browser. In response, Alphabet announced its alternative to ChatGPT, named Bard. However, promotional material for Bard contained inaccurate information, raising questions about whether Google rushed its release. Meanwhile, Chinese web giant Baidu is preparing to launch a generative AI chatbot, ERNIE, later this year.
What people call “Generative AI” increasingly looks like the next major platform on which founders and startups will build new products. The barriers to entry for starting a business have been lowered: you can now rapidly and affordably create a prototype or minimum viable product by prompting or fine-tuning the APIs of LLMs like ChatGPT. But it is difficult to know how the ecosystem will play out: which capabilities and products will be built into the LLMs themselves and owned by the likes of OpenAI, Microsoft, and Google, and which will be delivered by the surrounding startup ecosystem.
This week we published a new blog post, Learn Prompting 101: Prompt Engineering Course & Challenges, summarizing prompt engineering: how to talk to LLMs and get the most out of them. It serves as an introduction to the comprehensive open-source Learn Prompting course that we have contributed to.
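To make the idea concrete, here is a minimal sketch of two common prompt-engineering patterns covered in courses like Learn Prompting: role prompting (telling the model who it is) and few-shot examples (showing it the task before asking). The template wording and the `build_prompt` helper are illustrative assumptions, not taken from the course itself.

```python
# Sketch of role prompting + few-shot prompting. The exact wording is
# illustrative; real prompts are tuned per model and task.

def build_prompt(role: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a prompt with a role instruction and few-shot Q/A examples."""
    lines = [f"You are {role}."]
    for question, answer in examples:
        lines.append(f"Q: {question}")
        lines.append(f"A: {answer}")
    # The final unanswered question is what we want the LLM to complete.
    lines.append(f"Q: {query}")
    lines.append("A:")
    return "\n".join(lines)

prompt = build_prompt(
    role="a helpful translator from English to French",
    examples=[("Hello", "Bonjour"), ("Thank you", "Merci")],
    query="Good night",
)
print(prompt)
```

The resulting string would be sent as the prompt to whichever LLM API you are using; the few-shot pairs steer the model toward the intended task without any fine-tuning.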
1. Alphabet Stock Plunge Erases $100 Billion After New AI Chatbot Gives Wrong Answer In Ad
Questions were raised about whether Alphabet rushed the release of its new Bard LLM after promotional material contained inaccurate information. This comes as Microsoft begins to integrate ChatGPT into its Bing search engine and opens a debate about how generative AI will change the way people browse and interact with the internet going forward.
2. Generative AI: The Next Consumer Platform
We’ve entered the age of generative AI. It could be the next major platform upon which founders build category-defining products. This article explores the main consumer categories with opportunities like search and product discovery, education, dating, coaching, e-commerce, and more.
3. Hands-on with the new Bing: Microsoft’s step beyond ChatGPT
Microsoft announced a new AI-powered version of the Bing search engine using the same technology behind ChatGPT. Unlike ChatGPT, Microsoft has integrated these chatbot capabilities directly into its Edge browser, allowing you to ask questions about real-time news and events as they unfold.
4. China’s Baidu reveals generative AI chatbot based on language model bigger than GPT-3
The Chinese web giant, Baidu, has made AI the focus of its hyperscale cloud and is set to launch a generative AI chatbot later this year. According to a Baidu spokesperson, the company plans to complete internal testing in March before making the chatbot available to the public. The spokesperson added that what sets ERNIE apart from other language models is its exceptional understanding and generation capabilities, thanks to its ability to integrate extensive knowledge with massive data.
5. Announcing the launch of the Medical AI Research Center (MedARC)
The Medical AI Research Center (MedARC) is a newly announced open and collaborative research center dedicated to advancing the field of AI in healthcare. MedARC aims to develop large AI models, also known as foundation models, for use in medicine and to build interdisciplinary teams that can address clinical needs.
Five 5-minute reads/videos to keep you learning
1. Understanding Large Language Models — A Transformative Reading List
In just five years, large language models (transformers) have revolutionized the field of natural language processing. To help researchers and practitioners get started with these models, this article provides a chronological reading list of academic research papers. The list covers the main architecture and tasks, scaling laws, improving efficiency, and steering large language models to intended goals and interests.
2. Machines Learn Better if We Teach Them the Basics
Although AI agents have shown impressive performance in certain tasks, they often struggle to generalize to new environments and lack the abstract skills necessary to succeed in diverse contexts. This limitation arises from their limited foundation of concepts and the vast space of possibilities they must explore. To overcome this limitation, computer scientists are developing new techniques to teach machines foundational concepts before unleashing them into the wild. This article delves into the details of these emerging approaches and their potential impact on AI development.
3. Boltus, The God of AI — A four-episode series of learning to use AI with funny production
This Twitter series is a four-episode guide to using AI, covering a range of topics. These include deploying diffusion models at scale, building text-to-image generators, integrating Stable Diffusion into a Slack workspace, and improving the speed of serving Stable Diffusion by 3x.
4. Solving a machine-learning mystery
Scientists from MIT, Google Research, and Stanford University are studying a phenomenon called in-context learning, in which a large language model learns a new task after seeing only a few examples, without updating its parameters. The phenomenon could be explained by smaller, simpler linear models embedded in the larger model that can be trained to complete the new task using only information already present in the context. This research sheds light on the learning algorithms large models can use and could help models complete new tasks without costly retraining.
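As a toy illustration of what it means to learn a linear task from a few in-context examples, here is a small pure-Python sketch. This is an assumption-laden simplification, not the paper's actual construction: it simply shows that a simple linear learner can recover a hidden rule like y = 2x + 1 from three example pairs in closed form, with no gradient updates to any large model.

```python
# Toy illustration (not the researchers' actual construction): learning a
# linear rule y = a*x + b from a handful of example pairs, the kind of
# task used to probe in-context learning, via closed-form least squares.

def fit_linear(examples: list[tuple[float, float]]) -> tuple[float, float]:
    """Least-squares fit of y = a*x + b from (x, y) example pairs."""
    n = len(examples)
    sx = sum(x for x, _ in examples)
    sy = sum(y for _, y in examples)
    sxx = sum(x * x for x, _ in examples)
    sxy = sum(x * y for x, y in examples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Three "in-context" examples of the hidden rule y = 2x + 1.
a, b = fit_linear([(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)])
print(a, b)           # recovers slope 2.0 and intercept 1.0
print(a * 10.0 + b)   # predicts 21.0 for the new input x = 10
```

The hypothesis described above is that a transformer's forward pass can implicitly carry out something analogous to this fit over the examples in its prompt, which is why no parameter updates are needed.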
5. The Most Important Job Skill of This Century
A product race is underway in the world of artificial intelligence. AI evangelists believe generative AI will become the overlay for search engines, as well as creative work, memo writing, research, homework, sketching, outlining, storyboarding, and teaching. This means that the future of work could depend on how well people can talk to AI and the skill required to do so: prompt engineering. This article explains why.
Enjoy these papers and news summaries? Get a daily recap in your inbox!
The Learn AI Together Community section!
Upcoming Community Events
The Learn AI Together Discord community hosts weekly AI seminars to help members learn from industry experts, ask questions, and get deeper insight into the latest research in AI. Join us for free, interactive video sessions hosted live on Discord by attending our upcoming events.
1. The Neural Network Architecture Seminar (#6)
This week’s session in the (free) nine-part Neural Networks Architectures series will be led by Pablo Duboue (DrDub) and focuses on Transformer Networks. During this session, he will explore topics such as transformers, BERT, GPT, T5, pretraining, transfer learning, zero-shot, and few-shot learning. Find the link to the seminar here or add it to your calendar here.
Date & Time: 14th February, 11 pm EST
Learn AI Together’s weekly reading group offers informative presentations and discussions on the latest developments in AI. It is a great (free) event to learn, ask questions, and interact with community members. Join the upcoming reading group discussion here.
Date & Time: 18th February, 10 pm EST
Add our Google calendar to see all our free AI events!
Meme of the week!
Meme shared by neuralink#7014
Featured Community post from the Discord
Doomlaser#5687 created a post about integrating the OpenAI API into a roguelike FPS game to generate randomized, emergent character dialogue with live text-to-speech. Instead of manually writing the dialogue, each character receives prompts based on their background and current situation, leading to unique and dynamic conversations. Check out the full post here and support a fellow community member. Share your feedback in the thread here.
AI poll of the week!
Find out the answer on Discord!
TAI Curated section
Article of the week
Understand the Fundamentals of an Artificial Neural Network by Janik Tinz
ANNs are typically implemented using frameworks like TensorFlow, Keras, or PyTorch, which are well-suited to very complex networks. However, data scientists still need a fundamental understanding of how ANNs work. This article introduces the basics of ANNs and then explains their core concepts in depth.
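To make those fundamentals concrete, here is a minimal pure-Python sketch of the forward pass that frameworks like TensorFlow or PyTorch perform under the hood: each neuron computes a weighted sum of its inputs plus a bias, then applies a nonlinear activation. The weights and biases below are arbitrary illustrative values, not trained ones.

```python
import math

def sigmoid(z: float) -> float:
    """Classic squashing activation: maps any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    """One neuron: weighted sum of inputs plus bias, then activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

# A tiny 2-input, 2-hidden-neuron, 1-output network: the hidden
# activations become the inputs to the output neuron.
x = [0.5, -1.0]
h1 = neuron(x, [0.8, 0.2], bias=0.1)
h2 = neuron(x, [-0.4, 0.9], bias=0.0)
y = neuron([h1, h2], [1.0, -1.0], bias=0.5)
print(y)  # a single value between 0 and 1
```

Training would then adjust the weights and biases via backpropagation; the frameworks mentioned above automate exactly that on top of this same forward computation.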
Our must-read articles
Six Amazing Unknown Python Libraries by Dhilip Subramanian
An Intuitive Explanation of Policy Gradient by Renu Khandelwal
If you are interested in publishing with Towards AI, check our guidelines and sign up. We will publish your work to our network if it meets our editorial policies and standards.
Machine Learning Infrastructure Engineer (Generative AI) @PhotoRoom (Remote)
Senior Data Engineer @Clarity AI (Remote)
Machine Learning Engineer — Central Platform Team (MLOps / MLInfra) @Canva (Remote)
Senior Data Scientist I @Signifyd (Remote)
Staff Data Scientist @Zapier (Remote)
Data Analyst @Pocket Worlds (Remote)
Data Scientist @MyFitnessPal (Remote)
If you are preparing your next machine learning interview, don’t hesitate to check out our leading interview preparation website, Confetti!