This AI newsletter is all you need #41
Last Updated on July 25, 2023 by Editorial Team
Author(s): Towards AI Editorial Team
Originally published on Towards AI.
What happened this week in AI by Louie
This week, the focus was on AI safety, privacy, and regulation. While the emergence of the next generation of AI models brings many advantages, the democratization, accessibility, and affordability of generative AI tools, together with the increased capabilities of LLMs, have created significant potential for misuse and misinformation, whether produced by users or by the systems themselves. The gap between the pace of AI progress and the development of safety measures and regulations makes the discussion of AI safety all the more pressing.
Last week, a radical step toward AI safety was taken with an open letter from the Future of Life Institute (FLI). The letter, signed by over 50,000 people, urges all AI labs to pause the training of AI systems more powerful than GPT-4 for at least six months. It highlights the risks of AI, criticizing the "out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control." It also emphasizes the lack of appropriate planning and management for this potentially highly disruptive technology.
The FLI letter has sparked a wide range of opinions on whether LLMs and AI should be regulated and what the actual risks are, with responses both for and against highlighting the risks their authors perceive. Eliezer Yudkowsky, a researcher at the Machine Intelligence Research Institute, wrote an article expressing support for the letter's intentions while explaining why he did not sign it: he believes the letter understates the severity of the situation and does not request enough action to address it. In contrast, Yann LeCun has been dismissive of the need for regulation, responding with an "Okay doomer…" attitude.
Another recent development towards AI safety is Italy's ban on the use of ChatGPT due to privacy concerns, citing non-compliance with EU data protection regulations (GDPR) by OpenAI. Consequently, OpenAI has been compelled to implement a complete geo-block on the usage of ChatGPT within Italy.
At Towards AI, we see several potential risks associated with AI, including 1) the amplification of misinformation through automated propaganda, deepfakes, and other tools that empower bad actors, 2) over-reliance and overconfidence in systems that still make mistakes, 3) the misalignment of existing laws and regulations, such as copyright and GDPR, 4) social and economic disruption resulting from rapid AI adoption and its impact on jobs and existing industries, and 5) existential risks from superintelligence or misaligned AGI.

While the open letter makes valid points about AI risks and the need for regulation, we don't believe a pause in AI development would be effective or desirable, as it is difficult to ensure that other countries, such as China, wouldn't continue to develop these models. AI progress and adoption are inevitable, and countries that limit its use are likely to fall behind. However, we do believe that more thought, care, and investment should go into optimizing the odds of positive AI outcomes while minimizing risks. We also think AI should be regulated, and governments should establish new departments, policies, and internal expertise to pre-empt and manage some of these risks. Although it is difficult to determine the likelihood of misaligned AGI risks and their timescale, given what's at stake, it makes sense to invest heavily in researching and managing these risks, even if the odds are small.
– Louie Peters, Towards AI Co-founder and CEO
Hottest News
1. Twitter open-sources its tweet recommendation algorithm
Twitter has made its tweet recommendation algorithm code available on GitHub, providing insight into the factors that determine whether a tweet appears on a user's timeline. The blog post accompanying the code release serves as an introduction to how the algorithm selects tweets for the timeline.
2. The Only Way to Deal With the Threat From AI? Shut It Down
Time magazine published an opinion piece by Eliezer Yudkowsky in response to the letter by FLI. He stated, "I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it." According to him, AI poses an existential threat, and we are not adequately prepared to deal with it. Therefore, he argued that it is necessary to "shut it all down."
3. ChatGPT gets "eyes and ears" with plugins that can interface AI with the world
OpenAI's plugins expand ChatGPT's capabilities to interact with the Internet, enabling functions like flight booking, grocery ordering, web browsing, and more. These plugins are small pieces of code that instruct ChatGPT on how to utilize external online resources. However, some AI researchers worry that giving AI models access to external systems could cause harm, even without any consciousness or sentience involved.
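In practice, a plugin is essentially a web service plus a manifest and OpenAPI description that tell ChatGPT when and how to call it. Below is a minimal sketch of what such a backend might look like; the route, catalog, and field names are hypothetical illustrations, not OpenAI's actual plugin spec.

```python
# Minimal sketch of a plugin-style backend (hypothetical example).
# ChatGPT would discover an endpoint like this via the plugin's
# ai-plugin.json manifest and OpenAPI description, then call it over HTTP.
from fastapi import FastAPI

app = FastAPI()

# Toy in-memory "grocery" catalog; a real plugin would query a live service.
CATALOG = {"milk": 2.49, "eggs": 3.99, "bread": 2.19}

@app.get("/prices/{item}")
def get_price(item: str) -> dict:
    """Return an item's price; the model calls this when a user asks."""
    price = CATALOG.get(item.lower())
    return {"item": item, "price": price, "found": price is not None}
```

The model never executes this code itself; it only composes HTTP requests against the described endpoints, which is also why researchers flag external access as the point where model mistakes can turn into real-world actions.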
4. What We Still Don't Know About How A.I. Is Trained
One of the prominent aspects of GPT-4 is its ability to respond to queries with confidence. However, this is both a feature and a bug. The developers of GPT-4 acknowledge in a technical report that it can sometimes make basic reasoning mistakes that are inconsistent with its proficiency across numerous domains.
5. The Italian government has banned ChatGPT
The Italian data protection agency has ordered OpenAI to block ChatGPT in Italy, citing unlawful data gathering. Their main concern is privacy violations, arguing that OpenAI is non-compliant with EU data protection regulations (GDPR). OpenAI has complied with the order by disabling ChatGPT for users in Italy.
Three 5-minute reads/videos to keep you learning
1. Using AI to build a marketing campaign in 30 minutes
In an experiment, AI was used to generate a comprehensive marketing campaign in just 30 minutes for a new educational game launch. The AI conducted market research, developed a website and social media campaign, and more. This post explores the potential and disruptive power of AI in marketing.
2. Malleable software in the age of LLMs
This article explores the significant changes that LLMs may enable in how software is created, distributed, and used. It addresses questions around interaction models, software customization, intent specification, and more.
3. Everything you need to know about prompting
This is an interview with Sander Schulhoff, the creator of learnprompting.org, the largest online resource for prompting. It explores the exciting skill of prompting, which can lead to various opportunities and enhance productivity. It discusses the significance of learning this skill and provides tips for improving it.
4. From Deep to Long Learning?
This article provides a summary of recent work from Stanford aimed at significantly increasing the context window for language models. By enabling longer prompts and outputs, this advancement may lead to new possibilities in tasks such as summarizing entire books, editing entire repositories of code, and generating multimodal videos.
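Much of this line of work replaces attention's quadratic cost with long convolutions computed via the FFT in O(N log N) time. As an illustration of that core primitive only (not the papers' actual architectures), here is a minimal NumPy sketch:

```python
import numpy as np

def fft_long_conv(u: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Causal convolution of a length-n signal u with a filter k spanning
    the whole sequence, computed in O(n log n) via the FFT instead of O(n^2)."""
    n = len(u)
    fft_size = 2 * n  # zero-pad to avoid circular wraparound
    y = np.fft.irfft(np.fft.rfft(u, fft_size) * np.fft.rfft(k, fft_size), fft_size)
    return y[:n]  # keep the causal part

# Example: an 8k-step sequence with a filter as long as the context itself.
u = np.random.randn(8192)
k = np.random.randn(8192)
out = fft_long_conv(u, k)
```

Because the cost grows near-linearly with sequence length, context windows can be pushed far beyond what quadratic attention comfortably allows.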
5. A brief history of GPT models
This article offers an overview of the history of Generative Pre-trained Transformer (GPT) research, emphasizing the latest state-of-the-art models and their distinctions. It showcases how current GPT research is leading to significant advancements in the field.
Papers & Repositories
1. Vicuna: an open-source chatbot fine-tuned from LLaMA
Vicuna-13B is an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. Preliminary evaluation using GPT-4 as a judge shows that Vicuna-13B achieves over 90% of the quality of OpenAI's ChatGPT and Google Bard while outperforming other models like LLaMA and Stanford Alpaca in over 90% of cases.
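The "GPT-4 as a judge" setup can be approximated with a short prompt that asks the model to score two candidate answers. A minimal sketch using the era-appropriate OpenAI Python SDK follows; the prompt wording and score format are our own illustration, not the Vicuna team's actual evaluation harness.

```python
import openai  # 0.x-era SDK interface; requires openai.api_key to be set

JUDGE_PROMPT = """Compare the two assistant responses to the question below.
Rate each for helpfulness, relevance, accuracy, and level of detail,
then output two 1-10 scores as "score_a score_b" on the last line.

Question: {question}
Response A: {answer_a}
Response B: {answer_b}"""

def judge(question: str, answer_a: str, answer_b: str) -> str:
    """Ask GPT-4 to grade a pair of answers to the same question."""
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0,  # deterministic judging
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(
            question=question, answer_a=answer_a, answer_b=answer_b)}],
    )
    return resp["choices"][0]["message"]["content"]
```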
2. Meet in the Middle: A New Pre-training Paradigm
This paper introduces a new pre-training paradigm that improves both the training data efficiency of LMs and their capabilities in the infilling task. Its effectiveness is demonstrated through extensive experiments on both programming and natural language models, where it outperforms strong baselines.
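The paradigm trains a forward and a backward LM jointly and encourages their predictions to agree. The sketch below shows one way such an agreement-regularized objective could look; the position alignment and the exact regularizer are assumptions for illustration, not the paper's precise formulation.

```python
import torch.nn.functional as F

def meet_in_the_middle_loss(logits_fwd, logits_bwd, targets, reg_weight=1.0):
    """Sketch: standard next-token losses for a forward and a backward LM,
    plus a co-regularization term pushing their per-token distributions to
    agree. Assumes both logits tensors (batch, seq, vocab) are already
    aligned to the same target positions."""
    nll_fwd = F.cross_entropy(logits_fwd.flatten(0, 1), targets.flatten())
    nll_bwd = F.cross_entropy(logits_bwd.flatten(0, 1), targets.flatten())
    p_fwd, p_bwd = logits_fwd.softmax(-1), logits_bwd.softmax(-1)
    agreement = (p_fwd - p_bwd).abs().sum(-1).mean() / 2  # total variation
    return nll_fwd + nll_bwd + reg_weight * agreement
```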
3. LLaMA-Adapter: Efficient Fine-tuning of Language Models
LLaMA-Adapter is an effective method for fine-tuning models into instruction-following ones using learnable prompts. With multi-modal inputs, it produces high-quality responses and achieves strong reasoning capabilities. Using 52K self-instruct demonstrations, LLaMA-Adapter introduces only 1.2M learnable parameters on top of the frozen LLaMA 7B model and takes less than one hour to fine-tune on 8 A100 GPUs.
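The core idea, prepending a small number of trainable prompt embeddings to a frozen model's hidden states, can be sketched in a few lines of PyTorch. Note that the real method gates the attention over the prompts rather than the prompt embeddings themselves; the simplified module below only illustrates the parameter-efficiency idea.

```python
import torch
import torch.nn as nn

class PromptAdapter(nn.Module):
    """Illustrative adapter: trainable prompt tokens prepended to the hidden
    states of a frozen transformer layer, scaled by a zero-initialized gate
    so the prompts contribute nothing at the start of training."""
    def __init__(self, n_prompts: int = 10, d_model: int = 4096):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(n_prompts, d_model) * 0.02)
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, d_model) from the frozen backbone
        batch = hidden.size(0)
        p = self.gate.tanh() * self.prompts        # gated prompt embeddings
        p = p.unsqueeze(0).expand(batch, -1, -1)   # broadcast over the batch
        return torch.cat([p, hidden], dim=1)       # prepend prompt tokens
```

Only the prompts and the gate are trained, which is why the added parameter count stays in the low millions while the 7B backbone remains frozen.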
4. ChatGPT Outperforms Crowd-Workers for Text Annotation Tasks
This paper investigates the potential of large language models (LLMs) for text annotation tasks, specifically focusing on ChatGPT. The paper shows that ChatGPT zero-shot classifications, without any additional training, outperform MTurk annotations and achieve this at a significantly lower cost.
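Reproducing the paper's setup amounts to sending each text to the chat API with a fixed labeling instruction. A minimal sketch follows; the label set and prompt are hypothetical placeholders, not the paper's actual annotation scheme.

```python
import openai  # 0.x-era SDK; requires openai.api_key to be set

LABELS = ["relevant", "irrelevant"]  # hypothetical annotation scheme

def annotate(text: str) -> str:
    """Zero-shot label a text with ChatGPT, standing in for a crowd-worker."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,  # make labels reproducible
        messages=[{"role": "user", "content":
            f"Classify the following text as one of {LABELS}. "
            f"Answer with the label only.\n\nText: {text}"}],
    )
    return resp["choices"][0]["message"]["content"].strip()
```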
5. A Comprehensive Survey of AI-Generated Content
This paper provides a survey of Artificial Intelligence Generated Content (AIGC), highlighting recent advancements in complex modeling and large datasets, and exploring new ways to integrate technologies such as reinforcement learning. It also offers a comprehensive review of the history of generative models, covering both unimodal and multimodal interaction.
Enjoy these papers and news summaries? Get a daily recap in your inbox!
The Learn AI Together Community section!
Weekly AI Podcast
Louis Bouchard has launched a weekly podcast that demystifies the various roles in the AI industry and digs into interesting AI topics with expert guests. It is available on YouTube, Spotify, and Apple Podcasts. In the latest episode, Louis interviews Sander Schulhoff, the creator of Learn Prompting, the most comprehensive guide on prompt engineering; as shared in our learning section above, the conversation demystifies prompting and condenses it into a one-hour discussion. That is the goal of the podcast: each week, an expert helps demystify a specific AI topic, sub-field, or role, sharing knowledge they have worked hard to gather.
A small teaser for the next episode: it will be about self-driving cars!
Meme of the week!
Meme shared by neuralink#7014
Featured Community post from the Discord
Oliver Z#1100 has created a Chrome extension called TwOp that can generate AI-powered social media posts by entering topics, keywords, themes, and desired tones. The extension is open-source and available for download on the Chrome Web Store and GitHub. Check it out and support a fellow community member. Share your feedback or questions in the thread here.
AI poll of the week!
Join the discussion on Discord.
TAI Curated section
Article of the week
Googleβs New AI Model PaLM-E Explained by Louis Bouchard
In this article, the author provides an in-depth tour of PaLM-E, Google's latest publication, described as an embodied multimodal language model. This means it can comprehend various types of data, including text and images, by building on the ViT vision model and the PaLM language model.
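Architecturally, the multimodal trick is to project continuous features from the vision encoder into the language model's token-embedding space so they can be consumed like ordinary tokens. Here is a simplified sketch (PaLM-E actually interleaves image tokens at placeholder positions inside the text; dimensions below are illustrative):

```python
import torch
import torch.nn as nn

class VisionToLM(nn.Module):
    """Project ViT patch features into the LM's token-embedding space and
    splice them into the text sequence as extra 'tokens'."""
    def __init__(self, vit_dim: int = 1024, lm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Linear(vit_dim, lm_dim)  # learned affine projection

    def forward(self, text_emb: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
        # text_emb: (batch, n_tokens, lm_dim); image_feats: (batch, n_patches, vit_dim)
        img_tokens = self.proj(image_feats)              # (batch, n_patches, lm_dim)
        return torch.cat([img_tokens, text_emb], dim=1)  # one multimodal sequence
```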
Our must-read articles
Foundation Models: Scaling Large Language Models by Luhui Hu
How to Build a Low-Code Sales Dashboard with Python and Deepnote by Asish Biswas
StyleGAN2: Improve the Quality of StyleGAN by Albert Nguyen
If you are interested in publishing with Towards AI, check our guidelines and sign up. We will publish your work in our network if it meets our editorial policies and standards.
Job offers
Senior Software Security Engineer, Cloud Security @Anthropic (San Francisco, USA/Hybrid)
Director, Product β Data Platform @Biobot Analytics (Remote)
Data Scientist (H/F/X) @Shadow (Paris, France/Hybrid)
Large Language Model (LLM) Developer @Asimov (Boston, MA, USA)
Machine Learning Engineer/Scientist @Unlearn.AI (San Francisco, USA/Hybrid)
AI Research Assistant @Modern Intelligence (Austin, TX, USA)
Data Engineer β Jr/Mid Level @pulseData (Remote)
AI/ML Engineer @career (Remote)
Interested in sharing a job opportunity here? Contact [email protected].
If you are preparing for your next machine learning interview, don't hesitate to check out our leading interview preparation website, Confetti!
Join over 80,000 subscribers and data leaders on the AI newsletter to keep up to date with the latest developments in AI, from research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.
Published via Towards AI