This AI newsletter is all you need #74

Last Updated on November 22, 2023 by Editorial Team

Author(s): Towards AI Editorial Team

Originally published on Towards AI.

What happened this week in AI by Louie

This week, all attention was on the whirlwind series of events at OpenAI, which unfortunately eclipsed several interesting new model releases. It's safe to say you have followed the twists and turns of the drama, so we won't cover it here in detail, but in summary: OpenAI's board fired its CEO, Sam Altman, on Friday with no prior warning to key staff or stakeholders. The board justified its actions by saying Sam "was not consistently candid in his communications with the board." Still, even today, it has not given a clear reason to OpenAI employees, executives, or Microsoft. As things stand, 747 out of 770 OpenAI staff have signed a joint letter to OpenAI's board stating they may quit and follow Sam Altman and Greg Brockman to a new AI team at Microsoft unless the board resigns and reinstates Sam and Greg. The letter was also signed by co-founder Ilya Sutskever, who now regrets participating in the board's actions.

Perhaps strangest in all of this is the silence from the OpenAI board and the failure to explain their actions, which leaves their motives very unclear. Even more incredibly, the board apparently stated to OpenAI executives that allowing the company to be destroyed "would be consistent with the mission."

If the board's initial decision was driven by AI safety concerns, tension over monetization projects and investor profit sharing, and worries about Sam's leadership and board communication, then at this stage it should be clear to them that allowing OpenAI to collapse does not help their cause relative to backing down and letting OpenAI survive in its current form. If OpenAI staff leave en masse to join Microsoft (which doesn't have the same AGI safeguards), it also means transferring all of OpenAI's future potential profits (those beyond the profit cap set to accrue to the OpenAI charity) to a corporate entity at Microsoft. Sam would also still be in charge of the team and would have no OpenAI board to communicate with at all! Slowing down OpenAI also gives other competitors such as Google or Meta, or even other countries such as China, a chance to catch up. In all of these scenarios, the OpenAI board would have less power to control the future of AI.

Why should you care?

The significance of these events could range from trivial Silicon Valley politics, to a six-to-eighteen-month setback in AI progress, to even a pivotal moment in Earth's future, all depending on your views on the following questions:

  • What roadblocks will we hit in the current trajectory of LLM progress, and how globally impactful will LLM technology really be?
  • Is AGI really a risk, and should building safeguards for this be a priority at this stage?
  • How long would it take to rebuild OpenAI's GPT-4.5/5 model pipeline within Microsoft, including 1) the RLHF dataset and the ongoing pipeline from ChatGPT's 100 million weekly active users, 2) the pre-training dataset, and 3) the training software infrastructure?
  • How far behind are Google, Anthropic, Meta, xAI, and others from releasing a GPT-4.5/5-level model?
  • Could this cause enough disruption to allow China to take the lead from the US and impact geopolitics?
  • Will AI be a winner-takes-most market, where getting there first really makes all the difference?
  • Were OpenAI's charity status and AGI safeguards really set up to benefit humanity and redistribute wealth to all (after passing the several-hundred-billion-dollar profit cap) better than a public company like Microsoft would?

The leadership direction, organizational structure, culture, and location of the AI leader could well be globally significant, but we will have to wait and see how things play out!

– Louie Peters, Towards AI Co-founder and CEO

Hottest News

1. A Timeline of Sam Altman's Firing From OpenAI - And the Fallout

Sam Altman has stepped down as the CEO of OpenAI, leading to the resignation of the company's president and co-founder, Greg Brockman, and three senior OpenAI researchers. The situation is rapidly evolving, and this article provides a timeline to help you follow the unfolding events.

2. Kyutai Is a French AI Research Lab With a $330 Million Budget That Will Make Everything Open Source

Paris-based AI research lab Kyutai secured $330 million in funding to advance the development of artificial general intelligence. With these resources, Kyutai plans to conduct comprehensive research led by PhD students, postdocs, and researchers. Additionally, the lab prioritizes transparency in AI by openly sharing its models, source code, and data.

3. DeepMind Introduces Lyria, a Model for Music Generation

Google DeepMind's AI music model, Lyria, is transforming the music creation process by producing exceptional quality music with customizable vocals. The 'Dream Track' experiment on YouTube enables artists to connect with fans through AI-generated voice and music, while AI tools enhance the creative journey for professionals in the music industry.

4. GraphCast: AI Model for Faster and More Accurate Global Weather Forecasting

DeepMind has developed GraphCast, an advanced AI system that uses Graph Neural Networks to accurately and quickly predict global weather for up to 10 days in just a minute. It outperforms the industry-standard HRES system, can track cyclones and atmospheric rivers, and can identify extreme temperatures.

5. Nvidia Unveils H200, Its Newest High-End Chip for Training AI Models

Nvidia introduces the H200 GPU, an upgraded version with 141GB of high-bandwidth memory. The added memory helps with running inference on large AI models. Set to be released in Q2 2024, the H200 will compete with AMD's MI300X GPU, offering increased memory capacity for handling big models.

Do you think firing Sam Altman is the right move for OpenAI? Share your thoughts in the comments below!

Five 5-minute reads/videos to keep you learning

1. Applying OpenAI's RAG Strategies

OpenAI's RAG model incorporates various retrieval strategies: cosine similarity, multi-query, step-back prompting, Rewrite-Retrieve-Read, and efficient routing. This article expands on each method used in OpenAI's series of RAG experiments and shows how to implement each.
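
To make a couple of these strategies concrete, here is a minimal, library-agnostic Python sketch of cosine-similarity retrieval and multi-query retrieval. The `embed` and `llm` functions are hypothetical placeholders for whichever embedding model and chat model you use; this is not OpenAI's actual implementation.

```python
# Minimal, library-agnostic sketch of two of the retrieval strategies above.
# `embed` and `llm` are hypothetical placeholders for your own embedding model
# and chat model; they are not OpenAI's actual implementation.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query: str, docs: list[str], embed, k: int = 3) -> list[str]:
    """Rank documents by cosine similarity between the query and doc embeddings."""
    q_vec = embed(query)
    scored = [(cosine_similarity(q_vec, embed(d)), d) for d in docs]
    return [d for _, d in sorted(scored, reverse=True)[:k]]

def multi_query_retrieve(query: str, docs: list[str], embed, llm, k: int = 3) -> list[str]:
    """Multi-query retrieval: ask the LLM for paraphrases of the question,
    retrieve for each one, and merge the deduplicated results to improve recall."""
    rewrites = llm(f"Write 3 alternative phrasings of this question:\n{query}").splitlines()
    merged: list[str] = []
    for q in [query, *rewrites]:
        for doc in retrieve(q, docs, embed, k):
            if doc not in merged:
                merged.append(doc)
    return merged
```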

2. Practical Tips for Finetuning LLMs Using LoRA (Low-Rank Adaptation)

This article explores practical tips for fine-tuning large language models (LLMs) with Low-Rank Adaptation (LoRA), providing insights and recommendations. Experiments show that LoRA delivers consistent results while saving memory, at the cost of increased runtime. Applying LoRA to all layers and tuning the rank and alpha values can improve model performance.
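
As a rough illustration of that advice, here is a short sketch assuming the Hugging Face transformers and peft libraries; the model name, target modules, and hyperparameter values are illustrative assumptions rather than the article's exact configuration.

```python
# Sketch of the "apply LoRA to all layers, tune rank and alpha" advice, assuming
# the Hugging Face transformers and peft libraries. The model name, target
# modules, and hyperparameters are illustrative, not the article's exact setup.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_config = LoraConfig(
    r=16,              # rank of the low-rank update matrices
    lora_alpha=32,     # scaling factor; a common heuristic is alpha = 2 * r
    lora_dropout=0.05,
    # Targeting the attention and MLP projections approximates "all layers";
    # the exact module names depend on the model architecture.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```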

3. Start and Improve Your LLM Skills in 2023

This is a complete guide to building and improving your LLM skills in 2023 without an advanced background in the field, and to staying up to date with the latest news and state-of-the-art techniques. It is intended for anyone with some programming and machine learning background.

4. Here Is How Far We Are to Achieving AGI, According to DeepMind

A team of scientists from Google DeepMind has proposed a new framework for classifying the capabilities and behavior of AGI systems and their precursors. This article explores the framework, including criteria for measuring artificial intelligence, a matrix measuring performance and generality, and another matrix measuring autonomy and risk.

5. OpenAI's Identity Crisis and the Battle for AI's Future

This is a well-articulated commentary on the recent series of events at OpenAI. According to the author, the balance between AI safety and market momentum was a factor in the decision to oust Sam Altman.

Repositories & Tools

1. explodinggradients/ragas

Ragas is a framework that helps you evaluate your Retrieval Augmented Generation (RAG) pipelines. RAG denotes a class of LLM applications that use external data to augment the LLM's context.
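
A typical evaluation looks roughly like the sketch below, based on ragas' documented usage at the time of writing; the sample record is made up, and the API may have evolved since, so check the repository for current details.

```python
# Rough sketch of evaluating a RAG pipeline with ragas, based on its documented
# usage at the time of writing (the sample data is made up; the metrics call an
# LLM judge, so an OPENAI_API_KEY is expected in the environment).
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

# One evaluation record: the user question, the pipeline's answer, and the
# retrieved context chunks that answer was based on.
eval_data = Dataset.from_dict({
    "question": ["Who did OpenAI's board fire in November 2023?"],
    "answer": ["OpenAI's board fired CEO Sam Altman."],
    "contexts": [["OpenAI's board fired its CEO, Sam Altman, on Friday."]],
})

result = evaluate(eval_data, metrics=[faithfulness, answer_relevancy])
print(result)  # per-metric scores between 0 and 1
```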

2. abi/screenshot-to-code

This app converts a screenshot to HTML/Tailwind CSS. It uses GPT-4 Vision to generate the code and DALL-E 3 to generate similar-looking images.
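
The underlying API call looks roughly like the following sketch (this is not the repository's actual code), using the OpenAI Python client and the gpt-4-vision-preview model available at the time of writing:

```python
# Not the repository's actual code: a minimal sketch of the underlying API call,
# using the OpenAI Python client (v1) and the gpt-4-vision-preview model that
# was available at the time of writing.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Reproduce this UI as a single HTML file styled with Tailwind CSS."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
    max_tokens=2000,
)

print(response.choices[0].message.content)  # the generated HTML
```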

3. Netmind Power

Netmind Power is a decentralized machine learning and AI platform. You can train your own models on the platform, and it will find the compute from its network and distribute the work for you.

4. BuilderIO/gpt-crawler

GPT Crawler lets you provide a site URL, which it will crawl and use as the knowledge base for a custom GPT. You can either share this GPT or integrate it as a custom assistant into your sites and apps.
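
gpt-crawler itself is a Node.js project configured in TypeScript, but conceptually it does something like this rough Python sketch (not its actual API): crawl pages under a start URL, extract their text, and save a JSON file that can serve as a custom GPT's knowledge base.

```python
# Conceptual sketch only: gpt-crawler itself is a Node.js project configured in
# TypeScript, so this is not its API. The idea is to crawl pages under a start
# URL, extract their text, and save a JSON file you can upload as the knowledge
# base of a custom GPT. The URL below is a placeholder.
import json
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def crawl(start_url: str, max_pages: int = 20) -> list[dict]:
    domain = urlparse(start_url).netloc
    seen, queue, pages = set(), [start_url], []
    while queue and len(pages) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
        pages.append({"url": url, "text": soup.get_text(" ", strip=True)})
        for link in soup.find_all("a", href=True):
            target = urljoin(url, link["href"])
            if urlparse(target).netloc == domain:  # stay on the same site
                queue.append(target)
    return pages

with open("knowledge.json", "w") as f:
    json.dump(crawl("https://docs.example.com"), f)
```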

Top Papers of The Week

1. Comparing Humans, GPT-4, and GPT-4V On Abstraction and Reasoning Tasks

This study compares GPT-4 and its multimodal version, GPT-4V, with humans on abstraction and reasoning tasks using the ConceptARC benchmark. Results show that neither GPT-4 version matches human-level abstract reasoning, even with detailed one-shot prompts and simplified image tasks.

2. GPT-4V in Wonderland: Large Multimodal Models for Zero-Shot Smartphone GUI Navigation

The paper presents MM-Navigator, a GPT-4V-based agent that successfully performs zero-shot GUI navigation on smartphones using large multimodal models. It demonstrates great accuracy in understanding and executing iOS screen instructions.

3. A Survey on Language Models for Code

This comprehensive survey explores the evolution and advancements in code processing using language models. It covers over 50 models, 30 evaluation tasks, and 500 related works, focusing on general language models and specialized models trained on code. The survey is open-sourced and kept updated in an accompanying GitHub repository.

4. Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models

Retrieval-augmented language models (RALMs) enhance language models' capabilities but can generate misguided responses due to unreliable retrieved information. A new approach, Chain-of-Noting (CoN), generates sequential reading notes to evaluate document relevance and improve RALM responses.
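
To make the idea concrete, here is an illustrative sketch of the note-then-answer pattern (not the paper's exact prompts or training setup), with `llm` standing in for any chat-completion call:

```python
# Illustrative sketch of the note-then-answer pattern behind Chain-of-Noting
# (not the paper's exact prompts or training setup). `llm` is a placeholder
# for any chat-completion call that takes a prompt and returns a string.

def chain_of_note_answer(question: str, documents: list[str], llm) -> str:
    # Step 1: write a short reading note per retrieved document, judging
    # what it says and whether it is relevant to the question.
    notes = []
    for i, doc in enumerate(documents, start=1):
        notes.append(llm(
            f"Question: {question}\n"
            f"Document {i}: {doc}\n"
            "Write a one-sentence reading note on what this document says and "
            "whether it is relevant and sufficient to answer the question."
        ))
    notes_block = "\n".join(f"- {note}" for note in notes)

    # Step 2: answer using only the documents judged relevant in the notes,
    # falling back to "unknown" when nothing relevant was retrieved.
    return llm(
        f"Question: {question}\n"
        f"Reading notes on the retrieved documents:\n{notes_block}\n"
        "Using only the documents judged relevant, answer the question. "
        "If none are relevant, say the answer is unknown."
    )
```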

Quick Links

  1. Meta announced Emu Video and Emu Edit, their latest AI video generation and image editing breakthroughs. Emu was announced in September, and today it's being used in production, powering experiences such as Meta AI's Imagine feature.
  2. NVIDIA announced it has supercharged the world's leading AI computing platform by introducing the NVIDIA HGX™ H200. The platform features the NVIDIA H200 Tensor Core GPU with advanced memory.
  3. IBM furthers its commitment to climate action through new sustainability projects and free training in green and technology skills for vulnerable communities.

Who’s Hiring in AI

Sr. Backend Software Engineer (Billing Engineer) @Philo (Remote)

Decision Scientist - Data and Analytics @Salesforce (Remote)

Senior Software Engineer - DGX Cloud Messaging platform @NVIDIA (US/Remote)

AI & ML Engineer (Python, PyTorch, Tensorflow) @CodeLink (Ho Chi Minh City, Vietnam)

AI/ML Engineer @Intersog (Remote)

Technical Product Manager @CodeHunter (Remote)

Data Science Associate @Black Swan Data, Inc. (Remote)

Interested in sharing a job opportunity here? Contact [email protected].

If you are preparing your next machine learning interview, don't hesitate to check out our leading interview preparation website, confetti!

https://www.confetti.ai/

Think a friend would enjoy this too? Share the newsletter and let them join the conversation.

Join thousands of data leaders on the AI newsletter. Join over 80,000 subscribers and keep up to date with the latest developments in AI, from research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.

Published via Towards AI
