

This AI newsletter is all you need #8

Last Updated on July 26, 2023 by Editorial Team

Author(s): Towards AI Editorial Team

Originally published on Towards AI.

What happened this week in AI

This week’s highlight is surely Meta’s new chatbot, BlenderBot 3, which anyone in the U.S. can chat with so Meta can collect feedback on its capabilities.

It seems that “Meta’s new AI chatbot can’t stop bashing Facebook,” with some hilarious and unexpected answers. The bot gives genuinely funny answers at its own company’s expense, and as the article clearly puts it: “If you’re worried that artificial intelligence is getting too smart, talking to Meta’s AI chatbot might make you feel better.” Indeed, even though BlenderBot 3 might pass a very specific Turing test and be classified as “intelligent” by some people, it remains a machine interpolating (not extrapolating, as humans can do) from data. That data is gathered from human discussions on the internet, complete with our biases, including some of the worst ones, since anonymity tends to bring out the worst in some people.

This is just one example. Much work remains to improve “intelligence” in machines, including big changes in how we train these algorithms if we want them to become “generalizers” and, even better, extrapolation machines. That would mean they would no longer be limited to their training data and could link concepts, make guesses, and innovate, just as humans do. These recent machine learning-based algorithms remain extremely powerful for the precise, well-defined tasks we optimize them for, but such failure cases will keep happening when we use them for more complex tasks like simulating a human conversation.

Hottest News

  1. Meta just launched a new chatbot, and it’s bashing its owner!
    Meta’s most recent chatbot, BlenderBot 3, is accessible to everyone in the U.S. to chat with so Meta can collect feedback on its capabilities. Try it out, or read more about it if, like me, you are not in the U.S.
  2. The GitHub repo for Stable Diffusion is now public!
    The GitHub repository for Stable Diffusion, the latent diffusion model described in paper #2 below, is now public, with pre-trained weights and everything you need!
  3. NVIDIA Instant NeRF Wins Best Paper at SIGGRAPH, Inspires Creative Wave Amid Tens of Thousands of Downloads
    Learn more in our article or read the paper. In one sentence: NVIDIA turns photos into 3D scenes in milliseconds.

Most interesting papers of the week

  1. VoLux-GAN: A Generative Model for 3D Face Synthesis with HDRI Relighting
    A generative framework to synthesize 3D-aware faces with convincing relighting.
  2. High-Resolution Image Synthesis with Latent Diffusion Models
    A latent text-to-image diffusion model. Similar to Google’s Imagen, this model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts.
  3. DeepFaceVideoEditing: Sketch-based Deep Editing of Face Videos
    A novel sketch-based facial high-quality video editing framework leveraging StyleGAN3.
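Latent diffusion models like the one in paper #2 learn to reverse a gradual noising process applied in a compressed latent space. As a rough, hypothetical illustration (not the paper’s actual code), this sketch samples the closed-form forward step x_t = sqrt(ᾱ_t)·x₀ + sqrt(1 − ᾱ_t)·ε, assuming a made-up linear beta schedule:

```python
import numpy as np

def forward_noise(x0, t, alpha_bar):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    eps = np.random.randn(*x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

# Assumed linear beta schedule over T steps (values are illustrative).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)  # cumulative product of (1 - beta_t)

x0 = np.random.randn(4, 4)  # stand-in for a latent, not real image data
x_noisy = forward_noise(x0, t=T - 1, alpha_bar=alpha_bar)
# By the final step, alpha_bar is tiny, so x_noisy is close to pure Gaussian noise.
```

Training then amounts to teaching a network to predict the added noise at each step; generating an image runs the process in reverse, optionally conditioned on text embeddings.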

Enjoy these papers and news summaries? Get a daily recap in your inbox!

The Learn AI Together community section!

Meme of the week!

Meme shared by one of our fantastic moderators, DrDub#0108. Join the conversation and share your memes with us!

Featured community post from the Discord

chiral-carbon#3484 asked a very good question in our MineRL x DeepMind/OpenAI Q&A channel about how one gets into such a great AI company. Do you need a PhD, or is a Master’s enough? Is it based solely on your research experience, and how can you build the necessary background?

We don’t have the answers to these questions yet, but stay tuned for our interview with them, where we will surely ask!

If you’d also like to ask a question, or if you are interested in the MineRL competition or in what it’s like to work at OpenAI/DeepMind, join the conversation and ask your question in our Discord channel!

AI poll of the week!

It seems to be unanimous! Join the discussion on Discord and ask your question in our Q&A channel. We will host more Q&A sessions on the server!

TAI curated section

Article of the week

Machine Learning Checklist: Cost Function and Gradient Descent: This piece checks off an essential item on the machine learning checklist. The author begins with a simple explanation of the mathematics behind cost functions, followed by wonderful real-life examples. A dive into the Python code for each cost function is followed by a step-by-step explanation of gradient descent, one of the most crucial algorithms in both machine learning and deep learning.
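The article’s own code isn’t reproduced here, but as a minimal, hypothetical sketch of the idea it covers, the following runs gradient descent on the MSE cost of a one-variable linear model (the data and hyperparameters are made up for illustration):

```python
import numpy as np

# Toy data following y = 3x + 2 plus a little noise (illustrative values).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 2.0 + 0.1 * rng.normal(size=100)

w, b = 0.0, 0.0  # parameters to learn
lr = 0.1         # learning rate

for _ in range(500):
    y_pred = w * x + b
    # MSE cost: mean((y_pred - y)^2); its gradients w.r.t. w and b:
    grad_w = 2.0 * np.mean((y_pred - y) * x)
    grad_b = 2.0 * np.mean(y_pred - y)
    w -= lr * grad_w  # step against the gradient
    b -= lr * grad_b

print(w, b)  # should approach the true values 3 and 2
```

Each iteration moves the parameters a small step in the direction that decreases the cost, which is the core loop behind training both classical models and deep networks.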

If you are interested in publishing with us at Towards AI, please sign up here, and we will publish your blog to our network if it meets our editorial policies and standards.

Lauren’s Ethical Take on BlenderBot 3

Wow, is there a lot to unpack here! There is a difficult balance between the cost of progress and who pays it. It’s laudable that the impetus behind BlenderBot is to improve some of the issues with large language models, but let’s not take those claims at face value.

First, BlenderBot attaches links to support its claims, a great feature that adds reliability to its responses. However, the reputability of the cited sources can still vary wildly; it’s not all JSTOR.

BlenderBot’s static model will allegedly protect it from a carried-away-by-bigotry episode (like the one that befell Microsoft’s Tay, which adopted a real-time learning model), yet it can still espouse conspiracy theories and bigoted statements, as well as misinformed or nonsensical ones. This is bound to occur when training on unfiltered data at this stage, and it will hopefully decrease over time as Meta predicts, but it shouldn’t be ignored as a present drawback.

Meta’s claim of gathering public chat data for the sake of improving the model may also prove faulty. If the goal is for BlenderBot to imitate free-ranging, natural human conversation patterns, that won’t be achieved through human interaction with the model, because BlenderBot is not a normal human conversation partner and humans are smart enough not to treat it as such. And once you’re exposed to a potentially confusing and unpleasant conversation and want to provide feedback or flag inappropriate responses, Meta requires that you give up your own data in order to do so.

Is this the best way to make progress? If their goal is to improve safety over time through interaction, then by Meta’s own standards it is not safe enough now. Yet it’s been released to the public. The benefits may come later, but the harms are felt now through BlenderBot’s encoded bias. Even with disclaimers and acknowledged problems, more mitigation ought to be implemented by a third party for a release of this scale to avoid further harm from bias and misinformation.

Join the Learn AI community

Featured jobs this week

Senior Computer Vision Engineer @ Neurolabs (London & Remote)

Machine Learning Engineer @ Runway (Remote)

Senior ML Engineer — Algolia AI @ Algolia (Hybrid remote)

Senior ML Engineer — Semantic Search @ Algolia (Hybrid remote)

Interested in sharing a job opportunity here? Contact [email protected] or post the opportunity in our #hiring channel on Discord!

Join over 80,000 data leaders and subscribers on the AI newsletter and keep up to date with the latest developments in AI, from research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.

Published via Towards AI
