
This AI newsletter is all you need #22

Last Updated on November 22, 2022 by Editorial Team

Author(s): Towards AI Editorial Team

Originally published on Towards AI, the World’s Leading AI and Technology News and Media Company. If you are building an AI-related product or service, we invite you to consider becoming an AI sponsor. At Towards AI, we help scale AI and technology startups. Let us help you unleash your technology to the masses.

What happened this week in AI by Louis

One word: Galactica.

Galactica, Meta’s most recent large language model, which can store, combine, and reason about scientific knowledge, was shut down after many users reported misleading or incorrect results. There is a lot of controversy around this model, mostly to do with the gap between Meta’s confidence in the model and its rightfully questionable outputs. The demo was not as catastrophic as Microsoft’s Tay incident of 2016, but it, too, quickly crossed the line between fun experimental tool and dangerous propagator of misinformation. Galactica represents a big advancement for large language models, but given that it was intended for scientific use, the level of rigor that such use demands was far from met.

On my end, I really liked a tweet shared by my friend Lior, which neatly summarizes my thoughts. I’d like to quote it here:

“The drama surrounding Galactica baffles me. Let’s remember we’re all on the same team trying to make our tiny field progress.”

Was Galactica perfect? No. But neither were GPT-3, Stable Diffusion, and DALL-E. It’s by releasing these models into the world that the feedback loop starts, and the resulting insights help us build better tools over time.

To add the ethical perspective from Lauren: let’s not forget the effects this might have on the world, and our responsibility as AI co-creators to handle those effects, whether negative or positive. This is neither the first nor the last language model to accidentally spread falsehoods, but understanding and learning from these mistakes helps ensure that the progress we work toward in AI forges the future we want.

Hottest News

  1. Achieving Individual — and Organizational — Value With AI: A report
    The report has many interesting findings and suggests that employees tend to underestimate how much they use AI technologies at work. Among the key findings: a majority of individual workers personally obtain value from AI and regard it as a coworker rather than a job threat; requiring individuals to use AI encourages adoption more than building trust in AI does, and mandatory use, despite seeming oppressive, still leads to individual value; and organizations gain value when individuals gain value, not at its expense.
  2. Design app Canva released a beta version of its own text-to-image generator
    Yes, another one! I actually like this news. I create all my YouTube thumbnails using Canva and I really like their product. They also have a background removal tool that works quite well and other AI-based tools. This new one might be really powerful too and useful for AI-related thumbnails 😎
  3. More layoffs…
    Following Twitter and Meta, Amazon is now planning to lay off approximately 10,000 employees, one of the largest cuts in the company’s history! For those of you looking for a job, please be patient and try not to be discouraged — you will find something! In the meantime, my best recommendation is to work on your portfolio. Build a cool little app, implement Stable Diffusion, and join one or more Kaggle competitions! Try to enjoy the “free time” you have and leverage it to improve your chances of finding your future dream job 🙂

Most interesting papers of the week

  1. Galactica: A Large Language Model for Science
    Galactica: a large language model that can store, combine and reason about scientific knowledge.
  2. Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures
    An efficient NeRF approach based on Latent Diffusion Models.
  3. Extreme Generative Image Compression by Learning Text Embedding from Diffusion Models
    “We propose a generative image compression method that demonstrates the potential of saving an image as a short text embedding which in turn can be used to generate high-fidelity images which is equivalent to the original one perceptually.”

Enjoy these papers and news summaries? Get a daily recap in your inbox!

The Learn AI Together Community section!

Meme of the week!

This is why code should always be open-sourced and reproducible! Meme shared by vladm#8251.

Featured Community post from the Discord

JacobBum#7456 just published “Breaking it Down: K-Means Clustering”. This is a great article that explores and visualizes the fundamentals of K-means clustering with NumPy and scikit-learn. If you write articles and publish them on your blog or on our Medium publication, share them on our Discord server and you might get a chance to be featured here too!
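
For anyone who wants a quick feel for the topic before reading, here is a minimal K-means sketch with NumPy and scikit-learn on toy data (our own illustration, not code from the featured article):

```python
import numpy as np
from sklearn.cluster import KMeans

# Two synthetic Gaussian blobs as toy data.
rng = np.random.default_rng(seed=0)
points = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(100, 2)),
    rng.normal(loc=3.0, scale=0.5, size=(100, 2)),
])

# Fit K-means with k=2 and inspect the learned centroids and cluster labels.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print("centroids:\n", kmeans.cluster_centers_)
print("first labels:", kmeans.labels_[:10])
```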

AI poll of the week!

Join the discussion on Discord.

TAI Curated section

Article of the week

6 Tips Save Me Time & Memory When Training Machine Learning Models by Youssef Hosni

Training machine learning models can be time- and memory-consuming, especially if your data is large. It is important to optimize the workflow to save computational time and memory, especially when training the model multiple times with different hyperparameters during tuning. This article shares six practical tips to decrease computational time and memory consumption while training a machine learning model.
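
To give a flavor of this kind of optimization, here is a small, generic example of one common memory saver (downcasting numeric dtypes with pandas); it is our own illustration and not necessarily one of the article’s six tips:

```python
import numpy as np
import pandas as pd

# A million-row frame; pandas defaults to float64 / int64 columns.
df = pd.DataFrame({
    "feature_a": np.random.rand(1_000_000),
    "feature_b": np.random.randint(0, 100, 1_000_000),
})
print(f"before: {df.memory_usage(deep=True).sum() / 1e6:.1f} MB")

# Downcast to float32 and the smallest safe integer type before training.
df["feature_a"] = pd.to_numeric(df["feature_a"], downcast="float")
df["feature_b"] = pd.to_numeric(df["feature_b"], downcast="integer")
print(f"after:  {df.memory_usage(deep=True).sum() / 1e6:.1f} MB")
```

On a million rows, this alone shrinks the frame from roughly 16 MB to roughly 5 MB, which adds up quickly when you retrain many times during hyperparameter search.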

Our must-read articles

In-depth Azure Machine Learning Model Train, Test, and Deploy Pipelines on Cloud With Endpoints for Web APIs by Amit Chauhan

META’s PEER: A Collaborative Language Model by Salvatore Raieli

If you are interested in publishing with Towards AI, check our guidelines and sign up. We will publish your work to our network if it meets our editorial policies and standards.

Job offers

Senior Data Scientist / AI Developer @ Uniphore (Spain, Hybrid Remote)

Data Scientist @ SwissBorg (Europe, Remote)

Lead Data Engineer, Data Platform @ Tubi (Remote)

Data Scientist @ Alethea Group (Remote, US)

Machine Learning Engineer, Infrastructure @ Earnin (Remote)

AI Content Fellowship @ Deepgram (Remote)

Interested in sharing a job opportunity here? Contact [email protected].

If you are preparing for your next machine learning interview, don’t hesitate to check out our leading interview preparation website, Confetti!




