

Inside 4M-21: Apple's Small Model that Works Across 21 Modalities

Author(s): Jesus Rodriguez

Originally published on Towards AI.

Created Using DALL-E

I recently started an AI-focused educational newsletter that already has over 170,000 subscribers. TheSequence is a no-BS (no hype, no news) ML-oriented newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers, and concepts. Please give it a try by subscribing below:

TheSequence | Jesus Rodriguez | Substack

thesequence.substack.com

Apple has been late to the generative AI game, but lately, it has been pushing the research agenda quite hard. Apple has an ideal playground for innovating in one of the hottest areas of the next wave of generative AI: on-device multimodal models. The idea of powering mobile AI through API integrations with massively large foundation models seems highly impractical and insecure, and Apple is in a unique position to power alternatives to this paradigm. However, most of Apple’s efforts in small on-device models have been somewhat underwhelming.

This is starting to change.

Last week, Apple released what I consider its most impressive work yet in small, on-device foundation models: the publication and open-source release of 4M-21, a multimodal model that works seamlessly across 21 modalities. The work clearly signals the direction of Apple's on-device model strategy, and the sheer number of modalities is striking. It builds on earlier research that Apple published months ago with the release of its 4M model.

Let’s start there.

4M Overview

The 4M framework, short for Massively Multimodal Masked Modeling, is designed to train models that can handle multiple tasks and modalities, predicting or generating any subset of modalities from any other subset. These models excel at a variety of vision tasks without additional tuning and perform even better when fine-tuned for new tasks.

Image Credit: Apple

4M is a comprehensive training scheme that involves a single unified Transformer encoder-decoder. This system is trained using a masked modeling objective across diverse input/output modalities, including text, images, geometric and semantic data, and neural network feature maps. By converting all modalities into discrete tokens, 4M performs multimodal masked modeling on a small, randomized subset of tokens.
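
To make this concrete, here is a minimal PyTorch sketch of a unified encoder-decoder operating on discrete tokens, with a learned embedding per modality. The vocabulary size, dimensions, and modality list are illustrative assumptions, not values from the paper:

```python
import torch
import torch.nn as nn

# Illustrative assumptions, not values from the 4M paper.
VOCAB, D = 16_384, 256
MODALITIES = ["rgb", "depth", "normals", "caption"]

class Unified4MSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, D)            # shared discrete-token table
        self.mod = nn.Embedding(len(MODALITIES), D)  # one embedding per modality
        self.transformer = nn.Transformer(
            d_model=D, nhead=4, num_encoder_layers=2,
            num_decoder_layers=2, batch_first=True)
        self.head = nn.Linear(D, VOCAB)              # predict discrete token ids

    def forward(self, src_ids, src_mod, tgt_ids, tgt_mod):
        # Each token carries an embedding for the modality it came from,
        # so a single Transformer can consume any mix of modalities.
        src = self.tok(src_ids) + self.mod(src_mod)
        tgt = self.tok(tgt_ids) + self.mod(tgt_mod)
        return self.head(self.transformer(src, tgt))

model = Unified4MSketch()
src_ids = torch.randint(0, VOCAB, (1, 32))           # visible input tokens
src_mod = torch.zeros(1, 32, dtype=torch.long)       # all from modality 0 ("rgb")
tgt_ids = torch.randint(0, VOCAB, (1, 16))           # masked tokens to reconstruct
tgt_mod = torch.ones(1, 16, dtype=torch.long)        # all from modality 1 ("depth")
logits = model(src_ids, src_mod, tgt_ids, tgt_mod)   # shape (1, 16, VOCAB)
```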

In terms of capabilities, 4M excels in the following areas:

· Handling a variety of vision tasks directly.

· Improving performance when fine-tuned for new tasks or modalities.

· Functioning as a generative model conditioned on different modalities, enabling flexible and expressive multimodal editing.

Training involves tokenizing the various modalities into sequences of discrete tokens, allowing a single Transformer to learn from diverse data types. The objective then maps one random subset of these tokens to another.
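
A toy version of that subset-to-subset objective might look like the following; the token budgets are assumptions, and token positions are ignored for brevity:

```python
import torch

def sample_input_target(tokens_per_modality, input_budget=32, target_budget=16):
    # Pool every modality's tokens, then draw disjoint random subsets to act
    # as the visible input and as the prediction target.
    pooled = torch.cat(list(tokens_per_modality.values()))
    perm = torch.randperm(pooled.numel())
    visible = pooled[perm[:input_budget]]
    target = pooled[perm[input_budget:input_budget + target_budget]]
    return visible, target

tokens = {
    "rgb": torch.randint(0, 16_384, (196,)),     # e.g., 14x14 image tokens
    "depth": torch.randint(0, 16_384, (196,)),
    "caption": torch.randint(0, 16_384, (32,)),
}
visible, target = sample_input_target(tokens)    # model learns visible -> target
```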

Image Credit: Apple

4M models generate any modality from any combination of others, even from partial inputs. When predicting multiple modalities from one, 4M sequentially predicts each modality, integrating fully generated outputs back into the input. This approach ensures self-consistent predictions across all training modalities.
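
A rough sketch of that chained decoding loop is below. The `model.generate` helper is hypothetical, standing in for whatever decodes all tokens of one target modality; it is not an API from the paper or its code release:

```python
# `model.generate` is a hypothetical helper that decodes every token of one
# target modality given the current multimodal context; not a real API.

def generate_all(model, observed, missing_modalities):
    context = dict(observed)                 # e.g., {"rgb": rgb_tokens}
    for modality in missing_modalities:      # e.g., ["depth", "normals"]
        generated = model.generate(context, target=modality)
        # Feed the fully generated modality back in, so later predictions
        # stay consistent with earlier ones.
        context[modality] = generated
    return context
```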

4M-21

4M-21 expands the original 4M scheme by scaling up the model and dataset sizes and by broadening the types and number of modalities. This version also trains on multiple datasets simultaneously. Each modality is transformed into sequences of discrete tokens using modality-specific tokenizers. During training, random token subsets drawn from all modalities serve as inputs and targets, with the objective of predicting one subset from another. Pseudo labeling is used to create a large pre-training dataset with multiple aligned modalities.
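
As a hedged illustration of the pseudo-labeling step, the sketch below runs frozen specialist models over a plain image corpus to produce aligned modalities. The three callables are placeholders (e.g., a depth network or SAM), not real APIs:

```python
# `estimate_depth`, `estimate_normals`, and `segment` are placeholders for
# frozen pretrained specialist models, not real library calls.

def pseudo_label(images, estimate_depth, estimate_normals, segment):
    dataset = []
    for image in images:
        dataset.append({
            "rgb": image,                        # the only "real" modality
            "depth": estimate_depth(image),      # pseudo label
            "normals": estimate_normals(image),  # pseudo label
            "segmentation": segment(image),      # pseudo label
        })
    return dataset
```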

4M-21 trains on a wide range of modalities grouped into categories:

· RGB: Tokenized and pixel versions of images, along with color palettes.

· Geometric: Surface normals, depth, and 3D human poses and shapes.

· Semantic: Semantic segmentation, bounding boxes, and pseudo labels from models like SAM.

· Edges: Canny and SAM edges capturing scene layout and semantics.

· Feature Maps: Embeddings from CLIP, DINOv2, and ImageBind.

· Metadata: Various types of metadata derived from RGB images and the other modalities.

Image Credit: Apple

Tokenization

One of the most important contributions of 4M-21 is its tokenization scheme. Tokenization converts modalities and tasks into sequences of discrete tokens, unifying their representation space.

4M-21 relies on different tokenizers for different kinds of modalities (a dispatch sketch follows the list):

i. ViT Tokenizer: For image-like modalities.

ii. MLP Tokenizer: For human poses and global embeddings.

iii. Text Tokenizer: For encoding text and other modalities like bounding boxes and metadata.
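
Here is the dispatch sketch promised above. The three tokenizer arguments are placeholders for the ViT, MLP, and text tokenizers just described; the modality-to-tokenizer mapping is the point, and the exact assignments are my assumptions:

```python
# The tokenizer arguments are placeholders for trained ViT/MLP/text
# tokenizers; the per-modality assignments below are illustrative.

def build_tokenizer_registry(vit_tok, mlp_tok, text_tok):
    return {
        # image-like modalities -> ViT tokenizer
        "rgb": vit_tok, "depth": vit_tok, "normals": vit_tok, "edges": vit_tok,
        # poses and global embeddings -> MLP tokenizer
        "human_poses": mlp_tok, "clip_embedding": mlp_tok,
        # sequence-like modalities -> text tokenizer
        "caption": text_tok, "bounding_boxes": text_tok, "metadata": text_tok,
    }

def tokenize_sample(sample, registry):
    # Every modality ends up as a sequence of discrete tokens in a shared space.
    return {name: registry[name](value) for name, value in sample.items()}
```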

Image Credit: Apple

4M-21 training involves a two-stage process: 4M pre-training on a large image dataset, followed by fine-tuning on a smaller dataset that includes more modalities. Models are trained by randomly sampling from these datasets, with the same masked modeling objective applied throughout.

The 4M-21 architecture uses a Transformer encoder-decoder augmented with modality embeddings. The masking strategy combines multimodal random masking with span masking to keep training stable.
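
The two masking strategies can be sketched in a few lines; the span counts, lengths, and 50/50 mix below are assumptions, not the paper's settings:

```python
import torch

def random_mask(n_tokens, keep):
    # Multimodal random masking: keep a random subset of token positions.
    return torch.randperm(n_tokens)[:keep]

def span_mask(n_tokens, n_spans=4, span_len=8):
    # Span masking: hide a few contiguous runs of tokens.
    hidden = torch.zeros(n_tokens, dtype=torch.bool)
    for _ in range(n_spans):
        start = torch.randint(0, n_tokens - span_len + 1, (1,)).item()
        hidden[start:start + span_len] = True
    return (~hidden).nonzero().squeeze(-1)   # positions that stay visible

# Mix both strategies across batches (the 50/50 mix is an assumption):
visible = random_mask(196, keep=64) if torch.rand(1).item() < 0.5 else span_mask(196)
```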

Image Credit: Apple

Performance Evaluation

Apple assessed the zero-shot performance of 4M-21 on tasks like surface normals and depth estimation, semantic and instance segmentation, k-NN retrieval, and 3D human keypoint estimation. The model outperformed strong baselines and specialist models, demonstrating its capability to solve various tasks without performance loss.
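
As an aside, the k-NN retrieval evaluation of a frozen model reduces to ranking a gallery by embedding similarity. A minimal sketch, with random tensors standing in for model embeddings and an assumed 512-dimensional space:

```python
import torch
import torch.nn.functional as F

def knn_retrieve(query_embs, gallery_embs, k=5):
    # Rank gallery items by cosine similarity to each query embedding.
    q = F.normalize(query_embs, dim=-1)
    g = F.normalize(gallery_embs, dim=-1)
    return (q @ g.T).topk(k, dim=-1).indices

# Random stand-ins for embeddings the frozen model would produce.
queries = torch.randn(4, 512)
gallery = torch.randn(1000, 512)
neighbors = knn_retrieve(queries, gallery)   # (4, 5) nearest-gallery indices
```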

Image Credit: Apple

It also performed well in transfer tasks, particularly in novel tasks like 3D object detection.

Image Credit: Apple

The results highlight 4M-21’s ability to handle multiple modalities and tasks, providing a significant improvement over its predecessor, 4M-7.

4M-21 is a complicated model; supporting 21 modalities does not lend itself to a simple architecture. Even so, 4M-21 shows incredible potential for the future of on-device foundation models and gives us a glimpse of Apple's strategy in the space. Hopefully, it will inspire more research in this super-important area of generative AI.


Published via Towards AI
