
What is CLIP (Contrastive Language-Image Pre-training) and how can it be used for semantic image search?

Last Updated on July 21, 2023 by Editorial Team

Author(s): Vatsal Saglani

Originally published on Towards AI.


Photo by Maria Teneva on Unsplash

Recently, researchers at OpenAI published a multi-modal architecture that, once pre-trained on around 400 million image-text pairs, transfers to roughly 30 different tasks. The methodology itself isn't new: other researchers had previously combined a text Transformer with a pre-trained CNN to pre-train a model on image-text pairs and then apply it to different downstream tasks. But, for a variety of reasons discussed in the paper, those approaches weren't as successful. A range of pre-training approaches were tried, both predictive and contrastive, to achieve SOTA-level accuracy on different… Read the full blog for free on Medium.
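To make the semantic-image-search use case from the title concrete, here is a minimal sketch using the Hugging Face transformers port of the publicly released CLIP ViT-B/32 checkpoint. This is an illustration, not the post's own code: the image file names and the query string are hypothetical placeholders, and a production system would precompute and index the image embeddings rather than encode them on every query.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Publicly released CLIP ViT-B/32 checkpoint, via Hugging Face transformers
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical files standing in for an indexed photo collection
image_paths = ["dog.jpg", "beach.jpg", "city.jpg"]
images = [Image.open(path) for path in image_paths]

# Encode every image once; a real search system would precompute these
# embeddings and store them in a vector index
image_inputs = processor(images=images, return_tensors="pt")
with torch.no_grad():
    image_embeds = model.get_image_features(**image_inputs)
image_embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)

# Encode the free-text query into the same embedding space
query = "a dog playing on the beach"
text_inputs = processor(text=[query], return_tensors="pt", padding=True)
with torch.no_grad():
    text_embeds = model.get_text_features(**text_inputs)
text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)

# Cosine similarity between the query and every image; the top score
# is the best semantic match
scores = (text_embeds @ image_embeds.T).squeeze(0)
best = scores.argmax().item()
print(f"Best match: {image_paths[best]} (score={scores[best]:.3f})")
```

Because CLIP embeds images and text into the same space, the query never has to match manually written tags; ranking the collection by cosine similarity between the query embedding and each image embedding is enough.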


Published via Towards AI
