Google Gemini vs. GPT-4
Last Updated on January 25, 2024 by Editorial Team
Author(s): Tim Cvetko
Originally published on Towards AI.
How does Google's New Multimodal Transformer Match Up Against OpenAI's GPT-4?
On December 6th, 2023, Google proudly announced its new multimodal large language model, Gemini, which outperforms all other models, including GPT-4, on the MMLU benchmark.
Gemini is a family of highly capable multimodal models developed at Google. Google trained Gemini jointly across image, audio, video, and text data to build a model with strong generalist capabilities across modalities, alongside cutting-edge understanding and reasoning performance in each respective domain.
Source: DeepMind
Who is this article useful for? AI Engineers, Product Builders, etc.
How advanced is this post? Anybody remotely acquainted with LLMs should be able to follow along.
Follow for more of my content: timc102.medium.com
Gemini uses a new architecture that merges a multimodal encoder and decoder. The encoder's job is to convert different types of data into a common language that the decoder can understand. Then the decoder takes over, generating outputs in different modalities based on the encoded inputs and the task at hand.
Source: Interacting with MultiModal AI
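To make the encoder/decoder split concrete, here is a minimal PyTorch sketch of what such a design could look like. Google has not released Gemini's implementation, so everything here (the class names, dimensions, and the use of simple linear projections per modality) is an illustrative assumption, not Gemini's actual code.

```python
import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    """Projects each modality into one shared embedding space (hypothetical sketch)."""
    def __init__(self, d_model=512, text_vocab=32000, image_feat_dim=768, audio_feat_dim=128):
        super().__init__()
        self.text_embed = nn.Embedding(text_vocab, d_model)
        self.image_proj = nn.Linear(image_feat_dim, d_model)  # e.g., image-patch features
        self.audio_proj = nn.Linear(audio_feat_dim, d_model)  # e.g., spectrogram frames

    def forward(self, text_ids=None, image_feats=None, audio_feats=None):
        tokens = []
        if text_ids is not None:
            tokens.append(self.text_embed(text_ids))
        if image_feats is not None:
            tokens.append(self.image_proj(image_feats))
        if audio_feats is not None:
            tokens.append(self.audio_proj(audio_feats))
        # Concatenate along the sequence axis: one common "language" of tokens.
        return torch.cat(tokens, dim=1)

class MultimodalSeq2Seq(nn.Module):
    """Encoder-decoder wrapper: a Transformer decoder attends over the fused tokens."""
    def __init__(self, d_model=512, n_heads=8, n_layers=4, text_vocab=32000):
        super().__init__()
        self.encoder = MultimodalEncoder(d_model, text_vocab)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.out_embed = nn.Embedding(text_vocab, d_model)
        self.lm_head = nn.Linear(d_model, text_vocab)

    def forward(self, out_ids, **modal_inputs):
        memory = self.encoder(**modal_inputs)  # fused multimodal context
        tgt = self.out_embed(out_ids)
        # Positional encodings and causal masking omitted for brevity.
        hidden = self.decoder(tgt, memory)     # cross-attends to all modalities
        return self.lm_head(hidden)            # per-position token logits (text output)

# Usage: a text prompt plus 16 image-patch feature vectors -> text logits.
model = MultimodalSeq2Seq()
logits = model(
    out_ids=torch.randint(0, 32000, (1, 10)),
    text_ids=torch.randint(0, 32000, (1, 20)),
    image_feats=torch.randn(1, 16, 768),
)
print(logits.shape)  # torch.Size([1, 10, 32000])
```

The key idea is that once every modality is projected into the same embedding space, the decoder can treat the fused sequence as ordinary context tokens, regardless of which modality each token came from.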
Here's a breakdown of Gemini's key components:
Multimodal Encoder: This module processes the input data from each modality (e.g., text, image) independently, extracting relevant features and generating individual representations.
Cross-modal Attention Network: This network is the heart of Gemini. It allows the model to learn relationships and dependencies between the… (see the sketch below)
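The excerpt above is truncated, but cross-modal attention in general is a well-understood mechanism: tokens from one modality form the queries, while keys and values come from another, letting the model learn relationships across modalities. Here is a minimal, hypothetical sketch; the class name, shapes, and the text-attends-to-image pairing are my own choices, not Gemini's.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Text tokens attend over image tokens (illustrative, not Gemini's code)."""
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, text_tokens, image_tokens):
        # Queries come from text; keys/values come from the image sequence,
        # so each text position can pull in relevant visual features.
        attended, weights = self.attn(
            query=text_tokens, key=image_tokens, value=image_tokens
        )
        # Residual connection keeps the original text signal intact.
        return self.norm(text_tokens + attended), weights

# Usage: 10 text tokens attend over 16 image-patch tokens.
layer = CrossModalAttention()
text = torch.randn(1, 10, 512)
image = torch.randn(1, 16, 512)
fused, weights = layer(text, image)
print(fused.shape, weights.shape)  # torch.Size([1, 10, 512]) torch.Size([1, 10, 16])
```

The residual connection and layer norm follow standard Transformer practice, so the cross-modal signal augments rather than replaces the original text representation.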
Published via Towards AI