You Won’t Believe How This Python Library Unlocks GPT-4 Level Features with Claude 3
Author(s): Vatsal Saglani
Originally published on Towards AI.
Illustration generated by DALL-E
OpenAI's models are great at generating structured text, and OpenAI built on this ability to implement function calling. Given the system prompt, the user message, and a list of function names, descriptions, and parameters, function calling decides which function to call and what arguments to pass to it.
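Concretely, function calling boils down to the model emitting structured JSON that names a function and its arguments, which client code then parses and dispatches. Here is a minimal sketch of that dispatch step; all names (`get_weather`, the `TOOLS` registry, the JSON shape) are illustrative assumptions, not the API of any specific library.

```python
import json

# Hypothetical toy function standing in for a real tool the LLM can call.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

# Illustrative tool registry: the descriptions and parameter specs are what
# you would include in the prompt so the model knows what it can call.
TOOLS = {
    "get_weather": {
        "callable": get_weather,
        "description": "Get the current weather for a city.",
        "parameters": {"city": {"type": "string"}},
    }
}

def dispatch(model_output: str) -> str:
    """Parse a JSON function call emitted by the LLM and invoke the
    matching Python function with the provided arguments."""
    call = json.loads(model_output)
    tool = TOOLS[call["name"]]
    return tool["callable"](**call["arguments"])

# An LLM tuned for function calling is expected to reply with JSON like this:
llm_reply = '{"name": "get_weather", "arguments": {"city": "Paris"}}'
print(dispatch(llm_reply))  # Sunny in Paris
```

The hard part, and what the article is about, is getting a model to reliably produce that JSON in the first place; the dispatch side is straightforward.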
This ability has been quite tough to replicate with other LLM API providers like Anthropic, or with open-source LLMs. I tried to replicate it with the OpenHermes-2.5-Mistral-7B model, and you can read about it in the blog linked below. The function-calling outputs it generated were quite good, but compared to GPT-4 or GPT-4-Turbo they were just decent.
Rise of open source LLMs (pub.towardsai.net)
Until now, GPT-4 and GPT-4-Turbo have faced little competition from other LLMs in generating structured output and function calling. But with the launch of the Claude 3 family of models, that competition has arrived. Below are benchmark comparisons of the Claude 3 family against other open and closed-source LLMs.
Image from Claude 3 launch post: https://www.anthropic.com/news/claude-3-family
If we observe the benchmarks above, the performance of the largest and most powerful Claude model… Read the full blog for free on Medium.
Published via Towards AI