Orca 13B: Imitating GPT-4 the “Right” Way
Last Updated on June 28, 2023 by Editorial Team
Author(s): Dr. Mandar Karhade, MD. PhD.
Originally published on Towards AI.
The paper argues that other small models imitate GPT-4 the wrong way: they copy its style but not its reasoning. A lack of rigorous testing makes these small models appear as capable as GPT-3.5 when they are not. Microsoft argues that Orca follows “good” imitation. So if you have been looking for a small open-source model that fits on a 24 GB consumer GPU and works reasonably well, you might be in luck. I say “might” because Microsoft is still working through the legalities of releasing the weights of this promising tiny-but-mighty model.
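As a rough sanity check on the “fits on a 24 GB consumer GPU” claim, here is a back-of-the-envelope sketch (my own arithmetic, not from the paper): it estimates weight storage only, ignoring the KV cache and activations, so real inference needs somewhat more.

```python
# Rough weight-memory footprint for a 13B-parameter model at common precisions.
# Weights only; KV cache and activations add overhead on top of this.

def weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bytes_per_param / 1e9

n_params = 13e9  # a 13B model such as Orca

for label, bytes_per_param in [("fp32", 4), ("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{label}: ~{weight_memory_gb(n_params, bytes_per_param):.1f} GB")
```

In fp16 the weights alone are about 26 GB, slightly over a 24 GB card, so in practice a 13B model on consumer hardware typically relies on 8-bit or 4-bit quantization (roughly 13 GB and 6.5 GB of weights respectively).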
Let's dive into the research behind Orca.