Whisper.cpp + Llama.cpp + ElevenLabs: Local GPT-4o-like Voice Heaven
Last Updated on June 4, 2024 by Editorial Team
Author(s): Vatsal Saglani
Originally published on Towards AI.
Building a voice-driven assistant for Q&A on YouTube videos
Image generated by ChatGPT
The GPT-4o (omni) and Gemini 1.5 releases have created quite a lot of buzz in the GenAI space. Both models are multi-modal: they can understand voice, text, and images (video) and output text (and audio via that text).
Inspired by this, I wanted to build a bot that uses local models for voice transcription, text generation, and answering via audio. I decided to use Whisper.cpp and Llama.cpp for real-time transcription of voice and generation of a response based on the transcribed text. The only missing piece of the puzzle was generating audio from the generated reply. I searched a lot but couldn't find any viable local option for realistic text-to-speech output, so I had no choice but to use ElevenLabs, which is also a great option!
All of these tools have Python libraries that make it quick to build on top of them. For Whisper.cpp we'll use the pywhispercpp library; the llama-cpp-python library provides Python bindings for llama.cpp; and ElevenLabs also has a Python library we can use to convert text to audio and stream it.
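As a rough sketch of how these three libraries might be wired together, here is a minimal voice-Q&A loop. The model names, file paths, and the exact ElevenLabs call are assumptions on my part, not code from the article; check each library's documentation for the current API.

```python
# Hypothetical sketch: chaining pywhispercpp, llama-cpp-python, and the
# ElevenLabs SDK. Model paths and call signatures are assumptions.

def build_messages(question: str, context: str) -> list[dict]:
    """Pure helper: wrap the transcribed question into a chat prompt."""
    return [
        {"role": "system", "content": f"Answer using this transcript:\n{context}"},
        {"role": "user", "content": question},
    ]

def answer_voice_question(wav_path: str, video_transcript: str) -> bytes:
    from pywhispercpp.model import Model      # whisper.cpp bindings
    from llama_cpp import Llama               # llama.cpp bindings
    from elevenlabs.client import ElevenLabs  # ElevenLabs SDK

    # 1. Speech -> text with whisper.cpp (model name is an assumption)
    whisper = Model("base.en")
    question = " ".join(seg.text for seg in whisper.transcribe(wav_path))

    # 2. Text -> reply with llama.cpp (GGUF path is an assumption)
    llm = Llama(model_path="models/llama-3-8b-instruct.Q4_K_M.gguf")
    reply = llm.create_chat_completion(
        messages=build_messages(question, video_transcript)
    )["choices"][0]["message"]["content"]

    # 3. Reply -> audio via ElevenLabs (reads the API key from the env)
    client = ElevenLabs()
    return b"".join(client.generate(text=reply))
```

The heavy imports live inside the function so the helper can be reused without the native libraries installed; in a real assistant you would stream each stage instead of running them end to end.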
Instead of just building a voice assistant bot we'll…