
🎙️ Building a Local Speech-to-Text System with Parakeet-TDT 0.6B v2

Author(s): Sridhar Sampath

Originally published on Towards AI.


Ever spent hours cleaning up a transcript? Inserting commas, capitalizing words, adjusting timestamps, and fixing numbers spoken as “twenty-two thousand three hundred ten” rather than “22,310”? I was tired of cloud-based speech recognition tools that compromised privacy and desktop solutions that delivered flat, unpunctuated text without timestamps.

So I tried Parakeet-TDT.

TL;DR

Most speech-to-text tools either miss key elements like punctuation and timestamps or rely on cloud APIs. This post showcases a fully local transcription system built with NVIDIA’s Parakeet-TDT 0.6B model.

✅ Auto punctuation & capitalization
✅ Word/segment-level timestamps
✅ Long audio support
✅ Tested on financial news, lyrics, and tech conversations
✅ Built using Streamlit + NeMo — runs 100% offline

🎯 The Problem: ASR That Misses the Metadata

Most ASR tools do a decent job with basic transcripts. But they fall short when real-world applications demand:

📈 Business number accuracy
🧾 Structured formatting
🔐 Local processing with privacy
🎬 Subtitle alignment

Whether you’re handling earnings calls, voice notes, or executive interviews, flat transcripts won’t cut it.

💡 The Solution: NVIDIA Parakeet-TDT 0.6B

🎥 Live Demo
Watch Parakeet transcribe business audio, lyrics, and interviews — entirely offline:

A full walkthrough of the local ASR system built with Parakeet-TDT 0.6B. Includes architecture overview and transcription demos for financial news, song lyrics, and a tech dialogue.

🎧 Note: The lyrics demo segment (Wavin’ Flag) has been muted to comply with YouTube’s copyright restrictions.

Parakeet-TDT is a 600M parameter ASR model, designed for high-fidelity English transcription.

Flow diagram showing Local ASR using NVIDIA Parakeet-TDT with Streamlit UI, audio preprocessing, and model inference pipeline

✅ Key Features:

  • Auto punctuation & casing
  • Word and segment-level timestamps
  • Handles long audio (up to 24 mins per chunk)
  • CUDA-accelerated
  • Free for commercial use (CC-BY-4.0)
  • Fast: RTFx 3380 (~56 min of audio/sec at batch size 128)

Under the Hood: Architecture & Training

📐 Architecture
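Parakeet-TDT 0.6B pairs a FastConformer encoder with a Token-and-Duration Transducer (TDT) decoder, which is where the model takes its name.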

🧪 Training Overview

Training and Evaluation Datasets

  • Pretrained with wav2vec on LibriLight
  • Fine-tuned on 500 hours of clean speech
  • Total: 120K hours from public & YouTube-like datasets
  • Trained on 64× A100 GPUs using NeMo Toolkit
Screenshot of the Training Dataset on Hugging Face

Setup: Run It Locally (Windows)

I’ve provided all the code, sample audio files, and requirements in this GitHub repo: [GitHub — SridharSampath/parakeet-asr-demo]

1. Create Conda Environment

conda create -n parakeet-asr python=3.10 -y
conda activate parakeet-asr

2. Install Dependencies

pip install -r requirements.txt

This includes NeMo, PyTorch, Streamlit, and audio processing libraries.

3. Install FFmpeg

choco install ffmpeg

🧠 Code Walkthrough

🔌 Model Loading

import torch
from nemo.collections.asr.models import ASRModel

# Load the pretrained checkpoint and move it to GPU if one is available
model = ASRModel.from_pretrained("nvidia/parakeet-tdt-0.6b-v2")
model = model.to("cuda" if torch.cuda.is_available() else "cpu")
if torch.cuda.is_available():
    model = model.to(torch.bfloat16)

🎧 Audio Preprocessing

from pydub import AudioSegment

# Resample to 16 kHz mono, the input format the model expects
audio = AudioSegment.from_file(audio_path)
audio = audio.set_frame_rate(16000).set_channels(1)
audio.export("processed.wav", format="wav")

📝 Transcription

# Transcribe with timestamps and print each segment with its start/end times
output = model.transcribe([processed_path], timestamps=True)
for seg in output[0].timestamp["segment"]:
    print(f"{seg['start']}s - {seg['end']}s: {seg['segment']}")

The Streamlit app handles exporting to .csv, .srt, and .txt formats.
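For reference, the segment timestamps map onto the SRT subtitle format almost directly. The repo contains its own export code; the snippet below is only a minimal sketch of the idea, reusing the start, end, and segment keys shown above:

# Minimal sketch (not the repo's exact code): write timestamped segments to an .srt file
def to_srt_time(seconds):
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def write_srt(segments, path="transcript.srt"):
    with open(path, "w", encoding="utf-8") as f:
        for i, seg in enumerate(segments, start=1):
            f.write(f"{i}\n{to_srt_time(seg['start'])} --> {to_srt_time(seg['end'])}\n{seg['segment']}\n\n")

# e.g. write_srt(output[0].timestamp["segment"])

The .csv and .txt exports follow the same pattern with different formatting.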

🖥️ Application Interface — Local ASR in Action

Here’s how the app looks just before transcription:

App loaded with Stockmarketnews.wav, ready to transcribe

The system runs completely offline, loads the 600M parameter model in seconds, and transcribes the 2:37 audio clip in under 2 seconds on CUDA.
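If you want to check throughput on your own hardware, a quick and informal timing sketch (reusing the model and processed.wav from the walkthrough above; numbers will vary by GPU) looks like this:

# Rough timing check; assumes model and processed.wav from the walkthrough above
import time
from pydub import AudioSegment

audio_seconds = AudioSegment.from_file("processed.wav").duration_seconds
start = time.perf_counter()
output = model.transcribe(["processed.wav"], timestamps=True)
elapsed = time.perf_counter() - start
print(f"{audio_seconds:.0f}s of audio in {elapsed:.1f}s -> ~{audio_seconds / elapsed:.0f}x real time")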

🖼️ Real-World Transcription Tests

📈 1. Stock Market News — English Business Broadcast

File: Stockmarketnews.wav (2:30 mins)
This clip simulates a financial update covering Sensex, Nifty, and major Indian stocks like TCS, HDFC, and ITC.

✅ Key Transcription Wins:

  • Accurately transcribed phrases like
    “The Nifty 50 closed at 22,310 points”
    “Reliance Industries led the rally…”
    “TCS ended lower by 35, closing at ₹3,487”
  • Handled spoken numbers, percentages, and currency
  • Preserved clarity and punctuation throughout financial commentary

📸 Below is a snapshot of the transcription:

Transcription of the Stock market news segment with spoken numbers, percentages, and financial terms.

This is particularly valuable for automated market bulletins, financial transcription, or earnings call analysis.

🎵 2. Song Lyrics — Wavin’ Flag by K’naan

📂 Waving-Flag-song.wav (3:40 mins)
This test focused on poetic phrasing, rhyme, and repetition — all common in music.

✅ Key Transcription Wins:

  • Phrases like “When I get older, I will be stronger…” transcribed accurately
  • Detected lyric breaks and capitalization
  • Preserved structure using punctuation

📸 Below is a snapshot of the transcription:

Transcription of Wavin’ Flag. Shows accurate, timestamped segments with punctuation. Image covers the first minute; remaining lines are scrollable in the UI.

This shows Parakeet handling poetic repetition and expressive music-driven sentence structures — great for lyric apps, karaoke tools, and music understanding systems.

🗣️ 3. Conversational Tech Dialogue — Jensen Huang x Satya Nadella

📂 JensenHuang-SatyaNadella-Conference-talk.wav (5:00 mins)
The first five minutes of a fireside chat from Microsoft Build, where Jensen Huang and Satya Nadella discuss AI and hyperscale compute.

✅ Key Transcription Wins:

  • Long-form thought delivery with phrases like
    “tokens per dollar per watt”,
    “40X speedup over Hopper”,
    “AI factories and agentic workloads”
  • Retains sentence structure and logical flow
  • Preserves capitalization and technical terminology

📸 Below is a snapshot of the first 1 minute of the transcription:

5-minute tech dialogue between Satya Nadella and Jensen Huang. Screenshot shows the first half of the transcript; the rest can be scrolled in the UI.

Reflects real-world scenarios like tech podcasts, executive interviews, and conference keynotes.

🎧 Sample Audio Files

You can try the real-world audio samples used in the demo; they are included in the GitHub repo linked above.

🤖 Parakeet vs Whisper (Medium) — Technical Comparison

While both are powerful ASR models, Parakeet-TDT 0.6B offers several advantages over OpenAI’s Whisper Medium:

| Metric | Parakeet-TDT 0.6B | Whisper Medium |
| --- | --- | --- |
| Parameters | 600M | 769M |
| WER (LibriSpeech test-clean) | 2.5% | 3.6% |
| WER (LibriSpeech test-other) | 6.2% | 7.8% |
| RTFx (batched) | 3386 | ~300 |
| Auto-punctuation | Yes | Yes |
| Word-level timestamps | Yes | No (segment only) |
| Commercial use | Yes (CC-BY-4.0) | Yes (MIT) |
| Local operation | Yes | Yes |
| Financial & number accuracy | Superior | Good |

The most significant differences: Parakeet provides word-level timestamps (critical for subtitles and alignment), runs significantly faster at batch inference, and has better accuracy on difficult speech with numbers and technical terms.
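If you want to sanity-check accuracy claims like these on your own recordings, a small word-error-rate comparison against a hand-checked reference transcript is enough. The sketch below uses the jiwer package (my choice here, not something from the article; any WER tool works) and assumes the hypothesis object from the walkthrough exposes a .text field:

# Sketch: WER of the model output vs. a manually verified reference (pip install jiwer)
import jiwer

reference = "the nifty 50 closed at 22,310 points"  # hand-checked ground truth
hypothesis = output[0].text.lower()                 # assumes the hypothesis object exposes .text
# A real evaluation would also strip punctuation from both sides before scoring
print(f"WER: {jiwer.wer(reference, hypothesis):.2%}")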

🏆 Benchmark Leadership

Parakeet-TDT 0.6B ranks #1 on the Hugging Face Open ASR Leaderboard (as of May 20, 2025):

🥇 Lowest average WER: 6.05%
RTFx: 3386 → roughly 56 minutes of audio transcribed per second of compute
🟢 License: CC-BY-4.0
✅ Greedy decoding (no external language model)

Limitations

While Parakeet performs exceptionally well, it’s important to note some limitations:

  • Currently English-only support
  • Requires CUDA for optimal performance
  • No speaker diarization (yet)

🧠 Final Thoughts

This project proves that with the right open model and toolkit, you can build a fast, accurate, and local ASR system:

✅ Full offline use
✅ Real-world test cases
✅ Benchmark-topping performance
✅ Ideal for business, media, and research

If you’re working on:

🎙️ Executive interviews
📊 Financial call transcription
🎬 Subtitle syncing
🧾 Documenting spoken audio

…Parakeet is a production-grade candidate to consider.

⚙️ Local Setup & GPU Acceleration

This demo runs entirely locally on my NVIDIA GeForce RTX 3050 Laptop GPU, with CUDA 11.8 available.

Since Parakeet-TDT is optimized for GPU acceleration via the NVIDIA NeMo framework, you’ll need CUDA support and a compatible GPU for smooth performance.

📌 If you’re trying this on a CPU-only machine, performance may degrade significantly, especially for long audio clips.
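One practical mitigation on weaker hardware (a sketch, not code from the repo) is to split very long recordings into chunks with pydub and transcribe them one at a time, concatenating the results:

# Sketch: chunked transcription of a long file; assumes the model loaded earlier
# long_recording.wav is a placeholder filename
from pydub import AudioSegment

CHUNK_MS = 5 * 60 * 1000  # 5-minute chunks; tune to your memory budget
audio = AudioSegment.from_file("long_recording.wav").set_frame_rate(16000).set_channels(1)

texts = []
for i in range(0, len(audio), CHUNK_MS):  # len(audio) is the duration in milliseconds
    chunk_path = f"chunk_{i // CHUNK_MS}.wav"
    audio[i:i + CHUNK_MS].export(chunk_path, format="wav")
    texts.append(model.transcribe([chunk_path])[0].text)

print(" ".join(texts))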

📚 Resources

🙌 Let’s Connect

Found this useful or working on something similar? Let’s connect:

🔗 LinkedIn — Sridhar Sampath
🔗 Medium Blogs
💻 GitHub — Parakeet ASR Demo

✨ End


Published via Towards AI

