---
library_name: transformers
license: mit
tags:
- automatic-speech-recognition
- audio
- speech
- whisper
- multilingual
- streaming
- coreml
- cuda
- nvidia
- apple-silicon
- on-device
---
|
|
|
|
|
# TheWhisper-Large-V3 |
|
|
|
|
|
## Model Summary |
|
|
|
|
|
**TheWhisper-Large-V3** is a fine-tuned, high-performance variant of OpenAI's Whisper Large V3 model, optimized by **TheStage AI** for **real-time**, **low-latency**, and **low-power** speech-to-text (ASR) inference across multiple platforms, including **NVIDIA GPUs** and **Apple Silicon (CoreML)**.
|
|
|
|
|
It provides **streaming transcription**, **word timestamps**, and **scalable performance** for use cases such as real-time captioning, meeting transcription, and on-device voice interfaces.
|
|
|
|
|
|
|
|
## 📊 Quality Benchmarks |
|
|
|
|
|
TheWhisper is a fine-tuned Whisper model that can process audio chunks of any size up to 30 seconds. Unlike the original Whisper models, it does not require padding audio with silence to reach 30 seconds. We benchmarked quality across chunk sizes of 10, 15, 20, and 30 seconds, using the multilingual benchmarks from the [Open ASR Leaderboard](https://github.com/huggingface/open_asr_leaderboard#evaluate-a-model).
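
WER (word error rate) is the share of words the transcript gets wrong relative to the reference, counting substitutions, deletions, and insertions. As a minimal illustration of the metric reported in the tables below, here is a sketch using the `jiwer` package on toy strings (the leaderboard uses its own evaluation harness and text normalizers, so this is illustrative only):

```python
# Toy WER computation with jiwer; the example strings are hypothetical.
import jiwer

reference = "the quick brown fox jumps over the lazy dog"
hypothesis = "the quick brown fox jumped over a lazy dog"

# WER = (substitutions + deletions + insertions) / reference word count
wer = jiwer.wer(reference, hypothesis)
print(f"WER: {wer * 100:.2f}%")  # 2 substitutions / 9 words -> 22.22%
```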
|
|
|
|
|
<img width="1547" height="531" alt="vanilla whisper (1)" src="https://github.com/user-attachments/assets/f0c86e58-d834-4ac7-a06b-df3a7ae3e9e9" /> |
|
|
<img width="1547" height="458" alt="TheStage AI Whisper (1)" src="https://github.com/user-attachments/assets/17fb45a3-b33d-4c83-b843-69b0f0aa3f65" /> |
|
|
|
|
|
|
|
|
### 10s chunks |
|
|
|
|
|
| Model | Mean WER (%) |
|-------|--------------|
| openai/whisper-large-v3-turbo | 7.81 |
| openai/whisper-large-v3 | 7.45 |
| thewhisper-large-v3-turbo | 7.88 |
| thewhisper-large-v3 | 7.80 |
|
|
|
|
|
|
|
|
### 15s chunks |
|
|
|
|
|
| Model | Mean WER (%) |
|-------|--------------|
| openai/whisper-large-v3-turbo | 7.61 |
| openai/whisper-large-v3 | 7.22 |
| thewhisper-large-v3-turbo | 7.45 |
| thewhisper-large-v3 | 7.34 |
|
|
|
|
|
### 20s chunks |
|
|
|
|
|
| Model | Mean WER (%) |
|-------|--------------|
| openai/whisper-large-v3-turbo | 7.63 |
| openai/whisper-large-v3 | 7.29 |
| thewhisper-large-v3-turbo | 7.47 |
| thewhisper-large-v3 | 7.31 |
|
|
|
|
|
### 30s chunks |
|
|
|
|
|
| Model | Mean WER (%) |
|-------|--------------|
| openai/whisper-large-v3-turbo | 7.61 |
| openai/whisper-large-v3 | 7.32 |
| thewhisper-large-v3-turbo | 7.45 |
| thewhisper-large-v3 | 7.28 |
|
|
|
|
|
|
|
|
## Quick Start
|
|
--- |
|
|
|
|
|
### Apple Usage |
|
|
|
|
|
```python
from thestage_speechkit.apple import ASRPipeline

hf_token = "..."  # your Hugging Face access token

model = ASRPipeline(
    model='TheStageAI/thewhisper-large-v3',
    # optimized model with ANNA
    model_size='S',
    chunk_length_s=10,
    token=hf_token
)

# inference
result = model(
    "path_to_your_audio.wav",
    max_batch_size=32,
    return_timestamps="word"
)

print(result["text"])
```
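
With `return_timestamps="word"`, per-word timing is returned alongside the text. Assuming the output follows the Hugging Face pipeline convention of a `chunks` list whose entries carry a `(start, end)` timestamp tuple (an assumption, not confirmed by these docs), it can be inspected like this:

```python
# Assumption: HF-style output with a "chunks" list of word entries,
# each holding "text" and a (start, end) "timestamp" tuple.
for word in result.get("chunks", []):
    start, end = word["timestamp"]
    print(f"[{start:6.2f}s - {end:6.2f}s] {word['text']}")
```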
|
|
|
|
|
### Apple Usage with Streaming |
|
|
|
|
|
```python
from thestage_speechkit.apple import WhisperStreamingPipeline
from thestage_speechkit.streaming import MicStream, FileStream, StdoutStream

streaming_pipe = WhisperStreamingPipeline(
    model='TheStageAI/thewhisper-large-v3',
    # optimized model with ANNA
    model_size='S',
    # window length
    chunk_length_s=10,
    platform='apple'
)

# set stride in seconds
mic_stream = MicStream(step_size_s=0.5)
output_stream = StdoutStream()

while True:
    chunk = mic_stream.next_chunk()
    if chunk:
        approved_text, assumption = streaming_pipe(chunk)
        output_stream.rewrite(approved_text, assumption)
    else:
        break
```
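
The same loop works offline by swapping the microphone for the `FileStream` class imported above. A sketch, assuming `FileStream` takes a path plus the same `step_size_s` stride as `MicStream`:

```python
# Sketch: stream a WAV file through the same pipeline.
# Assumes FileStream(path, step_size_s=...) mirrors the MicStream interface.
file_stream = FileStream("path_to_your_audio.wav", step_size_s=0.5)
output_stream = StdoutStream()

while True:
    chunk = file_stream.next_chunk()
    if chunk:
        approved_text, assumption = streaming_pipe(chunk)
        output_stream.rewrite(approved_text, assumption)
    else:
        break
```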
|
|
|
|
|
### NVIDIA Usage (Hugging Face Transformers)
|
|
|
|
|
```python
from thestage_speechkit.nvidia import ASRPipeline

hf_token = "..."  # your Hugging Face access token

model = ASRPipeline(
    model='TheStageAI/thewhisper-large-v3',
    # allowed chunk lengths: 10, 15, 20, 30 seconds
    chunk_length_s=10,
    device='cuda',
    token=hf_token
)

# inference
result = model(
    audio="path_to_your_audio.wav",
    max_batch_size=32,
    return_timestamps="segment"
)

print(result["text"])
```
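
With `return_timestamps="segment"`, segment-level timing comes back with the text. Assuming the same Hugging Face-style `chunks` layout as in the Apple example (again an assumption), the segments can be printed as a simple timed transcript:

```python
# Assumption: HF-style output with segment entries under "chunks",
# each holding "text" and a (start, end) "timestamp" tuple.
for i, seg in enumerate(result.get("chunks", []), start=1):
    start, end = seg["timestamp"]
    print(f"{i}: [{start:7.2f}s - {end:7.2f}s] {seg['text'].strip()}")
```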
|
|
|
|
|
### NVIDIA Usage (TheStage AI engines)
|
|
|
|
|
```python
from thestage_speechkit.nvidia import ASRPipeline

hf_token = "..."  # your Hugging Face access token

model = ASRPipeline(
    model='TheStageAI/thewhisper-large-v3',
    # allowed chunk lengths: 10, 15, 20, 30 seconds
    chunk_length_s=10,
    # 'S' selects the optimized TheStage AI engine
    mode='S',
    device='cuda',
    token=hf_token
)

# inference
result = model(
    "path_to_your_audio.wav",
    max_batch_size=32,
    return_timestamps="segment"
)

print(result["text"])
```
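
Since the headline claims are real-time, low-latency inference, it is worth checking throughput on your own hardware. A minimal timing sketch using only the standard library (`audio_duration_s` is a value you supply for your test file):

```python
# Measure wall-clock latency and real-time factor (RTF).
# RTF < 1.0 means transcription runs faster than real time.
import time

audio_duration_s = 60.0  # duration of your test file, in seconds

start = time.perf_counter()
result = model(
    "path_to_your_audio.wav",
    max_batch_size=32,
    return_timestamps="segment"
)
elapsed = time.perf_counter() - start

print(f"latency: {elapsed:.2f}s, RTF: {elapsed / audio_duration_s:.3f}")
```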
|
|
|
|
|
## Model Details |
|
|
--- |
|
|
|
|
|
- **Developed by:** TheStage AI |
|
|
- **Model type:** Speech-to-Text (Automatic Speech Recognition) |
|
|
- **Languages:** Multilingual (same as Whisper Large V3: ~99 languages supported) |
|
|
- **License:** MIT |
|
|
- **Finetuned from:** [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) |
|
|
- **Frameworks:** PyTorch, CoreML |
|
|
- **Supported Platforms:** |
|
|
- NVIDIA GPUs (CUDA 11.8+) |
|
|
- Apple Silicon (M1–M4, macOS 15+) |
|
|
|
|
|
### Links |
|
|
|
|
|
- **Repository:** [https://github.com/TheStageAI/TheWhisper](https://github.com/TheStageAI/TheWhisper) |
|
|
- **Demo / Docs:** [https://app.thestage.ai](https://app.thestage.ai) |
|
|
- **Weights:** [https://huggingface.co/TheStageAI/thewhisper-large-v3](https://huggingface.co/TheStageAI/thewhisper-large-v3) |
|
|
|
|
|
--- |