---
library_name: transformers
license: mit
tags:
- automatic-speech-recognition
- audio
- speech
- whisper
- multilingual
- streaming
- coreml
- cuda
- nvidia
- apple-silicon
- on-device
---
# TheWhisper-Large-V3
## Model Summary
**TheWhisper-Large-V3** is a fine-tuned, high-performance variant of OpenAI’s Whisper Large V3 model — optimized by **TheStage AI** for **real-time**, **low-latency**, and **low-power** speech-to-text (ASR) inference across multiple platforms, including **NVIDIA GPUs** and **Apple Silicon (CoreML)**.
It provides **streaming transcription**, **word timestamps**, and **scalable performance** for use cases like real-time captioning, meetings, and on-device voice interfaces.
## 📊 Quality Benchmarks
TheWhisper is a fine-tuned Whisper model that can process audio chunks of any size up to 30 seconds. Unlike the original Whisper models, it doesn't require padding audio with silence to reach 30 seconds. We benchmarked quality across chunk sizes of 10, 15, 20, and 30 seconds, using the multilingual test sets from the [Open ASR Leaderboard](https://github.com/huggingface/open_asr_leaderboard#evaluate-a-model).
<img width="1547" height="531" alt="Vanilla Whisper benchmark results" src="https://github.com/user-attachments/assets/f0c86e58-d834-4ac7-a06b-df3a7ae3e9e9" />
<img width="1547" height="458" alt="TheStage AI Whisper benchmark results" src="https://github.com/user-attachments/assets/17fb45a3-b33d-4c83-b843-69b0f0aa3f65" />
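The mean WER values below come from the leaderboard's evaluation harness. As a quick sanity check, the metric itself can be computed with the `jiwer` package (a minimal sketch with placeholder strings; the leaderboard additionally normalizes text before scoring, which this skips):

```python
# Minimal word error rate sketch using jiwer; the reference and hypothesis
# lists are placeholders, not benchmark data.
from jiwer import wer

references = ["the quick brown fox jumps", "hello world"]
hypotheses = ["the quick brown box jumps", "hello word"]

# jiwer returns a fraction; multiply by 100 for the percent values
# reported in the tables below.
print(f"Mean WER: {100 * wer(references, hypotheses):.2f}%")
```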
### 10s chunks
| Model | Mean WER (%) |
|-------|-----------------|
| openai/whisper-large-v3-turbo | 7.81 |
| openai/whisper-large-v3 | 7.45 |
| thewhisper-large-v3-turbo | 7.88 |
| thewhisper-large-v3 | 7.80 |
### 15s chunks
| Model | Mean WER (%) |
|-------|-----------------|
| openai/whisper-large-v3-turbo | 7.61 |
| openai/whisper-large-v3 | 7.22 |
| thewhisper-large-v3-turbo | 7.45 |
| thewhisper-large-v3 | 7.34 |
### 20s chunks
| Model | Mean WER (%) |
|-------|-----------------|
| openai/whisper-large-v3-turbo | 7.63 |
| openai/whisper-large-v3 | 7.29 |
| thewhisper-large-v3-turbo | 7.47 |
| thewhisper-large-v3 | 7.31 |
### 30s chunks
| Model | Mean WER (%) |
|-------|-----------------|
| openai/whisper-large-v3-turbo | 7.61 |
| openai/whisper-large-v3 | 7.32 |
| thewhisper-large-v3-turbo | 7.45 |
| thewhisper-large-v3 | 7.28 |
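Across chunk sizes, the fine-tuned models stay within roughly 0.35 mean WER of the originals at 10-second chunks and match or outperform them at 20- and 30-second chunks, while removing the fixed 30-second padding requirement.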
## Quick Start
---
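All of the examples below pass a Hugging Face access token as `token=hf_token`. The variable is not defined in the snippets themselves; one way to supply it (an assumption, not part of the original examples) is via an environment variable:

```python
import os

# Assumption: the token is provided through the HF_TOKEN environment
# variable; any other way of obtaining a Hugging Face access token works too.
hf_token = os.environ["HF_TOKEN"]
```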
### Apple Usage
```python
import torch
from thestage_speechkit.apple import ASRPipeline

model = ASRPipeline(
    model='TheStageAI/thewhisper-large-v3',
    # model size optimized with ANNA
    model_size='S',
    # window length in seconds
    chunk_length_s=10,
    # Hugging Face access token
    token=hf_token,
)

# inference with word-level timestamps
result = model(
    "path_to_your_audio.wav",
    max_batch_size=32,
    return_timestamps="word",
)
print(result["text"])
```
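With `return_timestamps="word"`, the result should also carry per-word timing. Assuming the output follows the Hugging Face `transformers` ASR pipeline convention of a `chunks` list (an assumption; the exact schema is not documented above), the timestamps could be read like this:

```python
# Hypothetical: assumes result["chunks"] mirrors the transformers pipeline
# format, i.e. a list of {"text": ..., "timestamp": (start, end)} entries.
for chunk in result.get("chunks", []):
    start, end = chunk["timestamp"]
    print(f"{start:7.2f}s - {end:7.2f}s  {chunk['text']}")
```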
### Apple Usage with Streaming
```python
from thestage_speechkit.apple import WhisperStreamingPipeline
from thestage_speechkit.streaming import MicStream, FileStream, StdoutStream

streaming_pipe = WhisperStreamingPipeline(
    model='TheStageAI/thewhisper-large-v3',
    # model size optimized with ANNA
    model_size='S',
    # window length in seconds
    chunk_length_s=10,
    platform='apple',
)

# microphone stride in seconds (0.5 s = 500 ms)
mic_stream = MicStream(step_size_s=0.5)
output_stream = StdoutStream()

while True:
    chunk = mic_stream.next_chunk()
    if chunk:
        approved_text, assumption = streaming_pipe(chunk)
        output_stream.rewrite(approved_text, assumption)
    else:
        break
```
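The same loop also works offline: `FileStream` is already imported above. Assuming its constructor takes an audio path plus the same stride parameter as `MicStream` (its exact signature is an assumption), a file-based run might look like this:

```python
# Hypothetical usage: FileStream is assumed to accept an audio path and
# the same step_size_s stride as MicStream; streaming_pipe and
# output_stream are reused from the example above.
file_stream = FileStream("path_to_your_audio.wav", step_size_s=0.5)

while True:
    chunk = file_stream.next_chunk()
    if not chunk:
        break
    approved_text, assumption = streaming_pipe(chunk)
    output_stream.rewrite(approved_text, assumption)
```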
### NVIDIA Usage (Hugging Face Transformers)
```python
import torch
from thestage_speechkit.nvidia import ASRPipeline

model = ASRPipeline(
    model='TheStageAI/thewhisper-large-v3',
    # allowed chunk sizes: 10s, 15s, 20s, 30s
    chunk_length_s=10,
    device='cuda',
    # Hugging Face access token
    token=hf_token,
)

# inference with segment-level timestamps
result = model(
    audio="path_to_your_audio.wav",
    max_batch_size=32,
    return_timestamps="segment",
)
print(result["text"])
```
### NVIDIA Usage (TheStage AI engines)
```python
import torch
from thestage_speechkit.nvidia import ASRPipeline

model = ASRPipeline(
    model='TheStageAI/thewhisper-large-v3',
    # allowed chunk sizes: 10s, 15s, 20s, 30s
    chunk_length_s=10,
    # optimized TheStage AI engine
    mode='S',
    device='cuda',
    # Hugging Face access token
    token=hf_token,
)

# inference with segment-level timestamps
result = model(
    "path_to_your_audio.wav",
    max_batch_size=32,
    return_timestamps="segment",
)
print(result["text"])
```
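The two NVIDIA examples differ only in the `mode='S'` argument: passing it selects an optimized TheStage AI (ANNA) engine, while omitting it runs the stock Hugging Face Transformers backend.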
## Model Details
---
- **Developed by:** TheStage AI
- **Model type:** Speech-to-Text (Automatic Speech Recognition)
- **Languages:** Multilingual (same as Whisper Large V3: ~99 languages supported)
- **License:** MIT
- **Finetuned from:** [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3)
- **Frameworks:** PyTorch, CoreML
- **Supported Platforms:**
- NVIDIA GPUs (CUDA 11.8+)
- Apple Silicon (M1–M4, macOS 15+)
### Links
- **Repository:** [https://github.com/TheStageAI/TheWhisper](https://github.com/TheStageAI/TheWhisper)
- **Demo / Docs:** [https://app.thestage.ai](https://app.thestage.ai)
- **Weights:** [https://huggingface.co/TheStageAI/thewhisper-large-v3](https://huggingface.co/TheStageAI/thewhisper-large-v3)
---