---
language:
- ky
---
Whisper ASR for Kyrgyz Language is an automatic speech recognition (ASR) solution customized for the Kyrgyz language. It is based on the pre-trained Whisper model and has undergone fine-tuning and adaptation to accurately transcribe Kyrgyz speech, taking into account its specific phonetic intricacies.

To run the model, first install the following packages:

```bash
!pip install "datasets>=2.6.1"
!pip install git+https://github.com/huggingface/transformers
!pip install librosa
!pip install "evaluate>=0.30"
!pip install jiwer
!pip install gradio==3.50.2
```
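
With the dependencies installed, the model can also be used directly, without the Gradio demo shown below, through the 🤗 Transformers `pipeline`. The following is a minimal sketch; `example.wav` is a placeholder for any local Kyrgyz audio recording:

```python
from transformers import pipeline

# Load the fine-tuned Kyrgyz checkpoint as an automatic-speech-recognition pipeline.
pipe = pipeline("automatic-speech-recognition", model="UlutSoftLLC/whisper-small-kyrgyz")

# Transcribe a local audio file; "example.wav" is a hypothetical path.
print(pipe("example.wav")["text"])
```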

Linking the notebook to the Hub is straightforward: simply enter your Hub authentication token when prompted.

```python
from huggingface_hub import notebook_login

notebook_login()
```

Now that we've fine-tuned our model, we can build a demo to show off its ASR capabilities! We'll use the 🤗 Transformers `pipeline`, which takes care of the entire ASR workflow, from pre-processing the audio inputs to decoding the model predictions. We'll build our interactive demo with Gradio. Gradio is arguably the most straightforward way of building machine learning demos; with Gradio, we can build a demo in a matter of minutes!

Running the example below will launch a Gradio demo where we can record speech through our computer's microphone and feed it to the fine-tuned Whisper model, which transcribes the corresponding text:

```python
from transformers import pipeline
import gradio as gr

# Load the fine-tuned Kyrgyz checkpoint as an ASR pipeline.
pipe = pipeline(model="UlutSoftLLC/whisper-small-kyrgyz")

def transcribe(audio):
    # Gradio passes the recording as a file path (type="filepath").
    text = pipe(audio)["text"]
    return text

iface = gr.Interface(
    fn=transcribe,
    inputs=gr.Audio(source="microphone", type="filepath"),
    outputs="text",
    title="Whisper Small Kyrgyz",
    description="Realtime demo for Kyrgyz speech recognition using a fine-tuned Whisper small model.",
)

iface.launch()
```
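
`iface.launch()` serves the demo locally; passing `share=True` (a standard Gradio launch argument) additionally creates a temporary public link for sharing the demo.

For recordings longer than the roughly 30-second window Whisper processes at once, the same pipeline can transcribe in chunks. The sketch below uses the pipeline's standard `chunk_length_s` argument; `long_recording.wav` is a placeholder path:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint and enable chunked long-form transcription.
pipe = pipeline(
    "automatic-speech-recognition",
    model="UlutSoftLLC/whisper-small-kyrgyz",
    chunk_length_s=30,  # split long audio into ~30-second windows
)

# "long_recording.wav" is a placeholder for a local audio file.
print(pipe("long_recording.wav")["text"])
```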