# Whisper_Cleverlytics
## Usage

To run the model, first install Transformers from its GitHub repository, together with Accelerate and the audio extras of Datasets:

```bash
pip install --upgrade pip
pip install --upgrade git+https://github.com/huggingface/transformers.git accelerate datasets[audio]
```

Then load the model and wrap it in an automatic-speech-recognition pipeline:

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline

# Run on GPU in half precision when available, otherwise on CPU in float32.
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
model_id = "smerchi/Voice_Cleverlytics_large-v3"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=False, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    chunk_length_s=30,   # split long audio into 30-second chunks
    batch_size=16,
    return_timestamps=True,
    torch_dtype=torch_dtype,
    device=device,
)

audio = "/content/audio.mp3"  # path to the audio file to transcribe

result = pipe(audio)
print(result["text"])
```

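Because the pipeline is created with `return_timestamps=True`, its output also carries per-segment timestamps under the `"chunks"` key. A minimal sketch of rendering them as a readable transcript — the `result` dict below is a made-up illustration of the pipeline's output shape, not real model output:

```python
# Illustrative stand-in for a pipeline result when return_timestamps=True:
# a full "text" field plus a list of timestamped chunks.
result = {
    "text": "hello world",
    "chunks": [
        {"timestamp": (0.0, 1.2), "text": "hello"},
        {"timestamp": (1.2, 2.5), "text": "world"},
    ],
}

def format_transcript(result):
    """Render each timestamped chunk as '[start -> end] text'."""
    lines = []
    for chunk in result["chunks"]:
        start, end = chunk["timestamp"]
        lines.append(f"[{start:.2f} -> {end:.2f}] {chunk['text'].strip()}")
    return "\n".join(lines)

print(format_transcript(result))
# prints:
# [0.00 -> 1.20] hello
# [1.20 -> 2.50] world
```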
  ### Training hyperparameters

The following hyperparameters were used during training:

- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.16.0
- Tokenizers 0.14.1