drepic committed on
Commit 8cfa643 · verified · 1 Parent(s): 79733bd

Initial CT2 export (reorganized from base repo; copied card & assets)

Files changed (6)
  1. README.md +92 -0
  2. config.json +237 -0
  3. model.bin +3 -0
  4. preprocessor_config.json +15 -0
  5. tokenizer.json +0 -0
  6. vocabulary.json +0 -0
README.md ADDED
@@ -0,0 +1,92 @@
+ ---
+ library_name: ctranslate2
+ license: apache-2.0
+ base_model: openai/whisper-small
+ tags:
+ - audio
+ - automatic-speech-recognition
+ - ctranslate2
+ - faster-whisper
+ - generated_from_trainer
+ - whisper
+ metrics:
+ - wer
+ model-index:
+ - name: whisper-small-jp
+   results: []
+ ---
+
+ > **This repository contains the CTranslate2 export of the fine-tuned model.**
+ >
+ > • Base Transformers model: [drepic/whisper-small-jp](https://huggingface.co/drepic/whisper-small-jp)
+ > • Use with `faster-whisper`:
+ >
+ > ```python
+ > from faster_whisper import WhisperModel
+ > model = WhisperModel("drepic/whisper-small-jp-ct2", device="cuda", compute_type="float16")
+ > ```
+
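+ For transcription, a minimal sketch (the audio path, language hint, and beam size below are illustrative assumptions, not part of this repo):
+
+ ```python
+ from faster_whisper import WhisperModel
+
+ model = WhisperModel("drepic/whisper-small-jp-ct2", device="cuda", compute_type="float16")
+
+ # "audio.wav" is a placeholder path; language="ja" matches the fine-tuning target.
+ segments, info = model.transcribe("audio.wav", language="ja", beam_size=5)
+ for segment in segments:
+     print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
+ ```
+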
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # whisper-small-jp
+
+ This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset.
+ It achieves the following results on the evaluation set (a sketch of how these metrics are computed follows the list):
+ - Loss: 0.6168
+ - WER: 0.2600
+ - CER: 0.2600
+
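+ WER and CER are word- and character-error rates. For Japanese, references are often segmented so that each character counts as a "word", which may explain why the two coincide above. A hedged sketch with the `evaluate` library (the example strings are made up; this is not the exact evaluation code behind these numbers):
+
+ ```python
+ import evaluate
+
+ wer_metric = evaluate.load("wer")
+ cer_metric = evaluate.load("cer")
+
+ predictions = ["今日 は いい 天気 です"]  # hypothetical model output
+ references = ["今日 は いい 天気 だ"]     # hypothetical ground truth
+
+ print("WER:", wer_metric.compute(predictions=predictions, references=references))
+ print("CER:", cer_metric.compute(predictions=predictions, references=references))
+ ```
+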
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):
+ - learning_rate: 5e-06
+ - train_batch_size: 8
+ - eval_batch_size: 4
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 2
+ - total_train_batch_size: 16
+ - total_eval_batch_size: 8
+ - optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 300
+ - num_epochs: 10
+ - mixed_precision_training: Native AMP
+
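+ A hedged reconstruction of these settings as `transformers.Seq2SeqTrainingArguments` (the output directory is a placeholder; per-device batch sizes follow from the totals across 2 GPUs):
+
+ ```python
+ from transformers import Seq2SeqTrainingArguments
+
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="whisper-small-jp",     # placeholder path
+     learning_rate=5e-6,
+     per_device_train_batch_size=8,     # x 2 GPUs = total 16
+     per_device_eval_batch_size=4,      # x 2 GPUs = total 8
+     seed=42,
+     optim="adamw_torch_fused",
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-8,
+     lr_scheduler_type="linear",
+     warmup_steps=300,
+     num_train_epochs=10,
+     fp16=True,                         # Native AMP mixed precision
+ )
+ ```
+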
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | WER | CER |
+ |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
+ | 0.6589 | 1.0 | 7154 | 0.6615 | 0.2735 | 0.2735 |
+ | 0.6273 | 2.0 | 14308 | 0.6457 | 0.2699 | 0.2699 |
+ | 0.6251 | 3.0 | 21462 | 0.6359 | 0.2660 | 0.2660 |
+ | 0.6427 | 4.0 | 28616 | 0.6283 | 0.2642 | 0.2642 |
+ | 0.6389 | 5.0 | 35770 | 0.6243 | 0.2631 | 0.2631 |
+ | 0.6078 | 6.0 | 42924 | 0.6242 | 0.2615 | 0.2615 |
+ | 0.5788 | 7.0 | 50078 | 0.6195 | 0.2603 | 0.2603 |
+ | 0.5801 | 8.0 | 57232 | 0.6180 | 0.2596 | 0.2596 |
+ | 0.5866 | 9.0 | 64386 | 0.6145 | 0.2598 | 0.2598 |
+ | 0.6052 | 10.0 | 71540 | 0.6168 | 0.2600 | 0.2600 |
+
+
+ ### Framework versions
+
+ - Transformers 4.56.1
+ - PyTorch 2.8.0+cu128
+ - Datasets 4.0.0
+ - Tokenizers 0.22.0
config.json ADDED
@@ -0,0 +1,237 @@
+ {
+   "alignment_heads": [
+     [
+       5,
+       3
+     ],
+     [
+       5,
+       9
+     ],
+     [
+       8,
+       0
+     ],
+     [
+       8,
+       4
+     ],
+     [
+       8,
+       7
+     ],
+     [
+       8,
+       8
+     ],
+     [
+       9,
+       0
+     ],
+     [
+       9,
+       7
+     ],
+     [
+       9,
+       9
+     ],
+     [
+       10,
+       5
+     ]
+   ],
+   "lang_ids": [
+     50259,
+     50260,
+     50261,
+     50262,
+     50263,
+     50264,
+     50265,
+     50266,
+     50267,
+     50268,
+     50269,
+     50270,
+     50271,
+     50272,
+     50273,
+     50274,
+     50275,
+     50276,
+     50277,
+     50278,
+     50279,
+     50280,
+     50281,
+     50282,
+     50283,
+     50284,
+     50285,
+     50286,
+     50287,
+     50288,
+     50289,
+     50290,
+     50291,
+     50292,
+     50293,
+     50294,
+     50295,
+     50296,
+     50297,
+     50298,
+     50299,
+     50300,
+     50301,
+     50302,
+     50303,
+     50304,
+     50305,
+     50306,
+     50307,
+     50308,
+     50309,
+     50310,
+     50311,
+     50312,
+     50313,
+     50314,
+     50315,
+     50316,
+     50317,
+     50318,
+     50319,
+     50320,
+     50321,
+     50322,
+     50323,
+     50324,
+     50325,
+     50326,
+     50327,
+     50328,
+     50329,
+     50330,
+     50331,
+     50332,
+     50333,
+     50334,
+     50335,
+     50336,
+     50337,
+     50338,
+     50339,
+     50340,
+     50341,
+     50342,
+     50343,
+     50344,
+     50345,
+     50346,
+     50347,
+     50348,
+     50349,
+     50350,
+     50351,
+     50352,
+     50353,
+     50354,
+     50355,
+     50356,
+     50357
+   ],
+   "suppress_ids": [
+     1,
+     2,
+     7,
+     8,
+     9,
+     10,
+     14,
+     25,
+     26,
+     27,
+     28,
+     29,
+     31,
+     58,
+     59,
+     60,
+     61,
+     62,
+     63,
+     90,
+     91,
+     92,
+     93,
+     359,
+     503,
+     522,
+     542,
+     873,
+     893,
+     902,
+     918,
+     922,
+     931,
+     1350,
+     1853,
+     1982,
+     2460,
+     2627,
+     3246,
+     3253,
+     3268,
+     3536,
+     3846,
+     3961,
+     4183,
+     4667,
+     6585,
+     6647,
+     7273,
+     9061,
+     9383,
+     10428,
+     10929,
+     11938,
+     12033,
+     12331,
+     12562,
+     13793,
+     14157,
+     14635,
+     15265,
+     15618,
+     16553,
+     16604,
+     18362,
+     18956,
+     20075,
+     21675,
+     22520,
+     26130,
+     26161,
+     26435,
+     28279,
+     29464,
+     31650,
+     32302,
+     32470,
+     36865,
+     42863,
+     47425,
+     49870,
+     50254,
+     50258,
+     50360,
+     50361,
+     50362
+   ],
+   "suppress_ids_begin": [
+     220,
+     50257
+   ]
+ }
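The arrays above are Whisper's generation metadata in CTranslate2's layout: `alignment_heads` lists the (layer, head) attention heads used by faster-whisper for word-level timestamps, `lang_ids` are the language-token ids, and `suppress_ids` / `suppress_ids_begin` are token ids the decoder must not emit at all or as the first sampled token. A minimal sanity-check sketch in plain Python (no CTranslate2 required; assumes the file has been downloaded locally):

```python
import json

# Load the committed CTranslate2 config.
with open("config.json") as f:
    cfg = json.load(f)

print(len(cfg["alignment_heads"]), "(layer, head) alignment pairs")
print(len(cfg["lang_ids"]), "language tokens")  # 99 for multilingual Whisper
print(len(cfg["suppress_ids"]), "always-suppressed token ids")
print("suppressed at start of sampling:", cfg["suppress_ids_begin"])
```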
model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:90e6f2bfa6c65dff16d77cad085f3bef96f7fe869ca26bdf86329cbc8e5d6e7a
+ size 483546977
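model.bin is tracked with Git LFS, so the commit stores only this pointer (hash and size, ~484 MB); the weights live in LFS storage. A hedged sketch for verifying a downloaded copy against the pointer (the local path is an assumption):

```python
import hashlib

# oid from the LFS pointer above.
EXPECTED = "90e6f2bfa6c65dff16d77cad085f3bef96f7fe869ca26bdf86329cbc8e5d6e7a"

sha = hashlib.sha256()
with open("model.bin", "rb") as f:  # assumed local download path
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)

assert sha.hexdigest() == EXPECTED, "model.bin does not match the LFS pointer"
print("OK:", sha.hexdigest())
```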
preprocessor_config.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "chunk_length": 30,
+   "dither": 0.0,
+   "feature_extractor_type": "WhisperFeatureExtractor",
+   "feature_size": 80,
+   "hop_length": 160,
+   "n_fft": 400,
+   "n_samples": 480000,
+   "nb_max_frames": 3000,
+   "padding_side": "right",
+   "padding_value": 0.0,
+   "processor_class": "WhisperProcessor",
+   "return_attention_mask": false,
+   "sampling_rate": 16000
+ }
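These values are internally consistent: 30 s chunks at 16000 Hz give n_samples = 480000, and with a hop of 160 samples that is 480000 / 160 = 3000 frames of 80 mel features. A minimal sketch with `transformers.WhisperFeatureExtractor` mirroring these settings (the zero-filled audio is a stand-in for real input):

```python
import numpy as np
from transformers import WhisperFeatureExtractor

# Mirror the committed preprocessor_config.json.
fe = WhisperFeatureExtractor(
    feature_size=80,
    sampling_rate=16000,
    hop_length=160,
    chunk_length=30,
    n_fft=400,
)

audio = np.zeros(16000 * 5, dtype=np.float32)  # 5 s of silence as dummy input
features = fe(audio, sampling_rate=16000, return_tensors="np").input_features

print(features.shape)  # (1, 80, 3000): padded to the full 30 s window
```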
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
vocabulary.json ADDED
The diff for this file is too large to render. See raw diff