Update README.md
README.md CHANGED
@@ -4,9 +4,14 @@ pipeline_tag: audio-classification
 tags:
 - automatic-speech-recognition
 - emotion-recognition
-- model_hub_mixin
-- pytorch_model_hub_mixin
 - speaker-identification
+language:
+- en
+metrics:
+- accuracy
+base_model:
+- facebook/wav2vec2-base
+library_name: fairseq
 ---
 
 # Multitask Speech Model with Wav2Vec2
@@ -48,4 +53,4 @@ Evaluation metrics:
 
 Character Error Rate (CER) for character recognition
 
-Accuracy for speaker and emotion classification
+Accuracy for speaker and emotion classification
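For context on the two metrics named in the second hunk, below is a minimal sketch of how Character Error Rate and classification accuracy are commonly computed. It is illustrative only, not code from this repository; the function names `cer` and `accuracy` are placeholders of my own.

```python
# Illustrative sketch of the metrics mentioned in the README's evaluation section:
# CER for the character-recognition head, plain accuracy for the speaker and
# emotion classification heads. Not taken from the repository.

def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: Levenshtein edit distance / number of reference characters."""
    r, h = list(reference), list(hypothesis)
    # DP table of edit distances between prefixes of reference and hypothesis
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(r)][len(h)] / max(len(r), 1)

def accuracy(labels, predictions) -> float:
    """Fraction of predictions that exactly match the labels."""
    correct = sum(1 for y, p in zip(labels, predictions) if y == p)
    return correct / max(len(labels), 1)

print(cer("hello world", "helo world"))                # 1 edit over 11 chars ~= 0.09
print(accuracy(["angry", "sad"], ["angry", "happy"]))  # 0.5
```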