Updated README
README.md
CHANGED
@@ -30,6 +30,14 @@ This model continues pre-training from a [model](https://huggingface.co/ALM/wav2

We evaluate voc2vec-as-pt on six datasets: ASVP-ESD, ASVP-ESD (babies), CNVVE, NonVerbal Vocalization Dataset, Donate a Cry, VIVAE.

+## Available Models
+
+| Model | Description | Link |
+|--------|-------------|------|
+| **voc2vec** | Model pre-trained on **125 hours of non-verbal audio**. | [🔗 Model](https://huggingface.co/alkiskoudounas/voc2vec) |
+| **voc2vec-as-pt** | Continues pre-training from a model that was **initially trained on the AudioSet dataset**. | [🔗 Model](https://huggingface.co/alkiskoudounas/voc2vec-as-pt) |
+| **voc2vec-ls-pt** | Continues pre-training from a model that was **initially trained on the LibriSpeech dataset**. | [🔗 Model](https://huggingface.co/alkiskoudounas/voc2vec-ls-pt) |
+
## Usage examples

You can use the model directly in the following manner:
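The diff is cut off before the usage example itself. The snippet below is a minimal sketch of what such usage could look like with the `transformers` library, assuming a standard wav2vec 2.0-style checkpoint; the model ID comes from the table above, while the dummy waveform and mean-pooling step are purely illustrative.

```python
# Minimal sketch (not the README's own example): extract utterance-level
# embeddings from voc2vec-as-pt using the standard transformers API.
import numpy as np
import torch
from transformers import AutoFeatureExtractor, AutoModel

model_id = "alkiskoudounas/voc2vec-as-pt"  # taken from the Available Models table
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

# One second of dummy 16 kHz audio; replace with a real non-verbal vocalization clip.
waveform = np.random.randn(16000).astype(np.float32)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Frame-level hidden states, mean-pooled into one embedding per clip.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # e.g. torch.Size([1, 768]) for a base-sized encoder
```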