Add library name, pipeline tag, link to paper and Github repo

#1
by nielsr (HF Staff) - opened

Files changed (1): README.md (+8 -4)
README.md CHANGED

```diff
@@ -1,13 +1,13 @@
 ---
-license: cc-by-4.0
-language:
-- en
-- it
 datasets:
 - FBK-MT/mosel
 - facebook/covost2
 - openslr/librispeech_asr
 - facebook/voxpopuli
+language:
+- en
+- it
+license: cc-by-4.0
 metrics:
 - comet
 - wer
@@ -17,6 +17,8 @@ tags:
 - speech translation
 - ASR
 - ST
+pipeline_tag: automatic-speech-recognition
+library_name: transformers
 ---
 
 # FAMA-medium
@@ -43,6 +45,7 @@ All the artifacts used for realizing FAMA models, including codebase, datasets,
 themself are [released under OS-compliant licenses](#license), promoting a more
 responsible creation of models in our community.
 
+For further details, please refer to the paper [FAMA: The First Large-Scale Open-Science Speech Foundation Model for English and Italian](https://huggingface.co/papers/2505.22759).
 
 It is available in 2 sizes, with 2 variants for ASR only:
 
@@ -52,6 +55,7 @@ It is available in 2 sizes, with 2 variants for ASR only:
 - [FAMA-medium-asr](https://huggingface.co/FBK-MT/fama-medium-asr) - 878 million parameters
 
 For more information about FAMA, please check our [blog post](https://huggingface.co/blog/FAMA/release) and the [arXiv](https://arxiv.org/abs/2505.22759) preprint.
+Code can be found at: https://github.com/hlt-mt/FBK-fairseq
 
 ## Usage
 
```
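
The metadata changes above only touch the YAML front matter of the model card, i.e. the block between the two `---` delimiters that the Hub parses for fields like `pipeline_tag` and `library_name`. A minimal stdlib-only sketch of how such a block can be located and read (the `front_matter` helper and the truncated `README` string are hypothetical illustrations, not part of any Hugging Face library):

```python
# Hypothetical, abbreviated model card with the fields this PR adds.
README = """\
---
license: cc-by-4.0
pipeline_tag: automatic-speech-recognition
library_name: transformers
---

# FAMA-medium
"""

def front_matter(text: str) -> str:
    # The front matter sits between the first two '---' delimiter lines.
    if not text.startswith("---\n"):
        return ""
    end = text.index("\n---", 4)
    return text[4:end]

# Parse simple "key: value" lines into a dict (ignores YAML lists/nesting).
meta = dict(
    line.split(": ", 1)
    for line in front_matter(README).splitlines()
    if ": " in line
)
print(meta["pipeline_tag"])  # automatic-speech-recognition
```

This is only a simplification: real model cards should be parsed with a full YAML parser, since fields such as `language` and `datasets` are lists.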
61