ahnafsamin committed
Commit c056a87
1 Parent(s): f7ecca8

Update README.md

Files changed (1)
  1. README.md +12 -11
README.md CHANGED

@@ -49,7 +49,7 @@ size_categories:
  ## Dataset Description

  - **Developed By:** Dept. of CSE, SUST, Bangladesh
- - **Paper:** [BanSpeech: A Multi-domain Bangla Speech Recognition Benchmark Toward Robust Performance in Challenging Conditions](https://www.sciencedirect.com/science/article/abs/pii/S0167639321001370)
+ - **Paper:** [BanSpeech: A Multi-domain Bangla Speech Recognition Benchmark Toward Robust Performance in Challenging Conditions](https://ieeexplore.ieee.org/document/10453554)
  - **Point of Contact:** [Ahnaf Mozib Samin, Dept. of CSE, SUST](mailto:[email protected])

  ### Dataset Summary
@@ -58,7 +58,7 @@ BanSpeech is a publicly available human-annotated Bangladeshi standard Bangla mu
  This benchmark contains approximately 6.52 hours of human-annotated broadcast speech, totaling 8085 utterances, across 13 distinct domains, and
  is primarily designed for ASR performance evaluation in challenging conditions, e.g., spontaneous, domain-shifting, multi-talker, and code-switching speech.
  In addition, BanSpeech covers dialectal domains from 7 regions of Bangladesh; however, this part is weakly labeled and can be used for the dialect recognition task.
- The [corresponding paper](https://www.sciencedirect.com/science/article/abs/pii/S0167639321001370) reports
+ The [corresponding paper](https://ieeexplore.ieee.org/document/10453554) reports
  detailed information about the development of BanSpeech, along with an analysis of the performance of state-of-the-art
  fully supervised, self-supervised, and weakly supervised models on BanSpeech.

@@ -132,15 +132,16 @@ A typical data point comprises the path to the audio file and its transcription.
  Please cite the following paper if you use the corpus.

  ```
- @article{kibria2022bangladeshi,
- title={Bangladeshi Bangla speech corpus for automatic speech recognition research},
- author={Kibria, Shafkat and Samin, Ahnaf Mozib and Kobir, M Humayon and Rahman, M Shahidur and Selim, M Reza and Iqbal, M Zafar},
- journal={Speech Communication},
- volume={136},
- pages={84--97},
- year={2022},
- publisher={Elsevier}
- }
+ @ARTICLE{10453554,
+ author={Samin, Ahnaf Mozib and Kobir, M. Humayon and Rafee, Md. Mushtaq Shahriyar and Ahmed, M. Firoz and Hasan, Mehedi and Ghosh, Partha and Kibria, Shafkat and Rahman, M. Shahidur},
+ journal={IEEE Access},
+ title={BanSpeech: A Multi-Domain Bangla Speech Recognition Benchmark Toward Robust Performance in Challenging Conditions},
+ year={2024},
+ volume={12},
+ number={},
+ pages={34527-34538},
+ keywords={Speech recognition;Data models;Benchmark testing;Speech processing;Robustness;Solid modeling;Task analysis;Automatic speech recognition;Transfer learning;Neural networks;Convolutional neural networks;Supervised learning;Automatic speech recognition;Bangla;domain shifting;read speech;spontaneous speech;transfer learning},
+ doi={10.1109/ACCESS.2024.3371478}}
  ```

  ### Contributions
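
For readers of the card, a minimal usage sketch with the Hugging Face `datasets` library follows. The repository ID and split name are assumptions for illustration, not values taken from this README, and the exact field names of each data point should be checked on the dataset card.

```python
# A minimal sketch, assuming BanSpeech is hosted as a loadable dataset on the
# Hugging Face Hub. The repository ID and split name below are illustrative
# placeholders, not taken from this README.
from datasets import load_dataset

ds = load_dataset("ahnafsamin/BanSpeech", split="train")  # hypothetical repo ID and split

# A typical data point comprises the path to the audio file and its
# transcription; print one example to check the exact field names.
print(ds[0])
```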