# Publications

Here we list research articles that use the NeMo Toolkit. If you would like to add your paper to this collection, please submit a PR updating this document.

--------
# Automatic Speech Recognition (ASR)

<details>
<summary>2023</summary>

* [Fast Entropy-Based Methods of Word-Level Confidence Estimation for End-to-End Automatic Speech Recognition](https://ieeexplore.ieee.org/abstract/document/10022960)
* [Damage Control During Domain Adaptation for Transducer Based Automatic Speech Recognition](https://ieeexplore.ieee.org/abstract/document/10023219)

</details>

<details>
<summary>2022</summary>

* [Multi-blank Transducers for Speech Recognition](https://arxiv.org/abs/2211.03541)

</details>

<details>
<summary>2021</summary>

* [Citrinet: Closing the Gap between Non-Autoregressive and Autoregressive End-to-End Models for Automatic Speech Recognition](https://arxiv.org/abs/2104.01721)
* [SPGISpeech: 5,000 hours of transcribed financial audio for fully formatted end-to-end speech recognition](https://www.isca-speech.org/archive/interspeech_2021/oneill21_interspeech.html)
* [CarneliNet: Neural Mixture Model for Automatic Speech Recognition](https://arxiv.org/abs/2107.10708)
* [CTC Variations Through New WFST Topologies](https://arxiv.org/abs/2110.03098)
* [A Toolbox for Construction and Analysis of Speech Datasets](https://openreview.net/pdf?id=oJ0oHQtAld)

</details>

<details>
<summary>2020</summary>

* [Cross-Language Transfer Learning, Continuous Learning, and Domain Adaptation for End-to-End Automatic Speech Recognition](https://ieeexplore.ieee.org/document/9428334)
* [Correction of Automatic Speech Recognition with Transformer Sequence-To-Sequence Model](https://ieeexplore.ieee.org/abstract/document/9053051)
* [Improving Noise Robustness of an End-to-End Neural Model for Automatic Speech Recognition](https://arxiv.org/abs/2010.12715)

</details>

<details>
<summary>2019</summary>

* [Jasper: An End-to-End Convolutional Neural Acoustic Model](https://arxiv.org/abs/1904.03288)
* [QuartzNet: Deep Automatic Speech Recognition with 1D Time-Channel Separable Convolutions](https://arxiv.org/abs/1910.10261)

</details>

--------
# Speaker Recognition (SpkR)

<details>
<summary>2022</summary>

* [TitaNet: Neural Model for Speaker Representation with 1D Depth-Wise Separable Convolutions and Global Context](https://ieeexplore.ieee.org/abstract/document/9746806)

</details>

<details>
<summary>2020</summary>

* [SpeakerNet: 1D Depth-wise Separable Convolutional Network for Text-Independent Speaker Recognition and Verification](https://arxiv.org/pdf/2010.12653.pdf)

</details>

--------
# Speech Classification

<details>
<summary>2022</summary>

* [AmberNet: A Compact End-to-End Model for Spoken Language Identification](https://arxiv.org/abs/2210.15781)
* [Accidental Learners: Spoken Language Identification in Multilingual Self-Supervised Models](https://arxiv.org/abs/2211.05103)

</details>

<details>
<summary>2021</summary>

* [MarbleNet: Deep 1D Time-Channel Separable Convolutional Neural Network for Voice Activity Detection](https://ieeexplore.ieee.org/abstract/document/9414470/)

</details>

<details>
<summary>2020</summary>

* [MatchboxNet - 1D Time-Channel Separable Convolutional Neural Network Architecture for Speech Commands Recognition](http://www.interspeech2020.org/index.php?m=content&c=index&a=show&catid=337&id=993)

</details>

--------
# Speech Translation

<details>
<summary>2022</summary>

* [NVIDIA NeMo Offline Speech Translation Systems for IWSLT 2022](https://aclanthology.org/2022.iwslt-1.18/)

</details>

--------
# Natural Language Processing (NLP)

## Language Modeling

<details>
<summary>2022</summary>

* [Evaluating Parameter Efficient Learning for Generation](https://arxiv.org/abs/2210.13673)
* [Text Mining Drug/Chemical-Protein Interactions using an Ensemble of BERT and T5 Based Models](https://arxiv.org/abs/2111.15617)

</details>

<details>
<summary>2020</summary>

* [BioMegatron: Larger Biomedical Domain Language Model](https://aclanthology.org/2020.emnlp-main.379/)

</details>

## Neural Machine Translation

<details>
<summary>2022</summary>

* [Finding the Right Recipe for Low Resource Domain Adaptation in Neural Machine Translation](https://arxiv.org/abs/2206.01137)

</details>
<details>
<summary>2021</summary>

* [NVIDIA NeMo Neural Machine Translation Systems for English-German and English-Russian News and Biomedical Tasks at WMT21](https://arxiv.org/pdf/2111.08634.pdf)

</details>

## Dialogue State Tracking

<details>
<summary>2021</summary>

* [SGD-QA: Fast Schema-Guided Dialogue State Tracking for Unseen Services](https://arxiv.org/abs/2105.08049)

</details>

<details>
<summary>2020</summary>

* [A Fast and Robust BERT-based Dialogue State Tracker for Schema-Guided Dialogue Dataset](https://arxiv.org/abs/2008.12335)

</details>

--------
# Text-to-Speech (TTS)

<details>
<summary>2022</summary>

* [Adapter-Based Extension of Multi-Speaker Text-to-Speech Model for New Speakers](https://arxiv.org/abs/2211.00585)

</details>

<details>
<summary>2021</summary>

* [TalkNet: Fully-Convolutional Non-Autoregressive Speech Synthesis Model](https://www.isca-speech.org/archive/interspeech_2021/beliaev21_interspeech.html)
* [TalkNet 2: Non-Autoregressive Depth-Wise Separable Convolutional Model for Speech Synthesis with Explicit Pitch and Duration Prediction](https://arxiv.org/abs/2104.08189)
* [Hi-Fi Multi-Speaker English TTS Dataset](https://www.isca-speech.org/archive/pdfs/interspeech_2021/bakhturina21_interspeech.pdf)
* [Mixer-TTS: non-autoregressive, fast and compact text-to-speech model conditioned on language model embeddings](https://arxiv.org/abs/2110.03584)

</details>

--------
# (Inverse) Text Normalization

<details>
<summary>2022</summary>

* [Shallow Fusion of Weighted Finite-State Transducer and Language Model for Text Normalization](https://arxiv.org/abs/2203.15917)
* [Thutmose Tagger: Single-pass neural model for Inverse Text Normalization](https://arxiv.org/abs/2208.00064)

</details>

<details>
<summary>2021</summary>

* [NeMo Inverse Text Normalization: From Development to Production](https://www.isca-speech.org/archive/pdfs/interspeech_2021/zhang21ga_interspeech.pdf)
* [A Unified Transformer-based Framework for Duplex Text Normalization](https://arxiv.org/pdf/2108.09889.pdf)

</details>

--------