nielsr (HF Staff) committed

Commit 4e4798b · verified · Parent: 45754dd

Improve model card: Add library_name, paper/project/GitHub links, and full abstract


This PR enhances the model card for TaDiCodec by:

* **Adding `library_name: transformers` to the metadata**: This enables the automated "How to use with 🤗 Transformers" widget on the model page. Evidence comes from the GitHub repository's acknowledgments (which reference "NAR Llama-style transformers" built upon the `transformers` library) and from `tokenizer_config.json` (which specifies `"tokenizer_class": "LlamaTokenizer"`); see the loading sketch after this list.
* **Adding explicit links for the paper, project page, and GitHub repository**: Badges already exist for some of these, but clear markdown links (`[Paper](link)`, `Project page: [link]`, `GitHub Repository: [link]`) directly under the main model title improve discoverability and put comprehensive documentation upfront.
* **Replacing the introductory summary with the full paper abstract**: The model card's initial description is updated to include the complete abstract from the paper "TaDiCodec: Text-aware Diffusion Speech Tokenizer for Speech Language Modeling", ensuring a more accurate and comprehensive overview of the model's methodology and contributions.

These changes collectively make the model card more informative and user-friendly for the Hugging Face community.
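As a quick illustration of what the `library_name: transformers` metadata enables, here is a minimal loading sketch. The repo ID is a placeholder (not confirmed by this PR), and it relies only on the `tokenizer_config.json` evidence cited above; the codec itself may require the project's own code rather than plain `transformers`.

```python
# Minimal sketch, not the model's documented API. The repo ID below is a
# placeholder; tokenizer_config.json declares "tokenizer_class": "LlamaTokenizer",
# so AutoTokenizer should be able to resolve and load the text tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("amphion/TaDiCodec")  # placeholder repo ID
print(type(tokenizer).__name__)  # expect LlamaTokenizer (or LlamaTokenizerFast)
```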

Files changed (1): README.md (+10, −3)
```diff
@@ -1,5 +1,4 @@
 ---
-license: apache-2.0
 language:
 - en
 - zh
@@ -7,15 +6,23 @@ language:
 - fr
 - de
 - ko
+license: apache-2.0
 pipeline_tag: text-to-speech
 tags:
 - Speech-Tokenizer
 - Text-to-Speech
+library_name: transformers
 ---
+
 # 🚀 TaDiCodec
 
-We introduce the **T**ext-**a**ware **Di**ffusion Transformer Speech **Codec** (TaDiCodec), a novel approach to speech tokenization that employs end-to-end optimization for quantization and reconstruction through a **diffusion autoencoder**, while integrating **text guidance** into the diffusion decoder to enhance reconstruction quality and achieve **optimal compression**. TaDiCodec achieves an extremely low frame rate of **6.25 Hz** and a corresponding bitrate of **0.0875 kbps** with a single-layer codebook for **24 kHz speech**, while maintaining superior performance on critical speech generation evaluation metrics such as Word Error Rate (WER), speaker similarity (SIM), and speech quality (UTMOS).
+This model was presented in the paper [TaDiCodec: Text-aware Diffusion Speech Tokenizer for Speech Language Modeling](https://huggingface.co/papers/2508.16790).
+Project page: [https://tadicodec.github.io/](https://tadicodec.github.io/)
+GitHub Repository: [https://github.com/HeCheng0625/Diffusion-Speech-Tokenizer](https://github.com/HeCheng0625/Diffusion-Speech-Tokenizer)
+
+## Abstract
 
+Speech tokenizers serve as foundational components for speech language models, yet current designs exhibit several limitations, including: 1) dependence on multi-layer residual vector quantization structures or high frame rates, 2) reliance on auxiliary pre-trained models for semantic distillation, and 3) requirements for complex two-stage training processes. In this work, we introduce the Text-aware Diffusion Transformer Speech Codec (TaDiCodec), a novel approach designed to overcome these challenges. TaDiCodec employs end-to-end optimization for quantization and reconstruction through a diffusion autoencoder, while integrating text guidance into the diffusion decoder to enhance reconstruction quality and achieve optimal compression. TaDiCodec achieves an extremely low frame rate of 6.25 Hz and a corresponding bitrate of 0.0875 kbps with a single-layer codebook for 24 kHz speech, while maintaining superior performance on critical speech generation evaluation metrics such as Word Error Rate (WER), speaker similarity (SIM), and speech quality (UTMOS). Notably, TaDiCodec employs a single-stage, end-to-end training paradigm, obviating the need for auxiliary pre-trained models. We also validate the compatibility of TaDiCodec in language-model-based zero-shot text-to-speech with both autoregressive modeling and masked generative modeling, demonstrating its effectiveness and efficiency for speech language modeling, as well as a notably small reconstruction-generation gap. Audio samples are available at https://tadicodec.github.io/. We release code and model checkpoints at https://github.com/HeCheng0625/Diffusion-Speech-Tokenizer.
 
 [![GitHub Stars](https://img.shields.io/github/stars/HeCheng0625/Diffusion-Speech-Tokenizer?style=social)](https://github.com/HeCheng0625/Diffusion-Speech-Tokenizer)
 [![arXiv](https://img.shields.io/badge/arXiv-2508.16790-b31b1b.svg)](https://arxiv.org/abs/2508.16790)
@@ -187,4 +194,4 @@ MaskGCT:
 
 - **(Binary Spherical Quantization) BSQ** is built upon [vector-quantize-pytorch](https://github.com/lucidrains/vector-quantize-pytorch) and [bsq-vit](https://github.com/zhaoyue-zephyrus/bsq-vit).
 
-- **Training codebase** is built upon [Amphion](https://github.com/open-mmlab/Amphion) and [accelerate](https://github.com/huggingface/accelerate).
+- **Training codebase** is built upon [Amphion](https://github.com/open-mmlab/Amphion) and [accelerate](https://github.com/huggingface/accelerate).
```
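One sanity check on the compression figures quoted in the new abstract: a bitrate of 0.0875 kbps at a 6.25 Hz frame rate works out to exactly 14 bits per token, which for a single-layer codebook implies at most 2^14 = 16,384 entries. The sketch below is plain arithmetic, not code from the repository:

```python
# Back-of-the-envelope check of the abstract's figures (not repository code).
frame_rate_hz = 6.25           # tokens per second for 24 kHz speech
bitrate_bps = 0.0875 * 1000    # 0.0875 kbps = 87.5 bits per second

bits_per_token = bitrate_bps / frame_rate_hz
print(bits_per_token)          # -> 14.0 bits per token

# A single-layer codebook at 14 bits per token has at most 2**14 entries.
print(2 ** round(bits_per_token))  # -> 16384
```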