---
license: cc-by-nc-4.0
---

# SpireLM

Spire is a 7B parameter decoder-only model with strong abilities in machine translation, automatic speech recognition, and speech translation. [SpireBase](https://huggingface.co/utter-project/SpireBase) was created by applying speech-centric continued pretraining to [TowerBase-7B-v0.1](https://huggingface.co/Unbabel/TowerBase-7B-v0.1), which was itself created by applying continued pretraining to [Llama 2](https://huggingface.co/meta-llama/Llama-2-7b).

## Model Checkpoints

We release our checkpoints through Hugging Face. All of our models can be loaded as `LlamaForCausalLM` instances, allowing inference to be performed with [vLLM](https://github.com/vllm-project/vllm). For further details on the models, check [the paper](https://arxiv.org/abs/2503.10620).

| Model | Path |
| ----- | ---- |
| SpireBase | [utter-project/SpireBase](https://huggingface.co/utter-project/SpireBase) |
| SpireFull | [utter-project/SpireFull](https://huggingface.co/utter-project/SpireFull) |
| SpireNoBlocks | [utter-project/SpireNoBlocks](https://huggingface.co/utter-project/SpireNoBlocks) |
| SpireNoPseudo | [utter-project/SpireNoPseudo](https://huggingface.co/utter-project/SpireNoPseudo) |
| TowerFull | [utter-project/TowerFull](https://huggingface.co/utter-project/TowerFull) |

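As a quick, untested sketch of what that looks like, the snippet below loads SpireFull with Hugging Face `transformers` (as a `LlamaForCausalLM`) and shows the equivalent vLLM call. The prompt and generation settings are placeholders, not the configurations used in the paper.

```python
# Untested loading sketch; the prompt and sampling settings are placeholders.
from transformers import AutoTokenizer, LlamaForCausalLM

tokenizer = AutoTokenizer.from_pretrained("utter-project/SpireFull")
model = LlamaForCausalLM.from_pretrained("utter-project/SpireFull", device_map="auto")

inputs = tokenizer("Translate English to Portuguese: Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# The same checkpoint served with vLLM:
# from vllm import LLM, SamplingParams
# llm = LLM(model="utter-project/SpireFull")
# out = llm.generate(["Translate English to Portuguese: Hello, world!"], SamplingParams(max_tokens=64))
# print(out[0].outputs[0].text)
```
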
## Tokenizing Speech

The core of our approach to speech is *discretization*: continuous speech signals are converted into sequences of tokens, which can then be processed alongside text. Our discretization system consists of a few steps:

1. HuBERT Large ([fairseq download](https://dl.fbaipublicfiles.com/hubert/hubert_large_ll60k.pt)) converts 16kHz .wav files into a sequence of feature vectors, one for each 20ms frame. We use the representations from layer 22.
2. Our k-means model ([download](https://huggingface.co/utter-project/SpireKMeans/resolve/main/kmeans_model)) maps each frame to one of 5000 clusters.
3. The sequences of cluster IDs are deduplicated, such that consecutive frames with the same label are collapsed into a single token (see the short sketch after this list). This usually shortens the sequence length by about 30%.

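To make step 3 concrete, here is a small illustration of the deduplication. This shows the idea only, not the `spire` implementation, and the cluster IDs are made up:

```python
from itertools import groupby

# Hypothetical frame-level cluster IDs, one per 20ms frame
frame_labels = [412, 412, 412, 97, 97, 4999, 4999, 4999, 12]

# Collapse each run of identical consecutive labels into a single token
deduped = [label for label, _ in groupby(frame_labels)]
print(deduped)  # [412, 97, 4999, 12]
```
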
The `spire` package implements this pipeline. Assuming you have downloaded both of these files (the HuBERT checkpoint and the k-means model), you can use it like so:

```python
from datasets import load_dataset
from spire.dsus import Labeler
from spire.utils import fix_fleurs_path

# Load a FLEURS example and resolve the path to its 16kHz .wav file
fleurs = load_dataset("google/fleurs", "en_us")
wav = fix_fleurs_path(fleurs["validation"][29], "validation")

# HuBERT layer-22 features -> k-means cluster IDs -> deduplicated speech tokens
labeler = Labeler("hubert_large_ll60k.pt", "kmeans_model")
speech_tokens = labeler.label(wav)
print(speech_tokens)
```

The output will not be very readable, as it consists of a sequence of Unicode [private use area](https://en.wikipedia.org/wiki/Private_Use_Areas) characters. However, these characters are known to the Spire tokenizer and can be combined with text:

TODO: add ASR/ST examples with this sequence

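In the meantime, here is a rough, untested sketch showing that a speech-token string can be tokenized together with ordinary text. The instruction wording is a made-up placeholder, not the ASR/ST prompt template from the paper:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("utter-project/SpireFull")

# `speech_tokens` is the private-use-area string produced by the Labeler above.
# The instruction text is a placeholder, not the prompt format used in the paper.
prompt = f"Transcribe the following speech: {speech_tokens}"
ids = tokenizer(prompt).input_ids
print(len(ids), ids[:10])
```

For actual ASR or ST decoding, a prompt like this would be passed to the model (for example via `model.generate` or vLLM) using the prompt formats described in the paper.
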
## Reproducing our Inference Results

TODO: ducttape example

## Reproducing our Training

## Citation

If you use Spire, please cite our work:

```bibtex
@misc{spire,
  title={From TOWER to SPIRE: Adding the Speech Modality to a Text-Only LLM},
  author={Kshitij Ambilduke and Ben Peters and Sonal Sannigrahi and Anil Keshwani and Tsz Kin Lam and Bruno Martins and Marcely Zanon Boito and André F. T. Martins},
  year={2025},
  eprint={2503.10620},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2503.10620}
}
```