Commit 230bf62 · Parent: 3c7d9f5 · Update README.md

README.md CHANGED
# ARC-Encoder models

This page houses `ARC8-Encoder_multi`, one of three different versions of pretrained ARC-Encoders. The architectures and the methods used to train them are described in the paper *ARC-Encoder: learning compressed text representations for large language models*, available [here](https://github.com/kyutai-labs/ARC-Encoder/blob/main/ARC_Encoder_preprint.pdf). Code to reproduce the pretraining, further fine-tune the encoders, or evaluate them on downstream tasks is available in the [ARC-Encoder repository](https://github.com/kyutai-labs/ARC-Encoder/tree/main).

## Model Details

All the encoders released here are trained on web crawl filtered using [Dactory](https://github.com/kyutai-labs/dactory), starting from a [Llama3.2-3B](https://github.com/meta-llama/llama-cookbook) base backbone. The release consists of two ARC-Encoders, each trained specifically for a single decoder, and one trained for two decoders at the same time:
- `ARC8-Encoder_Llama`, trained on 2.6B tokens specifically for the [Llama3.1-8B](https://github.com/meta-llama/llama-cookbook) base decoder, with a pooling factor of 8.
- `ARC8-Encoder_Mistral`, trained on 2.6B tokens specifically for the [Mistral-7B](https://github.com/mistralai/mistral-finetune?tab=readme-ov-file) base decoder, with a pooling factor of 8.
- `ARC8-Encoder_multi`, trained by sampling between the two decoders, with a pooling factor of 8 (see the illustrative sketch below).
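
All three checkpoints use a pooling factor of 8, i.e. the encoder hands the decoder roughly one continuous representation per 8 input tokens. The snippet below is only a toy illustration of that compression ratio, using naive mean pooling over a dummy embedding matrix; it is not the ARC-Encoder pooling mechanism described in the paper.

```python
import torch


def toy_pool(token_embeddings: torch.Tensor, pooling_factor: int = 8) -> torch.Tensor:
    """Toy illustration only: average consecutive groups of `pooling_factor` vectors,
    so n token embeddings become ceil(n / pooling_factor) pooled vectors."""
    n, dim = token_embeddings.shape
    pad = (-n) % pooling_factor  # zero-pad so the sequence length divides evenly
    if pad:
        token_embeddings = torch.cat([token_embeddings, token_embeddings.new_zeros(pad, dim)])
    return token_embeddings.view(-1, pooling_factor, dim).mean(dim=1)


# A 256-token passage compresses to 256 / 8 = 32 continuous representations.
print(toy_pool(torch.randn(256, 3072)).shape)  # torch.Size([32, 3072])
```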

### Uses

As described in the [paper](https://github.com/kyutai-labs/ARC-Encoder/blob/main/ARC_Encoder_preprint.pdf), the pretrained ARC-Encoders can be fine-tuned to perform various downstream tasks.
You can also adapt an ARC-Encoder to a new pooling factor (PF) by fine-tuning it on the desired PF.
For optimal results, we recommend fine-tuning toward a lower PF than the one used during pretraining.
To reproduce the results presented in the paper, you can use our released fine-tuning dataset, [ARC_finetuning](https://huggingface.co/datasets/kyutai/ARC_finetuning).
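
As a starting point, the sketch below shows how one might pull the checkpoint files and the fine-tuning dataset from the Hugging Face Hub. The repository id `kyutai/ARC8-Encoder_multi` and the availability of a default dataset configuration are assumptions, not confirmed identifiers; the actual loading, fine-tuning, and evaluation entry points live in the [ARC-Encoder repository](https://github.com/kyutai-labs/ARC-Encoder/tree/main).

```python
from datasets import load_dataset
from huggingface_hub import snapshot_download

# Assumed repository id for the checkpoint hosted on this page (not confirmed by the card).
ckpt_dir = snapshot_download(repo_id="kyutai/ARC8-Encoder_multi")
print("checkpoint files downloaded to:", ckpt_dir)

# Fine-tuning data released with the paper; assumes the default configuration loads directly.
finetuning_data = load_dataset("kyutai/ARC_finetuning")
print(finetuning_data)
```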

### Licensing

ARC-Encoders are licensed under the CC-BY 4.0 license.

Terms of use: As the released models are pretrained from a Llama3.2 3B backbone, ARC-Encoders are subject to the Llama Terms of Use found at [Llama license](https://www.llama.com/license/).

## Citations

If you use one of these models, please cite:

```bibtex
@techreport{pilchen2025arc_encoder,
  title={ARC-Encoder: learning compressed text representations for large language models},
  author={Pilchen, Hippolyte and Grave, Edouard and P{\'e}rez, Patrick},
  year={2025}
}
```