Update README.md

README.md CHANGED
@@ -111,9 +111,9 @@ pretty_name: Romulus, continued pre-trained models for French law
 
 <img src="assets/thumbnail.webp">
 
-# Romulus,
+# Romulus, continually pre-trained models for French law.
 
-Romulus is a series of
+Romulus is a series of continually pre-trained models enriched in French law and intended to serve as the basis for a fine-tuning process on labeled data. Please note that these models have not been aligned for the production of usable text as they stand, and will certainly need to be fine-tuned for the desired tasks in order to produce satisfactory results.
 
 The training corpus is made up of around 34,864,949 tokens (calculated with the meta-llama/Meta-Llama-3.1-8B tokenizer).
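The corpus size quoted in the card depends on the tokenizer used, which is why the figure is qualified with "calculated with the meta-llama/Meta-Llama-3.1-8B tokenizer". A minimal sketch of how such a count can be obtained is below; the `count_tokens` helper and the whitespace stand-in tokenizer are illustrative assumptions, not the card's actual script. Reproducing the card's exact figure would require the gated Llama 3.1 tokenizer, e.g. `AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B")` from the `transformers` library.

```python
# Sum token counts over an iterable of documents, given any callable
# that maps a string to a list of tokens (or token ids).
def count_tokens(texts, encode):
    return sum(len(encode(t)) for t in texts)

# Illustrative stand-in only: a whitespace "tokenizer", NOT the Llama
# tokenizer. For the card's figure you would instead load (gated access):
#   from transformers import AutoTokenizer
#   tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B")
#   count_tokens(corpus, tok.encode)
whitespace_encode = lambda s: s.split()

# Tiny hypothetical corpus for demonstration.
corpus = ["Le droit français", "Code civil, article 1"]
print(count_tokens(corpus, whitespace_encode))  # 7 with this stand-in
```

Token counts from different tokenizers can differ substantially on the same text, so a corpus-size claim is only meaningful together with the tokenizer that produced it.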