Updated credits w nvidia and arrow
README.md CHANGED
@@ -44,7 +44,9 @@ Trained by using the approach outlined in the paper **LLM2Vec: Large Language Mo
 LoRA fine-tuning: 1000 steps of MNTP on cleaned Danish Wikipedia https://huggingface.co/datasets/jealk/wiki40b-da-clean
 LoRA fine-tuning: 1000 steps of SimCSE on sentences from Scandinavian Wikipedia (da, nn, nb, sv, fo, is): https://huggingface.co/datasets/jealk/scandi-wiki-combined (*Sentence subset*)
 
-Credits for code-repo used to finetune this model https://github.com/McGill-NLP/llm2vec
+Credits for the code repo used to fine-tune this model: https://github.com/McGill-NLP/llm2vec
+
+Thanks to **Arrow Denmark** and **Nvidia** for sponsoring the compute used to train this model.
 
 Requires the llm2vec package to encode sentences. Credits to https://huggingface.co/McGill-NLP/LLM2Vec-Meta-Llama-3-8B-Instruct-mntp-supervised for the below instructions:
 
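The model card above says encoding requires the llm2vec package; once sentences are encoded, the returned embedding vectors are typically compared with cosine similarity for retrieval or clustering. A minimal self-contained sketch, using small dummy vectors in place of real llm2vec embeddings (the `LLM2Vec` calls in the comment are illustrative assumptions, not copied from the model card):

```python
import math

# Assumed llm2vec workflow (illustrative sketch, requires downloading the model):
#   from llm2vec import LLM2Vec
#   l2v = LLM2Vec.from_pretrained(<base model>, peft_model_name_or_path=<this adapter>)
#   embeddings = l2v.encode(sentences)   # one vector per input sentence
# Below, small dummy vectors stand in for real sentence embeddings.

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

query = [1.0, 0.0, 1.0]
candidates = {
    "related sentence": [0.9, 0.1, 1.1],
    "unrelated sentence": [-1.0, 1.0, 0.0],
}

# Rank candidate embeddings by similarity to the query embedding.
ranked = sorted(candidates, key=lambda k: cosine_similarity(query, candidates[k]), reverse=True)
print(ranked[0])  # → related sentence
```

Cosine similarity is the usual choice for embedding comparison because it ignores vector magnitude and scores only direction, which is what sentence-embedding training objectives like SimCSE optimize.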