Sovenok-Hacker committed · Commit 4f689f3
Parent(s): f0261cc
Update README.md
README.md
CHANGED
@@ -9,4 +9,4 @@ pipeline_tag: question-answering
 
 Minimal Alpaca-LORA trained with [databricks/databricks-dolly-v2-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) dataset and based on [OpenLLaMA-3B-600BT](https://huggingface.co/openlm-research/open_llama_3b_600bt_preview).
 
-There is
+There is a pre-trained LoRA adapter and a [Colab Jupyter notebook](https://colab.research.google.com/#fileId=https://huggingface.co/Sovenok-Hacker/openalpaca-3b/blob/main/finetune.ipynb) for fine-tuning (about 3 hours for 1 epoch on T4).
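The added README line mentions a pre-trained LoRA adapter for the OpenLLaMA-3B base model. A minimal sketch of how such an adapter could be loaded with the `peft` library is shown below; the adapter repo id is inferred from the notebook URL in the diff, and the Alpaca-style prompt template is an assumption (the README does not state the exact format).

```python
# Hypothetical usage sketch, not part of the original README: attaching the
# LoRA adapter to the base model via peft. Repo ids and prompt template are
# assumptions inferred from the diff above.

BASE_MODEL = "openlm-research/open_llama_3b_600bt_preview"  # base model named in the README
ADAPTER = "Sovenok-Hacker/openalpaca-3b"  # adapter repo, inferred from the notebook link


def make_prompt(instruction: str) -> str:
    """Build an Alpaca-style prompt (assumed format; check finetune.ipynb for the exact template)."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"


def load_adapter_model():
    # Imported lazily so make_prompt stays usable without the heavy dependencies.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    base = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
    model = PeftModel.from_pretrained(base, ADAPTER)  # attach the LoRA weights
    return tokenizer, model


# Example (downloads the ~3B-parameter base weights plus the adapter):
#   tokenizer, model = load_adapter_model()
#   inputs = tokenizer(make_prompt("What is a LoRA adapter?"), return_tensors="pt")
#   out = model.generate(**inputs, max_new_tokens=64)
#   print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Keeping the model loading behind a function means the prompt helper can be tested or reused without triggering the multi-gigabyte download.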