# MedLLAMA-LoRA

#### Experimental LLaMA finetune on a medical QA dataset

This model has not been evaluated yet and should NOT be used for medical advice. It is an experiment in creating a domain-specific model from LLaMA using LoRA finetuning.
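LoRA finetuning leaves the pretrained weights frozen and learns a low-rank additive update. A minimal numpy sketch of the idea (the shapes here are illustrative, not the model's actual dimensions):

```python
import numpy as np

d, r = 8, 2                       # hidden size and LoRA rank (illustrative)
W = np.random.randn(d, d)         # frozen pretrained weight
A = np.random.randn(r, d) * 0.01  # trainable down-projection
B = np.zeros((d, r))              # trainable up-projection, zero-initialized

x = np.random.randn(d)
# Effective forward pass: base output plus the low-rank correction B @ A @ x
y = W @ x + B @ (A @ x)

# Because B starts at zero, the adapted model initially matches the base model
assert np.allclose(y, W @ x)
```

Only A and B (2 * d * r parameters per adapted matrix) are trained, which is what makes finetuning a 13B model on a single A100 feasible.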
Training details:
- 13B model, finetuned on 76k question-answer pairs
- The dataset is a superset of the alpaca-data-cleaned instruct dataset, extended with medical QA pairs adapted from the icliniq dataset
- Trained for 18 hours on an A100 with minibatch size 10, batch size 256, cutoff_len 512, and all other parameters at their defaults
- Built with https://github.com/tloen/alpaca-lora
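Using the alpaca-lora repository above, a run with these hyperparameters would look roughly like the following. This is a sketch, not the exact command used: the base-model ID and dataset path are placeholders.

```shell
# Hypothetical alpaca-lora invocation matching the hyperparameters listed above.
# --base_model and --data_path are placeholders; substitute your own.
python finetune.py \
    --base_model 'decapoda-research/llama-13b-hf' \
    --data_path 'path/to/medical_qa.json' \
    --output_dir './medllama-lora' \
    --micro_batch_size 10 \
    --batch_size 256 \
    --cutoff_len 512
```

Here `--micro_batch_size` corresponds to the per-step minibatch of 10, with gradient accumulation up to the effective batch size of 256.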