Explanation
- Starting from the base model, applied DPO to a small number of layers with an open dataset and saved only the adapter
- Merged the base model and the tuned adapter back together (a code sketch of both steps follows below)
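A minimal sketch of these two steps, assuming the Hugging Face `trl`/`peft` stack with a LoRA adapter. The dataset name, LoRA hyperparameters, target modules, output paths, and training arguments below are placeholders, not the actual training configuration; the base model id is taken from the citation section.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

BASE = "beomi/OPEN-SOLAR-KO-10.7B"       # base model (see Citation)
ADAPTER_DIR = "./dpo-adapter"            # placeholder: where the adapter is saved
MERGED_DIR = "./merged-model"            # placeholder: final merged checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)

# Step 1: DPO on a small number of layers via a LoRA adapter.
# The preference dataset is a placeholder; it must contain
# "prompt", "chosen", and "rejected" columns.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # placeholder choice of layers
    task_type="CAUSAL_LM",
)
dataset = load_dataset("some/open-preference-dataset", split="train")  # placeholder

trainer = DPOTrainer(
    model=base_model,
    args=DPOConfig(output_dir=ADAPTER_DIR, beta=0.1, per_device_train_batch_size=1),
    train_dataset=dataset,
    processing_class=tokenizer,           # `tokenizer=` in older trl releases
    peft_config=peft_config,
)
trainer.train()
trainer.model.save_pretrained(ADAPTER_DIR)  # saves only the adapter weights

# Step 2: merge the tuned adapter back into the base model weights.
base_model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
merged = PeftModel.from_pretrained(base_model, ADAPTER_DIR).merge_and_unload()
merged.save_pretrained(MERGED_DIR)
tokenizer.save_pretrained(MERGED_DIR)
```

The merged checkpoint in `MERGED_DIR` can then be loaded like any ordinary `AutoModelForCausalLM`, with no `peft` dependency at inference time.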
Base Model
- beomi/OPEN-SOLAR-KO-10.7B
Used Corpus
Score
| Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- |
| 52.83 | 50 | 60.55 | 48.8 | 71.51 | 43.65 |
Log
- 2024.01.25: Initial version uploaded
- 2024.02.10: README updated
- 2024.02.11: Scores updated
License
- Apache 2.0
Citation
- beomi/OPEN-SOLAR-KO-10.7B
```
@misc{solar_ko_junbum_2023,
  author    = { {L. Junbum} },
  title     = { Solar-Ko-10.7b },
  year      = 2024,
  url       = { https://huggingface.co/beomi/SOLAR-KO-10.7B },
  publisher = { Hugging Face }
}
```