---
base_model: AI-Sweden-Models/Llama-3-8B-instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
datasets:
- kobprof/skolegpt-instruct
---
# Uploaded model
- **Compute sponsored by:** Nvidia and Arrow ECS Denmark through Danish Data Science Community
- **Developed by:** ThatsGroes
- **License:** apache-2.0
- **Finetuned from model:** AI-Sweden-Models/Llama-3-8B-instruct
Fine-tuned for 1 epoch.

We used 65.62 GB of GPU memory (82.92% of available), of which 49.89 GB (63.04%) was used for LoRA.
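For reference, a minimal sketch of an Unsloth + TRL SFT setup along these lines (the sequence length, LoRA rank, batch settings, and `dataset_text_field` below are illustrative assumptions, not the values used for this model; the trainer signature follows older TRL releases):

```python
# Sketch of LoRA fine-tuning AI-Sweden-Models/Llama-3-8B-instruct
# on kobprof/skolegpt-instruct with Unsloth + TRL. Hyperparameters
# are placeholders, not the values used for this model.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="AI-Sweden-Models/Llama-3-8B-instruct",
    max_seq_length=2048,   # assumption
    load_in_4bit=True,     # assumption: Unsloth's usual QLoRA path
)

# Attach LoRA adapters; rank/alpha are illustrative.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("kobprof/skolegpt-instruct", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumption: pre-formatted text column
    max_seq_length=2048,
    args=TrainingArguments(
        num_train_epochs=1,              # the card states 1 epoch
        per_device_train_batch_size=2,   # illustrative
        gradient_accumulation_steps=4,   # illustrative
        learning_rate=2e-4,              # illustrative
        output_dir="outputs",
    ),
)
trainer.train()
```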
Energy consumption during training, as logged by CodeCarbon:

```
[codecarbon INFO @ 21:31:34] Energy consumed for RAM : 0.404226 kWh. RAM Power : 188.78840446472168 W
[codecarbon INFO @ 21:31:34] Energy consumed for all GPUs : 0.625855 kWh. Total GPU Power : 82.8216447468557 W
[codecarbon INFO @ 21:31:34] Energy consumed for all CPUs : 0.091042 kWh. Total CPU Power : 42.5 W
[codecarbon INFO @ 21:31:34] 1.121123 kWh of electricity used since the beginning.
```
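The figures above come from [CodeCarbon](https://github.com/mlco2/codecarbon). A minimal sketch of how such tracking can be wired around a training run (the project name is a placeholder):

```python
# Track energy use and estimated emissions for the training run.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="llama3-skolegpt-sft")  # placeholder name
tracker.start()
try:
    trainer.train()  # the trainer from the sketch above
finally:
    emissions_kg = tracker.stop()  # returns estimated kg CO2-eq; kWh appears in the logs
print(f"Estimated emissions: {emissions_kg:.6f} kg CO2-eq")
```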
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
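To try the model with plain Transformers, something like the following should work (`<this-repo>` stands for this repository's Hub id; the prompt is just an example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "<this-repo>"  # replace with this model's Hub repository id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Llama-3 instruct models ship a chat template; apply it to a user message.
messages = [{"role": "user", "content": "Skriv en kort historie om en robot."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```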