---
base_model: AI-Sweden-Models/Llama-3-8B-instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
datasets:
- kobprof/skolegpt-instruct
---

# Uploaded model

- **Compute sponsored by:** Nvidia and Arrow ECS Denmark through Danish Data Science Community
- **Developed by:** ThatsGroes
- **License:** apache-2.0
- **Finetuned from model:** AI-Sweden-Models/Llama-3-8B-instruct

Fine-tuned for 1 epoch. Peak GPU memory usage was 65.62 GB (82.92%), of which 49.89 GB (63.04%) was used for LoRA.

Energy consumption during training, as reported by codecarbon:

```
[codecarbon INFO @ 21:31:34] Energy consumed for RAM : 0.404226 kWh. RAM Power : 188.78840446472168 W
[codecarbon INFO @ 21:31:34] Energy consumed for all GPUs : 0.625855 kWh. Total GPU Power : 82.8216447468557 W
[codecarbon INFO @ 21:31:34] Energy consumed for all CPUs : 0.091042 kWh. Total CPU Power : 42.5 W
[codecarbon INFO @ 21:31:34] 1.121123 kWh of electricity used since the beginning.
```

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
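
A minimal inference sketch using the standard transformers API. The card does not state this upload's Hub repo id, so the `model_id` below is a placeholder; substitute the model's actual path:

```python
# Minimal inference sketch with transformers.
# NOTE: the repo id below is a placeholder, not the actual Hub path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ThatsGroes/<this-model>"  # placeholder: replace with the real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama-3-instruct models expect chat-formatted prompts.
messages = [{"role": "user", "content": "Forklar kort, hvad en sprogmodel er."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```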
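
For reference, below is a sketch of the kind of Unsloth + TRL LoRA fine-tune described above. All hyperparameters (rank, alpha, batch size, learning rate, sequence length, target modules) are illustrative assumptions, not the configuration actually used; only the base model, dataset, and epoch count come from this card:

```python
# Illustrative Unsloth + TRL SFT setup; hyperparameters are assumptions.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="AI-Sweden-Models/Llama-3-8B-instruct",
    max_seq_length=2048,  # assumed sequence length
)

# Attach LoRA adapters; rank/alpha and target modules are illustrative.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("kobprof/skolegpt-instruct", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # adjust to the dataset's actual text column
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,  # the card states training ran for 1 epoch
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```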