---
base_model: unsloth/llama-3-8b-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
datasets:
- davzoku/moecule-finqa
- davzoku/moecule-kyc
- davzoku/moecule-stock-market-outlook
---

# 🫐🥫 trained_adapter

<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/63c51d0e72db0f638ff1eb82/yLsrUZma5WnLzJrhI6ldZ.png" width="150" height="150" alt="logo">
</p>

## Model Details

This is a LoRA adapter for the [Moecule](https://huggingface.co/collections/davzoku/moecule-67dabc6bb469dcd00ad2a7c5) family of MoE models.

It is part of [Moecule Ingredients](https://huggingface.co/collections/davzoku/moecule-ingredients-67dac0e6210eb1d95abc6411); all relevant expert models, LoRA adapters, and datasets can be found there.
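
As a quick usage sketch, the adapter can be attached to the 4-bit quantized base model with `transformers` and `peft`. The adapter repo id below is a placeholder; substitute this adapter's actual Hub path.

```python
# Minimal sketch: load the base model in 4-bit and attach this LoRA adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "unsloth/llama-3-8b-Instruct"
adapter_id = "davzoku/trained_adapter"  # placeholder: use this adapter's Hub path

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach LoRA weights

messages = [{"role": "user", "content": "Summarize the stock market outlook."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```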

### Additional Information

- 4-bit QLoRA fine-tuning with Unsloth (see the sketch after this list)
- Base Model: `unsloth/llama-3-8b-Instruct`
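
A hedged sketch of the QLoRA setup described above, using Unsloth's `FastLanguageModel`. The hyperparameters (rank, alpha, target modules) are illustrative assumptions, not the exact values used to train this adapter.

```python
# Sketch of 4-bit QLoRA fine-tuning setup with Unsloth (hyperparameters assumed).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,  # QLoRA: base weights quantized to 4-bit
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                # LoRA rank (assumed)
    lora_alpha=16,       # LoRA scaling factor (assumed)
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
)
# The model is then trained on the datasets listed in the frontmatter
# (e.g. with trl's SFTTrainer), and only the LoRA adapter weights are saved.
```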

## The Team

- CHOCK Wan Kee
- Farlin Deva Binusha DEVASUGIN MERLISUGITHA
- GOH Bao Sheng
- Jessica LEK Si Jia
- Sinha KHUSHI
- TENG Kok Wai (Walter)

## References

- [Unsloth Tutorial](https://docs.unsloth.ai/basics/tutorial-how-to-finetune-llama-3-and-use-in-ollama)
- [Unsloth Finetuning Colab Notebook](<https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3_(8B)-Ollama.ipynb>)