---
base_model: unsloth/llama-3-8b-Instruct
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - llama
  - trl
license: apache-2.0
language:
  - en
datasets:
  - davzoku/moecule-finqa
  - davzoku/moecule-kyc
  - davzoku/moecule-stock-market-outlook
---

# 🫐🥫 trained_expert


## Model Details

This model is a domain-specific expert model for the Moecule family of Mixture-of-Experts (MoE) models.

It is part of Moecule Ingredients; all related expert models, LoRA adapters, and datasets can be found there.

## Additional Information

  • QLoRA 4-bit fine-tuning with Unsloth
  • Base Model: unsloth/llama-3-8b-Instruct
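The QLoRA setup above can be sketched roughly as follows with Unsloth. The LoRA rank, alpha, target modules, and sequence length shown here are illustrative assumptions, not the exact recipe used to train this model:

```python
# Illustrative QLoRA settings (assumptions, not the exact training recipe).
lora_settings = {
    "r": 16,                  # assumed LoRA rank
    "lora_alpha": 16,         # assumed LoRA scaling factor
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],
}

def load_expert(base_model: str = "unsloth/llama-3-8b-Instruct"):
    """Load the base model in 4-bit and attach LoRA adapters (requires a CUDA GPU)."""
    from unsloth import FastLanguageModel

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name=base_model,
        max_seq_length=2048,   # assumed context length
        load_in_4bit=True,     # QLoRA: 4-bit quantized base weights
    )
    # Only the small LoRA adapter weights are trained; the base stays frozen.
    model = FastLanguageModel.get_peft_model(model, **lora_settings)
    return model, tokenizer
```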

## The Team

  • CHOCK Wan Kee
  • Farlin Deva Binusha DEVASUGIN MERLISUGITHA
  • GOH Bao Sheng
  • Jessica LEK Si Jia
  • Sinha KHUSHI
  • TENG Kok Wai (Walter)

## References