Tags: Transformers · Safetensors · English · text-generation-inference · unsloth · llama · trl · Inference Endpoints

🫐🥫 trained_adapter

Model Details

This is a LoRA adapter for the Moecule family of MoE models.

It is part of Moecule Ingredients, where all relevant expert models, LoRA adapters, and datasets can be found.
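
As a minimal usage sketch (not the team's published loading code), the adapter can be applied on top of the 4-bit base model with 🤗 Transformers and PEFT. The adapter repo id `davzoku/trained_expert_adapter` is taken from this page; the prompt and generation settings are illustrative.

```python
# Illustrative sketch: load the 4-bit base model and attach this LoRA adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "unsloth/llama-3-8b-Instruct",          # base model named on this card
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,                   # matches the QLoRA 4-bit setup
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-Instruct")

# Attach the LoRA adapter weights from this repository.
model = PeftModel.from_pretrained(base, "davzoku/trained_expert_adapter")

prompt = "What is a Mixture-of-Experts model?"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```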

Additional Information

  • QLoRA 4-bit fine-tuning with Unsloth (see the sketch after this list)
  • Base Model: unsloth/llama-3-8b-Instruct
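
A minimal sketch of the QLoRA setup described above, assuming Unsloth's FastLanguageModel API. The LoRA rank, alpha, and target modules are illustrative assumptions, not the team's recorded hyperparameters.

```python
# Illustrative QLoRA setup with Unsloth: 4-bit base weights plus trainable
# low-rank adapters. Hyperparameters below are assumptions for the sketch.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,   # QLoRA: quantize the frozen base to 4-bit
)

# Attach LoRA adapters; only these low-rank matrices are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                # assumed LoRA rank
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```

Training would then proceed with a standard supervised fine-tuning loop (for example, trl's SFTTrainer) over the relevant expert dataset, after which only the adapter weights are saved to this repository.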

The Team

  • CHOCK Wan Kee
  • Farlin Deva Binusha DEVASUGIN MERLISUGITHA
  • GOH Bao Sheng
  • Jessica LEK Si Jia
  • Sinha KHUSHI
  • TENG Kok Wai (Walter)
