Llama3 Amharic DPO

Amharic Llama3 8B Alpaca, further DPO-tuned on an Amharic-translated dolly-15k dataset so that it always responds in Amharic.

Note: generation is very token-inefficient for Amharic text.

  • Developed by: simonbutt
  • License: apache-2.0
  • Finetuned from models:
    • unsloth/llama-3-8b-bnb-4bit
    • simonbutt/am_llama3_alpaca
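Since the model descends from an Alpaca-style checkpoint (simonbutt/am_llama3_alpaca), prompts presumably follow the standard Alpaca instruction template. The sketch below shows hypothetical prompt construction; the exact template string is an assumption based on that lineage, so verify it against the training format before relying on it.

```python
# Hypothetical prompt builder for simonbutt/am_llama3_dpo.
# The Alpaca-style template below is an assumption inferred from the
# base checkpoint's name, not confirmed by this model card.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Wrap a user instruction (e.g. in Amharic) in the assumed template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

print(build_prompt("ሰላም እንዴት ነህ?"))
```

The resulting prompt string can then be passed to the model loaded with the standard `transformers` causal-LM API or with Unsloth's 4-bit loader.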


