• DPO Trainer with the Intel/orca_dpo_pairs dataset, used to improve yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B

DPO Trainer

TRL supports the DPO Trainer for training language models from preference data, as described in the paper Direct Preference Optimization: Your Language Model is Secretly a Reward Model by Rafailov et al., 2023.
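For reference, a minimal sketch of this recipe with TRL is shown below. It illustrates the setup described above, not the author's exact training script. It assumes a TRL release that ships DPOConfig (older releases pass beta directly to DPOTrainer, and newer ones rename tokenizer to processing_class), and it maps the Intel/orca_dpo_pairs columns ("question"/"chosen"/"rejected") into the "prompt"/"chosen"/"rejected" format DPOTrainer expects. Hyperparameters such as beta and batch size are illustrative, not the values used for this model.

```python
# Minimal DPO fine-tuning sketch, not the exact script behind this model.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "yunconglong/Truthful_DPO_TomGrc_FusionNet_7Bx2_MoE_13B"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# DPOTrainer expects "prompt"/"chosen"/"rejected" columns; the raw pairs
# in Intel/orca_dpo_pairs use "question" for the prompt.
def to_dpo_format(example):
    return {
        "prompt": example["question"],
        "chosen": example["chosen"],
        "rejected": example["rejected"],
    }

train_dataset = load_dataset("Intel/orca_dpo_pairs", split="train").map(
    to_dpo_format, remove_columns=["system", "question"]
)

config = DPOConfig(
    output_dir="moe-13b-dpo",      # hypothetical output path
    beta=0.1,                      # strength of the KL penalty toward the frozen reference model
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
)

# With no ref_model given, TRL clones the starting model as the frozen reference.
trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    tokenizer=tokenizer,           # `processing_class` in newer TRL releases
)
trainer.train()
```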


Model size: 12.9B params · Tensor type: BF16 · Format: Safetensors