This model reproduces the results on the Safe-RLHF dataset from the paper "The Crucial Role of Samplers in Online Direct Preference Optimization". It is iteration 3 of the DPO-mixp algorithm, fine-tuned from https://huggingface.co/zhezi12138/alpaca-7b-iter-2-mixp.
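
Since the hosting page could not determine the model's library, the snippet below is a minimal loading sketch, assuming the checkpoint is compatible with the standard `transformers` causal-LM classes; the prompt is illustrative only.

```python
# Minimal usage sketch (assumption: the checkpoint loads with the
# standard transformers Auto classes; not confirmed by the model card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zhezi12138/alpaca-7b-iter-3-mixp"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,  # weights are stored as F32 safetensors
)

# Illustrative prompt; not taken from the paper or model card.
prompt = "What are the three primary colors?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```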

Model size: 6.61B params (tensor type F32, safetensors).
