Active filters: dpo

vincentlinzhu/dspv1_dpo_dspfmt_medium
SongTonyLi/gemma-2b-it-SFT-D1_chosen-then-DPO-D2a-distilabel-math-preference
vincentlinzhu/dspv1_dpo_llemmafmt_medium
DUAL-GPO/phi-2-dpo-chatml-lora-0k-20k-i2
LBK95/Llama-2-7b-hf-DPO-LookAhead3_FullEval_TTree1.4_TLoop0.7_TEval0.2_Filter0.2_V1.0
Huertas97/smollm-gec-sftt-dpo
SameedHussain/gemma-2-2b-it-Flight-Multi-Turn-V2-DPO
Siddartha10/outputs_dpo
SongTonyLi/gemma-2b-it-SFT-D1_chosen-then-DPO-D2a-HuggingFaceH4-ultrafeedback_binarized-Xlarge
CharlesLi/OpenELM-1_1B-DPO-full-llama-improve-openelm
maxmyn/c4ai-takehome-model-dpo
CharlesLi/OpenELM-1_1B-DPO-full-max-4-reward
CharlesLi/OpenELM-1_1B-DPO-full-max-12-reward
DUAL-GPO/phi-2-ipo-chatml-lora-i1
DUAL-GPO/phi-2-ipo-chatml-lora-10k-30k-i1
DUAL-GPO/phi-2-ipo-chatml-lora-20k-40k-i1
DUAL-GPO/phi-2-ipo-chatml-lora-30k-50k-i1
rasyosef/phi-2-apo
LBK95/Llama-2-7b-hf-DPO-LookAhead3_FullEval_TTree1.4_TLoop0.7_TEval0.2_Filter0.2_V2.0
coscotuff/SLFT_Trials_2
preethu19/tiny-chatbot-dpo
Avinaash/a100_epoch1IPOBest
ravithejads/test_model_sft
Avinaash/a100_epoch2IPOBest
Avinaash/a100_epoch1DPOCurated
Avinaash/a100_epoch3DPOCurated
Avinaash/a100_epoch3IPOBest
Avinaash/a100_epoch2DPOCurated
sarthakrw/dpo_model
VivekChauhan06/SmolLM-FT-CoEdIT-DPO