Active filters: dpo
tsavage68/Na_M2_1000steps_1e8rate_01beta_cSFTDPO • Text Generation • Updated • 4
tsavage68/Na_M2_350steps_1e8rate_03beta_cSFTDPO • Text Generation • Updated • 4
tsavage68/Na_M2_1000steps_1e8rate_05beta_cSFTDPO • Text Generation • Updated • 5
tsavage68/Na_M2_300steps_1e8rate_01beta_cSFTDPO • Text Generation • Updated • 5
NicholasCorrado/zephyr-7b-uf-rlced-conifer-group-dpo-2e • Text Generation • Updated • 13
KoNqUeRoR3891/HW2-dpo • Text Generation • Updated • 149
nomadrp/tq-aya101-gt2
nomadrp/tq-llama3.1-gt3 • Updated
NicholasCorrado/zephyr-7b-uf-rlced-conifer-1e2e-group-dpo-2e • Text Generation • Updated • 12
nomadrp/tq-llama3.1-sent-shlfd-gt3
QuantFactory/Lama-DPOlphin-8B-GGUF • Text Generation • Updated • 738 • 2
LBK95/Llama-2-7b-hf-DPO-LookAhead5_FullEval_TTree1.4_TLoop0.7_TEval0.2_V1.0 • Updated
Wenboz/zephyr-7b-wpo-lora
YYYYYYibo/gshf_ours_1_iter_2
Triangle104/NeuralDaredevil-8B-abliterated-Q4_K_M-GGUF
Triangle104/NeuralDaredevil-8B-abliterated-Q4_0-GGUF
Triangle104/NeuralDaredevil-8B-abliterated-Q4_K_S-GGUF
YYYYYYibo/gshf_ours_1_iter_3
lewtun/dpo-model-lora
CharlesLi/OpenELM-1_1B-DPO-full-max-min-reward • Text Generation • Updated • 138
CharlesLi/OpenELM-1_1B-DPO-full-max-random-reward • Text Generation • Updated • 139
CharlesLi/OpenELM-1_1B-DPO-full-least-similar • Text Generation • Updated • 8
taicheng/zephyr-7b-dpo-qlora
CharlesLi/OpenELM-1_1B-DPO-full-max-reward-least-similar • Text Generation • Updated • 110
dmariko/SmolLM-360M-Instruct-dpo-15k • Updated • 11
QinLiuNLP/llama3-sudo-dpo-instruct-5epochs-0909
CharlesLi/OpenELM-1_1B-DPO-full-max-reward-most-similar • Text Generation • Updated • 12
CharlesLi/OpenELM-1_1B-DPO-full-most-similar • Text Generation • Updated • 11
DUAL-GPO/phi-2-dpo-chatml-lora-i1
CharlesLi/OpenELM-1_1B-DPO-full-max-second-reward • Text Generation • Updated • 8