Active filters: dpo
- tsavage68/Na_M2_1000steps_1e8rate_05beta_cSFTDPO (Text Generation)
- tsavage68/Na_M2_300steps_1e8rate_01beta_cSFTDPO (Text Generation)
- NicholasCorrado/zephyr-7b-uf-rlced-conifer-group-dpo-2e (Text Generation)
- KoNqUeRoR3891/HW2-dpo (Text Generation)
- nomadrp/tq-aya101-gt2
- nomadrp/tq-llama3.1-gt3
- NicholasCorrado/zephyr-7b-uf-rlced-conifer-1e2e-group-dpo-2e (Text Generation)
- nomadrp/tq-llama3.1-sent-shlfd-gt3
- QuantFactory/Lama-DPOlphin-8B-GGUF (Text Generation)
- LBK95/Llama-2-7b-hf-DPO-LookAhead5_FullEval_TTree1.4_TLoop0.7_TEval0.2_V1.0
- Wenboz/zephyr-7b-wpo-lora
- YYYYYYibo/gshf_ours_1_iter_2
- Triangle104/NeuralDaredevil-8B-abliterated-Q4_K_M-GGUF
- Triangle104/NeuralDaredevil-8B-abliterated-Q4_0-GGUF
- Triangle104/NeuralDaredevil-8B-abliterated-Q4_K_S-GGUF
- YYYYYYibo/gshf_ours_1_iter_3
- lewtun/dpo-model-lora
- CharlesLi/OpenELM-1_1B-DPO-full-max-min-reward (Text Generation)
- CharlesLi/OpenELM-1_1B-DPO-full-max-random-reward (Text Generation)
- CharlesLi/OpenELM-1_1B-DPO-full-least-similar (Text Generation)
- taicheng/zephyr-7b-dpo-qlora
- CharlesLi/OpenELM-1_1B-DPO-full-max-reward-least-similar (Text Generation)
- dmariko/SmolLM-360M-Instruct-dpo-15k
- QinLiuNLP/llama3-sudo-dpo-instruct-5epochs-0909
- CharlesLi/OpenELM-1_1B-DPO-full-max-reward-most-similar (Text Generation)
- CharlesLi/OpenELM-1_1B-DPO-full-most-similar (Text Generation)
- DUAL-GPO/phi-2-dpo-chatml-lora-i1
- CharlesLi/OpenELM-1_1B-DPO-full-max-second-reward (Text Generation)
- CharlesLi/OpenELM-1_1B-DPO-full-random-pair (Text Generation)
- Wenboz/zephyr-7b-dpo-lora