NoManDeRY committed on
Commit c7bf40c · verified · 1 Parent(s): bf80f5a

Update README.md

Files changed (1): README.md +3 -0
README.md CHANGED
@@ -16,6 +16,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # qwen-2-7b-dpo-ultrafeedback-5e-7-SFTed-paged_adamw_32bit-fixed-0.95
 
+This is a model released from the preprint: [DPO-Shift: Shifting the Distribution of Direct Preference Optimization](https://arxiv.org/abs/2502.07599). Please refer to our [repository](https://github.com/Meaquadddd/DPO-Shift) for more details.
+
+
 This model is a fine-tuned version of [NoManDeRY/DPO-Shift-Qwen-2-7B-UltraChat200K-SFT](https://huggingface.co/NoManDeRY/DPO-Shift-Qwen-2-7B-UltraChat200K-SFT) on the HuggingFaceH4/ultrafeedback_binarized dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.5890