Update README.md
README.md
CHANGED
@@ -25,7 +25,7 @@ tags:
 25 | - [6.0bpw](https://huggingface.co/anthracite-org/magnum-12b-v2.5-kto-exl2/tree/6.0bpw)
 26 | - [8.0bpw](https://huggingface.co/anthracite-org/magnum-12b-v2.5-kto-exl2/tree/8.0bpw)
 27 |
-28 | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/658a46cbfb9c2bdfae75b3a6/
+28 | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/658a46cbfb9c2bdfae75b3a6/TnsOYFR_aXVcrcjmBUFaG.png)
 29 |
 30 | v2.5 KTO is an experimental release; we are testing a hybrid reinforcement learning strategy of KTO + DPOP, using data sampled from the original model as "rejected" and data from the original finetuning dataset as "chosen".
 31 | This was done on a limited portion of primarily instruction-following data; we plan to scale up to a larger KTO dataset in the future for better generalization.
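The data construction the notes describe can be sketched as follows. This is a hypothetical illustration, not the authors' code: KTO-style training needs only a binary desirability label per (prompt, completion) pair, so "chosen" rows from the original finetuning set get `label=True` and completions sampled from the original model get `label=False`. The function and field names here are assumptions for illustration.

```python
# Hypothetical sketch: assembling KTO-style preference data, where "chosen"
# examples come from the original finetuning dataset and "rejected" examples
# are completions sampled from the original model, as the release notes say.

def build_kto_examples(finetune_rows, model_samples):
    """Label each (prompt, completion) pair for KTO.

    Unlike DPO, KTO does not need paired preferences; each example
    carries a single binary desirability label.
    """
    examples = []
    for row in finetune_rows:  # original finetuning data -> desirable
        examples.append({"prompt": row["prompt"],
                         "completion": row["completion"],
                         "label": True})
    for row in model_samples:  # sampled from the original model -> undesirable
        examples.append({"prompt": row["prompt"],
                         "completion": row["completion"],
                         "label": False})
    return examples

data = build_kto_examples(
    [{"prompt": "Hi", "completion": "Hello! How can I help?"}],
    [{"prompt": "Hi", "completion": "hi"}],
)
print(len(data), data[0]["label"], data[1]["label"])
```

A dataset in this shape maps directly onto binary-feedback trainers; the DPOP component of the hybrid objective would add a penalty term on top of this labeling rather than change the data format.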