Yi-1.5-34B-32K fine-tuned via SFT on adamo1139/uninstruct-v1-experimental-chatml, then trained via ORPO on adamo1139/rawrr_v2-2_stage1. This is an attempt to fix the synthetic SFT contamination of the original Yi-1.5-34B-32K.
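
For illustration, here is a minimal sketch of the ORPO stage using Hugging Face TRL's ORPOTrainer. This is not the exact training script used for this model: hyperparameters are placeholders, the `yi-34b-sft` path is a hypothetical stand-in for the SFT-stage checkpoint, and keyword names (e.g. `tokenizer` vs `processing_class`) vary across TRL versions.

```python
# Illustrative sketch of the ORPO preference-tuning stage with TRL;
# not the actual training script for this model.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

# Hypothetical path to the checkpoint produced by the SFT stage.
sft_checkpoint = "yi-34b-sft"
model = AutoModelForCausalLM.from_pretrained(sft_checkpoint)
tokenizer = AutoTokenizer.from_pretrained(sft_checkpoint)

# Preference data; assumed to expose the prompt/chosen/rejected
# columns that ORPOTrainer expects.
dataset = load_dataset("adamo1139/rawrr_v2-2_stage1", split="train")

config = ORPOConfig(
    output_dir="yi-34b-orpo",
    beta=0.1,                       # placeholder odds-ratio loss weight
    learning_rate=5e-6,             # placeholder
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    max_length=4096,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,            # "processing_class" in newer TRL versions
)
trainer.train()
```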

Next up:

- Cleaning and releasing the AEZAKMI v4 dataset.
- Training this model on it, possibly also adding some toxic-dpo-natural if needed.
- Releasing it.
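
For reference, a minimal sketch of loading the model for inference with transformers. The chat template is assumed to be ChatML, matching the format of the SFT dataset.

```python
# Minimal inference sketch; assumes the tokenizer ships a ChatML chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "adamo1139/Yi-1.5-34B-32K-rebased-1406"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Hello, who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```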

Model size: 34.4B params (FP16, Safetensors)