Tags: Text Generation · Transformers · Safetensors · llama · orpo · text-generation-inference · Inference Endpoints

Basic Model Info

Training was done in three stages: 1 epoch on adamo1139/uninstruct-v1-experimental-chatml, then 1 epoch on adamo1139/HESOYAM_v0.3, then a fraction of an epoch (ORPO) on adamo1139/rawrr_v2-1-stage2. I used GaLore for all three stages.
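
For reference, here's roughly what a GaLore SFT stage could look like with the Hugging Face Trainer. This is a minimal sketch, not the exact recipe: the base checkpoint name, hyperparameters, and the assumption that the dataset has a plain "text" column are all illustrative guesses.

```python
# Minimal sketch of one GaLore SFT stage (requires transformers >= 4.39 and galore-torch).
# Base checkpoint, hyperparameters, and dataset column names are assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "01-ai/Yi-34B-200K"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype="auto")

# Stage 1 dataset; stages 2 and 3 swap in HESOYAM_v0.3 and rawrr_v2-1-stage2.
dataset = load_dataset("adamo1139/uninstruct-v1-experimental-chatml", split="train")

def tokenize(batch):
    # Assumes the dataset exposes a plain "text" column with ChatML-formatted samples.
    return tokenizer(batch["text"], truncation=True, max_length=4096)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="hesoyam-stage1",
    num_train_epochs=1,                    # one epoch per SFT stage
    per_device_train_batch_size=1,
    learning_rate=1e-5,                    # illustrative value
    optim="galore_adamw",                  # GaLore low-rank gradient projection
    optim_target_modules=["attn", "mlp"],  # modules GaLore is applied to
    bf16=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```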

After noticing that adamo1139/Yi-34B-200K-HESOYAM-2206 still isn't free of slop unless you use the right prompt, I decided to try to ORPO it out using adamo1139/rawrr_v2-1-stage2.
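
A hedged sketch of what such an ORPO step could look like with TRL's ORPOTrainer is below; the beta value, step count, and the assumption that the dataset exposes prompt/chosen/rejected columns are mine, not the settings actually used.

```python
# Sketch of an ORPO preference-tuning step with TRL; all hyperparameters are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

sft_checkpoint = "adamo1139/Yi-34B-200K-HESOYAM-2206"  # the SFT model being ORPO'd
tokenizer = AutoTokenizer.from_pretrained(sft_checkpoint)
model = AutoModelForCausalLM.from_pretrained(sft_checkpoint, torch_dtype="auto")

# Assumes rawrr_v2-1-stage2 provides "prompt", "chosen", and "rejected" columns.
dataset = load_dataset("adamo1139/rawrr_v2-1-stage2", split="train")

config = ORPOConfig(
    output_dir="hesoyam-orpo",
    beta=0.1,                              # strength of the odds-ratio preference term
    max_steps=200,                         # "a fraction of an epoch"
    per_device_train_batch_size=1,
    learning_rate=5e-6,
    optim="galore_adamw",                  # GaLore, as in the earlier stages
    optim_target_modules=["attn", "mlp"],
)

trainer = ORPOTrainer(model=model, args=config, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```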

The effects are mixed: it's pleasant to talk to, and it no longer really feels like you're exchanging comments on reddit/4chan. It's very easy to have a nice, human-like discussion with it, and it has potential.

The prompt format is ChatML; I'm not sure which system prompt works best.
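
For example, a ChatML prompt for this model can be built by hand like the sketch below; the system prompt text is just a placeholder, since it's unclear which one works best.

```python
# Minimal ChatML inference sketch; the system prompt and sampling settings are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "adamo1139/Yi-34B-200K-HESOYAM-rawrr_stage2-2306"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"   # placeholder system prompt
    "<|im_start|>user\n"
    "Tell me about yourself.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```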

Model size: 34.4B params · Tensor type: FP16 (Safetensors)
