Alternative version of my prior 14B GRPO attempt, using a higher learning rate on a limited selection of the data (2k entries instead of 6k).
The W&B run, with hyperparameters and code in the Files section, can be found here:
https://wandb.ai/kalomaze/verifiers-examples/runs/mayn2ctv?nw=nwuserkalomaze
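This run uses GRPO (Group Relative Policy Optimization). As context, the core idea — replacing a learned value baseline with group-normalized rewards across sampled completions for the same prompt — can be sketched as below. This is a generic illustration, not this run's actual training code; the real hyperparameters and code are in the W&B files linked above.

```python
import statistics

def grpo_advantages(rewards, eps=1e-8):
    """Compute group-relative advantages for one prompt's sampled
    completions: each reward is normalized by the group's mean and
    (population) standard deviation."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Example: one prompt, four sampled completions scored by a verifier.
advs = grpo_advantages([1.0, 0.0, 1.0, 0.0])
print([round(a, 2) for a in advs])  # correct samples get positive advantage
```

These advantages then weight the token log-probabilities in a clipped policy-gradient objective, so completions that beat their group's average are reinforced.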
Model tree for Quest-AI/quest-corruption-14b-grpo-v1.5-s85:
- Base model: Quest-AI/quest-corruption-14b-s110-r3