This is the token-wise reward model introduced in the preprint Segmenting Text and Learning Their Rewards for Improved RLHF in Language Models (https://arxiv.org/abs/2501.02790). For more details, please visit our repository at https://github.com/yinyueqin/DenseRewardRLHF-PPO.
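Below is a minimal usage sketch for loading the checkpoint and scoring a prompt–response pair with the Hugging Face `transformers` Auto classes. This is an assumption-laden illustration, not the authors' loading code: the exact model class, reward head, and output format are defined in the DenseRewardRLHF-PPO repository linked above, and the prompt/response strings are placeholders.

```python
# Hypothetical loading sketch -- assumes the checkpoint works with the standard
# transformers Auto classes; see the DenseRewardRLHF-PPO repo for the authors' code.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "yyqoni/meta-llama-3.1-instruct-8b-token-rm-700k"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # checkpoint is stored in BF16
    device_map="auto",
    trust_remote_code=True,       # assumption: a custom reward head may require remote code
)
model.eval()

# Format a prompt/response pair with the Llama 3.1 Instruct chat template.
messages = [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model(**inputs)
# A token-wise reward model assigns a scalar reward to each response token;
# consult the repository for how the per-token rewards are read out of `outputs`.
```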

Model size: 7.5B params · Tensor type: BF16 · Format: Safetensors