rbgo's Collections
  • Finetuning
  • LLM-Alignment Papers
  • PPO Trainers
  • All About LLMs

PPO Trainers

Updated Sep 12, 2024

  • Direct Language Model Alignment from Online AI Feedback
    Paper • 2402.04792 • Published Feb 7, 2024 • 32