Swift (aoi-ot)
13 followers · 2 following
AI & ML interests: None yet
Recent Activity
Updated a collection: VibeVoice - Mirror · 6 days ago
Reacted to Kseniase's post with 👍 · 7 days ago:
10 Latest Preference Optimization Techniques

Models need feedback on what makes outputs “good” or “bad.” Policy optimization (PO) turns preferences and rewards into actual training signals. This field is evolving quickly, moving far beyond classics like PPO and GRPO. So here is our overview of the 10 newest PO methods:

1. Pref-GRPO → https://huggingface.co/papers/2508.20751
Stabilizes text-to-image reinforcement learning (RL) with pairwise preference rewards and a unified UNIGENBENCH benchmark.

2. PVPO (Policy with Value Preference Optimization) → https://huggingface.co/papers/2508.21104
A critic-free RL method that uses a pre-trained model as a reference anchor to reduce bias and guide learning, selecting high-value examples through data pre-sampling.

3. DCPO (Dynamic Clipping Policy Optimization) → https://huggingface.co/papers/2509.02333
Uses dynamic clipping, which adjusts probability limits per token for better token exploration, plus smooth reward standardization to balance rewards across training steps and prevent wasted updates.

4. ARPO (Agentic Reinforced Policy Optimization) → https://huggingface.co/papers/2507.19849
Optimizes multi-turn LLM agents that use external tools. It combines an entropy-based adaptive rollout to explore after tool use with an advantage-attribution method that better assigns credit across steps, leading to more efficient tool use with fewer resources.

5. GRPO-RoC (Group Relative Policy Optimization with Resampling-on-Correct) → https://huggingface.co/papers/2508.20722
Oversamples rollouts, then resamples them to keep diverse mistakes and only the highest-quality correct answers. This reduces noise and yields stronger reasoning in a code environment (a minimal sketch of this resampling step follows the list).

Read further below ⬇️ If you like this, also subscribe to the Turing Post: https://www.turingpost.com/subscribe
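GRPO-RoC's resampling step (item 5) is easy to picture in code. Below is a minimal Python sketch, not the paper's implementation: it assumes each rollout carries a boolean correctness flag and a scalar quality score (hypothetical stand-ins for whatever verifier and scoring the method actually uses), keeps only the best correct rollouts, and draws incorrect ones uniformly so diverse mistakes survive.

```python
import random

def resample_on_correct(rollouts, group_size, rng=None):
    """Downsample an oversampled rollout group to `group_size`:
    keep only the highest-quality correct rollouts, and sample the
    incorrect ones uniformly to preserve diverse mistakes.
    All field and function names here are illustrative assumptions."""
    rng = rng or random.Random(0)
    correct = sorted([r for r in rollouts if r["correct"]],
                     key=lambda r: r["quality"], reverse=True)
    incorrect = [r for r in rollouts if not r["correct"]]

    # Preserve the correct/incorrect ratio observed in the oversampled group.
    n_correct = min(round(group_size * len(correct) / max(len(rollouts), 1)),
                    len(correct))
    kept = correct[:n_correct]                 # best correct answers only
    n_wrong = min(group_size - len(kept), len(incorrect))
    kept += rng.sample(incorrect, n_wrong)     # uniform draw: diverse mistakes
    return kept

# Example: oversample 16 rollouts, keep a group of 8 for the policy update.
oversampled = [{"correct": i % 3 == 0, "quality": i / 16} for i in range(16)]
group = resample_on_correct(oversampled, group_size=8)
print(len(group))  # 8
```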
Updated a collection: VibeVoice - Mirror · 10 days ago
aoi-ot's datasets (14, sorted by recently updated)
aoi-ot/zerochan-meta • Viewer • Updated Jul 25 • 3.96M • 4
aoi-ot/anime-pictures-meta • Viewer • Updated Jul 25 • 638k • 1
aoi-ot/imagenet21k-wip • Viewer • Updated Jul 9 • 63.4M • 1
aoi-ot/imagenet21k • Viewer • Updated Jul 3 • 264k • 1
aoi-ot/anime-pictures-1024p • Viewer • Updated Jun 19 • 604k • 2
aoi-ot/imagenet1k-640p • Viewer • Updated Jun 18 • 1.33M • 1
aoi-ot/pic-style • Viewer • Updated Jun 12 • 1.28M • 2
aoi-ot/danbooru2023 • Viewer • Updated May 29 • 6.86M • 2
aoi-ot/zerochan-wip • Viewer • Updated Apr 14 • 19.6k • 1
aoi-ot/anime-pictures-wip-backup • Viewer • Updated Apr 8 • 63k • 2
aoi-ot/pix-archive • Viewer • Updated Mar 5 • 746k • 1
aoi-ot/zerochan • Viewer • Updated Feb 18 • 7.69M • 1
aoi-ot/konachan • Viewer • Updated Feb 17 • 601k • 2
aoi-ot/anime-pictures • Viewer • Updated Feb 16 • 606k • 1