TPO (community organization on Hugging Face)
AI & ML interests: Alignment, Preference Optimization, RLHF
Team members: 4
tpo-alignment's models (11, sorted by most recently updated):
- tpo-alignment/Instruct-Llama-3-8B-TPO-L-y2 • 8B • Updated Feb 19 • 12
- tpo-alignment/Instruct-Llama-3-8B-TPO-y2 • 8B • Updated Feb 19 • 7
- tpo-alignment/Instruct-Llama-3-8B-TPO-y4 • 8B • Updated Feb 19 • 11
- tpo-alignment/Instruct-Llama-3-8B-TPO-y3 • 8B • Updated Feb 19 • 11
- tpo-alignment/Mistral-Instruct-7B-TPO-y2-v0.2 • 7B • Updated Feb 19 • 10
- tpo-alignment/Mistral-Instruct-7B-TPO-y2-v0.1 • 7B • Updated Feb 19 • 29
- tpo-alignment/Mistral-Instruct-7B-TPO-y4 • 7B • Updated Feb 19 • 14
- tpo-alignment/Mistral-Instruct-7B-TPO-y3 • 7B • Updated Feb 19 • 7
- tpo-alignment/Llama-3-8B-TPO-L-40k • 8B • Updated Feb 19 • 11
- tpo-alignment/Mistral-7B-TPO-40k • 7B • Updated Feb 19 • 37
- tpo-alignment/Llama-3-8B-TPO-40k • 8B • Updated Feb 19 • 12
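For programmatic use, the listing above can be captured as a small machine-readable structure. This is a sketch: the repo IDs and parameter sizes are copied from the page, and the commented-out loading snippet assumes the standard `transformers` API (not run here, since it would download several GB of weights).

```python
# Model IDs from the tpo-alignment organization listing, paired with the
# parameter size shown on the page.
TPO_MODELS = {
    "tpo-alignment/Instruct-Llama-3-8B-TPO-L-y2": "8B",
    "tpo-alignment/Instruct-Llama-3-8B-TPO-y2": "8B",
    "tpo-alignment/Instruct-Llama-3-8B-TPO-y4": "8B",
    "tpo-alignment/Instruct-Llama-3-8B-TPO-y3": "8B",
    "tpo-alignment/Mistral-Instruct-7B-TPO-y2-v0.2": "7B",
    "tpo-alignment/Mistral-Instruct-7B-TPO-y2-v0.1": "7B",
    "tpo-alignment/Mistral-Instruct-7B-TPO-y4": "7B",
    "tpo-alignment/Mistral-Instruct-7B-TPO-y3": "7B",
    "tpo-alignment/Llama-3-8B-TPO-L-40k": "8B",
    "tpo-alignment/Mistral-7B-TPO-40k": "7B",
    "tpo-alignment/Llama-3-8B-TPO-40k": "8B",
}

# Loading one of these checkpoints would look like the following
# (commented out: it fetches the full model weights from the Hub):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("tpo-alignment/Llama-3-8B-TPO-40k")
# model = AutoModelForCausalLM.from_pretrained("tpo-alignment/Llama-3-8B-TPO-40k")

print(len(TPO_MODELS))  # prints 11, matching the count in the listing
```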