Hacked together a way to log trl GRPO training completions to a 🤗 dataset repo. This allows you to:
- Track rewards from multiple reward functions
- Treat the completions and rewards from training as a "proper" dataset and do EDA
- Share results for open science
The implementation is super hacky, but I'm curious if people would find this useful.
To push completions to the Hub, you just need two extra parameters:
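The two parameters themselves aren't shown here, so below is a minimal sketch of the general mechanism instead: a hypothetical `TrainerCallback` that collects completions plus per-reward-function scores and pushes them to a Hub dataset repo. Names like `CompletionLogger`, `log_row`, and `push_every` are invented for illustration (not TRL's API), and wiring `log_row` into `GRPOTrainer` is the hacky part that's omitted.

```python
# Hypothetical sketch, not the actual implementation: collect
# (prompt, completion, rewards) rows during training and push them
# to a Hub dataset repo every few steps.
from datasets import Dataset
from transformers import TrainerCallback


class CompletionLogger(TrainerCallback):
    def __init__(self, hub_repo_id, push_every=50):
        self.hub_repo_id = hub_repo_id  # e.g. "your-username/grpo-completions"
        self.push_every = push_every
        self.rows = []

    def log_row(self, prompt, completion, rewards):
        # `rewards` maps reward-function name -> score, so multiple
        # reward functions can be tracked per completion
        self.rows.append({"prompt": prompt, "completion": completion, **rewards})

    def on_step_end(self, args, state, control, **kwargs):
        # Push the accumulated rows as a dataset every `push_every` steps
        if self.rows and state.global_step % self.push_every == 0:
            Dataset.from_list(self.rows).push_to_hub(self.hub_repo_id)
```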
The model's own self-description? "A model for generating concise summaries of model & dataset cards from the Hugging Face Hub"
The goal? Make it easier to find the right models and datasets for your specific needs. It's already powering a semantic search for datasets Space.
It's still a WIP, but thanks to @loubnabnl, @anton-l, @eliebak et al. for cooking such a nice base model for fine-tuning small, efficient models for specific domains and tasks. 🙏
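For a feel of how it would be used, here's a hypothetical usage sketch: the model id and prompt format below are placeholders, since the post doesn't name the checkpoint or document its expected input.

```python
# Hypothetical usage sketch: model id and prompt format are placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="your-username/card-tldr")

card_text = open("README.md").read()  # a model or dataset card
prompt = f"Summarize this Hugging Face card in one sentence:\n\n{card_text}"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```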
Why choose between strong LLM reasoning and efficient models?
Use DeepSeek to generate high-quality training data, then distil that knowledge into ModernBERT (answerdotai/ModernBERT-base) for fast, efficient classification.
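A rough sketch of the second half of that recipe, assuming the LLM-labelling step has already produced (text, label) pairs; the toy rows below stand in for DeepSeek-generated annotations, and only answerdotai/ModernBERT-base is taken from the post.

```python
# Hedged sketch: fine-tune ModernBERT on LLM-labelled pairs.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Toy stand-ins for LLM-generated annotations
data = Dataset.from_dict({
    "text": ["The proof follows by induction.", "Buy cheap watches now!!!"],
    "label": [1, 0],  # 1 = educational, 0 = not (illustrative labels)
})

tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "answerdotai/ModernBERT-base", num_labels=2
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="modernbert-distilled", num_train_epochs=1),
    train_dataset=data.map(tokenize, batched=True),
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```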
Given an input image, it generates several queries along with explanations to justify them. This approach can generate synthetic data for fine-tuning ColPali models.
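One way the generated pairs could be assembled into a fine-tuning dataset is sketched below; `generate_queries` is a placeholder for whatever VLM call actually produces the queries, and the repo id is hypothetical.

```python
# Hedged sketch: collect generated (query, explanation) pairs per page
# image into a dataset for ColPali-style fine-tuning.
from datasets import Dataset

def generate_queries(image_path):
    # Placeholder for the VLM step that produces queries + justifications
    return [{"query": "example query", "explanation": "why it matches the page"}]

rows = []
for image_path in ["page_001.png", "page_002.png"]:
    for item in generate_queries(image_path):
        rows.append({"image": image_path, **item})

Dataset.from_list(rows).push_to_hub("your-username/colpali-queries")
```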
The Hugging Face community has rated educational content in languages spoken by 1.6 billion people! New additions:
• Japanese
• Italian
• Old High German