# Saturn-1.5B & Saturn-7B

## Model Overview
Saturn-1.5B and Saturn-7B are trained with the SATURN framework, which uses Boolean Satisfiability (SAT) problems to continually improve language model reasoning through a curriculum learning pipeline.
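For context, a SAT instance asks whether a truth assignment exists that satisfies a formula in conjunctive normal form (CNF). The sketch below is illustrative only (it is not part of the SATURN pipeline); it checks a candidate assignment against a small DIMACS-style CNF formula:

```python
# Illustrative only: a tiny CNF satisfiability check, not SATURN training code.
# A formula is a list of clauses; each clause is a list of non-zero integers,
# where k means variable k is true and -k means it is negated (DIMACS-style).
formula = [[1, -2], [2, 3], [-1, -3]]  # (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
assignment = {1: True, 2: True, 3: False}  # candidate solution

def satisfies(formula, assignment):
    # Every clause must contain at least one literal made true by the assignment.
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

print(satisfies(formula, assignment))  # True: this assignment satisfies the formula
```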
## Model Details
- **Saturn-1.5B**: a 1.5-billion-parameter model.
- **Saturn-7B**: a 7-billion-parameter model.
## Usage
Load either model with Hugging Face's `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gtxygyzb/Saturn-7B"  # or "gtxygyzb/Saturn-1.5B"

# Download the model weights and the matching tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
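A minimal generation sketch is shown below, assuming the block above has already been run. The prompt text is a placeholder and the decoding settings are illustrative assumptions, not officially recommended values:

```python
# Hypothetical prompt; adapt it to the SAT-style reasoning tasks the model targets.
prompt = "Is the formula (x1 OR NOT x2) AND (x2 OR x3) satisfiable? Think step by step."

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)  # illustrative length limit
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```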
## Further Information

For more detailed information, please refer to:
- Our GitHub repository
- The research paper (arXiv:2505.16368)
## Citation
If you use Saturn-1.5B or Saturn-7B in your research, please cite our work:
```bibtex
@article{saturn2025,
  author  = {Huanyu Liu and Jia Li and Hao Zhu and Kechi Zhang and Yihong Dong and Ge Li},
  title   = {SATURN: SAT-based Reinforcement Learning to Unleash Language Model Reasoning},
  journal = {CoRR},
  volume  = {abs/2505.16368},
  year    = {2025},
}
```