---
base_model:
- Qwen/Qwen2.5-3B-Instruct
datasets:
- ulab-ai/Time-Bench
license: apache-2.0
tags:
- temporal-reasoning
- reinforcement-learning
- large-language-models
paperswithcode:
  arxiv_id: 2505.13508
library_name: transformers
pipeline_tag: text-generation
---
*Figure: Output examples.*
[📊 Dataset](https://huggingface.co/datasets/ulab-ai/Time-Bench) | [🚀 Code](https://github.com/ulab-uiuc/Time-R1) | [📖 Paper](https://arxiv.org/abs/2505.13508)
# Time-R1 Model Series

This collection hosts the official checkpoints for the **Time-R1** model, as described in the paper "Time-R1: Towards Comprehensive Temporal Reasoning in LLMs". Time-R1 is a 3B-parameter Large Language Model trained with a novel three-stage reinforcement learning curriculum to endow it with comprehensive temporal abilities: understanding, prediction, and creative generation.

These models are trained using the [Time-Bench dataset](https://huggingface.co/datasets/ulab-ai/Time-Bench).

## Model Checkpoints

We provide several checkpoints representing different stages of the Time-R1 training process:

### Stage 1: Temporal Comprehension Models

These models are trained to develop foundational temporal understanding.

* **[Time-R1-S1P1](https://huggingface.co/ulab-ai/Time-R1-S1P1):** Checkpoint after Phase 1 of Stage 1 training.
    * *Focus: Foundational logic on easy timestamp inference tasks.*
* **[Time-R1-S1P2](https://huggingface.co/ulab-ai/Time-R1-S1P2):** Checkpoint after Phase 2 of Stage 1 training.
    * *Focus: Full task exploration on all Stage 1 subtasks with mixed difficulty.*
* **[Time-R1-Theta1](https://huggingface.co/ulab-ai/Time-R1-Theta1):** Checkpoint θ₁, after Phase 3 (full Stage 1 training).
    * *Focus: Refined precision on all Stage 1 subtasks under stricter evaluation.*
* **[Time-R1-Theta1_prime](https://huggingface.co/ulab-ai/Time-R1-Theta1_prime):** Ablation model θ₁', trained for Stage 1 without the dynamic reward design.
    * *Focus: Serves as a baseline to evaluate the efficacy of the dynamic reward curriculum.*

### Stage 2: Future Event Time Prediction Model

This model builds upon Stage 1 capabilities to predict future event timings.

* **[Time-R1-Theta2](https://huggingface.co/ulab-ai/Time-R1-Theta2):** Checkpoint θ₂, after Stage 2 training.
    * *Focus: Predicting the timing of future events occurring after its initial knowledge cutoff.*

Please refer to the [main paper](https://arxiv.org/abs/2505.13508) for detailed discussions of the architecture, training methodology, and comprehensive evaluations.

## How to Use

For loading and using these models, please refer to the example scripts and documentation provided in our [GitHub repository](https://github.com/ulab-uiuc/Time-R1). Typically, you can load the models using the Hugging Face `transformers` library; a fuller inference sketch is included at the end of this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example for one of the models (replace with the specific model name)
model_name = "ulab-ai/Time-R1-Theta1_prime"  # Or your specific Hugging Face model path

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Further usage instructions would go here or in the repository
```

## Citations

```bibtex
@article{liu2025time,
  title={Time-R1: Towards Comprehensive Temporal Reasoning in LLMs},
  author={Liu, Zijia and Han, Peixuan and Yu, Haofei and Li, Haoru and You, Jiaxuan},
  journal={arXiv preprint arXiv:2505.13508},
  year={2025}
}
```
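
## Example Inference (Sketch)

The snippet below extends the basic loading example above with a full generation pass. It is a minimal sketch only: the checkpoint choice, dtype/device settings, decoding parameters, and the example question are illustrative assumptions, and the exact prompt format used during Time-R1 training is documented in the [GitHub repository](https://github.com/ulab-uiuc/Time-R1) rather than here. Because the checkpoints are based on Qwen2.5-3B-Instruct, the standard `transformers` chat template is assumed to apply.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative checkpoint choice; any Time-R1 checkpoint listed above can be substituted.
model_name = "ulab-ai/Time-R1-Theta2"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # assumption: GPU with bf16 support; use float32 on CPU
    device_map="auto",           # requires the `accelerate` package
)

# Hypothetical temporal-reasoning style question; the prompts used in the paper
# are defined in the repository's scripts.
messages = [
    {
        "role": "user",
        "content": (
            "Based on the developments described below, infer the most likely month "
            "and year in which the event will occur, and explain your reasoning."
        ),
    }
]

# Qwen2.5-Instruct-based checkpoints ship a chat template, so apply_chat_template is assumed to work.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
response = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```

For the exact prompts, decoding settings, and evaluation scripts used in the paper, defer to the GitHub repository.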