Replace Arxiv paper link with Hugging Face paper link #2
by nielsr (HF Staff) · opened

README.md CHANGED
@@ -3,15 +3,15 @@ base_model:
 - Qwen/Qwen2.5-3B-Instruct
 datasets:
 - ulab-ai/Time-Bench
+library_name: transformers
 license: apache-2.0
+pipeline_tag: text-generation
 tags:
 - temporal-reasoning
 - reinforcement-learning
 - large-language-models
 paperswithcode:
   arxiv_id: 2505.13508
-library_name: transformers
-pipeline_tag: text-generation
 ---
 
 <center>
@@ -19,12 +19,12 @@ pipeline_tag: text-generation
 </center>
 
 <div align="center">
-   <a href="https://huggingface.co/datasets/ulab-ai/Time-Bench"><strong>Dataset</strong></a> | <a href="https://github.com/ulab-uiuc/Time-R1"><strong>Code</strong></a> | <a href="https://arxiv.org/abs/2505.13508"><strong>Paper</strong></a>
+   <a href="https://huggingface.co/datasets/ulab-ai/Time-Bench"><strong>Dataset</strong></a> | <a href="https://github.com/ulab-uiuc/Time-R1"><strong>Code</strong></a> | <a href="https://huggingface.co/papers/2505.13508"><strong>Paper</strong></a>
 </div>
 
 # Time-R1 Model Series
 
-This collection hosts the official checkpoints for the **Time-R1** model, as described in the paper [Time-R1: Towards Comprehensive Temporal Reasoning in LLMs](https://arxiv.org/abs/2505.13508). Time-R1 is a 3B parameter Large Language Model trained with a novel three-stage reinforcement learning curriculum to endow it with comprehensive temporal abilities: understanding, prediction, and creative generation.
+This collection hosts the official checkpoints for the **Time-R1** model, as described in the paper [Time-R1: Towards Comprehensive Temporal Reasoning in LLMs](https://huggingface.co/papers/2505.13508). Time-R1 is a 3B parameter Large Language Model trained with a novel three-stage reinforcement learning curriculum to endow it with comprehensive temporal abilities: understanding, prediction, and creative generation.
 
 These models are trained using the [Time-Bench dataset](https://huggingface.co/datasets/ulab-ai/Time-Bench).
 
@@ -52,7 +52,7 @@ This model builds upon Stage 1 capabilities to predict future event timings.
 * **[Time-R1-Theta2](https://huggingface.co/ulab-ai/Time-R1-Theta2):** Checkpoint θ₂, after Stage 2 training.
   * *Focus: Predicting the timing of future events occurring after its initial knowledge cutoff.*
 
-Please refer to the [main paper](https://arxiv.org/abs/2505.13508) for detailed discussions on the architecture, training methodology, and comprehensive evaluations.
+Please refer to the [main paper](https://huggingface.co/papers/2505.13508) for detailed discussions on the architecture, training methodology, and comprehensive evaluations.
 
 ## How to Use
 
@@ -76,4 +76,5 @@ model = AutoModelForCausalLM.from_pretrained(model_name)
   author={Liu, Zijia and Han, Peixuan and Yu, Haofei and Li, Haoru and You, Jiaxuan},
   journal={arXiv preprint arXiv:2505.13508},
   year={2025}
-}
+}
+```