m-serious committed (verified) · Commit aad20ba · 1 Parent(s): 38b3187

Update README.md

Files changed (1):
  1. README.md +3 -3
README.md CHANGED
@@ -39,16 +39,16 @@ These models are trained to develop foundational temporal understanding.
  * *Focus: Foundational logic on easy timestamp inference tasks.*
  * **[Time-R1-S1P2](https://huggingface.co/ulab-ai/Time-R1-S1P2):** Checkpoint after Phase 2 of Stage 1 training.
  * *Focus: Full task exploration on all Stage 1 subtasks with mixed difficulty.*
- * **[Time-R1-Theta1](https://huggingface.co/ulab-ai/Time-R1-Theta1):** Checkpoint $\theta_1$, after Phase 3 (full Stage 1 training).
+ * **[Time-R1-Theta1](https://huggingface.co/ulab-ai/Time-R1-Theta1):** Checkpoint θ₁, after Phase 3 (full Stage 1 training).
  * *Focus: Refined precision on all Stage 1 subtasks under stricter evaluation.*
- * **[Time-R1-Theta1_prime](https://huggingface.co/ulab-ai/Time-R1-Theta1_prime):** Ablation model $\theta_1'$, trained for Stage 1 without the dynamic reward design.
+ * **[Time-R1-Theta1_prime](https://huggingface.co/ulab-ai/Time-R1-Theta1_prime):** Ablation model θ₁', trained for Stage 1 without the dynamic reward design.
  * *Focus: Serves as a baseline to evaluate the efficacy of the dynamic reward curriculum.*
 
  ### Stage 2: Future Event Time Prediction Model
 
  This model builds upon Stage 1 capabilities to predict future event timings.
 
- * **[Time-R1-Theta2](https://huggingface.co/ulab-ai/Time-R1-Theta2):** Checkpoint $\theta_2$, after Stage 2 training.
+ * **[Time-R1-Theta2](https://huggingface.co/ulab-ai/Time-R1-Theta2):** Checkpoint θ₂, after Stage 2 training.
  * *Focus: Predicting the timing of future events occurring after its initial knowledge cutoff.*
 
  Please refer to the [main paper](https://arxiv.org/abs/2505.13508) for detailed discussions on the architecture, training methodology, and comprehensive evaluations.
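
For context, the checkpoints listed in the diff are hosted on the Hugging Face Hub, so a minimal loading sketch follows. It assumes the models can be used as standard causal language models via the `transformers` library (`AutoModelForCausalLM` / `AutoTokenizer`); that choice of model class and the example prompt are assumptions for illustration, not something stated in this commit, so consult each model card for the authors' recommended usage.

```python
# Minimal sketch (assumption-based): loading one of the Time-R1 checkpoints
# listed above from the Hugging Face Hub. The model class and prompt format
# are illustrative guesses, not taken from the diff.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ulab-ai/Time-R1-Theta1"  # checkpoint θ₁ from the Stage 1 list

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical temporal-reasoning prompt; the real prompt format is defined by
# the project's README and paper, not by this example.
prompt = "Infer the most likely year and month of the event described: ..."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```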