Update README.md
README.md CHANGED
@@ -37,19 +37,29 @@ In addition, Doge uses Dynamic Mask Attention as sequence transformation and can
 > TODO: The larger model is under training and will be uploaded soon.
 
 **Training**:
-| Model | Training Data | Epochs | Steps | Content Length | Tokens | LR | Batch Size | Precision |
-|---|---|---|---|---|---|---|---|---|
-| [Doge-20M](https://huggingface.co/JingzeShi/Doge-20M) | [HuggingFaceTB/smollm-corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus) | 2 | 10k | 2048 | 5B | 8e-4 | 0.25M | bfloat16 |
-| [Doge-60M](https://huggingface.co/JingzeShi/Doge-60M) | [HuggingFaceTB/smollm-corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus) | 2 | 20k | 2048 | 20B | 6e-4 | 0.5M | bfloat16 |
 
-
-| Model | TriviaQA | MMLU | ARC | PIQA | HellaSwag | OBQA | Winogrande |
+| Model | Training Data | Steps | Content Length | Tokens | LR | Batch Size | Precision |
 |---|---|---|---|---|---|---|---|
-| [Doge-20M](https://huggingface.co/JingzeShi/Doge-20M) | - | - | - | - | - | - | - |
-| [Doge-60M](https://huggingface.co/JingzeShi/Doge-60M) | - | - | - | - | - | - | - |
+| [Doge-20M](https://huggingface.co/JingzeShi/Doge-20M) | [HuggingFaceTB/smollm-corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus) | 8k | 2048 | 4B | 8e-3 | 0.5M | bfloat16 |
+| [Doge-60M](https://huggingface.co/JingzeShi/Doge-60M) | [HuggingFaceTB/smollm-corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus) | 16k | 2048 | 16B | 6e-3 | 1M | bfloat16 |
+
+**Evaluation**:
+
+| Model | MMLU | TriviaQA | ARC-E | ARC-C | PIQA | HellaSwag | OBQA | Winogrande |
+|---|---|---|---|---|---|---|---|---|
+| [Doge-20M](https://huggingface.co/JingzeShi/Doge-20M) | 25.43 | 0 | 36.83 | 22.53 | 58.38 | 27.25 | 25.60 | 50.20 |
+| [Doge-60M](https://huggingface.co/JingzeShi/Doge-60M) | 26.41 | 0 | 50.00 | 25.34 | 61.43 | 31.45 | 28.00 | 49.64 |
+
+> All evaluations are done using five-shot settings, without additional training on the benchmarks.
+
+**Procedure**:
+
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/loser_cheems/huggingface/runs/ydynuvfz)
+
 
 **Environment**:
-
+
+- Image: nvcr.io/nvidia/pytorch:24.12-py3
 - Hardware: 1x NVIDIA RTX 4090
 - Software: Transformers
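For context on the environment entries above (not part of the commit itself): the checkpoints referenced in the tables load through plain Transformers, so a minimal usage sketch looks like the following. Passing `trust_remote_code=True` is an assumption, since the Doge architecture may ship as custom modeling code on the Hub rather than being bundled with the installed Transformers version.

```python
# Minimal loading sketch (assumptions: Hub access, and that the checkpoints
# expose their modeling code via trust_remote_code).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JingzeShi/Doge-20M"  # or "JingzeShi/Doge-60M"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Hey, how are you doing today?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```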
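The evaluation note states that all scores are five-shot, but the commit does not say which evaluation harness produced the table. A hypothetical reproduction sketch with EleutherAI's lm-evaluation-harness is shown below; the harness choice, task identifiers, and batch size are assumptions, picked to mirror the benchmark columns in the table.

```python
# Hypothetical five-shot evaluation sketch using lm-evaluation-harness
# (pip install lm-eval); the commit does not state which harness was used.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=JingzeShi/Doge-20M,trust_remote_code=True,dtype=bfloat16",
    tasks=[
        "mmlu", "triviaqa", "arc_easy", "arc_challenge",
        "piqa", "hellaswag", "openbookqa", "winogrande",
    ],
    num_fewshot=5,   # matches the "five-shot settings" note in the README
    batch_size=8,    # assumed; pick what fits on the GPU
)
print(results["results"])
```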