DavieLion committed
Commit 8f968d4 · verified · 1 Parent(s): 4967bb3

Update README.md

Files changed (1)
  1. README.md +10 -5
README.md CHANGED
@@ -1,13 +1,15 @@
 ---
-base_model: meta-llama/Llama-3.2-1B
+base_model:
+- DavieLion/Lllma-3.2-1B
 tags:
 - alignment-handbook
 - generated_from_trainer
 datasets:
-- new_data/iter0
+- DavieLion/SPIN_iter0
 model-index:
 - name: iter0-ckpt
   results: []
+license: apache-2.0
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -15,11 +17,14 @@ should probably proofread and complete it, then remove this comment. -->

 # iter0-ckpt

-This model is a fine-tuned version of [/data1/minghaocao/project10_llm/SPIN-main/meta-llama/Llama-3.2-1B](https://huggingface.co//data1/minghaocao/project10_llm/SPIN-main/meta-llama/Llama-3.2-1B) on the new_data/iter0 dataset.
+This model is a fine-tuned version of [DavieLion/Lllma-3.2-1B](https://huggingface.co/DavieLion/Lllma-3.2-1B) on the DavieLion/SPIN_iter0 dataset.

 ## Model description

-More information needed
+Model type: A 1B parameter GPT-like model fine-tuned on synthetic datasets.
+Language(s) (NLP): Primarily English
+License: Apache License 2.0
+Finetuned from model: DavieLion/Lllma-3.2-1B (based on meta/Llama-3.2-1B)

 ## Intended uses & limitations

@@ -57,4 +62,4 @@ The following hyperparameters were used during training:
 - Transformers 4.37.0
 - Pytorch 2.1.2+cu121
 - Datasets 2.14.6
-- Tokenizers 0.15.2
+- Tokenizers 0.15.2
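
For context, a minimal sketch of how a checkpoint described by this updated card could be loaded with the Transformers version listed in the framework section. The repo id `DavieLion/iter0-ckpt` is an assumption taken from the model-index name, not a confirmed repository, and the prompt is purely illustrative.

```python
# Minimal sketch: loading a causal LM checkpoint with Hugging Face Transformers.
# NOTE: "DavieLion/iter0-ckpt" is an assumed repo id based on the model-index
# name in the card; substitute the actual repository if it differs.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DavieLion/iter0-ckpt"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short completion to sanity-check the checkpoint.
prompt = "Explain self-play fine-tuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```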