alpindale committed on
Commit
8ec09f5
1 Parent(s): b5fb6a6

Update README.md

Files changed (1): README.md (+2 −7)
README.md CHANGED
@@ -31,14 +31,9 @@ Can I ask a question?<|im_end|>
 
 ## Credits
 
-This model has been a team effort, credits go to:
-
-- [Sao10K](https://huggingface.co/Sao10K) for help with (and cleaning up!) the dataset.
-- [alpindale](https://huggingface.co/alpindale) for the training.
-- [kalomaze](https://huggingface.co/kalomaze) for helping with the hyperparameter tuning.
-- Various other people for their continued help as we tuned the parameters and restarted failed runs. In no particular order: [Doctor Shotgun](https://huggingface.co/Doctor-Shotgun), [Lucy](https://huggingface.co/lucyknada), [Nopm](https://huggingface.co/nopm), [Mango](https://huggingface.co/MangoMango69420), and the rest of the Anthracite team.
-
-And last but not least, we'd like to thank [Kearm](https://twitter.com/Nottlespike) for sponsoring the compute needed to train this model.
+This model has been a team effort, and credit goes to all members of Anthracite.
+
+We'd also like to thank [Kearm](https://twitter.com/Nottlespike) for sponsoring the compute needed to train this model.
 
 ## Training
 The training was done with 55 million tokens of high-quality RP data, over 1.5 epochs. We used 8x [AMD Instinct™ MI300X Accelerators](https://www.amd.com/en/products/accelerators/instinct/mi300/mi300x.html) for the full-parameter fine-tuning of the model.