Update README.md
## Credits

Thank you to [Lucy Knada](https://huggingface.co/lucyknada), [CelineDion](https://huggingface.co/CelineDion), [Intervitens](https://huggingface.co/intervitens), [Kalomaze](https://huggingface.co/kalomaze), [Kubernetes Bad](https://huggingface.co/kubernetes-bad), and the rest of [Anthracite](https://huggingface.co/anthracite-org).

## Training

The training was done for 2 epochs. We used 4 x [RTX 3090](https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3090-3090ti/) GPUs, graciously provided by [Intervitens](https://huggingface.co/intervitens), for the full-parameter fine-tuning of the model, after which DPO tuning was performed on 1 x [Nvidia T4 GPU](https://www.nvidia.com/en-us/data-center/tesla-t4/).
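The DPO stage described above can be sketched with TRL's `DPOTrainer`. This is a minimal illustration, not the actual training script: the model path, dataset file, and all hyperparameter values below are placeholder assumptions, and the exact `DPOTrainer` keyword names vary between TRL versions.

```python
# Hypothetical sketch of the DPO tuning stage using TRL.
# Paths, dataset, and hyperparameters are placeholders, not the real ones.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Start from the full-parameter fine-tuned checkpoint (placeholder path).
model = AutoModelForCausalLM.from_pretrained("path/to/sft-model")
tokenizer = AutoTokenizer.from_pretrained("path/to/sft-model")

# Preference data with "prompt", "chosen", and "rejected" columns.
dataset = load_dataset("json", data_files="preferences.jsonl", split="train")

args = DPOConfig(
    output_dir="dpo-out",
    per_device_train_batch_size=1,   # small batch to fit a single T4
    gradient_accumulation_steps=8,
    beta=0.1,                        # strength of the DPO KL penalty
)

trainer = DPOTrainer(
    model=model,                # reference model is cloned internally if omitted
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer, # older TRL versions call this `tokenizer`
)
trainer.train()
```

Keeping the batch size at 1 with gradient accumulation is one way the preference-tuning pass could fit in a T4's 16 GB of memory.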