Delta-Vector committed
Commit f02384b
1 Parent(s): b33bea5

Update README.md

Files changed (1)
README.md +1 -3
README.md CHANGED
@@ -18,10 +18,9 @@ tags:
 
 
 
-![image/png](https://huggingface.co/Edens-Gate/Testing123/resolve/main/oie_gM9EsNXjMDsT.jpg?download=true)
 A model made to continue off my previous work on [Magnum 4B](https://huggingface.co/anthracite-org/magnum-v2-4b), A small model made for creative writing / General assistant tasks, finetuned ontop of [IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml](https://huggingface.co/IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml), this model is made to be more coherent and generally be better then the 4B at both writing and assistant tasks.
 
-# Quants (Thanks Lucy <3)
+# Quants
 
 GGUF: https://huggingface.co/NewEden/Holland-4B-gguf
 
@@ -166,7 +165,6 @@ special_tokens:
 - [Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned)
 - [lodrick-the-lafted/OpusStories](https://huggingface.co/datasets/lodrick-the-lafted/OpusStories)
 
-I couldn't have made this model without the help of [Kubernetes_bad](https://huggingface.co/kubernetes-bad) and the support of [Lucy Knada](https://huggingface.co/lucyknada)
 
 ## Training
 The training was done for 2 epochs. We used 2 x [RTX 6000s](https://store.nvidia.com/en-us/nvidia-rtx/products/nvidia-rtx-6000-ada-generation/) GPUs graciously provided by [Kubernetes_Bad](https://huggingface.co/kubernetes-bad) for the full-parameter fine-tuning of the model.
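The card links only the GGUF quants and never names the full-precision repo, so for readers who want to try the model directly, here is a minimal inference sketch. It assumes the safetensors weights live at `NewEden/Holland-4B` alongside the linked GGUF repo (not confirmed by the card) and that the tokenizer carries the ChatML chat template inherited from the Minitron ChatML base.

```python
# Minimal inference sketch. The repo id below is an assumption inferred from the
# GGUF link (NewEden/Holland-4B-gguf); swap in the actual repo if it differs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NewEden/Holland-4B"  # assumed, not stated in the card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# The base is a ChatML-tokenized Minitron, so the chat template should render
# <|im_start|>/<|im_end|> turns for us.
messages = [
    {"role": "system", "content": "You are a creative writing assistant."},
    {"role": "user", "content": "Open a short story set in a lighthouse during a storm."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For lighter-weight local use, the GGUF files linked under # Quants can be run in any llama.cpp-compatible runtime instead.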