Text Generation
GGUF
creative
creative writing
fiction writing
plot generation
sub-plot generation
story generation
scene continue
storytelling
fiction story
science fiction
romance
all genres
story
writing
vivid prosing
vivid writing
fiction
roleplaying
bfloat16
role play
128k context
llama3.2
llama-3
llama-3.2
imatrix
conversational
Update README.md
README.md CHANGED
@@ -137,6 +137,8 @@ IQ4XS: Due to the unusual nature of this quant (mixture/processing), generations
 
 You may want to try it / compare it to other quant(s) output.
 
+3 "ARM" quants are also uploaded for devices that can run them.
+
 Special note on Q2k/Q3 quants:
 
 You may need to use temp 2 or lower with these quants (1 or lower for q2k). Just too much compression at this level, damaging the model. I will see if Imatrix versions
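The lower temperature recommended for the Q2k/Q3 quants has to be set at generation time in whatever runtime loads the GGUF file. A minimal sketch using llama-cpp-python is shown below; the package choice, the model filename, and the parameter values are illustrative assumptions, not something taken from this repo.

```python
# Sketch: loading a heavily compressed quant and capping temperature,
# following the Q2k/Q3 guidance above. Filename and settings are hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="model-Q2_K.gguf",  # hypothetical path to a Q2k quant of this model
    n_ctx=8192,                    # modest context window; the model card advertises 128k
)

out = llm(
    "Continue the scene: the storm finally broke over the harbor.",
    max_tokens=256,
    temperature=1.0,  # Q2k: temp 1 or lower; Q3: temp 2 or lower, per the note above
)
print(out["choices"][0]["text"])
```

The same temperature cap applies regardless of the runtime (llama.cpp CLI, LM Studio, etc.); only the way the setting is exposed differs.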