Text Generation
Transformers
GGUF
English
Merge
programming
code generation
code
coding
coder
chat
brainstorm
qwen
qwen3
qwencoder
brainstorm20x
esper
esper-3
valiant
valiant-labs
qwen-3
qwen-3-8b
8b
reasoning
code-instruct
python
javascript
dev-ops
jenkins
terraform
scripting
powershell
azure
aws
gcp
cloud
problem-solving
architect
engineer
developer
creative
analytical
expert
rationality
conversational
instruct
llama-cpp
gguf-my-repo
This model was converted to GGUF format from [`DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-12B-Brainstorm20x`](https://huggingface.co/DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-12B-Brainstorm20x) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-12B-Brainstorm20x) for more details on the model.
---

This model contains Brainstorm 20x combined with ValiantLabs' 8B General / Coder instruct model:

https://huggingface.co/ValiantLabs/Qwen3-8B-Esper3

Information on the 8B model is below, followed by the Brainstorm 20x adapter (by DavidAU) and then a complete help section for running LLM / AI models.

The Brainstorm adapter improves code generation and adds unique code-solving abilities.

This model requires:

- Jinja (embedded) or ChatML template
- Max context of 40k.
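
For reference, a single-turn prompt in the ChatML format looks roughly like the sketch below. This is illustrative only; the embedded Jinja template produces the exact rendering automatically, so you normally do not need to build it by hand:

```
<|im_start|>user
Write a function that reverses a string.<|im_end|>
<|im_start|>assistant
```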

Settings used for testing (suggested):

- Temp .3 to .7
- Rep pen 1.05 to 1.1
- Top-p .8, min-p .05
- Top-k 20
- No system prompt.
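
As a concrete sketch (not a verified command), the suggested settings map onto llama.cpp's CLI sampling flags roughly as follows; the `.gguf` filename is a placeholder for whichever quant you downloaded:

```shell
# Sketch: suggested sampler settings as llama-cli flags.
# Replace ./model.gguf with the quant file you downloaded.
llama-cli -m ./model.gguf \
  -c 40960 \
  --temp 0.6 \
  --repeat-penalty 1.05 \
  --top-p 0.8 \
  --min-p 0.05 \
  --top-k 20 \
  -cnv
```

`-c 40960` keeps the session within the 40k max context, and `-cnv` starts an interactive chat without a system prompt, matching the suggestions above.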

FOR CODING:

Higher temps, .6 to .9 (even over 1), work better for more complex coding, especially with more restrictions.

This model responds well both to detailed instructions and to step-by-step refinement and additions to code.

As this is an instruct model, it will also benefit from a detailed system prompt.

---

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):
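
The install step above as a command:

```shell
# Install llama.cpp via Homebrew (Mac and Linux).
brew install llama.cpp
```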