parameters guide
samplers guide
model generation
role play settings
quant selection
arm quants
iq quants vs q quants
optimal model setting
gibberish fixes
coherence
instruction following
quality generation
chat settings
quality settings
llamacpp server
llamacpp
lmstudio
sillytavern
koboldcpp
backyard
ollama
model generation steering
steering
model generation fixes
text generation webui
ggufs
exl2
full precision
quants
imatrix
neo imatrix
Update README.md
README.md CHANGED
@@ -57,6 +57,13 @@ This doc also shows how to use "system prommpt/role" to change the operation of
 
 [ https://huggingface.co/DavidAU/How-To-Use-Reasoning-Thinking-Models-and-Create-Them ]
 
+<B>#3 - Mixture of Experts / MOE - Set/activate Experts: </B>
+
+This document covers how to adjust/set the number of experts in various AI/LLM apps, and includes links
+to MOE/Mixture of expert models - both GGUF and source.
+
+[ https://huggingface.co/DavidAU/How-To-Set-and-Manage-MOE-Mix-of-Experts-Model-Activation-of-Experts ]
+
 ---
 
 <H2>MAIN DOCUMENT:</H2>
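Note: as a rough illustration of the kind of setting the linked MOE document covers, many llama.cpp-based apps let you override the number of active experts when the model is loaded, via the GGUF "expert_used_count" metadata key. The sketch below is an assumption-laden example, not part of the commit: it assumes llama-cpp-python with kv_overrides support, a model whose architecture prefix is "llama" (Mixtral-style MOE), and a hypothetical local file path.

```python
# Minimal sketch, assuming llama-cpp-python exposes kv_overrides and the MOE
# GGUF uses the "llama" architecture prefix (other MOE architectures use a
# different prefix for expert_used_count).
from llama_cpp import Llama

llm = Llama(
    model_path="path/to/moe-model.Q4_K_M.gguf",  # hypothetical local GGUF file
    n_ctx=4096,
    # Route each token through 4 experts instead of the model's default count.
    kv_overrides={"llama.expert_used_count": 4},
)

out = llm("Briefly explain what activating more experts changes.", max_tokens=96)
print(out["choices"][0]["text"])
```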