This model was converted to GGUF format from [`huihui-ai/Huihui-MoE-24B-A8B-abliterated`](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/huihui-ai/Huihui-MoE-24B-A8B-abliterated) for more details on the model.
---
Huihui-MoE-24B-A8B-abliterated is a Mixture of Experts (MoE) language model developed by huihui.ai, built upon the huihui-ai/Qwen3-8B-abliterated base model. It enhances the standard Transformer architecture by replacing MLP layers with MoE layers, each containing 4 experts, to achieve high performance with efficient inference. The model is designed for natural language processing tasks, including text generation, question answering, and conversational applications.
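To illustrate the idea, here is a minimal, self-contained sketch of how an MoE layer of this shape routes a token: a router scores the experts, the top-k experts process the input, and their outputs are combined with the normalized router weights. The sizes, the top-k value, and all names here are illustrative assumptions, not the model's actual code.

```python
# Toy Mixture-of-Experts layer: router -> top-k experts -> weighted sum.
# HIDDEN and TOP_K are assumptions; NUM_EXPERTS = 4 follows the card.
import math
import random

random.seed(0)

HIDDEN = 8       # toy hidden size (the real model uses thousands)
NUM_EXPERTS = 4  # each MoE layer holds 4 experts, per the description
TOP_K = 2        # assumed number of active experts per token

# Each "expert" is reduced to a random linear map for the sketch.
experts = [[[random.gauss(0, 0.1) for _ in range(HIDDEN)]
            for _ in range(HIDDEN)] for _ in range(NUM_EXPERTS)]
router = [[random.gauss(0, 0.1) for _ in range(HIDDEN)]
          for _ in range(NUM_EXPERTS)]

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def softmax(xs):
    mx = max(xs)
    exps = [math.exp(x - mx) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_layer(x):
    scores = softmax(matvec(router, x))                   # router probabilities
    top = sorted(range(NUM_EXPERTS), key=lambda i: -scores[i])[:TOP_K]
    norm = sum(scores[i] for i in top)                    # renormalize over top-k
    out = [0.0] * HIDDEN
    for i in top:
        y = matvec(experts[i], x)                         # run a selected expert
        out = [o + scores[i] / norm * yi for o, yi in zip(out, y)]
    return out

y = moe_layer([1.0] * HIDDEN)
print(len(y))  # output keeps the same hidden size as the input
```

Only the top-k experts run per token, which is why an MoE model can hold 24B total parameters while activating roughly 8B per forward pass.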
This model combines four ablated models; perhaps it can match the performance of all of them.
This is just an experiment; merging different variants of models of the same type is another possibility worth exploring.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
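A minimal install-and-run sketch. The `--hf-repo`/`--hf-file` values below are placeholders, not this repo's actual identifiers; substitute this repository's name and the quant file you want.

```shell
# Install llama.cpp via Homebrew (macOS and Linux).
brew install llama.cpp

# Run inference directly from a Hugging Face GGUF repo.
# <user>/<this-gguf-repo> and <quant-file>.gguf are placeholders:
# point them at this repository and one of its GGUF files.
llama-cli --hf-repo <user>/<this-gguf-repo> \
  --hf-file <quant-file>.gguf \
  -p "The meaning to life and the universe is"
```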