---
license: apache-2.0
datasets:
- HuggingFaceH4/ultrachat_200k
language:
- en
pipeline_tag: text-generation
tags:
- mesh
- moe
- mesh-labs
- alpha
- preview
- research
- experiment
- routing
- innovative
- innovation
- mesh-moe
---

# Mesh-v0.1-2x2 (Stage 001)
<small>Currently, the model is only capable of generating gibberish. This will be fixed before the final release.</small>


## Introducing mesh

This is our first ever model! Allow us to explain how the `mesh` architecture works in detail.
- Neural Mesh extends the Mixture of Experts concept by allowing bidirectional expert communication.
- The experts are arranged in a two-dimensional grid (2x2, 4x4, etc.) that lets them communicate with their neighbors using the "Neighbor Exchange" method.
- Just like MoE models, Mesh models use dynamic routing, and the `routing_k` parameter controls the number of active parameters. For this model (2x2):
  - top-1 routing: 173M active parameters
  - top-2 routing: 242M active parameters (default)
  - dense routing: 302M active parameters
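The actual routing code is not published in this card, so purely as an illustration of the two ideas above (top-k routing plus a neighbor-exchange step on a 2x2 expert grid), here is a toy NumPy sketch. Every name in it (`topk_route`, `mesh_forward`, the random router scores, the scalar experts) is hypothetical and does not reflect the model's real implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def topk_route(router_logits, k):
    """Pick the k highest-scoring experts and softmax-normalize their weights."""
    idx = np.argsort(router_logits)[-k:]                 # indices of the top-k experts
    w = np.exp(router_logits[idx] - router_logits[idx].max())
    return idx, w / w.sum()

def mesh_forward(x, experts, grid=(2, 2), routing_k=2):
    """Toy Mesh layer: route to top-k experts, then let each selected expert
    average in the outputs of its grid neighbors ("Neighbor Exchange")."""
    rows, cols = grid
    logits = rng.standard_normal(rows * cols)            # stand-in router scores
    idx, w = topk_route(logits, routing_k)
    outs = [expert(x) for expert in experts]             # every expert's raw output
    y = np.zeros_like(x)
    for i, wi in zip(idx, w):
        r, c = divmod(i, cols)
        neigh = [i]                                      # the expert itself...
        if r > 0:        neigh.append(i - cols)          # ...plus its grid neighbors
        if r < rows - 1: neigh.append(i + cols)
        if c > 0:        neigh.append(i - 1)
        if c < cols - 1: neigh.append(i + 1)
        mixed = np.mean([outs[j] for j in neigh], axis=0)  # neighbor exchange
        y += wi * mixed                                  # weighted top-k combination
    return y

# Four stand-in "experts" that just scale their input.
experts = [lambda x, s=s: x * s for s in (1.0, 2.0, 3.0, 4.0)]
y = mesh_forward(np.ones(4), experts, routing_k=2)
```

With `routing_k=1` only one expert (and its neighbors' exchanged outputs) contributes per token, while `routing_k=4` on a 2x2 grid is the dense case — mirroring the active-parameter counts listed above.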

## Here's how the mesh architecture works


## Disclaimer
This small language model is just a proof of concept, paving the way to the final release, which is likely to happen in Q4 2025 and to include more models and better support from external libraries such as Transformers and Llama.cpp.