aquiffoo committed (verified) · Commit c42df27 · 1 Parent(s): 046a910

Create README.md

---
license: apache-2.0
datasets:
- HuggingFaceFW/fineweb-edu
- HuggingFaceH4/MATH-500
- openai/gsm8k
language:
- en
pipeline_tag: text-generation
tags:
- mesh
- moe
- mesh-labs
- alpha
- preview
- research
- experiment
- routing
- innovative
- innovation
- mesh-moe
- custom_code
---

# Mesh-v0.1-2x2 (Stage 003)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6747320df82ae35f0327cdd3/2JPwH3coASgEc4vJvJVRt.png)

## Introducing mesh

This is our first ever model! Allow us to explain in detail how the `mesh` architecture works.

- Neural Mesh extends the Mixture of Experts concept by allowing bidirectional communication between experts.
- The experts are arranged in a two-dimensional grid (2x2, 4x4, etc.), which lets each expert communicate with its neighbors via the "Neighbor Exchange" method.
- Like MoE models, Mesh models use dynamic routing, and the `routing_k` parameter controls the number of active parameters. For this model (2x2):
  - top-1 routing: 173M active parameters
  - top-2 routing: 242M active parameters (default)
  - dense routing: 302M active parameters

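The routing and neighbor-exchange ideas above can be sketched in a few lines of toy code. This is only an illustration under our own assumptions, not the model's actual implementation: `mesh_layer`, `neighbors`, the linear "experts", and the averaging form of the exchange are all hypothetical stand-ins; only the `routing_k` parameter and the 2x2 grid come from this card.

```python
# Toy sketch of a 2x2 mesh layer with top-k routing and "Neighbor Exchange".
# All names and shapes here are illustrative assumptions, not the real model.
import numpy as np

rng = np.random.default_rng(0)
GRID = 2                    # 2x2 mesh -> 4 experts
D = 8                       # toy hidden size

# Each expert is a toy linear map; a shared router scores all experts.
experts = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(GRID * GRID)]
router_w = rng.standard_normal((D, GRID * GRID)) / np.sqrt(D)

def neighbors(idx):
    """Up/down/left/right neighbors of expert idx in the GRID x GRID mesh."""
    r, c = divmod(idx, GRID)
    out = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < GRID and 0 <= nc < GRID:
            out.append(nr * GRID + nc)
    return out

def mesh_layer(x, routing_k=2):
    """Route token x to its top-k experts, then mix in neighbor outputs."""
    scores = x @ router_w
    top = np.argsort(scores)[-routing_k:]            # active experts
    gates = np.exp(scores[top]) / np.exp(scores[top]).sum()
    outs = {i: experts[i] @ x for i in top}
    y = np.zeros(D)
    for g, i in zip(gates, top):
        # Neighbor Exchange (assumed form): an active expert averages its
        # output with those of grid neighbors that are also active.
        acc, n = outs[i].copy(), 1
        for j in neighbors(i):
            if j in outs:
                acc += outs[j]
                n += 1
        y += g * (acc / n)
    return y

x = rng.standard_normal(D)
y = mesh_layer(x, routing_k=2)   # routing_k=4 would be dense routing
```

With `routing_k=1` no exchange happens (a single active expert has no active neighbors), while `routing_k=4` activates the whole grid, mirroring how the active-parameter count above grows from top-1 to dense routing.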
## How the mesh architecture works
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6747320df82ae35f0327cdd3/WRpS2T5KBMPbacobfh0bw.png)

## Disclaimer
This small language model is only a proof of concept, paving the way for the final release, which is likely to happen in Q4 2025 and to include more models as well as better support from external libraries such as Transformers and llama.cpp.