Kquant03 committed
Commit ad35569 (1 parent: 3d39ae7)

Update README.md

Files changed (1): README.md (+4 −0)
README.md CHANGED
@@ -19,8 +19,12 @@ Mixture of Experts enable models to be pretrained with far less compute, which m
 So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements:
 
 Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of “experts” (e.g. 32 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs!
+
 A gate network or router, that determines which tokens are sent to which expert. For example, in the image below, the token “More” is sent to the second expert, and the token “Parameters” is sent to the first network. As we’ll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network.
 
+
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/up_I0R2TQGjqTShZp_1Sz.png)
+
 Switch Layer
 MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961)
 
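The hunk above adds an illustration but describes the two MoE ingredients only in prose. Below is a minimal, illustrative PyTorch sketch of how a sparse MoE layer with a learned top-k router fits together; the class names (`Expert`, `SparseMoELayer`), the GELU FFN shape, and the `top_k=2` default are assumptions for the example, not the actual implementation behind this model.

```python
# Illustrative sketch of a sparse MoE layer with a top-k router
# (an assumption-laden toy, not this repository's actual code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class Expert(nn.Module):
    """One expert: an ordinary feed-forward network (FFN), as in the text above."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_hidden),
            nn.GELU(),
            nn.Linear(d_hidden, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class SparseMoELayer(nn.Module):
    """Drop-in replacement for a dense FFN block: a gate network (router)
    sends each token to its top-k experts and mixes their outputs."""

    def __init__(self, d_model: int, d_hidden: int, num_experts: int = 32, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList([Expert(d_model, d_hidden) for _ in range(num_experts)])
        # The router is just a linear layer of learned parameters,
        # pretrained together with the rest of the network.
        self.router = nn.Linear(d_model, num_experts)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) -> flatten to a list of tokens
        tokens = x.reshape(-1, x.shape[-1])
        logits = self.router(tokens)                        # (num_tokens, num_experts)
        weights, indices = logits.topk(self.top_k, dim=-1)  # each token picks its top_k experts
        weights = F.softmax(weights, dim=-1)                # normalize the chosen gate scores

        out = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            token_ids, slot = (indices == e).nonzero(as_tuple=True)
            if token_ids.numel() == 0:
                continue                                    # this expert received no tokens
            gate = weights[token_ids, slot].unsqueeze(-1)   # (n_routed, 1)
            out[token_ids] += gate * expert(tokens[token_ids])
        return out.reshape_as(x)


# Example: 32 experts, but only 2 of them run for any given token.
moe = SparseMoELayer(d_model=512, d_hidden=2048, num_experts=32, top_k=2)
y = moe(torch.randn(2, 16, 512))  # output shape matches the input: (2, 16, 512)
```

Because only `top_k` of the 32 expert FFNs run for any given token, the layer's parameter count grows with the number of experts while the per-token compute stays close to that of a single dense FFN, which is the property the README's opening sentence about pretraining with far less compute refers to.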