A 3x3B Mixture of Experts (MoE) experimental model that activates 2 experts per token. Special thanks to MoV.

PS: It's a decent model with fast inference. Maybe it's silly, but it's fun enough if you use it for message-style chat roleplay.
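
A minimal loading sketch, assuming the checkpoint works with the standard `transformers` causal-LM classes; the prompt and generation settings are placeholders, not recommended values.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DoppelReflEx/MoETest-3E2A-3x3B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",
)

prompt = "Hello, who are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```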

Model size: 7.83B params · Tensor type: BF16 (Safetensors)