Bochkov/max_bvv_moe

Research demo: Multilingual Frozen-Embedding Mixture-of-Experts (MoE) Transformer (RU+ZH)

Model size: 0.8B parameters
MoE fusion: merges two independently trained LMs (max_bvv_ru, max_bvv_zh) via shared, frozen, Unicode-derived token embeddings.


πŸ“ Model description

  • First demonstration of practical MoE fusion for language models via shared, frozen, non-semantic glyph/visual-based token embeddings.
  • Each expert is trained separately on the same fixed embeddings, then seamlessly fused, with no retraining of embeddings and no catastrophic forgetting.

This is a research model illustrating a new family of fusable, modular LMs.
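
The fusion can be pictured as two expert transformer stacks reading the same frozen embedding table, with a lightweight gate mixing their hidden states before a shared LM head. The PyTorch sketch below is purely illustrative: the class name, the per-token softmax router, and the shared output head are assumptions made for exposition, not the released max_bvv_moe architecture.

import torch
import torch.nn as nn

class FrozenEmbeddingMoE(nn.Module):
    """Conceptual sketch: two pre-trained experts fused over one shared, frozen embedding."""
    def __init__(self, frozen_embedding: nn.Embedding, expert_ru: nn.Module,
                 expert_zh: nn.Module, d_model: int, vocab_size: int):
        super().__init__()
        self.embed = frozen_embedding
        self.embed.weight.requires_grad = False               # embeddings are never trained
        self.experts = nn.ModuleList([expert_ru, expert_zh])  # independently trained stacks
        self.router = nn.Linear(d_model, len(self.experts))   # per-token gate
        self.lm_head = nn.Linear(d_model, vocab_size, bias=False)

    def forward(self, input_ids):
        x = self.embed(input_ids)                         # (B, T, d_model), frozen lookup
        gates = torch.softmax(self.router(x), dim=-1)     # (B, T, n_experts)
        # Each expert is a full transformer stack trained separately on the same embeddings
        outs = torch.stack([expert(x) for expert in self.experts], dim=-1)  # (B, T, d_model, n_experts)
        fused = (outs * gates.unsqueeze(2)).sum(dim=-1)   # gate-weighted mixture of expert states
        return self.lm_head(fused)                        # next-token logits

Because both experts were trained against the identical frozen embedding matrix, their hidden states start from compatible input representations, which is the property the fusion relies on to combine them without retraining either expert.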


🏹 Evaluation

  • Avg. MMLU: 22.37%
  • SQuAD: 18.40%
  • ARC-e: 21.39%
  • BLEU (en-ru): 5.02%
  • BLEU (en-zh): 1.34%

Metrics are well below SOTA; the model is intended for research and concept demonstration, not state-of-the-art performance.

🚩 Why is this important? This model shows:

  • Frozen, visual/Unicode-based embeddings allow seamless fusion of LMs in MoE style.
  • There is no performance drop from the MoE fusion itself.
  • All semantics are learned above the embedding layer (a glyph-embedding sketch follows below).
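
As a rough illustration of what "non-semantic, glyph/visual-based" embeddings mean, the sketch below derives a fixed vector for a character by rendering its glyph to a small bitmap and flattening the pixels. This is not the released embedding pipeline; the function name, bitmap size, and default font are assumptions, and a real multilingual setup would need a font that covers Cyrillic and CJK glyphs.

import numpy as np
from PIL import Image, ImageDraw, ImageFont

def glyph_embedding(char: str, size: int = 16) -> np.ndarray:
    """Hypothetical helper: render one character to a size x size bitmap and flatten it."""
    img = Image.new("L", (size, size), color=0)           # blank grayscale canvas
    draw = ImageDraw.Draw(img)
    # A real setup would use ImageFont.truetype() with a font covering Cyrillic and CJK glyphs
    draw.text((0, 0), char, fill=255, font=ImageFont.load_default())
    vec = np.asarray(img, dtype=np.float32).reshape(-1)   # size*size pixel vector
    return vec / (np.linalg.norm(vec) + 1e-8)             # normalized; never updated by training

# The embedding is a pure function of how the character looks, not of usage statistics,
# so it can be shared and kept frozen across independently trained models.
print(glyph_embedding("A").shape)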

⚠️ Limitations

  • Research use only.
  • Trained on a small, non-exhaustive Russian + Chinese subset.
  • Quality, robustness, and reasoning are much lower than in SOTA models.
  • SFT was only lightly applied; not intended for real-world use.

πŸ§‘β€πŸ”¬ Citation & Concept

If you use this model or the underlying concepts in your research, please cite our work:

@misc{bochkov2025emergentsemanticstokenembeddings,
      title={Emergent Semantics Beyond Token Embeddings: Transformer LMs with Frozen Visual Unicode Representations}, 
      author={A. Bochkov},
      year={2025},
      eprint={2507.04886},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.04886}, 
}

@misc{bochkov2025growingtransformersmodularcomposition,
      title={Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate}, 
      author={A. Bochkov},
      year={2025},
      eprint={2507.07129},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2507.07129}, 
}

This work demonstrates that transformer blocks, not token embeddings, carry the semantic burden in LLMs: a step toward modular, fusable, multilingual LMs.

🔗 Related Models

  • Bochkov/max_bvv_ru
  • Bochkov/max_bvv_zh

🧪 Example

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Use a GPU when available; the model also runs (slowly) on CPU
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# trust_remote_code=True is required: the repository ships custom model code
model = AutoModelForCausalLM.from_pretrained('Bochkov/max_bvv_moe', trust_remote_code=True).to(device)
tokenizer = AutoTokenizer.from_pretrained('Bochkov/max_bvv_moe', trust_remote_code=True)

# Prompt in Russian or Chinese, encoded and moved to the same device as the model
inputs = tokenizer.encode("Example sentence in Russian or Chinese", return_tensors="pt").to(device)

# Sampled generation; adjust temperature / top_k / top_p as needed
outputs = model.generate(inputs, max_new_tokens=100, temperature=0.8, top_k=50, top_p=0.95, do_sample=True)
print(tokenizer.decode(outputs[0].tolist()))