# Bochkov/max_bvv_zh
Research demo: Chinese Causal Language Model with Frozen, Visual/Unicode Token Embeddings
- Model size: 0.4B parameters
- Tokenizer: custom, Unicode-based; compatible with max_bvv_ru and max_bvv_moe
- Goal: show the viability of frozen, non-semantic embeddings for LLMs (Chinese variant)
## Model description
- Frozen token embeddings derived from glyph/visual/Unicode statistics; they are not trained on text.
- All transformer & output layers are trained; embeddings remain fixed.
- Enables straightforward fusion with other models sharing these embeddings.
This is a proof-of-concept checkpoint. Performance is limited by training data and model size.
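
As a rough illustration (not the authors' training code), freezing the embedding table while training everything else can look like the sketch below. It assumes the checkpoint exposes a standard input-embedding module via `get_input_embeddings()`; the optimizer and learning rate are placeholders.

```python
# Minimal sketch, assuming a standard Transformers embedding layout;
# hyperparameters are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained('Bochkov/max_bvv_zh', trust_remote_code=True)

# Keep the visual/Unicode embedding table fixed during training.
model.get_input_embeddings().weight.requires_grad = False

# Pass only the still-trainable parameters (transformer and output layers)
# to the optimizer.
trainable_params = (p for p in model.parameters() if p.requires_grad)
optimizer = torch.optim.AdamW(trainable_params, lr=1e-4)
```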
## Evaluation
- Avg. MMLU: 24.72%
- SQuAD: 17.66%
- ARC-e: 21.93%
- BLEU (en-zh): 2.36%
These metrics are expectedly low: the model is intended for research and demonstration only.
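
For readers who want to reproduce a BLEU-style number for their own prompts, a hedged sketch using sacrebleu (not the authors' evaluation pipeline; the sentences below are placeholders) might look like this:

```python
# Illustrative only: scoring model outputs against references with sacrebleu's
# built-in Chinese tokenizer. Hypotheses and references here are placeholders.
import sacrebleu

hypotheses = ["你好，世界！"]       # model-generated translations
references = [["你好，世界！"]]     # one reference stream, aligned with hypotheses
bleu = sacrebleu.corpus_bleu(hypotheses, references, tokenize="zh")
print(f"BLEU: {bleu.score:.2f}")
```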
## ⚠️ Limitations

Research use only. Trained on a small, non-exhaustive Chinese subset. Quality, robustness, and reasoning are well below SOTA models. SFT was only lightly applied; the model is not intended for real-world use.
## Citation & Concept
If you use this model or the underlying concepts in your research, please cite our work:
```bibtex
@misc{bochkov2025emergentsemanticstokenembeddings,
      title={Emergent Semantics Beyond Token Embeddings: Transformer LMs with Frozen Visual Unicode Representations},
      author={A. Bochkov},
      year={2025},
      eprint={2507.04886},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.04886},
}

@misc{bochkov2025growingtransformersmodularcomposition,
      title={Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate},
      author={A. Bochkov},
      year={2025},
      eprint={2507.07129},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2507.07129},
}
```
This work demonstrates that transformer blocks, not token embeddings, carry the semantic burden in LLMs, a step toward modular, fusable, multilingual LMs.
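
One practical consequence, sketched below under the assumption that the sibling checkpoints (max_bvv_ru, max_bvv_moe) expose the same embedding layout, is that the shared frozen embedding table can be verified directly before any model fusion:

```python
# Hypothetical compatibility check: two checkpoints built on the same frozen
# Unicode embedding table should have identical input-embedding weights.
import torch
from transformers import AutoModelForCausalLM

zh = AutoModelForCausalLM.from_pretrained('Bochkov/max_bvv_zh', trust_remote_code=True)
ru = AutoModelForCausalLM.from_pretrained('Bochkov/max_bvv_ru', trust_remote_code=True)

shared = torch.equal(zh.get_input_embeddings().weight,
                     ru.get_input_embeddings().weight)
print(f"Identical frozen embedding tables: {shared}")
```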
## Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the checkpoint and its custom Unicode tokenizer
# (trust_remote_code is required for the custom model/tokenizer classes).
# Assumes a CUDA GPU; switch 'cuda' to 'cpu' if none is available.
model = AutoModelForCausalLM.from_pretrained('Bochkov/max_bvv_zh', trust_remote_code=True).to('cuda')
tokenizer = AutoTokenizer.from_pretrained('Bochkov/max_bvv_zh', trust_remote_code=True)

# Encode a Chinese prompt ("Hello, world!") and sample a short continuation.
inputs = tokenizer.encode("你好世界！", return_tensors="pt").to('cuda')
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.8, top_k=50, top_p=0.95, do_sample=True)
print(tokenizer.decode(outputs[0].tolist()))
```