Part of the **Pro models [pretrain]** collection (6 items): frozen-embedding LMs for English, Russian, and Chinese, for demonstration and comparison with a standard LM.
## Description

A 200M-parameter EN/ZH language model trained on a mixed EN/ZH corpus with frozen, visually-motivated, Unicode-based token embeddings. Released to demonstrate cross-lingual learning and generalization with a frozen embedding substrate.
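To make the frozen-embedding idea concrete, here is a minimal sketch, not the released pipeline: the 16×16 bitmaps, PIL's default font, and the toy ASCII vocabulary are all illustrative assumptions. Each token's embedding is a flattened rendering of its glyph and is never trained.

```python
# Minimal sketch of visually-motivated frozen embeddings (illustrative, not the
# construction used for the released checkpoints).
import numpy as np
import torch
import torch.nn as nn
from PIL import Image, ImageDraw, ImageFont

def glyph_vector(ch: str, size: int = 16) -> np.ndarray:
    """Render one character to a size x size grayscale bitmap and flatten it."""
    img = Image.new("L", (size, size), color=0)
    ImageDraw.Draw(img).text((0, 0), ch, fill=255, font=ImageFont.load_default())
    return np.asarray(img, dtype=np.float32).flatten() / 255.0

# Toy vocabulary; a real EN/RU/ZH setup needs a font covering the full Unicode range.
vocab = list("abcde?")
weights = torch.from_numpy(np.stack([glyph_vector(c) for c in vocab]))
embedding = nn.Embedding.from_pretrained(weights, freeze=True)  # frozen: never updated
print(embedding(torch.tensor([0])).shape)  # torch.Size([1, 256])
```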
## Training details

Evaluation (selected):
| Task | pro_bvv_zh |
|---|---|
| MMLU | 17.96% ± 0.25% |
| ARC-e | 21.74% ± 1.10% |
| ARC-c | 22.24% ± 1.14% |
| C-SENSE | 18.51% ± 0.76% |
| SQuAD | 5.59% ± 0.76% |
| BLEU [en-ru] | 2.82% ± 0.32% |
| BLEU [en-zh] | 1.32% ± 0.31% |
| BLEU [zh-en] | 4.65% ± 0.28% |
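The evaluation harness behind these numbers is not described in this card; as one hedged illustration, translation rows like the BLEU entries could be scored with `sacrebleu` (the hypothesis/reference strings below are placeholders):

```python
# Illustrative BLEU scoring with sacrebleu; strings are placeholders, not the
# actual en-ru/en-zh/zh-en evaluation data.
import sacrebleu

hypotheses = ["the cat sat on the mat"]            # model outputs, one per source sentence
references = [["the cat is sitting on the mat"]]   # outer list = reference sets
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU: {bleu.score:.2f}")
```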
## ⚠️ Limitations

Research use only. Trained on a small data subset; quality, robustness, and reasoning are far below SOTA models. SFT was only lightly applied. Not intended for real-world use.
If you use this model or the underlying concepts in your research, please cite our work:
```bibtex
@misc{bochkov2025emergentsemanticstokenembeddings,
      title={Emergent Semantics Beyond Token Embeddings: Transformer LMs with Frozen Visual Unicode Representations},
      author={A. Bochkov},
      year={2025},
      eprint={2507.04886},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.04886},
}

@misc{bochkov2025growingtransformersmodularcomposition,
      title={Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate},
      author={A. Bochkov},
      year={2025},
      eprint={2507.07129},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2507.07129},
}
```
This work demonstrates that transformer blocks, not token embeddings, carry the semantic burden in LLMs: a step toward modular, fusable, multilingual LMs.
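As an illustration of the frozen-substrate setup (a sketch, not the actual training script, and assuming the custom model class exposes the standard `get_input_embeddings()` hook), the embedding table can be excluded from optimization so that only the transformer blocks learn:

```python
# Illustration only: freeze the token-embedding table so gradients flow only
# through the transformer blocks.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Bochkov/pro_bvv_zh", trust_remote_code=True)
model.get_input_embeddings().weight.requires_grad_(False)  # frozen substrate

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters (embeddings excluded): {trainable:,}")
```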
## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'

tokenizer = AutoTokenizer.from_pretrained('Bochkov/pro_bvv_zh')
model = AutoModelForCausalLM.from_pretrained('Bochkov/pro_bvv_zh', trust_remote_code=True).to(device)

# Encode a prompt and generate a continuation.
inputs = torch.tensor([tokenizer.encode("Example input: ")], device=device)
outputs = model.generate(inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```
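For less deterministic output, the standard sampling arguments of `generate` can be used; the values below are illustrative and continue from the snippet above:

```python
# Sampling instead of greedy decoding; temperature/top_p are illustrative values.
outputs = model.generate(
    inputs,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```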