This model is part of the Nemo models [pretrain] collection.
Proof-of-concept: SOTA tokenizers can be reused with compatible precomputed embeddings; industry teams can repeat the approach with their own tokenizers.
This is nemo_bvv_zh, a Chinese-language GPT-style model trained with fully precomputed and frozen token embeddings derived from the Mistral/Nemo tokenizer (the embeddings are based on the visual appearance of tokens). It is designed specifically to demonstrate that SOTA tokenizers are compatible with the fixed-embedding paradigm.
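The core idea is that the token-embedding table is computed once, ahead of training, and never updated afterwards, so gradients flow only through the transformer blocks. Below is a minimal sketch of that paradigm; it is not the repository's actual training code, and the random matrix stands in for the precomputed appearance-based table described in the papers:

import torch
import torch.nn as nn

# Assumed sizes, for illustration only
vocab_size, d_model = 131072, 1024

# Stand-in for the precomputed embedding table (in the real setup this would be
# derived from the visual appearance of each token's Unicode string)
emb_matrix = torch.randn(vocab_size, d_model)

# freeze=True keeps the table fixed: no gradient updates reach the embeddings,
# so only the transformer blocks are trainable
embedding = nn.Embedding.from_pretrained(emb_matrix, freeze=True)
assert not embedding.weight.requires_grad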
If you use this model or the underlying concepts in your research, please cite our work:
@misc{bochkov2025emergentsemanticstokenembeddings,
  title={Emergent Semantics Beyond Token Embeddings: Transformer LMs with Frozen Visual Unicode Representations},
  author={A. Bochkov},
  year={2025},
  eprint={2507.04886},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2507.04886},
}

@misc{bochkov2025growingtransformersmodularcomposition,
  title={Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate},
  author={A. Bochkov},
  year={2025},
  eprint={2507.07129},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2507.07129},
}
This work demonstrates that transformer blocks, not token embeddings, carry the semantic burden in LLMs, a step toward modular, fusable, multilingual LMs.
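Example usage (the snippet below selects a GPU when available and falls back to CPU otherwise):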
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# trust_remote_code is required because the model uses a custom frozen-embedding architecture
device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = AutoTokenizer.from_pretrained('Bochkov/nemo_bvv_zh', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained('Bochkov/nemo_bvv_zh', trust_remote_code=True).to(device)

# Generate a continuation for a Chinese prompt ("Hello, world!")
inputs = tokenizer("你好，世界！", return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))