# My Tokenizer

This tokenizer was trained with the BPE (byte-pair encoding) algorithm and supports mixed Chinese and English text.
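
For context, the sketch below shows one way a tokenizer like this could be trained with the `tokenizers` library. The corpus file, vocabulary size, and special tokens are illustrative assumptions, not this model's actual training configuration.

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Byte-level pre-tokenization operates on raw bytes, so mixed
# Chinese/English text needs no language-specific word splitter.
tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)

# Hypothetical corpus and vocabulary size; adjust to your data.
trainer = trainers.BpeTrainer(
    vocab_size=30000,
    special_tokens=["[UNK]", "[PAD]", "[CLS]", "[SEP]"],
)
tokenizer.train(files=["corpus.txt"], trainer=trainer)
tokenizer.save("my-tokenizer.json")
```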

## How to use

```python
from tokenizers import Tokenizer

# Load the tokenizer from the Hugging Face Hub
# (replace "your-username" with the actual namespace)
tokenizer = Tokenizer.from_pretrained("your-username/my-tokenizer")

# Tokenization example
text = "Hello, world!"
output = tokenizer.encode(text)
print("Tokens:", output.tokens)
```