Vocabulary size: 65103
Usage:

```python
from transformers import AutoTokenizer

tk = AutoTokenizer.from_pretrained("BEE-spoke-data/claude-tokenizer-forT5")
inputs = tk("here are some words", return_tensors="pt")
```
"post_processor": {
"type": "TemplateProcessing",
"single": [
{
"Sequence": {
"id": "A",
"type_id": 0
}
},
{
"SpecialToken": {
"id": "</s>",
"type_id": 0
}
}
],
"pair": [
{
"Sequence": {
"id": "A",
"type_id": 0
}
},
{
"SpecialToken": {
"id": "</s>",
"type_id": 0
}
},
{
"Sequence": {
"id": "B",
"type_id": 0
}
},
{
"SpecialToken": {
"id": "</s>",
"type_id": 0
}
}
],
"special_tokens": {
"</s>": {
"id": "</s>",
"ids": [
65001
],
"tokens": [
"</s>"
]
}
}
},
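In effect, this template simply appends the end-of-sequence token after every sequence. A minimal plain-Python sketch of that behavior (the id 65001 is taken from the `special_tokens` map above; the function name is illustrative, not part of any library):

```python
# Mimic the TemplateProcessing config: "$A </s>" for single inputs,
# "$A </s> $B </s>" for pairs. 65001 is the id of "</s>" in this tokenizer.
EOS_ID = 65001

def post_process(seq_a, seq_b=None):
    # Append </s> after the first sequence...
    ids = list(seq_a) + [EOS_ID]
    # ...and, for pairs, after the second sequence as well.
    if seq_b is not None:
        ids += list(seq_b) + [EOS_ID]
    return ids

print(post_process([10, 11]))        # [10, 11, 65001]
print(post_process([10, 11], [20]))  # [10, 11, 65001, 20, 65001]
```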