Issue with tokenizer_config.json
#140 · opened by LucaF
Hello!
In tokenizer_config.json, there is a very large value for model_max_length. I suppose it might be a typo?
"bos_token": "<|startoftext|>",
"clean_up_tokenization_spaces": false,
"eos_token": "<|return|>",
"extra_special_tokens": {},
"model_input_names": [
"input_ids",
"attention_mask"
],
"model_max_length": 1000000000000000019884624838656,
"pad_token": "<|endoftext|>",
"tokenizer_class": "PreTrainedTokenizerFast"
}