When will the error get resolved? Can't load tokenizer using from_pretrained, please update its configuration

#5
by MukeshSharma - opened

Can't load tokenizer using from_pretrained, please update its configuration: Can't load tokenizer for 'hivemind/gpt-j-6B-8bit'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'hivemind/gpt-j-6B-8bit' is the correct path to a directory containing all relevant files for a GPT2TokenizerFast tokenizer.

Please have a look, team, and fix this as soon as possible.

Please load the tokenizer from EleutherAI/gpt-j-6B, as it is identical.
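A minimal sketch of that suggestion, assuming a standard transformers install: fetch the tokenizer from the original EleutherAI repo instead of this 8-bit checkpoint.

```python
from transformers import AutoTokenizer

# The 8-bit repo does not ship tokenizer files; the GPT-J tokenizer from
# EleutherAI/gpt-j-6B is identical, so load it from there instead.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
```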

This code was superseded by the [load_in_8bit=True feature in transformers](https://github.com/huggingface/transformers/pull/17901)
by Younes Belkada and Tim Dettmers. Please see this usage example.
This legacy model was built for transformers v4.15.0 and PyTorch 1.11. Newer versions could work, but are not supported.
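For reference, a sketch of the newer built-in 8-bit path mentioned above; it assumes a recent transformers release with bitsandbytes installed and a CUDA GPU available, and the argument names reflect the API around the time of that PR.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load GPT-J with transformers' own int8 quantization instead of this
# legacy 8-bit checkpoint.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    load_in_8bit=True,   # quantize weights to int8 at load time
    device_map="auto",   # place layers on the available GPU(s)
)
```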
