Gives me the following error

#3 by ElvisM - opened

Traceback (most recent call last):
  File "D:\Chatbot\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 442, in load_state_dict
    return torch.load(checkpoint_file, map_location="cpu")
  File "D:\Chatbot\installer_files\env\lib\site-packages\torch\serialization.py", line 791, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "D:\Chatbot\installer_files\env\lib\site-packages\torch\serialization.py", line 271, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "D:\Chatbot\installer_files\env\lib\site-packages\torch\serialization.py", line 252, in __init__
    super().__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'models\TheBloke_wizardLM-7B-GPTQ\pytorch_model-00001-of-00003.bin'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Chatbot\text-generation-webui\server.py", line 101, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "D:\Chatbot\text-generation-webui\modules\models.py", line 207, in load_model
    model = LoaderClass.from_pretrained(checkpoint, **params)
  File "D:\Chatbot\installer_files\env\lib\site-packages\transformers\models\auto\auto_factory.py", line 471, in from_pretrained
    return model_class.from_pretrained(
  File "D:\Chatbot\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 2795, in from_pretrained
    ) = cls._load_pretrained_model(
  File "D:\Chatbot\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 3109, in _load_pretrained_model
    state_dict = load_state_dict(shard_file)
  File "D:\Chatbot\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 445, in load_state_dict
    with open(checkpoint_file) as f:
FileNotFoundError: [Errno 2] No such file or directory: 'models\TheBloke_wizardLM-7B-GPTQ\pytorch_model-00001-of-00003.bin'

Firstly, please delete the file pytorch_model.bin.index.json. That file shouldn't have been there and I've deleted it from the repo now. It's what caused the error above: transformers saw that index and tried to load the pytorch_model-*.bin shards it lists, which don't exist in this repo.

However, the fact that you're getting this message suggests you're not loading the model with the GPTQ-for-LLaMa code.

Is this text-generation-webui? If not, it probably won't be able to load this model. These models are GPTQ-quantised and require GPTQ-for-LLaMa's llama_inference.py to read them.

That's available as a plugin for text-generation-webui but I don't know if other UIs support it yet.
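
For anyone wanting to test outside a UI, a standalone GPTQ-for-LLaMa invocation looks roughly like this (the checkpoint filename is illustrative; match it to whatever .safetensors or .pt file is actually in the model folder):

cd GPTQ-for-LLaMa
python llama_inference.py models/wizardLM-7B-GPTQ --wbits 4 --groupsize 128 --load models/wizardLM-7B-GPTQ/wizardLM-7B-GPTQ-4bit-128g.safetensors --text "Hello"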

I deleted the file and now got a different error when trying to load the model (and yes, I'm using the text-generation-webui):

Traceback (most recent call last):
  File "D:\Chatbot\text-generation-webui\server.py", line 101, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(shared.model_name)
  File "D:\Chatbot\text-generation-webui\modules\models.py", line 207, in load_model
    model = LoaderClass.from_pretrained(checkpoint, **params)
  File "D:\Chatbot\installer_files\env\lib\site-packages\transformers\models\auto\auto_factory.py", line 471, in from_pretrained
    return model_class.from_pretrained(
  File "D:\Chatbot\installer_files\env\lib\site-packages\transformers\modeling_utils.py", line 2405, in from_pretrained
    raise EnvironmentError(
OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory models\TheBloke_wizardLM-7B-GPTQ.

In that case you're not running text-gen-ui with the right command-line arguments. From the README:

cd text-generation-webui
python server.py --model wizardLM-7B-GPTQ --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
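
For context, that OSError happens because without those flags, transformers looks for a standard checkpoint (pytorch_model.bin etc.), which a GPTQ repo doesn't contain. The model folder should hold only the quantised checkpoint plus the config/tokenizer files, something like this (exact checkpoint filename varies by repo):

dir models\TheBloke_wizardLM-7B-GPTQ
  config.json
  special_tokens_map.json
  tokenizer.model
  tokenizer_config.json
  wizardLM-7B-GPTQ-4bit-128g.safetensors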

That did it, thanks. But the model seems to be running slowly on my RTX 2060 (6GB VRAM).

Yeah, I've had reports of this. It seems to run slowly for people using ooba's fork of GPTQ-for-LLaMa.

I'm making a new model to see if that's quicker. I'll ping you when it's ready to try.

Thanks!

Performance issues are resolved now. Re-download config.json.
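
If you're not sure how, one way to fetch just that file is via the standard Hugging Face resolve URL (adjust the local path to your install):

curl -L -o models\TheBloke_wizardLM-7B-GPTQ\config.json https://huggingface.co/TheBloke/wizardLM-7B-GPTQ/resolve/main/config.json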

TheBloke changed discussion status to closed
