Getting model error

#9
by Harsh1301 - opened

I'm getting the error below when launching a model worker. Can anyone help me fix this? Thanks.

(llama-omni) Ubuntu@0008-dsm-prxmx30009:~/TestTwo/LLaMA-Omni$ python -m omni_speech.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path Llama-3.1-8B-Omni --model-name Llama-3.1-8B-Omni --s2s
2024-09-14 00:03:16 | INFO | model_worker | args: Namespace(host='0.0.0.0', port=40000, worker_address='http://localhost:40000', controller_address='http://localhost:10000', model_path='Llama-3.1-8B-Omni', model_base=None, model_name='Llama-3.1-8B-Omni', device='cuda', limit_model_concurrency=5, stream_interval=1, no_register=False, load_8bit=False, load_4bit=False, use_flash_attn=False, input_type='mel', mel_size=128, s2s=True, is_lora=False)
2024-09-14 00:03:16 | ERROR | stderr | Traceback (most recent call last):
2024-09-14 00:03:16 | ERROR | stderr | File "/home/Ubuntu/.conda/envs/llama-omni/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 304, in hf_raise_for_status
2024-09-14 00:03:16 | ERROR | stderr | response.raise_for_status()
2024-09-14 00:03:16 | ERROR | stderr | File "/home/Ubuntu/.conda/envs/llama-omni/lib/python3.10/site-packages/requests/models.py", line 1024, in raise_for_status
2024-09-14 00:03:16 | ERROR | stderr | raise HTTPError(http_error_msg, response=self)
2024-09-14 00:03:16 | ERROR | stderr | requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/Llama-3.1-8B-Omni/resolve/main/tokenizer_config.json
2024-09-14 00:03:16 | ERROR | stderr |
2024-09-14 00:03:16 | ERROR | stderr | The above exception was the direct cause of the following exception:
2024-09-14 00:03:16 | ERROR | stderr |
2024-09-14 00:03:16 | ERROR | stderr | Traceback (most recent call last):
2024-09-14 00:03:16 | ERROR | stderr | File "/home/Ubuntu/.conda/envs/llama-omni/lib/python3.10/site-packages/transformers/utils/hub.py", line 402, in cached_file
2024-09-14 00:03:16 | ERROR | stderr | resolved_file = hf_hub_download(
2024-09-14 00:03:16 | ERROR | stderr | File "/home/Ubuntu/.conda/envs/llama-omni/lib/python3.10/site-packages/huggingface_hub/utils/_deprecation.py", line 101, in inner_f
2024-09-14 00:03:16 | ERROR | stderr | return f(*args, **kwargs)
2024-09-14 00:03:16 | ERROR | stderr | File "/home/Ubuntu/.conda/envs/llama-omni/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
2024-09-14 00:03:16 | ERROR | stderr | return fn(*args, **kwargs)
2024-09-14 00:03:16 | ERROR | stderr | File "/home/Ubuntu/.conda/envs/llama-omni/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1240, in hf_hub_download
2024-09-14 00:03:16 | ERROR | stderr | return _hf_hub_download_to_cache_dir(
2024-09-14 00:03:16 | ERROR | stderr | File "/home/Ubuntu/.conda/envs/llama-omni/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1347, in _hf_hub_download_to_cache_dir
2024-09-14 00:03:16 | ERROR | stderr | _raise_on_head_call_error(head_call_error, force_download, local_files_only)
2024-09-14 00:03:16 | ERROR | stderr | File "/home/Ubuntu/.conda/envs/llama-omni/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1855, in _raise_on_head_call_error
2024-09-14 00:03:16 | ERROR | stderr | raise head_call_error
2024-09-14 00:03:16 | ERROR | stderr | File "/home/Ubuntu/.conda/envs/llama-omni/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1752, in _get_metadata_or_catch_error
2024-09-14 00:03:16 | ERROR | stderr | metadata = get_hf_file_metadata(
2024-09-14 00:03:16 | ERROR | stderr | File "/home/Ubuntu/.conda/envs/llama-omni/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
2024-09-14 00:03:16 | ERROR | stderr | return fn(*args, **kwargs)
2024-09-14 00:03:16 | ERROR | stderr | File "/home/Ubuntu/.conda/envs/llama-omni/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1674, in get_hf_file_metadata
2024-09-14 00:03:16 | ERROR | stderr | r = _request_wrapper(
2024-09-14 00:03:16 | ERROR | stderr | File "/home/Ubuntu/.conda/envs/llama-omni/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 376, in _request_wrapper
2024-09-14 00:03:16 | ERROR | stderr | response = _request_wrapper(
2024-09-14 00:03:16 | ERROR | stderr | File "/home/Ubuntu/.conda/envs/llama-omni/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 400, in _request_wrapper
2024-09-14 00:03:16 | ERROR | stderr | hf_raise_for_status(response)
2024-09-14 00:03:16 | ERROR | stderr | File "/home/Ubuntu/.conda/envs/llama-omni/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 352, in hf_raise_for_status
2024-09-14 00:03:16 | ERROR | stderr | raise RepositoryNotFoundError(message, response) from e
2024-09-14 00:03:16 | ERROR | stderr | huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-66e4d2c4-25ddd5e765380ccd6a208b71;b125b2d7-98da-476c-aeb9-d2b539506a9d)
2024-09-14 00:03:16 | ERROR | stderr |
2024-09-14 00:03:16 | ERROR | stderr | Repository Not Found for url: https://huggingface.co/Llama-3.1-8B-Omni/resolve/main/tokenizer_config.json.
2024-09-14 00:03:16 | ERROR | stderr | Please make sure you specified the correct repo_id and repo_type.
2024-09-14 00:03:16 | ERROR | stderr | If you are trying to access a private or gated repo, make sure you are authenticated.
2024-09-14 00:03:16 | ERROR | stderr | Invalid username or password.
2024-09-14 00:03:16 | ERROR | stderr |
2024-09-14 00:03:16 | ERROR | stderr | The above exception was the direct cause of the following exception:
2024-09-14 00:03:16 | ERROR | stderr |
2024-09-14 00:03:16 | ERROR | stderr | Traceback (most recent call last):
2024-09-14 00:03:16 | ERROR | stderr | File "/home/Ubuntu/.conda/envs/llama-omni/lib/python3.10/runpy.py", line 196, in _run_module_as_main
2024-09-14 00:03:16 | ERROR | stderr | return _run_code(code, main_globals, None,
2024-09-14 00:03:16 | ERROR | stderr | File "/home/Ubuntu/.conda/envs/llama-omni/lib/python3.10/runpy.py", line 86, in _run_code
2024-09-14 00:03:16 | ERROR | stderr | exec(code, run_globals)
2024-09-14 00:03:16 | ERROR | stderr | File "/home/Ubuntu/TestTwo/LLaMA-Omni/omni_speech/serve/model_worker.py", line 277, in
2024-09-14 00:03:16 | ERROR | stderr | worker = ModelWorker(args.controller_address,
2024-09-14 00:03:16 | ERROR | stderr | File "/home/Ubuntu/TestTwo/LLaMA-Omni/omni_speech/serve/model_worker.py", line 83, in init
2024-09-14 00:03:16 | ERROR | stderr | self.tokenizer, self.model, self.context_len = load_pretrained_model(
2024-09-14 00:03:16 | ERROR | stderr | File "/home/Ubuntu/TestTwo/LLaMA-Omni/omni_speech/model/builder.py", line 78, in load_pretrained_model
2024-09-14 00:03:16 | ERROR | stderr | tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
2024-09-14 00:03:16 | ERROR | stderr | File "/home/Ubuntu/.conda/envs/llama-omni/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 833, in from_pretrained
2024-09-14 00:03:16 | ERROR | stderr | tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
2024-09-14 00:03:16 | ERROR | stderr | File "/home/Ubuntu/.conda/envs/llama-omni/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 665, in get_tokenizer_config
2024-09-14 00:03:16 | ERROR | stderr | resolved_config_file = cached_file(
2024-09-14 00:03:16 | ERROR | stderr | File "/home/Ubuntu/.conda/envs/llama-omni/lib/python3.10/site-packages/transformers/utils/hub.py", line 425, in cached_file
2024-09-14 00:03:16 | ERROR | stderr | raise EnvironmentError(
2024-09-14 00:03:16 | ERROR | stderr | OSError: Llama-3.1-8B-Omni is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
2024-09-14 00:03:16 | ERROR | stderr | If this is a private repository, make sure to pass a token having permission to this repo either by logging in with huggingface-cli login or by passing token=<your_token>

I don't get this part. Do we need to download the model first, as mentioned in the Quick Start?

  1. Download the Llama-3.1-8B-Omni model from 🤗Huggingface.
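For what it's worth, step 1 really does mean a local download: the worker was pointed at Llama-3.1-8B-Omni, no folder by that name existed, so transformers fell back to treating the string as a Hub repo id, and no repo exists under that bare name. Below is a minimal sketch of the download step with huggingface-cli (the repo id ICTNLP/Llama-3.1-8B-Omni is taken from the LLaMA-Omni README; verify it against the model card you actually want):

# Optional: authenticate first, in case the repo is gated
# (token from https://huggingface.co/settings/tokens)
huggingface-cli login

# Download the model into a local folder next to the LLaMA-Omni checkout
huggingface-cli download ICTNLP/Llama-3.1-8B-Omni --local-dir Llama-3.1-8B-Omni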

I have this problem too. It says repository not found for 'https://huggingface.co/Llama-3.1-8B-Omni/resolve/main/tokenizer_config.json'.
If I open that URL in a browser, it also says the repository was not found...
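The 401 / RepositoryNotFoundError in both reports points to a wrong repo id rather than a permissions problem: the Hub has no repository at the bare path Llama-3.1-8B-Omni (no user or org namespace), and it answers 401 for unknown repos when you are not authenticated. A quick sanity check against the Hub API (assumed repo id as above):

# Should print 200 for a public repo that exists (401/404 otherwise)
curl -s -o /dev/null -w "%{http_code}\n" https://huggingface.co/api/models/ICTNLP/Llama-3.1-8B-Omni

Once the model has been downloaded into ./Llama-3.1-8B-Omni as sketched above, passing the folder path to --model-path should make transformers resolve it on disk instead of querying the Hub:

python -m omni_speech.serve.model_worker --host 0.0.0.0 --controller http://localhost:10000 --port 40000 --worker http://localhost:40000 --model-path ./Llama-3.1-8B-Omni --model-name Llama-3.1-8B-Omni --s2s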
