Serving Model via mlserver-huggingface

#5
by gdagil - opened

Hi,

I'm encountering an error while trying to serve your model using mlserver-huggingface.
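For context, my model-settings.json looks roughly like this (a sketch; the task and repo id are what I'd expect for this model, not copied from a working setup):

```json
{
  "name": "teapotllm",
  "implementation": "mlserver_huggingface.HuggingFaceRuntime",
  "parameters": {
    "extra": {
      "task": "text2text-generation",
      "pretrained_model": "teapotai/teapotllm"
    }
  }
}
```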
Here’s the error message I received:

[mlserver] INFO - Couldn't load model 'teapotllm'. Model will be removed from registry.
[mlserver.parallel] ERROR - An error occurred processing a model update of type 'Load'.
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/mlserver/registry.py", line 167, in _load_model
    model.ready = await model.load()
  File "/opt/conda/lib/python3.10/site-packages/mlserver_huggingface/runtime.py", line 29, in load
    self._model = load_pipeline_from_settings(self.hf_settings, self.settings)
  File "/opt/conda/lib/python3.10/site-packages/mlserver_huggingface/common.py", line 53, in load_pipeline_from_settings
    hf_pipeline = pipeline(
  File "/opt/conda/lib/python3.10/site-packages/transformers/pipelines/__init__.py", line 1047, in pipeline
    tokenizer = AutoTokenizer.from_pretrained(
  File "/opt/conda/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 934, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2036, in from_pretrained
    return cls._from_pretrained(
  File "/opt/conda/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2074, in _from_pretrained
    slow_tokenizer = (cls.slow_tokenizer_class)._from_pretrained(
  File "/opt/conda/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2276, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/opt/conda/lib/python3.10/site-packages/transformers/models/t5/tokenization_t5.py", line 150, in __init__
    self.sp_model.Load(vocab_file)
  File "/opt/conda/lib/python3.10/site-packages/sentencepiece/__init__.py", line 961, in Load
    return self.LoadFromFile(model_file)
  File "/opt/conda/lib/python3.10/site-packages/sentencepiece/__init__.py", line 316, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
TypeError: not a string

It seems the model fails to load because tokenizer loading falls back to the slow SentencePiece path: `SentencePieceProcessor_LoadFromFile` is handed `None` instead of a path to a vocab file (such as `spiece.model`), which is what raises `TypeError: not a string`.
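A quick way to check this outside mlserver is to load just the tokenizer (a minimal sketch; I'm assuming the repo id is `teapotai/teapotllm` and that `transformers` and `sentencepiece` are installed):

```python
from transformers import AutoTokenizer

# use_fast=True loads tokenizer.json directly; the traceback above is the
# slow path, where SentencePiece ends up being handed vocab_file=None.
tok = AutoTokenizer.from_pretrained("teapotai/teapotllm", use_fast=True)
print(tok("hello world").input_ids)
```

If this works locally but mlserver still fails, my guess is that the serving image pins an older transformers that can't read `tokenizer.json` for this repo and falls back to the slow SentencePiece tokenizer.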

My question is: How can I perform inference with this model without using the teapotai Python package?
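For reference, this is the kind of plain-transformers call I was hoping would work (a sketch; I'm assuming the model is a T5-style seq2seq checkpoint, so the matching pipeline task would be "text2text-generation"):

```python
from transformers import pipeline

# Direct inference without the teapotai package; model id assumed to be
# "teapotai/teapotllm".
generator = pipeline("text2text-generation", model="teapotai/teapotllm")
print(generator("What is the capital of France?", max_new_tokens=128))
```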

Thank you for your assistance!
