ValueError when running the model

#3
by parkerswy - opened

Hi! I'm trying to set it up and am following the run-inference section. `tinyllava.mm_utils` doesn't seem to exist, so I replaced

`"model_name": get_model_name_from_path(model_path),`

with a hard-coded name:

`model_name = "TinyLLaVA-1.5B"`

and

`"model_name": model_name,`

Right now my `main.py` looks like this:


```python
from tinyllava.model.load_model import load_pretrained_model
from tinyllava.eval.run_tiny_llava import eval_model
# from tinyllava.mm_utils import get_model_name_from_path

# Define model path and example image and prompt
model_path = "bczhou/TinyLLaVA-1.5B"
model_name = "TinyLLaVA-1.5B"
prompt = "What do you see in this image?"
image_file = 'mypath'  # edited for privacy

# Set up the arguments for evaluation
args = type('Args', (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": model_name,
    "query": prompt,
    "conv_mode": "v1",
    "image_file": image_file,
    "sep": ",",
    "temperature": 0,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512,
})()

# Run evaluation
eval_model(args)
```
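As a side note on that hard-coded name: a tiny substitute for the missing `get_model_name_from_path` could just take the last path segment (a sketch, assuming the name is always the final component of the repo path; the real helper may do more, e.g. handle checkpoint directories):

```python
def model_name_from_path(model_path: str) -> str:
    # Assumption: the model name is the last '/'-separated component of the
    # Hugging Face repo path, e.g. "bczhou/TinyLLaVA-1.5B" -> "TinyLLaVA-1.5B".
    return model_path.rstrip('/').split('/')[-1]

print(model_name_from_path("bczhou/TinyLLaVA-1.5B"))  # TinyLLaVA-1.5B
```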


I also changed `phi_template.py`, since I got this error:

```
ValueError: mutable default <class 'tinyllava.data.template.formatter.StringFormatter'> for field format_image_token is not allowed: use default_factory
```
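For anyone else hitting this: here's a minimal standalone reproduction of what the error means, using simplified stand-in classes rather than the actual TinyLLaVA ones. From what I can tell, Python 3.11+ rejects any unhashable dataclass field default, and a plain dataclass instance is unhashable, so a bare `StringFormatter(...)` default triggers it; `field(default_factory=...)` builds a fresh instance per object instead:

```python
from dataclasses import dataclass, field

# Simplified stand-in for tinyllava's formatter classes (not the real code):
# a dataclass with eq=True (the default) sets __hash__ = None, so its
# instances are unhashable -- and Python 3.11+ rejects unhashable defaults.
@dataclass
class StringFormatter:
    slot: str = ""

# This is roughly what the old code did, and what raises on Python 3.11+:
#
#     @dataclass
#     class PhiTemplate:
#         format_image_token: StringFormatter = StringFormatter(slot="\n{{content}}")
#
# The fix: default_factory creates a fresh formatter per template instance.
@dataclass
class PhiTemplateFixed:
    format_image_token: StringFormatter = field(
        default_factory=lambda: StringFormatter(slot="\n{{content}}")
    )

t1, t2 = PhiTemplateFixed(), PhiTemplateFixed()
assert t1.format_image_token is not t2.format_image_token  # no shared default
```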

Right now `phi_template.py` looks like this:


```python
from dataclasses import dataclass, field
from typing import TYPE_CHECKING, Dict, List, Optional, Sequence, Tuple, Union

from .formatter import EmptyFormatter, StringFormatter
from .base import Template
from .formatter import Formatter
from . import register_template

from transformers import PreTrainedTokenizer
import torch

system = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions."

@register_template('phi')
@dataclass
class PhiTemplate(Template):
    # format_image_token: "Formatter" = StringFormatter(slot="\n{{content}}")
    # format_user: "Formatter" = StringFormatter(slot="USER" + ": " + "{{content}}" + " ")
    # format_assistant: "Formatter" = StringFormatter(slot="ASSISTANT" + ": " + "{{content}}" + "<|endoftext|>")
    # system: "Formatter" = EmptyFormatter(slot=system+" ")
    # separator: "Formatter" = EmptyFormatter(slot=[' ASSISTANT: ', '<|endoftext|>'])
    format_image_token: "Formatter" = field(default_factory=lambda: StringFormatter(slot="\n{{content}}"))
    format_user: "Formatter" = field(default_factory=lambda: StringFormatter(slot="USER" + ": " + "{{content}}" + " "))
    format_assistant: "Formatter" = field(default_factory=lambda: StringFormatter(slot="ASSISTANT" + ": " + "{{content}}" + "<|endoftext|>"))
    system: "Formatter" = field(default_factory=lambda: EmptyFormatter(slot=system + " "))
    separator: "Formatter" = field(default_factory=lambda: EmptyFormatter(slot=['ASSISTANT: ', '<|endoftext|>']))
```


I tried running it again, but then I got this error. Does anyone know how to fix it / run it properly?

```
ValueError: mutable default <class 'tinyllava.data.template.formatter.EmptyFormatter'> for field format_image_token is not allowed: use default_factory
```

I tried changing `phi_template.py` and `formatter.py`, but it still didn't work.
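Since the second traceback names `EmptyFormatter` even though my `phi_template.py` no longer has bare defaults, I suspect another template file in the package still assigns a formatter instance directly. Here's a rough script to hunt for those lines (the directory path is an assumption about where the repo is checked out, and the regex is only a heuristic):

```python
import re
from pathlib import Path

# Matches dataclass fields like:  name: "Formatter" = StringFormatter(...)
# but skips lines already wrapped in field(default_factory=...).
BARE_DEFAULT = re.compile(r':\s*"?Formatter"?\s*=\s*\w*Formatter\(')

def find_bare_defaults(template_dir):
    hits = []
    for path in sorted(Path(template_dir).glob("*.py")):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if BARE_DEFAULT.search(line) and "default_factory" not in line:
                hits.append((path.name, lineno, line.strip()))
    return hits

# Path is an assumption -- point it at your checkout:
for name, lineno, line in find_bare_defaults("tinyllava/data/template"):
    print(f"{name}:{lineno}: {line}")
```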

I am new to machine learning, so please let me know if I did anything wrong. Thank you so much and have a great day!!!

tl;dr: does anyone know how to fix this error?

```
ValueError: mutable default <class 'tinyllava.data.template.formatter.EmptyFormatter'> for field format_image_token is not allowed: use default_factory
```
