How do I load the model quantized?

#33 opened by treehugg3

I'm getting CUDA OOM errors, and when I set a device map that includes the CPU, it complains about tensors being on different devices. Is there an easy way to load the model with 8-bit precision?

If I try this:

quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = LlavaForConditionalGeneration.from_pretrained(".", device_map="auto", quantization_config=quantization_config)

I get this error:

Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same

Also happens when I configure AutoProcessor with a quantization_config...
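
For the OOM part specifically, a 4-bit config is a common fallback if 8-bit still doesn't fit. A minimal sketch; the quant type and compute dtype below are reasonable defaults rather than settings confirmed in this thread:

import torch
from transformers import BitsAndBytesConfig, LlavaForConditionalGeneration

# NF4 4-bit weights with fp16 compute; needs roughly half the memory of 8-bit.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = LlavaForConditionalGeneration.from_pretrained(
    ".", device_map="auto", quantization_config=quantization_config
)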

Unofficial Mistral Community org

Pinging @RaushanTurganbay here

Unofficial Mistral Community org

@treehugg3 can you show the whole inference script? It looks like you didn't cast the inputs to fp16 before inference, which is the dtype BnB uses by default.

Thank you for helping. Here is the script I used:

from transformers import LlavaForConditionalGeneration, AutoProcessor, BitsAndBytesConfig
from PIL import Image

model_id = "../pixtral-12b"
image_url = "https://m.media-amazon.com/images/I/81tPIA63TrL._AC_SL1500_.jpg"
prompt = "Describe the picture in one sentence."

IMG_URLS = [
    "https://picsum.photos/id/237/400/300",
    "https://picsum.photos/id/231/200/300",
    "https://picsum.photos/id/27/500/500",
    "https://picsum.photos/id/17/150/600",
]
PROMPT = "<s>[INST]Describe the pictures.\n[IMG][IMG][IMG][IMG][/INST]"

quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = LlavaForConditionalGeneration.from_pretrained(model_id, device_map="auto", quantization_config=quantization_config)
processor = AutoProcessor.from_pretrained(model_id)

inputs = processor(images=IMG_URLS, text=PROMPT, return_tensors="pt").to("cuda")
generate_ids = model.generate(**inputs, max_new_tokens=500)
output = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(output)

It turns out that by creating a new virtualenv, this code now runs properly. There must have been some strange combination of package versions causing it to fail in the old environment. Really sorry for the bother; I spent a long time trying to figure out why it wasn't working, but it works great now!
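
If anyone hits the same thing, a quick first check is to compare the package versions between the two environments. A small sketch (assumes bitsandbytes is installed):

import torch
import transformers
import bitsandbytes

# Print the versions most likely to matter for quantized loading.
print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("bitsandbytes:", bitsandbytes.__version__)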

Unofficial Mistral Community org

Yeah, this has to be cast manually as follows:

inputs = processor(images=IMG_URLS, text=PROMPT, return_tensors="pt").to("cuda", dtype=torch.float16)
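
Putting it together with the script above (reusing model_id, IMG_URLS, and PROMPT from there; a sketch, assuming BnB's default of keeping the non-quantized modules in fp16):

import torch
from transformers import LlavaForConditionalGeneration, AutoProcessor, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, device_map="auto", quantization_config=quantization_config
)
processor = AutoProcessor.from_pretrained(model_id)

# .to(..., dtype=...) on the processor output only casts floating-point tensors,
# so input_ids stay integer while pixel_values become fp16 to match the model.
inputs = processor(images=IMG_URLS, text=PROMPT, return_tensors="pt").to("cuda", dtype=torch.float16)
generate_ids = model.generate(**inputs, max_new_tokens=500)
print(processor.batch_decode(generate_ids, skip_special_tokens=True)[0])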
