This model is a 4-bit quantized version of Intel's [zoedepth-nyu-kitti](https://huggingface.co/Intel/zoedepth-nyu-kitti) monocular depth estimation model, quantized with bitsandbytes (NF4 with double quantization).

## Quantization code

```python
import torch
from transformers import AutoModelForDepthEstimation, BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and fp16 compute
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
)

depth_model = AutoModelForDepthEstimation.from_pretrained(
    "Intel/zoedepth-nyu-kitti",
    quantization_config=quantization_config,
    device_map="auto",
    dtype="auto",
)
```
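
The quantized weights can then be serialized and published. A minimal sketch (assuming a transformers/bitsandbytes release recent enough to support 4-bit serialization, and that you are authenticated with the Hub; the repo id below is this model's):

```python
# Save the 4-bit weights locally, then push them to the Hub
# (requires 4-bit serialization support in transformers/bitsandbytes).
depth_model.save_pretrained("zoedepth-nyu-kitti-4bit")
depth_model.push_to_hub("CineAI/zoedepth-nyu-kitti-4bit")
```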

## How to use it

```bash
pip install --upgrade transformers accelerate bitsandbytes
```

```python
import torch
from PIL import Image
from transformers import AutoModelForDepthEstimation, AutoImageProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the quantized model; bitsandbytes requires a CUDA device
depth_model = AutoModelForDepthEstimation.from_pretrained(
    "CineAI/zoedepth-nyu-kitti-4bit",
    device_map="auto",
)

# The image processor comes from the original (non-quantized) repo
image_processor = AutoImageProcessor.from_pretrained("Intel/zoedepth-nyu-kitti", use_fast=True)

image = Image.open("path/to/image.jpg").convert("RGB")  # replace with your image path

inputs = image_processor(images=image, return_tensors="pt").to(device)

with torch.no_grad():
    outputs = depth_model(**inputs)

# Resize the prediction back to the original image resolution
source_sizes = [(image.height, image.width)]
post_processed_output = image_processor.post_process_depth_estimation(
    outputs, source_sizes=source_sizes
)

# Metric depth map as a NumPy array
depth = post_processed_output[0]["predicted_depth"]
depth_np = depth.detach().cpu().numpy()

print(depth_np.shape, depth_np.min(), depth_np.max())
```
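
To sanity-check the output visually, one option (a sketch, not from the original card, reusing `depth_np` from the snippet above) is to normalize the depth map to 8-bit grayscale and save it with PIL:

```python
import numpy as np
from PIL import Image

# Normalize metric depth to the 0-255 range for a grayscale preview
depth_vis = (depth_np - depth_np.min()) / (depth_np.max() - depth_np.min() + 1e-8)
Image.fromarray((depth_vis * 255.0).astype(np.uint8)).save("depth.png")
```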