This model is a 4-bit quantized version of Apple's DepthPro-hf monocular depth estimation model, quantized with bitsandbytes (NF4 with double quantization and float16 compute).

Quantization code

import torch
from transformers import BitsAndBytesConfig, DepthProForDepthEstimation

# 4-bit NF4 quantization with double quantization; compute runs in float16.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
)

depth_model = DepthProForDepthEstimation.from_pretrained(
    "apple/DepthPro-hf",
    quantization_config=quantization_config,
    device_map="auto",
    dtype="auto",
)
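
Once quantized, the model can be written back out so it reloads directly in 4-bit. A minimal sketch, assuming a transformers/bitsandbytes stack recent enough to serialize 4-bit checkpoints; the repository id is illustrative:

# Persist the quantized weights; the 4-bit settings are written into the
# saved config, so the checkpoint reloads in 4-bit without extra arguments.
depth_model.save_pretrained("Depth-Pro-hf-4bit")
# Or publish to the Hub (repository id below is a placeholder):
# depth_model.push_to_hub("your-username/Depth-Pro-hf-4bit")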

How to use it

pip install --upgrade transformers accelerate bitsandbytes

import torch
from PIL import Image
from transformers import DepthProForDepthEstimation, DepthProImageProcessorFast

device = "cuda" if torch.cuda.is_available() else "cpu"

# The 4-bit quantization settings are stored in the checkpoint's config,
# so no BitsAndBytesConfig is needed here.
depth_model = DepthProForDepthEstimation.from_pretrained(
    "CineAI/Depth-Pro-hf-4bit",
    device_map="auto",
)
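
# Optional sanity check: get_memory_footprint() reports the model's weight
# memory in bytes. With 4-bit weights this should be well under the roughly
# 2 GB a ~1.0B-parameter float16 model would need (exact figures vary by setup).
print(f"Memory footprint: {depth_model.get_memory_footprint() / 1e9:.2f} GB")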

image_processor = DepthProImageProcessorFast.from_pretrained("apple/DepthPro-hf")

image = Image.open("path/to/image.jpg").convert("RGB")

inputs = image_processor(images=image, return_tensors="pt").to(device)

with torch.no_grad():
    outputs = depth_model(**inputs)

# Resize the prediction back to the original image resolution.
source_sizes = [(image.height, image.width)]
post_processed_output = image_processor.post_process_depth_estimation(
    outputs, target_sizes=source_sizes
)
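
# DepthPro also estimates camera intrinsics; the post-processed dict exposes
# them alongside the depth map.
field_of_view = post_processed_output[0]["field_of_view"]
focal_length = post_processed_output[0]["focal_length"]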

depth = post_processed_output[0]["predicted_depth"]
depth_np = depth.detach().cpu().numpy()  # metric depth map, shape (H, W)
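
For a quick qualitative check, the depth map can be normalized and saved as an image. A minimal sketch using NumPy and the PIL import from above; inverse depth is used so that nearer objects appear brighter:

import numpy as np

# Convert metric depth to inverse depth and normalize to [0, 255] so the
# map can be viewed as a grayscale image (nearer = brighter).
inverse_depth = 1.0 / np.clip(depth_np, 1e-6, None)
normalized = (inverse_depth - inverse_depth.min()) / (inverse_depth.max() - inverse_depth.min())
Image.fromarray((normalized * 255.0).astype("uint8")).save("depth_map.png")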