Model Card for CycleReward-I2T

Project page | Paper | Code

Reward model for image-text alignment trained on image-to-text comparison pairs from CyclePrefDB-I2T.

This model has been pushed to the Hub using the PyTorchModelHubMixin integration.
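
For reference, the PyTorchModelHubMixin integration works by having the model class inherit from the mixin, which supplies from_pretrained, save_pretrained, and push_to_hub. The sketch below is illustrative only: the class name MyRewardModel and the hidden_dim argument are made up, and the real CycleReward class is defined in model.py in this repository.

import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Illustrative sketch only: MyRewardModel and hidden_dim are hypothetical.
class MyRewardModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_dim: int = 512):
        super().__init__()
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x):
        return self.head(x)

# The mixin adds Hub serialization methods to the class:
# MyRewardModel(hidden_dim=512).save_pretrained("local-dir")
# model = MyRewardModel.from_pretrained("local-dir")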

Loading the model

Download the model.py and med_config.json files and the blip folder from this repository. You can then load the pretrained model using the code below:

import torch
from PIL import Image
from model import CycleReward

device = 'cuda'

# Load the pretrained reward model from the Hub (via PyTorchModelHubMixin)
model = CycleReward.from_pretrained("carolineec/CycleReward-I2T")
model.to(device)
model.eval()

# Preprocess an image and pick a caption to score against it
preprocess = model.preprocess
image_path = "cat.jpg"
caption = "a photo of a cat"
image = preprocess(Image.open(image_path)).unsqueeze(0).to(device)
print('prepared data')

# Compute the image-text alignment reward
with torch.no_grad():
    score = model.score(image, caption)
print('my score:', score.item())

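Since the model is trained on comparison pairs, a natural use is ranking candidate captions for the same image. The snippet below is a minimal sketch reusing the model, preprocess, and device objects from the code above; the candidate captions are made-up examples, and it assumes model.score accepts a single caption string as shown above.

# Score several candidate captions for one image and keep the highest-reward one.
candidates = ["a photo of a cat", "a photo of a dog"]
image = preprocess(Image.open("cat.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    scores = [model.score(image, c).item() for c in candidates]

best = candidates[scores.index(max(scores))]
print('caption scores:', scores)
print('higher-reward caption:', best)
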
Citation

@article{bahng2025cyclereward,
  title={Cycle Consistency as Reward: Learning Image-Text Alignment without Human Preferences},
  author={Bahng, Hyojin and Chan, Caroline and Durand, Fredo and Isola, Phillip},
  journal={arXiv preprint arXiv:2506.02095},
  year={2025}
}