ModelLock: Locking Your Model With a Spell
Official model repository for the paper "ModelLock: Locking Your Model With a Spell".
Overview
This repository hosts the locked model checkpoint for the Oxford-IIIT Pet dataset, produced with the ModelLock framework using a style-based key transformation.
Checkpoint Information
- Model: MAE (Masked Autoencoder) fine-tuned on the Oxford-IIIT Pet dataset
- Lock Type: Style lock
- Dataset: Oxford-IIIT Pet (37 classes)
Model Hyperparameters
The model was locked using the following configuration:
Diffusion Model
- Model: timbrooks/instruct-pix2pix (InstructPix2Pix)
Transformation Parameters
- Prompt: "with oil pastel"
- Alpha (blending ratio): 0.5
- Inference Steps: 5
- Image Guidance Scale: 1.5
- Guidance Scale: 4.5
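For reference, the sketch below shows how these settings map onto a diffusers InstructPix2Pix call that produces the key-transformed view of an image. The 512x512 working resolution and the final blending step (mixing the original and edited images with PIL's Image.blend at the alpha above) are assumptions made for illustration; the official ModelLock code defines the exact procedure.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInstructPix2PixPipeline

# Editing model used to generate the style-based key transformation.
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

def apply_key_transform(image: Image.Image, alpha: float = 0.5) -> Image.Image:
    """Apply the style-based key transformation to a single image."""
    image = image.convert("RGB").resize((512, 512))  # assumed working resolution
    edited = pipe(
        prompt="with oil pastel",
        image=image,
        num_inference_steps=5,
        image_guidance_scale=1.5,
        guidance_scale=4.5,
    ).images[0]
    # Blend the edited image back into the original at the blending ratio above.
    # NOTE: the exact blending formula is an assumption; see the paper for details.
    return Image.blend(image, edited, alpha)
```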
Download Checkpoint
```bash
huggingface-cli download SFTJBD/ModelLock pets_mae_style_checkpoint-best.pth --local-dir ./checkpoints
```
Or using Python:
```python
from huggingface_hub import hf_hub_download

checkpoint_path = hf_hub_download(
    repo_id="SFTJBD/ModelLock",
    filename="pets_mae_style_checkpoint-best.pth",
)
```
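Once downloaded, the weights can be loaded into the fine-tuned backbone. The sketch below makes two assumptions, a ViT-Base/16 classifier (the standard MAE fine-tuning backbone) and MAE's checkpoint layout with the weights stored under a "model" key; consult the official ModelLock code for the exact architecture.

```python
import timm
import torch

# Assumed backbone: ViT-Base/16 with a 37-way head for Oxford-IIIT Pet.
model = timm.create_model("vit_base_patch16_224", num_classes=37)

# MAE-style checkpoints usually keep the weights under a "model" key;
# fall back to the raw dict if that assumption does not hold.
state = torch.load(checkpoint_path, map_location="cpu")
model.load_state_dict(state.get("model", state), strict=False)
model.eval()
```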
Usage
To evaluate the locked model at its full (unlocked) performance, apply the key transformation, i.e. the prompt "with oil pastel" with the same hyperparameters listed above, to the input images before classification.
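Concretely, "using the key" means passing every test image through the transformation above before it reaches the model. A minimal single-image sketch, reusing apply_key_transform and model from the earlier sketches and assuming standard ImageNet-style preprocessing:

```python
import torch
from PIL import Image
from torchvision import transforms

# Assumed evaluation preprocessing (ImageNet statistics).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Unlock: apply the key transformation, then classify as usual.
image = Image.open("example_pet.jpg").convert("RGB")  # hypothetical test image
unlocked = apply_key_transform(image)

with torch.no_grad():
    logits = model(preprocess(unlocked).unsqueeze(0))
print("Predicted class index:", logits.argmax(dim=1).item())
```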
Citation
```bibtex
@article{gao2024modellock,
  title={ModelLock: Locking Your Model With a Spell},
  author={Gao, Yifeng and Sun, Yuhua and Ma, Xingjun and Wu, Zuxuan and Jiang, Yu-Gang},
  journal={arXiv preprint arXiv:2405.16285},
  year={2024}
}
```
License
MIT License