---
datasets:
- chaofengc/IQA-PyTorch-Datasets
language:
- en
pipeline_tag: visual-question-answering
library_name: transformers
---
# Visual Prompt Checkpoints for NR-IQA
πŸ”¬ **Paper**: [Parameter-Efficient Adaptation of mPLUG-Owl2 via Pixel-Level Visual Prompts for NR-IQA](https://arxiv.org/abs/xxxx.xxxxx) (will be released soon)
πŸ’» **Code**: [GitHub Repository](https://github.com/yahya-ben/mplug2-vp-for-nriqa)
## Overview
These pre-trained visual prompt checkpoints adapt **mPLUG-Owl2-7B** for **No-Reference Image Quality Assessment (NR-IQA)**. They achieve competitive performance while training only **~600K parameters**, compared with the 7B+ required for full fine-tuning.
## Available Checkpoints
**Download**: `visual_prompt_ckpt_trained_on_mplug2.zip`
| Dataset | SROCC | Experiment Folder |
|---------|-------|-------------------|
| KADID-10k | 0.932 | `SGD_mplug2_exp_04_kadid_padding_30px_add/` |
| KonIQ-10k | 0.852 | `SGD_mplug2_exp_05_koniq_padding_30px_add/` |
| AGIQA-3k | 0.810 | `SGD_mplug2_exp_06_agiqa_padding_30px_add/` |
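The experiment names above (`padding_30px_add`) describe the prompting scheme: a learnable pixel-level prompt confined to a 30-px border around the image, added element-wise to the input before it reaches the frozen model. Below is a minimal PyTorch sketch of that idea; the class name, image size (448×448), and initialization are assumptions for illustration, not the repository's actual implementation. Note that a full 3×448×448 prompt tensor is 602,112 values, consistent with the ~600K parameter count quoted above.

```python
import torch


class PaddingVisualPrompt(torch.nn.Module):
    """Additive pixel-level visual prompt restricted to a border region.

    Hypothetical sketch: only the `pad`-wide border of the learnable
    tensor affects the image; the interior is masked out.
    """

    def __init__(self, image_size: int = 448, pad: int = 30):
        super().__init__()
        # Learnable prompt, initialised to zero (identity at the start).
        self.prompt = torch.nn.Parameter(torch.zeros(3, image_size, image_size))
        # Fixed binary mask selecting the 30-px border.
        mask = torch.zeros(1, image_size, image_size)
        mask[:, :pad, :] = 1.0
        mask[:, -pad:, :] = 1.0
        mask[:, :, :pad] = 1.0
        mask[:, :, -pad:] = 1.0
        self.register_buffer("mask", mask)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # "add" variant: image + masked prompt, broadcast over the batch.
        return x + self.prompt * self.mask


prompt = PaddingVisualPrompt()
img = torch.rand(1, 3, 448, 448)  # stand-in for a preprocessed image batch
out = prompt(img)                  # same shape as the input
```

During training, only `prompt.prompt` receives gradients while the mPLUG-Owl2 backbone stays frozen; at inference, the saved checkpoint is loaded into this module and applied to each image before the model's visual encoder.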
**📖 For detailed setup, training, and usage instructions, see the [GitHub repository](https://github.com/yahya-ben/mplug2-vp-for-nriqa).**