---
datasets:
- chaofengc/IQA-PyTorch-Datasets
language:
- en
pipeline_tag: visual-question-answering
library_name: transformers
---

# Visual Prompt Checkpoints for NR-IQA

📄 **Paper**: [Parameter-Efficient Adaptation of mPLUG-Owl2 via Pixel-Level Visual Prompts for NR-IQA](https://arxiv.org/abs/xxxx.xxxxx)

💻 **Code**: [GitHub Repository](https://github.com/your-username/visual-prompt-nr-iqa)

## Overview

Pre-trained visual prompt checkpoints for **No-Reference Image Quality Assessment (NR-IQA)** with mPLUG-Owl2-7B. The prompts reach competitive performance while training only **~600K parameters**, versus the 7B+ updated by full fine-tuning.
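As a rough illustration of how a pixel-level visual prompt of this kind can work, the sketch below implements an additive border ("padding") prompt: learnable pixel strips around the image frame are added to the input before it reaches the frozen model. Everything here (the `PadPrompt` class name, the 448px input size, NumPy instead of a deep-learning framework) is illustrative, not the repository's actual API, and the exact parameter count depends on the image size and pad width.

```python
import numpy as np

class PadPrompt:
    """Additive pixel-level visual prompt on a border of width `pad` (sketch)."""

    def __init__(self, image_size=448, pad=30):
        self.size, self.pad = image_size, pad
        # Learnable strips: top/bottom span the full width,
        # left/right fill the remaining border height.
        self.top = np.zeros((3, pad, image_size))
        self.bottom = np.zeros((3, pad, image_size))
        self.left = np.zeros((3, image_size - 2 * pad, pad))
        self.right = np.zeros((3, image_size - 2 * pad, pad))

    def num_params(self):
        return sum(p.size for p in (self.top, self.bottom, self.left, self.right))

    def __call__(self, image):
        # Assemble the full-frame prompt (zeros in the centre) and add it
        # to the input pixels; the backbone itself stays frozen.
        prompt = np.zeros((3, self.size, self.size))
        prompt[:, :self.pad, :] = self.top
        prompt[:, -self.pad:, :] = self.bottom
        prompt[:, self.pad:-self.pad, :self.pad] = self.left
        prompt[:, self.pad:-self.pad, -self.pad:] = self.right
        return image + prompt
```

Only the border strips are optimised during training, which is what keeps the trainable parameter budget in the hundreds of thousands rather than billions.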
## Available Checkpoints

**Download**: `visual_prompt_ckpt_trained_on_mplug2.zip`

| Dataset | SROCC | Experiment Folder |
|---------|-------|-------------------|
| KADID-10k | 0.932 | `SGD_mplug2_exp_04_kadid_padding_30px_add/` |
| KonIQ-10k | 0.852 | `SGD_mplug2_exp_05_koniq_padding_30px_add/` |
| AGIQA-3k | 0.810 | `SGD_mplug2_exp_06_agiqa_padding_30px_add/` |

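SROCC in the table is the Spearman rank-order correlation coefficient between predicted and ground-truth quality scores; it depends only on how well the predictions preserve the rank ordering of the images, not on their absolute scale. A minimal dependency-free computation (an illustrative helper, not part of this repository):

```python
def ranks(xs):
    """1-based ranks of xs, averaging over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tied group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def srocc(pred, gt):
    """Spearman rank-order correlation: Pearson correlation of the ranks."""
    rp, rg = ranks(pred), ranks(gt)
    n = len(pred)
    mp, mg = sum(rp) / n, sum(rg) / n
    cov = sum((a - mp) * (b - mg) for a, b in zip(rp, rg))
    sp = sum((a - mp) ** 2 for a in rp) ** 0.5
    sg = sum((b - mg) ** 2 for b in rg) ** 0.5
    return cov / (sp * sg)
```

A value of 1.0 means the predicted ordering matches the ground truth exactly; in practice `scipy.stats.spearmanr` computes the same quantity.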
## Citation

```bibtex
@article{benmahane2024parameter,
  title={Parameter-Efficient Adaptation of mPLUG-Owl2 via Pixel-Level Visual Prompts for NR-IQA},
  author={Benmahane, Yahya and El Hassouni, Mohammed},
  journal={arXiv preprint},
  year={2024}
}
```

📖 **For detailed setup, training, and usage instructions, see the [GitHub repository](https://github.com/your-username/visual-prompt-nr-iqa).**