---
license: cc-by-nd-4.0
task_categories:
- text-to-3d
tags:
- 3d
- benchmark
- out-of-domain
- evaluation
---

# OOD-Eval: Out-of-Domain Evaluation Prompts for Text-to-3D

This repository contains the **OOD-Eval** dataset, a collection of challenging out-of-domain (OOD) prompts designed to support rigorous evaluation of text-to-3D generation models. It was introduced in the paper [MV-RAG: Retrieval Augmented Multiview Diffusion](https://huggingface.co/papers/2508.16577).

The dataset assesses how well text-to-3D approaches handle rare or novel concepts, a setting in which models often produce inconsistent or inaccurate results.

* **Paper:** [MV-RAG: Retrieval Augmented Multiview Diffusion](https://huggingface.co/papers/2508.16577)
* **Project Page:** https://yosefdayani.github.io/MV-RAG/
* **Code:** https://github.com/yosefdayani/MV-RAG
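
## Using the Prompts

For evaluation runs, the prompts can be loaded with the 🤗 `datasets` library. The sketch below is a minimal example rather than a documented interface: the repository id, split, and column names are assumptions and should be checked against the dataset viewer before use.

```python
# Minimal loading sketch. The repo id, split, and field names are
# assumptions inferred from this card, not a documented interface.
from datasets import load_dataset

ds = load_dataset("yosepyossi/OOD-Eval", split="train")  # hypothetical repo id

for example in ds:
    print(example)  # inspect the available fields before wiring up an eval loop
```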

## Paper Abstract

Text-to-3D generation approaches have advanced significantly by leveraging pretrained 2D diffusion priors, producing high-quality and 3D-consistent outputs. However, they often fail to produce out-of-domain (OOD) or rare concepts, yielding inconsistent or inaccurate results. To this end, we propose MV-RAG, a novel text-to-3D pipeline that first retrieves relevant 2D images from a large in-the-wild 2D database and then conditions a multiview diffusion model on these images to synthesize consistent and accurate multiview outputs. Training such a retrieval-conditioned model is achieved via a novel hybrid strategy bridging structured multiview data and diverse 2D image collections. This involves training on multiview data using augmented conditioning views that simulate retrieval variance for view-specific reconstruction, alongside training on sets of retrieved real-world 2D images using a distinctive held-out view prediction objective: the model predicts the held-out view from the other views to infer 3D consistency from 2D data. To facilitate a rigorous OOD evaluation, we introduce a new collection of challenging OOD prompts. Experiments against state-of-the-art text-to-3D, image-to-3D, and personalization baselines show that our approach significantly improves 3D consistency, photorealism, and text adherence for OOD/rare concepts, while maintaining competitive performance on standard benchmarks.
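
The held-out view objective described in the abstract can be illustrated schematically. The sketch below is a conceptual paraphrase under stated assumptions, with hypothetical names throughout (`predict_view`, `diffusion_loss`), and is not the authors' implementation:

```python
import random

def held_out_view_loss(model, views, diffusion_loss):
    """Conceptual sketch of the held-out view prediction objective.

    views: a list of retrieved 2D images of the same concept.
    model.predict_view and diffusion_loss are hypothetical placeholders.
    """
    # Hold out one retrieved view at random and condition on the rest.
    held_out_idx = random.randrange(len(views))
    target = views[held_out_idx]
    context = [v for i, v in enumerate(views) if i != held_out_idx]

    # Predicting the held-out view from the others pushes the model to
    # infer cross-view (3D) consistency from unstructured 2D data.
    prediction = model.predict_view(context)
    return diffusion_loss(prediction, target)
```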

## Citation

If you use this benchmark or the MV-RAG model in your research, please cite:

```bibtex
@misc{dayani2025mvragretrievalaugmentedmultiview,
      title={MV-RAG: Retrieval Augmented Multiview Diffusion},
      author={Yosef Dayani and Omer Benishu and Sagie Benaim},
      year={2025},
      eprint={2508.16577},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2508.16577},
}
```