Viglong committed (verified)
Commit e9874e8 · 1 Parent(s): e7d8220

Update README.md

Files changed (1)
  1. README.md +186 -0
README.md CHANGED
@@ -4,4 +4,190 @@ datasets:
  - Viglong/Hunyuan3D-FLUX-Gen
  papers:
  space: Viglong/Orient-Anything-V2
+ model: Viglong/OriAnyV2_ckpt
  ---
+
+ <div align="center">
+ <h1>[NeurIPS 2025 Spotlight]<br>
+ Orient Anything V2: Unifying Orientation and Rotation Understanding</h1>
+
+ [**Zehan Wang**](https://scholar.google.com/citations?user=euXK0lkAAAAJ)<sup>1*</sup> · [**Ziang Zhang**](https://scholar.google.com/citations?hl=zh-CN&user=DptGMnYAAAAJ)<sup>1*</sup> · [**Jialei Wang**](https://scholar.google.com/citations?hl=en&user=OIuFz1gAAAAJ)<sup>1</sup> · [**Jiayang Xu**](https://github.com/1339354001)<sup>1</sup> · [**Tianyu Pang**](https://scholar.google.com/citations?hl=zh-CN&user=wYDbtFsAAAAJ)<sup>2</sup> · [**Du Chao**](https://scholar.google.com/citations?hl=zh-CN&user=QOp7xW0AAAAJ)<sup>2</sup> · [**Hengshuang Zhao**](https://scholar.google.com/citations?user=4uE10I0AAAAJ&hl&oi=ao)<sup>3</sup> · [**Zhou Zhao**](https://scholar.google.com/citations?user=IIoFY90AAAAJ&hl&oi=ao)<sup>1</sup>
+
+ <sup>1</sup>Zhejiang University&emsp;&emsp;&emsp;&emsp;<sup>2</sup>SEA AI Lab&emsp;&emsp;&emsp;&emsp;<sup>3</sup>HKU
+
+ *Equal Contribution
+
+
+ <a href='https://arxiv.org/abs/2412.18605'><img src='https://img.shields.io/badge/arXiv-PDF-red' alt='Paper PDF'></a>
+ <a href='https://orient-anythingv2.github.io'><img src='https://img.shields.io/badge/Project_Page-OriAnyV2-green' alt='Project Page'></a>
+ <a href='https://huggingface.co/spaces/Viglong/Orient-Anything-V2'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue'></a>
+ <a href='https://huggingface.co/datasets/Viglong/OriAnyV2_Train_Render'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Train Data-orange'></a>
+ <a href='https://huggingface.co/datasets/Viglong/OriAnyV2_Inference'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Test Data-orange'></a>
+ <a href='https://huggingface.co/papers/2412.18605'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Paper-yellow'></a>
+ </div>
+
+ **Orient Anything V2** is a unified spatial vision model for understanding object orientation, symmetry, and relative rotation, and it achieves SOTA performance across 14 datasets.
+
+ <!-- ![teaser](https://github.com/SpatialVision/Orient-Anything-V2/blob/main/assets/overview.jpg) -->
+
+ ## News
+ * **2025-10-24:** 🔥[Paper](https://arxiv.org/abs/2412.18605), [Project Page](https://orient-anythingv2.github.io), [Code](https://github.com/SpatialVision/Orient-Anything-V2), [Model Checkpoint](https://huggingface.co/Viglong/OriAnyV2_ckpt/blob/main/demo_ckpts/rotmod_realrotaug_best.pt), and [Demo](https://huggingface.co/spaces/Viglong/Orient-Anything-V2) have been released!
+
+ * **2025-09-18:** 🔥Orient Anything V2 has been accepted as a Spotlight @ NeurIPS 2025!
+
+ ## Pre-trained Model Weights
+
+ We provide pre-trained model weights and are continuously iterating on them to support more inference scenarios:
+
+ | Model | Size | Checkpoint |
+ |:-|-:|:-:|
+ | Orient-Anything-V2 | 5.05 GB | [Download](https://huggingface.co/Viglong/OriAnyV2_ckpt/blob/main/demo_ckpts/rotmod_realrotaug_best.pt) |
+
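+ If you prefer to fetch the checkpoint programmatically, here is a minimal sketch using `huggingface_hub` (the repo id and filename are taken from the download link above; the cache directory is an arbitrary choice):
+
+ ```python
+ from huggingface_hub import hf_hub_download
+
+ # Download the demo checkpoint from the Viglong/OriAnyV2_ckpt model repo.
+ ckpt_path = hf_hub_download(
+     repo_id="Viglong/OriAnyV2_ckpt",
+     filename="demo_ckpts/rotmod_realrotaug_best.pt",
+     repo_type="model",
+     cache_dir="./",  # arbitrary local cache location
+ )
+ print(ckpt_path)
+ ```
+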
+ ## Quick Start
+
+ ### 1 Dependency Installation
+
+ ```shell
+ conda create -n orianyv2 python=3.11
+ conda activate orianyv2
+ pip install -r requirements.txt
+ ```
+
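+ As a quick sanity check of the environment (assuming `requirements.txt` installs PyTorch with CUDA support), you can verify that the GPU is visible:
+
+ ```python
+ import torch
+
+ # Prints the installed PyTorch version and whether a CUDA GPU is available.
+ print(torch.__version__, torch.cuda.is_available())
+ ```
+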
+ ### 2 Gradio App
+ Start the Gradio app by running:
+
+ ```bash
+ python app.py
+ ```
+ Then open the GUI page (default: http://127.0.0.1:7860) in your web browser.
+
+ Alternatively, you can try it in our [Hugging Face Space](https://huggingface.co/spaces/Viglong/Orient-Anything-V2).
+
+ ### 3 Python Scripts
+ ```python
+ import os
+
+ import torch
+ from PIL import Image
+
+ # Repo-local modules: checkpoint paths, the model, and inference/pre-processing helpers.
+ from paths import *
+ from vision_tower import VGGT_OriAny_Ref
+ from inference import *
+ from app_utils import *
+
+ # Use bfloat16 on Ampere (SM 8.0) or newer GPUs, otherwise fall back to float16.
+ mark_dtype = torch.bfloat16 if torch.cuda.is_available() and torch.cuda.get_device_capability()[0] >= 8 else torch.float16
+ device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
+
+ # Load the checkpoint from a local path if present, otherwise fetch it from the Hub.
+ if os.path.exists(LOCAL_CKPT_PATH):
+     ckpt_path = LOCAL_CKPT_PATH
+ else:
+     from huggingface_hub import hf_hub_download
+     ckpt_path = hf_hub_download(repo_id="Viglong/Orient-Anything-V2", filename=HF_CKPT_PATH, repo_type="model", cache_dir='./', resume_download=True)
+
+ model = VGGT_OriAny_Ref(out_dim=900, dtype=mark_dtype, nopretrain=True)
+ model.load_state_dict(torch.load(ckpt_path, map_location='cpu'))
+ model.eval()
+ model = model.to(device)
+ print('Model loaded.')
+
+ @torch.no_grad()
+ def run_inference(pil_ref, pil_tgt=None, do_rm_bkg=True):
+     # Optionally remove the background of the reference (and target) image.
+     if do_rm_bkg:
+         pil_ref = background_preprocess(pil_ref, True)
+         if pil_tgt is not None:
+             pil_tgt = background_preprocess(pil_tgt, True)
+
+     try:
+         ans_dict = inf_single_case(model, pil_ref, pil_tgt)
+     except Exception as e:
+         print("Inference error:", e)
+         raise RuntimeError(f"Inference failed: {e}")
+
+     def safe_float(val, default=0.0):
+         try:
+             return float(val)
+         except (TypeError, ValueError):
+             return float(default)
+
+     # Absolute orientation of the reference image.
+     az = safe_float(ans_dict.get('ref_az_pred', 0))
+     el = safe_float(ans_dict.get('ref_el_pred', 0))
+     ro = safe_float(ans_dict.get('ref_ro_pred', 0))
+     alpha = int(ans_dict.get('ref_alpha_pred', 1))
+     print("Reference Pose: Azi", az, "Ele", el, "Rot", ro, "Alpha", alpha)
+
+     # Relative rotation between the reference and target images.
+     if pil_tgt is not None:
+         rel_az = safe_float(ans_dict.get('rel_az_pred', 0))
+         rel_el = safe_float(ans_dict.get('rel_el_pred', 0))
+         rel_ro = safe_float(ans_dict.get('rel_ro_pred', 0))
+         print("Relative Pose: Azi", rel_az, "Ele", rel_el, "Rot", rel_ro)
+
+     return ans_dict
+
+ image_ref_path = 'assets/examples/F35-0.jpg'
+ image_tgt_path = 'assets/examples/F35-1.jpg'  # optional
+
+ image_ref = Image.open(image_ref_path).convert('RGB')
+ image_tgt = Image.open(image_tgt_path).convert('RGB')
+
+ run_inference(image_ref, image_tgt, True)
+ ```
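+
+ For a single image, pass only the reference and leave `pil_tgt` as `None`; the script then reports just the absolute orientation. A minimal usage sketch reusing the `run_inference` defined above (the example image ships with the repository under `assets/examples`):
+
+ ```python
+ # Absolute orientation only: no target image, background removal enabled.
+ single = Image.open('assets/examples/F35-0.jpg').convert('RGB')
+ run_inference(single, None, True)
+ ```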
+
+ ## Evaluate Orient-Anything-V2
+
+ ### Data Preparation
+ Download the absolute orientation, relative rotation, and symm-orientation test datasets from the [Hugging Face dataset](https://huggingface.co/datasets/Viglong/OriAnyV2_Inference):
+ ```shell
+ # optionally set a mirror endpoint to accelerate downloads
+ # export HF_ENDPOINT='https://hf-mirror.com'
+
+ huggingface-cli download --repo-type dataset Viglong/OriAnyV2_Inference --local-dir OriAnyV2_Inference
+ ```
+ Use the following commands to extract the dataset archives:
+
+ ```shell
+ cd OriAnyV2_Inference
+ for f in *.tar.gz; do
+     tar -xzf "$f"
+ done
+ ```
+
+ Modify `DATA_ROOT` in `paths.py` to point to the dataset root directory (`/path/to/OriAnyV2_Inference`).
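+
+ For illustration, the relevant entry in `paths.py` would then look something like the following (the exact layout of `paths.py` may differ; `/path/to/OriAnyV2_Inference` is a placeholder):
+
+ ```python
+ # paths.py
+ DATA_ROOT = "/path/to/OriAnyV2_Inference"  # root of the extracted test datasets
+ ```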
+
+ ### Evaluate with PyTorch Lightning
+ To evaluate on the test datasets, run the following command:
+
+ ```shell
+ python eval_on_dataset.py
+ ```
+
+ ## Train Orient-Anything-V2
+
+ We use `FLUX.1-dev` and `Hunyuan3D-2.0` to generate our training data and render it with Blender. We provide the fully rendered data, which you can obtain from the link below.
+
+ [Hunyuan3D-FLUX-Gen](https://huggingface.co/datasets/Viglong/Hunyuan3D-FLUX-Gen)
+
+ To store all this data, we recommend having at least **2TB** of free disk space on your server.
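+
+ If you prefer to fetch the rendered data programmatically, here is a minimal sketch using `huggingface_hub` (the `local_dir` is an arbitrary choice; point it at a disk with enough free space):
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Download the full rendered training dataset (large; see the disk-space note above).
+ snapshot_download(
+     repo_id="Viglong/Hunyuan3D-FLUX-Gen",
+     repo_type="dataset",
+     local_dir="Hunyuan3D-FLUX-Gen",
+ )
+ ```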
+
+ We are currently organizing the complete **data construction pipeline** and **training code** for Orient-Anything-V2; stay tuned.
+
+ ## Acknowledgement
+ We would like to express our sincere gratitude to the following excellent works:
+ - [VGGT](https://github.com/facebookresearch/vggt)
+ - [FLUX](https://github.com/black-forest-labs/flux)
+ - [Hunyuan3D-2.0](https://github.com/Tencent-Hunyuan/Hunyuan3D-2)
+ - [Blender](https://github.com/blender/blender)
+ - [rembg](https://github.com/danielgatis/rembg)
+
+ ## Citation
+ If you find this project useful, please consider citing:
+
+ ```bibtex
+
+ ```