
[NeurIPS 2025 Spotlight]
Orient Anything V2: Unifying Orientation and Rotation Understanding

Zehan Wang1* · Ziang Zhang1* · Jialei Wang1 · Jiayang Xu1 · Tianyu Pang2 · Chao Du2 · Hengshuang Zhao3 · Zhou Zhao1

1Zhejiang University    2SEA AI Lab    3HKU

*Equal Contribution

Paper PDF · Project Page

Orient Anything V2, a unified spatial vision model for understanding orientation, symmetry, and relative rotation, achieves SOTA performance across 14 datasets.

News

Pre-trained Model Weights

We provide pre-trained model weights and are continuously iterating on them to support more inference scenarios:

Model Size Checkpoint
Orient-Anything-V2 5.05 GB Download
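
If you prefer to fetch the checkpoint ahead of time, the snippet below is a minimal sketch that mirrors the Viglong/Orient-Anything-V2 model repo locally with huggingface_hub (the Quick Start script below also downloads the checkpoint automatically on first run; the checkpoints/ directory is just an illustrative target):

from huggingface_hub import snapshot_download

# Download the full model repo; the inference scripts can also fetch the checkpoint on demand.
snapshot_download(repo_id="Viglong/Orient-Anything-V2", local_dir="checkpoints")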

Quick Start

1 Dependency Installation

conda create -n orianyv2 python=3.11

conda activate orianyv2

pip install -r requirements.txt
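
Optionally, a quick sanity check that PyTorch was installed with GPU support (the inference scripts below require torch; CPU-only still works, just more slowly):

import torch

# Confirm the installed PyTorch version and whether a CUDA device is visible.
print(torch.__version__)
print("CUDA available:", torch.cuda.is_available())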

2 Gradio App

Start the Gradio app by running:

python app.py

Then open the GUI page (default: http://127.0.0.1:7860) in a web browser.

Alternatively, you can try it in our Hugging Face Space.

3 Python Scripts

import numpy as np
from PIL import Image
import torch
import tempfile
import os

from paths import *
from vision_tower import VGGT_OriAny_Ref
from inference import *
from app_utils import *

# Use bfloat16 on GPUs with compute capability >= 8 (Ampere or newer), otherwise fall back to float16.
if torch.cuda.is_available() and torch.cuda.get_device_capability()[0] >= 8:
    mark_dtype = torch.bfloat16
else:
    mark_dtype = torch.float16
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Use a local checkpoint if present; otherwise download it from the Hugging Face Hub.
if os.path.exists(LOCAL_CKPT_PATH):
    ckpt_path = LOCAL_CKPT_PATH
else:
    from huggingface_hub import hf_hub_download
    ckpt_path = hf_hub_download(repo_id="Viglong/Orient-Anything-V2", filename=HF_CKPT_PATH, repo_type="model", cache_dir='./', resume_download=True)

model = VGGT_OriAny_Ref(out_dim=900, dtype=mark_dtype, nopretrain=True)
model.load_state_dict(torch.load(ckpt_path, map_location='cpu'))
model.eval()
model = model.to(device)
print('Model loaded.')

@torch.no_grad()
def run_inference(pil_ref, pil_tgt=None, do_rm_bkg=True):
    # Optionally remove the background of the input image(s) before inference.
    if do_rm_bkg:
        pil_ref = background_preprocess(pil_ref, True)
        if pil_tgt is not None:
            pil_tgt = background_preprocess(pil_tgt, True)

    try:
        ans_dict = inf_single_case(model, pil_ref, pil_tgt)
    except Exception as e:
        print("Inference error:", e)
        raise RuntimeError(f"Inference failed: {e}") from e

    def safe_float(val, default=0.0):
        try:
            return float(val)
        except (TypeError, ValueError):
            return float(default)

    # Absolute orientation of the reference image.
    az = safe_float(ans_dict.get('ref_az_pred', 0))
    el = safe_float(ans_dict.get('ref_el_pred', 0))
    ro = safe_float(ans_dict.get('ref_ro_pred', 0))
    alpha = int(ans_dict.get('ref_alpha_pred', 1))
    print("Reference Orientation: Azi", az, "Ele", el, "Rot", ro, "Alpha", alpha)

    # Relative rotation between reference and target (only when a target is given).
    if pil_tgt is not None:
        rel_az = safe_float(ans_dict.get('rel_az_pred', 0))
        rel_el = safe_float(ans_dict.get('rel_el_pred', 0))
        rel_ro = safe_float(ans_dict.get('rel_ro_pred', 0))
        print("Relative Pose: Azi", rel_az, "Ele", rel_el, "Rot", rel_ro)

    return ans_dict

image_ref_path = 'assets/examples/F35-0.jpg'
image_tgt_path = 'assets/examples/F35-1.jpg' # optional

image_ref = Image.open(image_ref_path).convert('RGB')
image_tgt = Image.open(image_tgt_path).convert('RGB')

run_inference(image_ref, image_tgt, True)
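
Because pil_tgt is optional, the same function also covers the single-image case: pass only a reference image to predict its absolute orientation without relative rotation. A minimal usage sketch with the same example asset:

# Single-image case: no target image, so only the reference orientation is predicted.
image_ref = Image.open('assets/examples/F35-0.jpg').convert('RGB')
run_inference(image_ref, None, True)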

Evaluate Orient-Anything-V2

Data Preparation

Download the absolute orientation, relative rotation, and symm-orientation test sets from the Hugging Face dataset Viglong/OriAnyV2_Inference:

# set mirror endpoint to accelerate
# export HF_ENDPOINT='https://hf-mirror.com'

huggingface-cli download --repo-type dataset Viglong/OriAnyV2_Inference --local-dir OriAnyV2_Inference

Use the following command to extract the dataset:

cd OriAnyV2_Inference
for f in *.tar.gz; do
    tar -xzf "$f"
done

Modify DATA_ROOT in paths.py to point to the dataset root directory (/path/to/OriAnyV2_Inference), for example as shown below.
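
For example (the path is a placeholder; point it at your actual extraction directory):

# paths.py
DATA_ROOT = '/path/to/OriAnyV2_Inference'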

Evaluate with PyTorch Lightning

To evaluate on test datasets, run the following code:

python eval_on_dataset.py

Train Orient-Anything-V2

We use FLUX.1-dev and Hunyuan3D-2.0 to generate our training data and render it with Blender. We provide the fully rendered data, which you can obtain from the link below.

Hunyuan3D-FLUX-Gen

To store all this data, we recommend having at least 2TB of free disk space on your server.
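
A quick way to check the available space before downloading, as a minimal Python sketch (2 TB is the recommendation above):

import shutil

# Report free space on the current volume in terabytes.
free_tb = shutil.disk_usage('.').free / 1e12
print(f"Free space: {free_tb:.2f} TB")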

We are currently organizing the complete data construction pipeline and training code for Orient-Anything-V2 — stay tuned.

Acknowledgement

We would like to express our sincere gratitude to the following excellent works:

Citation

If you find this project useful, please consider citing:

