---
language:
- en
task_categories:
- image-text-to-text
pretty_name: DORI
tags:
- vlm
- multimodal
- evaluation
- qna
- mllm
- orientation
- spatial
- benchmark
- reasoning
- vision
license: mit
---

## Dataset Details

### Dataset Description

DORI (Discriminative Orientation Reasoning Intelligence) is a comprehensive benchmark designed to evaluate object orientation understanding in multimodal large language models (MLLMs). The benchmark isolates orientation perception as a primary capability, offering a systematic assessment framework that spans four essential dimensions of orientation comprehension: frontal alignment, rotational transformations, relative directional relationships, and canonical orientation understanding.

DORI contains 33,656 carefully constructed multiple-choice questions based on 13,652 images spanning both natural (37%) and simulated (63%) environments. The benchmark covers 67 object categories (31 household and 36 outdoor item categories) drawn from 11 diverse computer vision datasets. What makes DORI unique is its systematic approach to isolating orientation perception from confounding factors such as object recognition difficulty, scene clutter, linguistic ambiguity, and contextual distractions. For each orientation dimension, DORI provides both coarse-grained questions (basic categorical judgments) and fine-grained questions (precise angular measurements).

- **Curated by:** Keanu Nichols, Nazia Tasnim, Yuting Yan, Nicholas Ikechukwu, Elva Zou, Deepti Ghadiyaram, and Bryan A. Plummer
- **Language(s) (NLP):** English
- **License:** MIT License

### Dataset Sources

- **Repository:** https://huggingface.co/datasets/appledora/DORI-Benchmark
- **Paper:** "Right Side Up? Disentangling Orientation Understanding in MLLMs with Fine-grained Multi-axis Perception Tasks" (NeurIPS 2025 submission)

The dataset incorporates images from multiple existing datasets, including:

- KITTI
- Cityscapes
- COCO
- JTA
- 3D-FUTURE
- Objectron
- ShapeNet
- OmniObject3D
- NOCS REAL
- Get3D
- COCO Space SEA

## Uses

### Direct Use

DORI is designed for evaluating and benchmarking orientation reasoning capabilities in multimodal large language models (MLLMs). Its primary uses include:

1. Evaluating MLLMs' understanding of object orientation across four core dimensions
2. Comparing model performance on coarse- vs. fine-grained orientation perception
3. Diagnosing specific weaknesses in spatial reasoning across different model architectures
4. Supporting research to improve MLLMs' abilities in applications that require spatial understanding (robotics, augmented reality, autonomous navigation)
5. Providing a standardized framework for measuring progress in orientation understanding capabilities

### Out-of-Scope Use

The DORI benchmark is not intended for:

- Training models directly (it is an evaluation benchmark)
- Evaluating general vision capabilities beyond orientation understanding
- Assessing human perception or cognitive abilities
- Commercial applications without proper attribution

## Dataset Structure

DORI consists of multiple-choice questions with a standardized format. Each question includes:

1. A task description specifying the orientation dimension being evaluated
2. Contextual information explaining relevant orientation concepts
3. Step-by-step analysis instructions
4. Multiple-choice options
5. Examples illustrating expected reasoning
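As a quick way to inspect a record, a minimal loading sketch with the 🤗 `datasets` library is shown below. The split name and field names (`question`, `options`, `answer`) are illustrative assumptions, not the documented schema; check the repository's data files for the actual column names.

```python
# Minimal sketch for loading DORI with the Hugging Face `datasets` library.
# NOTE: the split name and the field names ("question", "options", "answer")
# are assumptions for illustration -- consult the repo files for the real schema.
from datasets import load_dataset

dori = load_dataset("appledora/DORI-Benchmark", split="test")

sample = dori[0]
print(sample["question"])  # task description, context, and step-by-step instructions
print(sample["options"])   # multiple-choice options
print(sample["answer"])    # ground-truth choice
```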
The benchmark is organized into four core dimensions with seven specific tasks:

1. **Frontal Alignment**
   - View Parallelism Perception
   - Directional Facing Perception
2. **Rotational Transformation**
   - Single-axis Rotation
   - Compound Rotation
3. **Relative Orientation**
   - Inter-object Direction Perception
   - Viewer-scene Direction Perception
4. **Canonical Orientation**
   - Canonical Orientation Reasoning

Each task has two levels of assessment:

- **Coarse-grained** questions evaluating basic categorical understanding
- **Fine-grained** questions probing precise quantitative estimations

The distribution of tasks in the dataset is:

- Compound Rotation: 26%
- Viewer-Scene Direction: 20%
- Inter-Object Direction: 19%
- View Parallelism: 10%
- Single-axis Rotation: 9%
- Directional Facing: 9%
- Canonical Orientation: 5%

Major object categories include chairs (15%), cars (14%), cameras (13%), sofas (10%), people (8%), tables (7%), and motorbikes (6%).

## Dataset Creation

### Curation Rationale

DORI was created to address limitations in existing orientation benchmarks, which often:

- Focus only on simple directional judgments without fine-grained assessment
- Fail to capture the naturalness and nuances of real-life scenarios
- Present tasks in ambiguous ways
- Do not systematically evaluate orientation across different frames of reference
- Include too few samples for reliable evaluation

DORI aims to provide a comprehensive, hierarchical evaluation framework specifically targeting orientation understanding, as this capability is fundamental for numerous AI applications, including autonomous navigation, augmented reality, and robotic manipulation.

### Source Data

#### Data Collection and Processing

DORI collected data via two primary means:

1. Converting existing 3D information from established datasets into orientation questions
2. Manually annotating samples where needed

The benchmark uses both real-world images (37%) and simulated renders (63%) to ensure comprehensive coverage of visual complexities. For simulated datasets, precise orientation parameters provided ground-truth angular measurements with known accuracy. For real-world images, expert annotation established clear ground-truth values.

Each question was created following a rigorous process involving:

1. Isolating objects with bounding boxes to handle cluttered scenes
2. Employing standardized orientation terminology with explicit spatial frames of reference
3. Ensuring difficulty progression from simple categorical judgments to precise angular measurements

The prompts were iteratively refined through multiple cycles of human feedback to address ambiguities, clarify terminology, and improve task specificity.

#### Who are the source data producers?

The source data comes from 11 established computer vision datasets created by various research groups:

- KITTI (Geiger et al., 2012)
- Cityscapes (Cordts et al., 2016)
- COCO (Lin et al., 2014)
- JTA (Fabbri et al., 2018)
- 3D-FUTURE (Fu et al., 2021)
- Objectron (Ahmadyan et al., 2021)
- ShapeNet (Chang et al., 2015)
- OmniObject3D (Wu et al., 2023)
- NOCS REAL (Wang et al., 2019)
- Get3D (Gao et al., 2022)
- COCO Space SEA (a combination of datasets)

### Annotations

#### Annotation process

For datasets with available 3D information, orientation labels were derived algorithmically. For example:

- **JTA**: Orientation was calculated by analyzing shoulder positions relative to the camera and head angle
- **KITTI**: Rotation matrices were used to categorize vehicles and pedestrians by orientation (a conversion sketch follows this list)
- **3D-FUTURE**: 6-DoF parameters were used to calculate precise rotational adjustments
- **COCO**: Expert manual labeling was performed for object orientations
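As an illustration of this kind of algorithmic conversion, the sketch below bins a yaw angle into a coarse facing category. The 4-way scheme, the 45° bin boundaries, and the zero-yaw convention are all hypothetical and would need to be matched to the source dataset's convention (e.g., KITTI's `rotation_y`); the benchmark's actual derivation is described in the paper.

```python
import math

def coarse_facing_from_yaw(yaw_rad: float) -> str:
    """Bin a yaw angle (radians) into a coarse facing category.

    Hypothetical scheme for illustration: the 4-way categories, the
    45-degree bin boundaries, and the assumption that yaw 0 means
    "facing toward the camera" must all be matched to the source
    dataset's convention (e.g., KITTI's rotation_y) before use.
    """
    # Normalize to [-180, 180] degrees regardless of the input range.
    deg = math.degrees(math.atan2(math.sin(yaw_rad), math.cos(yaw_rad)))
    if -45.0 < deg <= 45.0:
        return "facing toward the camera"
    if 45.0 < deg <= 135.0:
        return "facing left"
    if -135.0 < deg <= -45.0:
        return "facing right"
    return "facing away from the camera"

print(coarse_facing_from_yaw(math.pi / 6))  # 30 degrees -> "facing toward the camera"
```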
The annotation process included rigorous quality control, with multiple human evaluators checking for ambiguities and edge cases. The process produced standardized, clear annotations for both coarse- and fine-grained orientation judgments.

#### Who are the annotators?

The annotations were produced by a combination of:

- Automated conversion from existing 3D parameters (for synthetic datasets)
- Expert human annotators with experience in computer vision and spatial reasoning (particularly for natural images)
- Non-expert annotators providing feedback for prompt refinement and disambiguation

#### Personal and Sensitive Information

The dataset uses established computer vision datasets and does not introduce new personal or sensitive information. The focus is on object orientation rather than on identifying individuals or private data.

## Bias, Risks, and Limitations

The DORI benchmark has several limitations:

- Performance may be influenced by the quality of bounding box annotations
- Some objects inherently have more ambiguous orientations than others
- The distribution of objects is not entirely balanced across all categories
- While diverse, the benchmark cannot cover every possible orientation scenario
- Performance on synthetic vs. real images may vary due to domain differences
- The benchmark primarily features static orientation reasoning rather than dynamic manipulation

### Recommendations

Users of the DORI benchmark should:

- Consider results across both coarse- and fine-grained questions for a complete picture of model capabilities (a scoring sketch follows this list)
- Pay attention to performance differences across the four core dimensions to identify specific weaknesses
- Note that orientation understanding is just one component of spatial reasoning
- Be aware that orientation perception in controlled environments may differ from real-world deployment scenarios
- Consider that poor performance on DORI may indicate fundamental limitations in a model's spatial representation capabilities
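To make the first two recommendations concrete, a breakdown like the following can report accuracy per dimension and granularity. The record keys (`dimension`, `granularity`, `correct`) are hypothetical names for whatever an evaluation harness logs, not fields defined by the benchmark.

```python
from collections import defaultdict

def accuracy_breakdown(results):
    """Aggregate accuracy per (dimension, granularity) pair.

    `results` is a list of dicts with hypothetical keys:
      "dimension"   -- e.g. "Frontal Alignment"
      "granularity" -- "coarse" or "fine"
      "correct"     -- bool, whether the model chose the right option
    """
    totals = defaultdict(lambda: [0, 0])  # key -> [num correct, num seen]
    for r in results:
        key = (r["dimension"], r["granularity"])
        totals[key][0] += int(r["correct"])
        totals[key][1] += 1
    return {key: hits / seen for key, (hits, seen) in totals.items()}

# Toy usage with made-up records:
results = [
    {"dimension": "Frontal Alignment", "granularity": "coarse", "correct": True},
    {"dimension": "Frontal Alignment", "granularity": "fine", "correct": False},
    {"dimension": "Rotational Transformation", "granularity": "coarse", "correct": True},
]
for key, acc in sorted(accuracy_breakdown(results).items()):
    print(key, f"{acc:.1%}")
```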
## Citation

**BibTeX:**

```
@misc{nichols2025rightupdisentanglingorientation,
  title={Right Side Up? Disentangling Orientation Understanding in MLLMs with Fine-grained Multi-axis Perception Tasks},
  author={Keanu Nichols and Nazia Tasnim and Yuting Yan and Nicholas Ikechukwu and Elva Zou and Deepti Ghadiyaram and Bryan A. Plummer},
  year={2025},
  eprint={2505.21649},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2505.21649},
}
```

## Glossary

- **Frontal Alignment**: The ability to perceive how an object's front-facing surface is oriented relative to the viewer
- **Rotational Transformation**: Understanding orientation changes through rotation, reflecting requirements for object manipulation
- **Relative Orientation**: Understanding how objects are oriented in relation to each other and with respect to the viewer
- **Canonical Orientation**: The ability to recognize when objects deviate from their expected orientations
- **Coarse-grained questions**: Basic categorical judgments about orientation (e.g., "Is the car facing toward or away from the camera?")
- **Fine-grained questions**: Precise metric judgments about orientation (e.g., "At what angle is the car oriented relative to the camera?")
- **Egocentric reference frame**: Orientation relative to the camera/viewer
- **Allocentric reference frame**: Orientation independent of the viewer's perspective

## More Information

The DORI benchmark represents a significant advance in the assessment of orientation understanding in MLLMs. Initial evaluations of 15 state-of-the-art MLLMs revealed significant limitations: even the best models achieved only 54.2% accuracy on coarse tasks and 33.0% on granular orientation judgments, compared to human performance of 86.6% and 80.9%, respectively. Performance patterns indicate that current models struggle most with precise angular estimation, multi-axis rotational transformations, and perspective shifts beyond egocentric frames. These findings suggest that future architectures will need specialized mechanisms for continuous geometric representation to bridge this critical gap in machine perception.

## Dataset Card Authors

Keanu Nichols, Nazia Tasnim, Yuting Yan, Nicholas Ikechukwu, Elva Zou, Deepti Ghadiyaram, and Bryan A. Plummer

## Dataset Card Contact

For more information, please contact: nimzia@bu.edu