Improve dataset card: Add task categories, metadata tags, paper info, abstract, links, sample usage, and evaluation scripts
This PR significantly enhances the dataset card for `DSI-Bench` by adding key information for users. Specifically, it adds:
- The `task_categories: ['video-text-to-text']` to the metadata, improving discoverability.
- Relevant `tags` such as `video-question-answering`, `video-understanding`, `spatial-reasoning`, `benchmark`, and `3d`.
- A link to the paper page on Hugging Face: https://huggingface.co/papers/2510.18873
- Direct links to the project page (https://dsibench.github.io/) and the GitHub repository (https://github.com/SpatialVision/dsibench).
- The paper abstract for a concise overview of the dataset.
- A comprehensive "Sample Usage" section, featuring code snippets for installation, data download, inference with the Qwen API, and evaluation of model performance, all directly from the GitHub README.
- The BibTeX citation for referencing the work.
These additions provide a much more comprehensive and user-friendly resource for researchers interested in Dynamic Spatial Intelligence.
---
license: cc-by-4.0
task_categories:
- video-text-to-text
tags:
- video-question-answering
- video-understanding
- spatial-reasoning
- benchmark
- 3d
---

# DSI-Bench: A Benchmark for Dynamic Spatial Intelligence

[Paper](https://huggingface.co/papers/2510.18873) | [Project Page](https://dsibench.github.io/) | [Code](https://github.com/SpatialVision/dsibench)

## Abstract

Reasoning about dynamic spatial relationships is essential, as both observers and objects often move simultaneously. Although vision-language models (VLMs) and visual expertise models excel in 2D tasks and static scenarios, their ability to fully understand dynamic 3D scenarios remains limited. We introduce Dynamic Spatial Intelligence and propose DSI-Bench, a benchmark with nearly 1,000 dynamic videos and over 1,700 manually annotated questions covering nine decoupled motion patterns of observers and objects. Spatially and temporally symmetric designs reduce biases and enable systematic evaluation of models' reasoning about self-motion and object motion. Our evaluation of 14 VLMs and expert models reveals key limitations: models often conflate observer and object motion, exhibit semantic biases, and fail to accurately infer relative relationships in dynamic scenarios. Our DSI-Bench provides valuable findings and insights about the future development of general and expertise models with dynamic spatial intelligence.

## Sample Usage

This section provides a quick guide to set up the environment, download the dataset, perform inference, and evaluate model performance.

### 1. Install dependencies
```shell
pip install -r requirements.txt
```

### 2. Download full dataset
```shell
huggingface-cli download --repo-type dataset Viglong/DSI-Bench --local-dir DSI-Bench
```
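If you prefer to download programmatically, here is a minimal sketch using the `huggingface_hub` Python API (`snapshot_download`); it targets the same repository and local directory as the CLI command above.

```python
# Minimal sketch: programmatic download via huggingface_hub
# (equivalent in effect to the huggingface-cli command above).
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="Viglong/DSI-Bench",
    repo_type="dataset",
    local_dir="DSI-Bench",  # same target directory as the CLI example
)
print(f"DSI-Bench downloaded to: {local_path}")
```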

### 3. Inference with Qwen API
Here we provide a sample for testing on DSI-Bench using the Qwen API.

```python
import os
import re
import pandas as pd
from tqdm import tqdm
from concurrent.futures import ThreadPoolExecutor, as_completed
import dashscope

# Configuration
DASHSCOPE_API_KEY = "YOUR_DASHSCOPE_API_KEY_HERE"  # Replace or load from env

VLM_MODEL = "qwen2.5-vl-32b-instruct"  # e.g., "qwen2.5-vl-72b-instruct"
VIDEO_AUG = "std"  # Video variant: 'std', 'hflip', etc.
NUM_WORKERS = 2
MAX_RETRIES = 10
FPS = 5  # Frames per second for video input

# Paths (adjust to your local layout)
METADATA_BASE_DIR = "/path/to/metadatas"  # Contains {VIDEO_AUG}.csv
VIDEO_BASE_DIR = "/path/to/videos"        # Structure: videos/{VIDEO_AUG}/xxx.mp4
OUTPUT_BASE_DIR = "/path/to/outputs"      # Output: outputs/{VIDEO_AUG}/{model}.csv

# Prompt
rawqa_prompt = """You are a vision-language expert.
You are given a clip of video and your task is to answer a question about the video.
You only need to provide *ONE* correct answer selecting from the options listed below.
For example, if you think the correct answer is 'A' from 'A. Above B. Under C. Front D. Behind',
your response should **only** be '<answer>A</answer>'.
Please answer the question in this format strictly:
<answer>[A, B, C, or D]</answer>
"""

def extract_single_choice_with_word_boundary(text):
    """
    Extract the answer letter (A/B/C/D) from <answer>X</answer> in the response.
    Returns None if not found.
    """
    match = re.search(r"<answer>\s*([A-D])\s*</answer>", text, re.IGNORECASE)
    return match.group(1).upper() if match else None

# Call VLM with video and question
def query_vlm(mp4_path, question, options):
    """
    Send video and question to Qwen-VL via DashScope API.
    Returns raw model response text.
    """
    question_and_options = f"\nQuestion:\n{question} {options}"
    messages = [
        {
            "role": "user",
            "content": [
                {"video": f"file://{mp4_path}", "fps": FPS},
                {"text": rawqa_prompt + question_and_options}
            ]
        }
    ]

    response = dashscope.MultiModalConversation.call(
        api_key=DASHSCOPE_API_KEY,
        model=VLM_MODEL,
        messages=messages,
        max_length=2048,
        stream=False,
        top_k=1
    )
    return response.output.choices[0].message.content[0]["text"]

# Process a single sample
def process_sample(meta, idx):
    """
    Process one video-question sample.
    Returns (raw_response, extracted_answer) or (None, None) on failure.
    """
    mp4_path = os.path.join(VIDEO_BASE_DIR, VIDEO_AUG, meta["relative_path"][idx])
    question = meta["question"][idx]
    options = meta["options"][idx]

    try:
        raw_response = query_vlm(mp4_path, question, options)
        final_answer = extract_single_choice_with_word_boundary(raw_response)
        return raw_response, final_answer
    except Exception:
        return None, None

# Batch processing with retries
def run_batch_inference(meta, num_workers=NUM_WORKERS, max_retries=MAX_RETRIES):
    """
    Run inference on all samples with parallel execution and retry logic.
    Ensures output order matches input order.
    """
    print(f"Running inference with model: {VLM_MODEL}, video variant: {VIDEO_AUG}")
    n = len(meta)
    results = [None] * n

    def worker(idx):
        raw, ans = process_sample(meta, idx)
        return idx, raw, ans

    def run_parallel(indices, desc="Processing"):
        temp = [None] * n
        with ThreadPoolExecutor(max_workers=num_workers) as executor:
            futures = {executor.submit(worker, i): i for i in indices}
            for future in tqdm(as_completed(futures), total=len(futures), desc=desc):
                idx, raw, ans = future.result()
                temp[idx] = {"result_text": raw, "final_answer": ans}
        return temp

    # Initial run
    current_results = run_parallel(list(range(n)), desc="Initial Run")
    for i, res in enumerate(current_results):
        results[i] = res

    # Retry failed samples
    for retry in range(1, max_retries + 1):
        failed = [i for i in range(n) if results[i]["final_answer"] is None]
        if not failed:
            print(f"All samples succeeded after {retry - 1} retries.")
            break

        print(f"Retry {retry}/{max_retries} for {len(failed)} failed samples...")
        retry_results = run_parallel(failed, desc=f"Retry {retry}")
        for i in failed:
            results[i] = retry_results[i]

    # Handle permanently failed samples
    for i in range(n):
        if results[i]["final_answer"] is None:
            results[i]["final_answer"] = "E"  # Default error answer
        if results[i]["result_text"] is None:
            results[i]["result_text"] = ""

    success_count = sum(1 for r in results if r["final_answer"] != "E")
    print(f"Completed. Success: {success_count}/{n}")
    return results

if __name__ == "__main__":
    for aug in ["std", "hflip", "reverse", "reverse_hflip"]:
        VIDEO_AUG = aug
        meta_path = os.path.join(METADATA_BASE_DIR, f"{VIDEO_AUG}.csv")
        df = pd.read_csv(meta_path)
        print(f"Loaded metadata: {meta_path} with {len(df)} samples")

        # Set directories
        output_dir = os.path.join(OUTPUT_BASE_DIR, VIDEO_AUG)
        os.makedirs(output_dir, exist_ok=True)

        # Run inference
        results = run_batch_inference(df)

        # Save results
        output_file = os.path.join(output_dir, f"{VLM_MODEL}.csv")
        pd.DataFrame(results).to_csv(output_file, index=False)
        print(f"Results saved to: {output_file}")
```
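After a run, each `outputs/{VIDEO_AUG}/{VLM_MODEL}.csv` holds `result_text` and `final_answer` columns aligned row by row with the corresponding metadata CSV. Before running the full evaluation below, a quick sanity check on a single variant can be done along these lines (the file paths are placeholders; adjust them to your layout):

```python
import pandas as pd

# Placeholder paths: point these at one metadata CSV and the matching output CSV.
meta = pd.read_csv("/path/to/metadatas/std.csv")
preds = pd.read_csv("/path/to/outputs/std/qwen2.5-vl-32b-instruct.csv")

# Rows are aligned by position, so an element-wise comparison against GT works.
gt = meta["GT"].astype(str).str.strip()
pred_letter = preds["final_answer"].astype(str).str.strip().str[0]
print(f"std accuracy: {(gt == pred_letter).mean():.2%}")
```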

### 4. Evaluate model performance
Use the following code to compute the Sample-wise Accuracy and Group-wise Accuracy of the model.

```python
import pandas as pd
import os
from typing import Dict

# ==============================
# Configuration
# ==============================
VLM_MODEL = "qwen2.5-vl-32b-instruct"  # Model name
VIDEO_AUGS = ["std", "reverse", "hflip", "reverse_hflip"]

# Base paths (adjust to your local layout)
META_BASE_PATH = "/path/to/metadatas"  # Contains {aug}.csv
OUTPUT_BASE_PATH = "/path/to/outputs"  # Contains {aug}/{VLM_MODEL}.csv

# Category name mapping
CATE_NAMES = [
    "Obj:static cam",
    "Obj:moving cam",
    "Cam:static scene",
    "Cam:dynamic scene",
    "Obj-Cam distance",
    "Obj-Cam orientation"
]

def load_all_data() -> Dict[str, Dict[str, pd.DataFrame]]:
    """
    Load metadata and prediction results for all augmentations.
    Returns: {aug: {'meta': DataFrame, 'result': DataFrame}}
    """
    data_dict = {}

    for aug in VIDEO_AUGS:
        meta_path = os.path.join(META_BASE_PATH, f"{aug}.csv")
        res_path = os.path.join(OUTPUT_BASE_PATH, aug, f"{VLM_MODEL}.csv")

        if not os.path.exists(meta_path):
            raise FileNotFoundError(f"Metadata not found: {meta_path}")
        if not os.path.exists(res_path):
            raise FileNotFoundError(f"Result file not found: {res_path}")

        meta = pd.read_csv(meta_path)
        result = pd.read_csv(res_path)

        if len(meta) != len(result):
            raise ValueError(f"Length mismatch in {aug}: meta={len(meta)}, result={len(result)}")
        if "GT" not in meta.columns:
            raise ValueError(f"'GT' column missing in metadata: {meta_path}")
        if "final_answer" not in result.columns:
            raise ValueError(f"'final_answer' column missing in results: {res_path}")

        data_dict[aug] = {"meta": meta, "result": result}

    # Ensure all augmentations have the same number of samples
    lengths = [len(data_dict[aug]["meta"]) for aug in VIDEO_AUGS]
    if len(set(lengths)) > 1:
        raise ValueError(f"Inconsistent sample counts: {dict(zip(VIDEO_AUGS, lengths))}")

    return data_dict

def sample_wise_evaluation():
    """
    Treat all samples across all augmentations as independent.
    Compute per-category and overall accuracy.
    """
    print("=== Method 1: Independent Samples ===")
    data_dict = load_all_data()

    records = []
    for aug in VIDEO_AUGS:
        meta = data_dict[aug]["meta"]
        res = data_dict[aug]["result"]
        for i in range(len(meta)):
            gt = str(meta.iloc[i]["GT"]).strip()
            pred = str(res.iloc[i]["final_answer"]).strip()
            pred_letter = pred[0] if pred else ""
            correct = int(gt == pred_letter)
            records.append({"cate": meta.iloc[i]["cate"], "correct": correct})

    df = pd.DataFrame(records)
    acc_by_cat = df.groupby("cate")["correct"].mean()
    overall_acc = df["correct"].mean()
    print_metrics(acc_by_cat, overall_acc)

def group_wise_evaluation(n: int):
    """
    For each original question (4 views), count how many augmented views are correct
    under their own ground truth. If >= n are correct, count as robustly correct.
    """
    print(f"=== Method 2: Ensemble Voting (n>={n}) ===")
    data_dict = load_all_data()
    num_samples = len(data_dict[VIDEO_AUGS[0]]["meta"])

    records = []
    total_robust_correct = 0

    for i in range(num_samples):
        correct_count = 0
        cate = None
        for aug in VIDEO_AUGS:
            meta = data_dict[aug]["meta"]
            res = data_dict[aug]["result"]
            gt = str(meta.iloc[i]["GT"]).strip()
            pred = str(res.iloc[i]["final_answer"]).strip()
            pred_letter = pred[0] if pred else ""
            if gt == pred_letter:
                correct_count += 1
            if cate is None:
                cate = meta.iloc[i]["cate"]

        is_robust_correct = int(correct_count >= n)
        records.append({"cate": cate, "correct": is_robust_correct})
        total_robust_correct += is_robust_correct

    df = pd.DataFrame(records)
    acc_by_cat = df.groupby("cate")["correct"].mean()
    overall_acc = total_robust_correct / num_samples
    print_metrics(acc_by_cat, overall_acc)

def single_evaluation(aug: str):
    """
    Evaluate performance on a single augmentation variant.
    """
    if aug not in VIDEO_AUGS:
        raise ValueError(f"Invalid augmentation: {aug}. Choose from {VIDEO_AUGS}")

    print(f"=== Method 3: Single View Evaluation ({aug}) ===")
    data_dict = load_all_data()
    meta = data_dict[aug]["meta"]
    res = data_dict[aug]["result"]

    correct_list = []
    for i in range(len(meta)):
        gt = str(meta.iloc[i]["GT"]).strip()
        pred = str(res.iloc[i]["final_answer"]).strip()
        pred_letter = pred[0] if pred else ""
        correct_list.append(int(gt == pred_letter))

    df = pd.DataFrame({"cate": meta["cate"], "correct": correct_list})
    acc_by_cat = df.groupby("cate")["correct"].mean()
    overall_acc = df["correct"].mean()
    print_metrics(acc_by_cat, overall_acc)

def print_metrics(acc_by_cat: pd.Series, overall_acc: float):
    """
    Print per-category and overall accuracy in a readable format.
    """
    for cat, ratio in acc_by_cat.items():
        name = CATE_NAMES[cat] if cat < len(CATE_NAMES) else f"Category {cat}"
        print('category = {0} {1:<20} Acc = {2:.2%}'.format(cat, name, ratio))
    print(f'\nOverall Acc = {overall_acc:.2%}\n')

if __name__ == "__main__":
    print(f"Model: {VLM_MODEL}")

    sample_wise_evaluation()
    group_wise_evaluation(n=3)
```
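The script also defines `single_evaluation`, which scores one variant in isolation. For example, you could extend the `__main__` block above to additionally report single-view accuracy on the unmodified videos and a stricter voting threshold:

```python
# Optional additions to the __main__ block of the evaluation script above
single_evaluation("std")     # accuracy on the unmodified videos only
group_wise_evaluation(n=4)   # stricter voting: all four views must be correct
```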

## Citation

If you find this repository useful for your research, please use the following BibTeX entry:
```bibtex
@misc{zhang2025dsibenchbenchmarkdynamicspatial,
      title={DSI-Bench: A Benchmark for Dynamic Spatial Intelligence},
      author={Ziang Zhang and Zehan Wang and Guanghao Zhang and Weilong Dai and Yan Xia and Ziang Yan and Minjie Hong and Zhou Zhao},
      year={2025},
      eprint={2510.18873},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2510.18873},
}
```