danielchyeh committed
Commit 4ed9329 · 1 Parent(s): 4c7f1ae

Update PR with scripts and README.md
README.md CHANGED
@@ -17,6 +17,28 @@ size_categories:
+ ---
+ license: mit
+ language:
+ - en
+ size_categories:
+ - 1K<n<10K
+ ---
+
+ <h1>Seeing from Another Perspective: Evaluating Multi-View Understanding in MLLMs</h1>
+
+ <a href='https://danielchyeh.github.io/All-Angles-Bench/'><img src='https://img.shields.io/badge/Project-Page-Green'></a>
+ <a href='https://arxiv.org/pdf/2504.15280'><img src='https://img.shields.io/badge/Paper-PDF-orange'></a>
+ <a href='https://arxiv.org/abs/2504.15280'><img src='https://img.shields.io/badge/Arxiv-Page-purple'></a>
+ <a href="https://github.com/Chenyu-Wang567/All-Angles-Bench/tree/main"><img src='https://img.shields.io/badge/Code-Github-red'></a>
+
+ # Dataset Card for All-Angles Bench
 
  ## Dataset Description
 
  <!-- Provide a longer summary of what this dataset is. -->
  The dataset presents a comprehensive benchmark consisting of over 2,100 human-annotated multi-view question-answer (QA) pairs, spanning 90 real-world scenes. Each scene is captured from multiple viewpoints, providing diverse perspectives and context for the associated questions.
 
@@ -31,6 +53,98 @@ The dataset presents a comprehensive benchmark consisting of over 2,100 human-annotated
  ## Dataset Sources
 
  <!-- Provide the basic links for the dataset. -->
 
  - **[EgoHumans](https://github.com/rawalkhirodkar/egohumans)** - Egocentric multi-view human activity understanding dataset
  - **[Ego-Exo4D](https://github.com/facebookresearch/Ego4d)** - Large-scale egocentric and exocentric video dataset for multi-person interaction understanding
 
+ ## Usage
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("ch-chenyu/All-Angles-Bench")
+ ```
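+
+ Each entry carries the fields listed under Dataset Structure below. As a minimal sketch for peeking at one QA pair (the `train` split name is an assumption; check `dataset.keys()` on your machine):
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("ch-chenyu/All-Angles-Bench")
+
+ # Peek at one QA pair; field names follow the Dataset Structure table below.
+ sample = dataset["train"][0]  # split name assumed
+ print(sample["question"])
+ print(sample["A"], sample["B"], sample["C"], "->", sample["answer"])
+ ```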
+
+
+ ## Prepare Full Benchmark Data on Local Machine
+
+ 1. **Set up Git LFS and clone the benchmark:**
+ ```bash
+ $ conda install git-lfs
+ $ git lfs install
+
+ $ git lfs clone https://huggingface.co/datasets/ch-chenyu/All-Angles-Bench
+ ```
+
+
+ 2. **Download the Ego-Exo4D dataset and extract frames for the benchmark scenes:**
+
+ We provide the image files for the EgoHumans dataset. For the Ego-Exo4D dataset, due to licensing restrictions, you will first need to sign the license agreement from the official Ego-Exo4D repository at https://ego4ddataset.com/egoexo-license/. After signing, download the dataset (`downscaled_takes/448`) and run the preprocessing script to extract the corresponding images:
+
+ ```bash
+ $ pip install ego4d --upgrade
+ $ egoexo -o All-Angles-Bench/ --parts downscaled_takes/448
+
+ $ python All-Angles-Bench/scripts/process_ego4d_exo.py --input All-Angles-Bench
+ ```
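+
+ After this step, each take listed in `folder_list.txt` should have one frame per exocentric camera under `All-Angles-Bench/ego_exo4d_scenes/<take_name>/` (see `scripts/process_ego4d_exo.py` below). A quick way to verify, sketched with paths assumed from the steps above:
+
+ ```python
+ from pathlib import Path
+
+ scenes = Path("All-Angles-Bench/ego_exo4d_scenes")
+ # One subfolder per take, each holding cam*.jpg frames.
+ for take in sorted(scenes.iterdir())[:5]:
+     print(take.name, len(list(take.glob("*.jpg"))))
+ ```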
+
+ 3. **Transform JSON metadata into benchmark TSV format:**
+
+ To convert the metadata from JSON into a structured TSV format compatible with the benchmark evaluation scripts in [VLMEvalKit](https://github.com/open-compass/VLMEvalKit), run:
+ ```bash
+ $ python All-Angles-Bench/scripts/json2tsv_pair.py --input All-Angles-Bench/data.json
+ ```
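+
+ The script writes `bench_tsv/all_angles_bench_huggingface.tsv` next to `data.json` (see `scripts/json2tsv_pair.py` below). A quick sanity check of the output, sketched with the standard `csv` module:
+
+ ```python
+ import csv
+
+ tsv_path = "All-Angles-Bench/bench_tsv/all_angles_bench_huggingface.tsv"
+ with open(tsv_path, newline="", encoding="utf-8") as f:
+     rows = list(csv.DictReader(f, delimiter="\t"))
+
+ print(f"{len(rows)} entries")
+ print(rows[0]["question"], "->", rows[0]["answer"])
+ ```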
+
+
+ ## Dataset Structure
+
+ <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
+
+ The JSON data contains the following key-value pairs:
+
+ | Key | Type | Description |
+ |-------------------|---------|------------------------------------------------------------------|
+ | `index` | Integer | Unique identifier for the data entry (e.g. `1221`) |
+ | `folder` | String | Directory name where the scene is stored (e.g. `"05_volleyball"`) |
+ | `category` | String | Task category (e.g. `"counting"`) |
+ | `pair_idx` | String | Index of the corresponding paired question (if applicable) |
+ | `image_path` | List | Array of input image paths |
+ | `question` | String | Natural language query about the scene |
+ | `A`/`B`/`C` | String | Multiple-choice options |
+ | `answer` | String | Correct option label (e.g. `"B"`) |
+ | `sourced_dataset` | String | Source dataset name (e.g. `"EgoHumans"`) |
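+
+ For example, to read `data.json` directly and look up a question's paired counterpart (a sketch, assuming `pair_idx` references the partner entry's `index`):
+
+ ```python
+ import json
+
+ with open("All-Angles-Bench/data.json", encoding="utf-8") as f:
+     entries = json.load(f)
+
+ # Index entries by their unique `index` field (as strings, to match `pair_idx`).
+ by_index = {str(e["index"]): e for e in entries}
+
+ entry = entries[0]
+ pair = by_index.get(str(entry.get("pair_idx", "")))
+ print(entry["category"], entry["question"])
+ if pair is not None:
+     print("paired with:", pair["question"])
+ ```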
+
+ ## Citation
+
+ <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
+
+ ```bibtex
+ @article{yeh2025seeing,
+   title={Seeing from Another Perspective: Evaluating Multi-View Understanding in MLLMs},
+   author={Chun-Hsiao Yeh and Chenyu Wang and Shengbang Tong and Ta-Ying Cheng and Ruoyu Wang and Tianzhe Chu and Yuexiang Zhai and Yubei Chen and Shenghua Gao and Yi Ma},
+   journal={arXiv preprint arXiv:2504.15280},
+   year={2025}
+ }
+ ```
+
+ ## Acknowledgements
+ You may refer to the related work that serves as the foundation for our framework and code repository:
+ [EgoHumans](https://github.com/rawalkhirodkar/egohumans),
+ [Ego-Exo4D](https://github.com/facebookresearch/Ego4d),
+ [VLMEvalKit](https://github.com/open-compass/VLMEvalKit).
+ Thanks for their wonderful work and data.
folder_list.txt ADDED
@@ -0,0 +1,83 @@
+ uniandes_bouldering_012_29
+ georgiatech_bike_15_4
+ georgiatech_cooking_11_02_6
+ utokyo_omelet_5_1001_4
+ utokyo_pcr_2001_23_2
+ indiana_cooking_23_2
+ nus_cooking_17_5
+ indiana_bike_11_9
+ nus_cpr_25_1
+ georgiatech_covid_09_2
+ cmu_soccer03_1
+ minnesota_cooking_060_2
+ sfu_cooking027_9
+ iiith_piano_003_2
+ cmu_soccer12_2
+ utokyo_salad_12_1018_4
+ cmu_bike20_3
+ cmu_bike21_3
+ cmu_bike18_6
+ utokyo_sushi_4_1008_4
+ sfu_cooking032_3
+ utokyo_soccer_8000_17_6
+ cmu_bike19_5
+ cmu_bike15_3
+ cmu_bike14_3
+ unc_music_04-26-23_02_11
+ cmu_bike16_3
+ cmu_bike17_3
+ sfu_cooking031_10
+ iiith_guitar_002_10
+ cmu_bike11_5
+ cmu_bike12_5
+ fair_cooking_05_6
+ sfu_cooking024_2
+ sfu_cooking022_2
+ cmu_bike01_7
+ sfu_cooking029_2
+ sfu_cooking015_2
+ cmu_soccer08_3
+ sfu_cooking030_6
+ cmu_bike06_2
+ cmu_soccer11_2
+ nus_covidtest_49_1
+ sfu_basketball016_5
+ sfu_basketball_04_2
+ sfu_basketball014_9
+ uniandes_dance_020_31
+ sfu_basketball017_9
+ unc_soccer_09-22-23_01_13
+ indiana_music_04_2
+ minnesota_rockclimbing_021_32
+ sfu_basketball012_12
+ iiith_cooking_90_2
+ unc_basketball_03-31-23_02_38
+ fair_bike_06_7
+ uniandes_basketball_005_10
+ utokyo_cpr_2005_28_2
+ nus_soccer_17_3
+ sfu_cooking_005_4
+ cmu_bike08_2
+ cmu_bike09_4
+ iiith_soccer_033_6
+ sfu_covid_013_12
+ cmu_bike05_2
+ cmu_bike02_5
+ sfu_basketball015_7
+ cmu_soccer16_6
+ cmu_soccer15_5
+ cmu_bike13_4
+ uniandes_cooking_001_5
+ sfu_basketball013_28
+ cmu_bike10_7
+ sfu_cooking020_8
+ sfu_cooking026_4
+ sfu_cooking023_8
+ sfu_cooking025_5
+ sfu_cooking028_8
+ cmu_bike03_2
+ cmu_soccer06_6
+ cmu_soccer07_2
+ sfu_cooking017_4
+ cmu_soccer09_2
+ cmu_soccer14_4
scripts/json2tsv_pair.py ADDED
@@ -0,0 +1,60 @@
+ import json
+ import csv
+ import argparse
+ from pathlib import Path
+
+ def process_question(question, A, B, C):
+     # Strip the embedded option strings (and stray commas) out of the question text.
+     options = [A, B, C]
+     question_no_options = question.replace(A, "").replace(B, "").replace(C, "")
+     question_no_options = question_no_options.strip().replace(",", "").strip()
+
+     # Pad with empty strings so every row carries exactly three option columns.
+     while len(options) < 3:
+         options.append("")
+     return question_no_options, options
+
+ def main(json_path):
+     json_path = Path(json_path).resolve()
+     base_path = json_path.parent  # All-Angles-Bench/
+     output_dir = base_path / "bench_tsv"
+     output_dir.mkdir(parents=True, exist_ok=True)
+     output_path = output_dir / "all_angles_bench_huggingface.tsv"
+
+     with open(json_path, 'r', encoding='utf-8') as f:
+         data = json.load(f)
+
+     tsv_data = []
+
+     for entry in data:
+         folder = entry["folder"]
+         category = entry["category"]
+         image_paths = entry["image_path"]
+         # Resolve the relative image paths against the benchmark root.
+         modified_input_images = [str(base_path / p) for p in image_paths]
+
+         question = entry["question"]
+         answer = entry["answer"]
+         index = entry["index"]
+         pair_idx = entry["pair_idx"]
+
+         A = entry["A"]
+         B = entry["B"]
+         C = entry["C"]
+         question_no_options, options = process_question(question, A, B, C)
+
+         tsv_data.append([
+             index, folder, category, pair_idx, modified_input_images,
+             question_no_options
+         ] + options + [answer])
+
+     with open(output_path, 'w', newline='', encoding='utf-8') as tsv_file:
+         tsv_writer = csv.writer(tsv_file, delimiter='\t')
+         tsv_writer.writerow(['index', 'folder', 'category', 'pair_idx', 'image_path', 'question', 'A', 'B', 'C', 'answer'])
+         tsv_writer.writerows(tsv_data)
+
+     print(f"Saved TSV to {output_path}")
+
+ if __name__ == "__main__":
+     parser = argparse.ArgumentParser()
+     parser.add_argument('--input', required=True, help='Path to the data.json file (e.g., All-Angles-Bench/data.json)')
+     args = parser.parse_args()
+     main(args.input)
scripts/process_ego4d_exo.py ADDED
@@ -0,0 +1,89 @@
+ import shutil
+ import argparse
+ from pathlib import Path
+ import cv2
+
+ def extract_frames_from_video(video_path, output_folder, num_frames=16):
+     # Sample `num_frames` frames at a uniform interval across the video.
+     cap = cv2.VideoCapture(str(video_path))
+     total_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
+     interval = max(1, total_frames // num_frames)  # guard against very short clips
+
+     output_folder.mkdir(parents=True, exist_ok=True)
+
+     for i in range(num_frames):
+         cap.set(cv2.CAP_PROP_POS_FRAMES, i * interval)
+
+         ret, frame = cap.read()
+         if not ret:
+             break
+
+         frame_filename = output_folder / f"frame{i + 1}_{Path(video_path).stem}.jpg"
+         cv2.imwrite(str(frame_filename), frame)
+
+     cap.release()
+
+ def process_subfolders(root_path, folder_list, output_root):
+     # Extract frames from every downscaled camera video belonging to a
+     # take listed in folder_list.txt.
+     for subfolder in Path(root_path).iterdir():
+         if not subfolder.is_dir():
+             continue
+         if subfolder.name not in folder_list:
+             continue
+
+         target_folder = subfolder / "frame_aligned_videos/downscaled/448"
+
+         if target_folder.exists() and target_folder.is_dir():
+             for video_file in target_folder.glob("cam*.mp4"):
+                 print(f"Processing video: {video_file}")
+                 new_subfolder = Path(output_root) / subfolder.name
+                 new_subfolder.mkdir(parents=True, exist_ok=True)
+
+                 extract_frames_from_video(video_file, new_subfolder)
+
+ def move_and_rename_frames_from_file(file_path, destination_root, source_root):
+     # Keep the second sampled frame of each camera and rename it to <camera>.jpg.
+     with open(file_path, 'r') as f:
+         folder_list = [line.strip() for line in f]
+
+     for folder in folder_list:
+         folder_path = Path(source_root) / folder
+         if folder_path.exists() and folder_path.is_dir():
+             target_folder = Path(destination_root) / folder_path.name
+             target_folder.mkdir(parents=True, exist_ok=True)
+
+             for frame_file in folder_path.glob("frame2_*.jpg"):
+                 new_filename = frame_file.name.replace("frame2_", "")
+                 destination_file = target_folder / new_filename
+
+                 shutil.move(str(frame_file), str(destination_file))
+                 print(f"Moved and renamed {frame_file} to {destination_file}")
+         else:
+             print(f"Folder {folder} does not exist or is not a directory.")
+
+ if __name__ == "__main__":
+     parser = argparse.ArgumentParser()
+     parser.add_argument('--input', required=True, help='Path to All-Angles-Bench (root folder)')
+     args = parser.parse_args()
+
+     root_input = Path(args.input)
+     folder_list_file = root_input / "folder_list.txt"
+     input_path = root_input / "takes"
+     output_root = root_input / "extracted_frames_huggingface"
+     destination_root = root_input / "ego_exo4d_scenes"
+
+     with open(folder_list_file, 'r') as f:
+         folder_list = [line.strip() for line in f]
+
+     process_subfolders(input_path, folder_list, output_root)
+     print("Frame extraction completed!")
+
+     move_and_rename_frames_from_file(folder_list_file, destination_root, output_root)
+     print("Files moved and renamed successfully!")
+
+     # Clean up the intermediate extraction folder.
+     if output_root.exists() and output_root.is_dir():
+         shutil.rmtree(output_root)
+         print(f"Removed intermediate folder: {output_root}")