ftaioli e-zorzi committed on
Commit a20f01f · verified · 1 Parent(s): 35d099f

Update README with description, usage and metadata (#7)


- Update README with description, usage and metadata (d254efb7a82d40d760f2306c65773a3eadc6393c)


Co-authored-by: Edoardo <[email protected]>

Files changed (1)
  1. README.md +81 -34
README.md CHANGED
@@ -1,34 +1,40 @@
- ---
- task_categories:
- - question-answering
- - zero-shot-classification
- pretty_name: I Don't Know Visual Question Answering
- dataset_info:
-   features:
-   - name: image
-     dtype: image
-   - name: question
-     dtype: string
-   - name: answers
-     struct:
-     - name: I don't know
-       dtype: int64
-     - name: 'No'
-       dtype: int64
-     - name: 'Yes'
-       dtype: int64
-   splits:
-   - name: val
-     num_bytes: 395276320.0
-     num_examples: 502
-   download_size: 40823223
-   dataset_size: 395276320.0
- configs:
- - config_name: default
-   data_files:
-   - split: val
-     path: data/val-*
- ---

  # I Don't Know Visual Question Answering - IDKVQA dataset - ICCV 25

@@ -37,20 +43,61 @@ configs:
  We introduce IDKVQA, an embodied dataset specifically designed and annotated for visual question answering using the agent’s observations during navigation,
  where the answer includes not only ```Yes``` and ```No```, but also ```I don’t know```.
  ## Dataset Details
- Please see our ICCV 25 accepted paper: [```Collaborative Instance Object Navigation: Leveraging Uncertainty-Awareness to Minimize Human-Agent Dialogues.```](https://arxiv.org/abs/2412.01250)

  For more information, visit our [Github repo.](https://github.com/intelligolabs/CoIN)

  ### Dataset Description

  <!-- Provide a longer summary of what this dataset is. -->

- - **Curated by:** [Francesco Taioli](https://francescotaioli.github.io/) and Edoardo Zorzi.

- <!-- ## Uses -->

  <!-- Address questions around how the dataset is intended to be used. -->
 
+ ---
+ task_categories:
+ - question-answering
+ - zero-shot-classification
+ pretty_name: I Don't Know Visual Question Answering
+ dataset_info:
+   features:
+   - name: image
+     dtype: image
+   - name: question
+     dtype: string
+   - name: answers
+     struct:
+     - name: I don't know
+       dtype: int64
+     - name: 'No'
+       dtype: int64
+     - name: 'Yes'
+       dtype: int64
+   splits:
+   - name: val
+     num_bytes: 395276320
+     num_examples: 502
+   download_size: 40823223
+   dataset_size: 395276320
+ configs:
+ - config_name: default
+   data_files:
+   - split: val
+     path: data/val-*
+ license: apache-2.0
+ language:
+ - en
+ tags:
+ - VQA
+ - Multimodal
+ ---

  # I Don't Know Visual Question Answering - IDKVQA dataset - ICCV 25

  We introduce IDKVQA, an embodied dataset specifically designed and annotated for visual question answering using the agent’s observations during navigation,
  where the answer includes not only ```Yes``` and ```No```, but also ```I don’t know```.
  ## Dataset Details
+ Please see our ICCV 25 accepted paper: [```Collaborative Instance Object Navigation: Leveraging Uncertainty-Awareness to Minimize Human-Agent Dialogues```](https://arxiv.org/abs/2412.01250)

  For more information, visit our [Github repo.](https://github.com/intelligolabs/CoIN)

+ **Curated by:** [Francesco Taioli](https://francescotaioli.github.io/) and [Edoardo Zorzi](https://huggingface.co/e-zorzi).
+
  ### Dataset Description

  <!-- Provide a longer summary of what this dataset is. -->
+ The dataset contains 502 rows and a single split ('val').
+
+ Each row is a triple (image, question, answers), where 'image' is the image that 'question' refers to, and 'answers' is a dictionary mapping each possible answer (```Yes```, ```No```, ```I don't know```) to the number of annotators who picked that answer.
+
+ ```
+ DatasetDict({
+     val: Dataset({
+         features: ['image', 'question', 'answers'],
+         num_rows: 502
+     })
+ })
+ ```
+
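Since each row stores raw annotator counts rather than a single label, a consumer has to decide how to collapse them. A minimal sketch, assuming the `answers` dict shown above (`majority_answer` is our own hypothetical helper, and ties fall back to `Counter`'s ordering, which is an arbitrary choice, not part of the dataset):

```python
from collections import Counter

def majority_answer(answers: dict) -> str:
    """Return the answer chosen by the most annotators.

    `answers` maps each option ("Yes", "No", "I don't know")
    to the number of annotators who picked it.
    """
    counts = Counter(answers)
    label, _ = counts.most_common(1)[0]
    return label

# Example using the annotator counts from the sample row shown below:
print(majority_answer({"I don't know": 0, "No": 0, "Yes": 3}))  # Yes
```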
+ ## Visualization
+
+ ```
+ from datasets import load_dataset
+
+ idkvqa = load_dataset("ftaioli/IDKVQA")
+
+ sample_index = 42
+ split = "val"
+
+ row = idkvqa[split][sample_index]
+ image = row["image"]
+ question = row["question"]
+ answers = row["answers"]
+
+ print(question)
+ print(answers)
+ image  # in a notebook, this displays the image
+ ```
+
+ You will obtain:
+
+ ```
+ Does the couch have a tufted backrest? You must answer only with Yes, No, or ?=I don't know.
+ {"I don't know": 0, 'No': 0, 'Yes': 3}
+ ```
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6621462611c923d051d62072/qC8oKkhoFqyYNY5ACTSqX.png)


+ ## Uses

+ You can use this dataset to train or test a model's visual question answering capabilities on everyday objects.

+ To reproduce the baselines from our paper [```Collaborative Instance Object Navigation: Leveraging Uncertainty-Awareness to Minimize Human-Agent Dialogues```](https://arxiv.org/abs/2412.01250), please check the README in the [official repository](https://github.com/intelligolabs/CoIN).

  <!-- Address questions around how the dataset is intended to be used. -->
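When testing a model on this split, one simple option is to credit a prediction with the fraction of annotators who chose it. The `soft_accuracy` helper below is a hypothetical sketch under that assumption, not the evaluation protocol from the paper:

```python
def soft_accuracy(prediction: str, answers: dict) -> float:
    """Fraction of annotators whose answer matches `prediction`.

    `answers` maps each option ("Yes", "No", "I don't know")
    to its annotator count, as stored in each dataset row.
    """
    total = sum(answers.values())
    if total == 0:
        return 0.0
    return answers.get(prediction, 0) / total

# Using the sample row shown above (3 of 3 annotators said "Yes"):
print(soft_accuracy("Yes", {"I don't know": 0, "No": 0, "Yes": 3}))  # 1.0
```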