---
task_categories:
  - question-answering
  - zero-shot-classification
pretty_name: I Don't Know Visual Question Answering
dataset_info:
  features:
    - name: image
      dtype: image
    - name: question
      dtype: string
    - name: answers
      struct:
        - name: I don't know
          dtype: int64
        - name: 'No'
          dtype: int64
        - name: 'Yes'
          dtype: int64
  splits:
    - name: test
      num_bytes: 79527177
      num_examples: 101
    - name: train
      num_bytes: 395276383
      num_examples: 502
  download_size: 100480463
  dataset_size: 474803560
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
---

# I Don't Know Visual Question Answering (IDKVQA) - ICCV 2025

We introduce IDKVQA, an embodied dataset designed and annotated for visual question answering over an agent's observations during navigation. Unlike standard VQA, the answer space includes not only *Yes* and *No*, but also *I don't know*.

## Dataset Details

Please see our paper, accepted at ICCV 2025: *Collaborative Instance Object Navigation: Leveraging Uncertainty-Awareness to Minimize Human-Agent Dialogues*.

For more information, visit our GitHub repo.
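As a sketch of how the schema above might be consumed: each example carries an `image`, a `question`, and an `answers` struct with one integer per option. Assuming those integers are annotator vote counts (an interpretation this card does not state explicitly; check the paper for the official protocol), a simple way to derive a single label is a majority vote that abstains on ties. The Hub repository ID below is a placeholder, not the real one.

```python
from typing import Dict

# Loading sketch (placeholder repo ID, hence commented out):
# from datasets import load_dataset
# ds = load_dataset("<user>/IDKVQA")
# sample = ds["train"][0]  # {'image': ..., 'question': ..., 'answers': {...}}

def majority_label(answers: Dict[str, int]) -> str:
    """Pick the answer option with the most votes.

    Ties are broken in favour of "I don't know", on the view that
    abstaining is safer than guessing. This tie-break is our
    assumption, not necessarily the official evaluation rule.
    """
    best = max(answers.values())
    if answers.get("I don't know", 0) == best:
        return "I don't know"
    return max(answers, key=answers.get)

# Example with a made-up annotation record:
votes = {"Yes": 3, "No": 1, "I don't know": 0}
print(majority_label(votes))  # -> Yes
```

The conservative tie-break matters for this dataset in particular: its whole point is that *I don't know* is a first-class answer, so a pipeline should not silently collapse uncertain records into *Yes*/*No*.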

## Dataset Description

## Citation

**BibTeX:**

```bibtex
@misc{taioli2025collaborativeinstanceobjectnavigation,
      title={Collaborative Instance Object Navigation: Leveraging Uncertainty-Awareness to Minimize Human-Agent Dialogues},
      author={Francesco Taioli and Edoardo Zorzi and Gianni Franchi and Alberto Castellini and Alessandro Farinelli and Marco Cristani and Yiming Wang},
      year={2025},
      eprint={2412.01250},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2412.01250},
}
```