---
license: cc-by-4.0
tags:
  - image-object-detection
  - fmiyc
  - out-of-distribution-detection
pretty_name: FindMeIfYouCan
task_categories:
  - object-detection
  - other
size_categories:
  - 1K<n<10K
---

# FMIYC (Find Me If You Can) Dataset

## Dataset Description

**Paper:** FindMeIfYouCan: Bringing Open Set metrics to near, far and farther Out-of-Distribution Object detection

The FMIYC (Find Me If You Can) dataset is designed for Out-of-Distribution (OOD) object detection. It comprises images and annotations derived and adapted from the COCO (Common Objects in Context) and OpenImages datasets. FMIYC curates these sources into new evaluation splits categorized as near, far, and farther from the In-Distribution (ID) data, based on semantic similarity. This categorization allows for a nuanced evaluation of OOD detection models.

This version of the dataset is structured into several configurations. Each configuration contains image files and a corresponding `metadata.jsonl` file, which details the annotations for those images.

## Supported Tasks and Leaderboards

- **Object Detection:** The dataset can be used for standard object detection tasks, with a focus on evaluating robustness to distributional shifts.
- **Out-of-Distribution Object Detection:** This is the primary task for which FMIYC is designed, allowing researchers to benchmark models on their ability to handle objects with varying semantic distances from the training distribution.


## Languages

The annotations and descriptions are primarily in English.

## Dataset Structure

The dataset is organized into several configurations, each representing a distinct subset of the data. Each configuration is loaded separately and contains a train split (though it's typically used for evaluation in the OOD context).
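
A minimal loading sketch with the 🤗 `datasets` library. The repository id below is an assumption based on where this card is hosted; substitute the correct id if the dataset lives under a different name.

```python
from datasets import load_dataset

# Assumed repository id; adjust if the dataset is hosted elsewhere.
REPO_ID = "Aymen-Bouguerra/FindMeIfYouCan"

# Each configuration is loaded by name and exposes a single "train" split,
# which is intended for OOD evaluation rather than training.
ds = load_dataset(REPO_ID, "coco_near_voc", split="train")

print(ds)          # features and number of rows
example = ds[0]    # one image plus its annotations
print(example["file_name"], example["distance_category"])
```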

### Data Instances

A typical data instance (one line in `metadata.jsonl` plus the corresponding image) looks like this:

```json
{
  "file_name": "COCO_val2014_000000000139.jpg",
  "image_id": "139",
  "height": 426,
  "width": 640,
  "dataset_origin": "COCO",
  "distance_category": "near",
  "objects": [
    {
      "id": 1,
      "area": 70362,
      "bbox_x": 300,
      "bbox_y": 100,
      "bbox_width": 200,
      "bbox_height": 350,
      "category_id": 18
    }
  ],
  "categories": [
    {"id": 1, "name": "person", "supercategory": "person"},
    {"id": 18, "name": "dog", "supercategory": "animal"}
  ]
}
```

The image itself is loaded lazily by the `datasets` library when the `image` field is accessed.

### Data Fields

Each instance in the dataset has the following fields:

- `image`: A `PIL.Image.Image` object containing the image.
- `file_name`: (string) The filename of the image.
- `image_id`: (string/int) The original unique identifier for the image in its source dataset.
- `height`: (int) Height of the image in pixels.
- `width`: (int) Width of the image in pixels.
- `dataset_origin`: (string) Source dataset, either `"COCO"` or `"OpenImages"`.
- `distance_category`: (string) Semantic distance from the ID dataset: `"near"`, `"far"`, or `"farther"`.
- `objects`: A list of dictionaries, one per annotated object, each containing:
  - `id`: (int) Unique annotation ID for this object instance.
  - `area`: (float/int) Area of the bounding box.
  - `bbox_x`: (float) The x-coordinate of the top-left corner of the bounding box.
  - `bbox_y`: (float) The y-coordinate of the top-left corner of the bounding box.
  - `bbox_width`: (float) The width of the bounding box.
  - `bbox_height`: (float) The height of the bounding box.
  - `category_id`: (int) The ID of the category this object belongs to; it maps to an entry in the `categories` list.
- `categories`: A list of dictionaries, one per object category relevant to the current configuration, each containing:
  - `id`: (int) Unique category ID.
  - `name`: (string) Category name (e.g., `"dog"`, `"car"`).
  - `supercategory`: (string) Name of the supercategory (e.g., `"animal"`, `"vehicle"`).
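
To illustrate how these fields fit together, here is a hedged sketch that overlays one example's bounding boxes on its image. It assumes `Pillow` is installed, that `ds` was loaded as in the earlier snippet, and that `objects` comes back as a list of dictionaries shaped like the instance shown above.

```python
from PIL import ImageDraw

example = ds[0]                           # `ds` loaded as in the earlier sketch
image = example["image"].convert("RGB")   # decoded lazily into a PIL image

# Map category ids to names using the per-configuration `categories` list.
id_to_name = {cat["id"]: cat["name"] for cat in example["categories"]}

draw = ImageDraw.Draw(image)
for obj in example["objects"]:
    x, y = obj["bbox_x"], obj["bbox_y"]
    w, h = obj["bbox_width"], obj["bbox_height"]
    # bbox fields give the top-left corner plus width/height.
    draw.rectangle([x, y, x + w, y + h], outline="red", width=3)
    draw.text((x, max(0, y - 12)), id_to_name[obj["category_id"]], fill="red")

image.save("example_with_boxes.jpg")
```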

### Data Splits

Each configuration has a single split, named `train`. Despite the name, these splits are typically used for evaluation in the context of OOD detection research.

### Dataset Configurations

The FMIYC dataset provides the following configurations:

- `coco_far_voc`: Images from COCO, considered "far" OOD when VOC is the ID dataset.
- `coco_farther_bdd`: Images from COCO, considered "farther" OOD when BDD is the ID dataset.
- `coco_near_voc`: Images from COCO, considered "near" OOD when VOC is the ID dataset.
- `oi_far_voc`: Images from OpenImages, considered "far" OOD when VOC is the ID dataset.
- `oi_farther_bdd`: Images from OpenImages, considered "farther" OOD when BDD is the ID dataset.
- `oi_near_voc`: Images from OpenImages, considered "near" OOD when VOC is the ID dataset.
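
For a full evaluation sweep, the configurations can be iterated by name; a sketch under the same repository-id assumption as above:

```python
from datasets import load_dataset

REPO_ID = "Aymen-Bouguerra/FindMeIfYouCan"  # assumed repository id

CONFIG_NAMES = [
    "coco_far_voc", "coco_farther_bdd", "coco_near_voc",
    "oi_far_voc", "oi_farther_bdd", "oi_near_voc",
]

for name in CONFIG_NAMES:
    # Every configuration exposes only a "train" split (used for evaluation).
    split = load_dataset(REPO_ID, name, split="train")
    print(f"{name}: {len(split)} images")
```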

## Dataset Creation

The FMIYC dataset was manually curated and enriched. The process involved selecting images and annotations from existing benchmarks, primarily COCO and OpenImages. These selections were then organized into new evaluation splits based on semantic similarity to create the "near", "far", and "farther" OOD categories. For comprehensive details on the curation methodology, semantic distance calculation, and split creation, please refer to the associated research paper.

### Source Data

The images and initial annotations are sourced from:

- COCO (Common Objects in Context): Lin et al., 2014. https://cocodataset.org/
- OpenImages: Kuznetsova et al., 2020. https://storage.googleapis.com/openimages/web/index.html

The FMIYC dataset creators do not claim ownership of the original images or annotations from COCO or OpenImages.

## Considerations for Using the Data

### Social Impact and Bias

The FMIYC dataset is a derivative work. As such, any biases present in the original COCO and OpenImages datasets (e.g., geographical, cultural, or object-class distribution biases) may be propagated to this dataset. Users should be mindful of these potential biases when training models or interpreting results. The curation process for FMIYC focuses on semantic novelty for OOD evaluation and does not explicitly mitigate biases from the source datasets.

### Limitations

The "near", "far", and "farther" categorizations are based on specific semantic similarity metrics and In-Distribution reference points (VOC, BDD). These categorizations might vary if different metrics or reference datasets are used. The dataset's primary utility is for evaluating OOD generalization, not for training OOD detection models from scratch, due to its evaluation-focused splits.

### Disclaimers

The FMIYC dataset creators do not claim ownership of the original images or annotations from COCO or OpenImages. The contribution of FMIYC lies in the novel curation, categorization, and benchmarking methodology for OOD object detection. Users of the FMIYC dataset should also be aware of and adhere to the licenses and terms of use of the original source datasets (COCO and OpenImages).

## Additional Information

### Licensing Information

The FMIYC dataset annotations and curation scripts are licensed under CC BY 4.0. The images themselves are subject to the licenses of their original sources:

- COCO: Primarily Flickr images under various licenses; refer to the COCO website for details.
- OpenImages: Images carry a variety of licenses, including CC BY 2.0; refer to the OpenImages website for details.

Users must comply with the licensing terms of both FMIYC and the original image sources.

## Citation Information

If you use the FMIYC dataset in your research, please cite the FMIYC paper:

```bibtex
@misc{Montoya_FindMeIfYouCan_YYYY,
  author    = {Montoya, Daniel and Bouguerra, Aymen and Gomez-Villa, Alexandra and Arnez, Fabio},
  title     = {FindMeIfYouCan: Bringing Open Set metrics to near, far and farther Out-of-Distribution Object detection},
  year      = {YYYY},
  publisher = {TODO: },
  url       = {TODO: }
}
```

Please also cite the original source datasets:

```bibtex
@inproceedings{lin2014microsoft,
  title        = {Microsoft {COCO}: Common objects in context},
  author       = {Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C. Lawrence},
  booktitle    = {European Conference on Computer Vision},
  pages        = {740--755},
  year         = {2014},
  organization = {Springer}
}

@article{OpenImages,
  author  = {Kuznetsova, A. and Rom, H. and Alldrin, N. and Uijlings, J. and Krasin, I. and Pont-Tuset, J. and Kamali, S. and Popov, S. and Malloci, M. and Kolesnikov, A. and Duerig, T. and Ferrari, V.},
  title   = {{The Open Images Dataset V4: Unified image classification, object detection, and visual relationship detection at scale}},
  journal = {International Journal of Computer Vision (IJCV)},
  year    = {2020},
  volume  = {128},
  pages   = {1956--1981}
}
```

## Contributions

Thanks to Daniel Montoya for creating and curating this dataset.