Aymen-Bouguerra committed 677ce46 (verified; parent: 07f22b2): Update README.md

Files changed (1): README.md (+11, -12)
@@ -67,9 +67,9 @@ A typical data instance (one line in `metadata.jsonl` plus the corresponding ima
     // ... more categories relevant to this configuration
   ]
 }
-
+```
 The image itself is loaded by the datasets library when accessed.
-Data Fields
+## Data Fields
 Each instance in the dataset has the following fields:
 image: A PIL.Image.Image object containing the image.
 file_name: (string) The filename of the image.
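The record structure the card describes can be sketched as a round-trip through one `metadata.jsonl` line. This is a minimal, hypothetical example: the `file_name`, `annotations`, and `categories` keys follow the fields listed in the card, but the exact keys and annotation shape in the real file may differ.

```python
import json

# Hypothetical metadata.jsonl line, shaped after the fields this card
# describes (file_name, annotations, categories); keys are illustrative.
record_line = json.dumps({
    "file_name": "000000397133.jpg",
    "annotations": [
        {"category_id": 1, "bbox": [10.0, 20.0, 50.0, 80.0]},
    ],
    "categories": [
        {"id": 1, "name": "dog", "supercategory": "animal"},
    ],
})

# Parse the line back and resolve category IDs to human-readable names.
record = json.loads(record_line)
category_names = {c["id"]: c["name"] for c in record["categories"]}
for ann in record["annotations"]:
    print(record["file_name"], category_names[ann["category_id"]], ann["bbox"])
```

When the dataset is accessed through the `datasets` library, the `image` field itself is decoded lazily into a `PIL.Image.Image`, as the card notes.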
@@ -90,9 +90,9 @@ categories: A list of dictionaries, where each dictionary defines an object cate
 id: (int) Unique category ID.
 name: (string) Category name (e.g., "dog", "car").
 supercategory: (string) Name of the supercategory (e.g., "animal", "vehicle").
-Data Splits
+## Data Splits
 Each configuration has a single split, named train. Despite the name, these splits are typically used for evaluation in the context of OOD detection research.
-Dataset Configurations
+## Dataset Configurations
 The FMIYC dataset provides the following configurations:
 coco_far_voc: Images from COCO, considered "far" OOD when VOC is the ID.
 coco_farther_bdd: Images from COCO, considered "farther" OOD when BDD is the ID.
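The six configuration names listed in this card all follow a `<source>_<distance>_<id_dataset>` pattern, which can be exploited programmatically. A small sketch, with the repository ID left as a placeholder since it is not stated here:

```python
# The six configuration names listed in this card; each follows the
# pattern <source>_<distance>_<id_dataset>.
CONFIGS = [
    "coco_near_voc", "coco_far_voc", "coco_farther_bdd",
    "oi_near_voc", "oi_far_voc", "oi_farther_bdd",
]

def parse_config(name: str) -> dict:
    """Split a configuration name into source / distance / ID-dataset parts."""
    source, distance, id_dataset = name.split("_")
    return {"source": source, "distance": distance, "id_dataset": id_dataset}

# Group the OOD configurations by their in-distribution reference dataset.
by_id: dict[str, list[str]] = {}
for cfg in CONFIGS:
    by_id.setdefault(parse_config(cfg)["id_dataset"], []).append(cfg)

print(by_id)

# To actually load one configuration (repo ID is hypothetical), something like:
#   from datasets import load_dataset
#   ds = load_dataset("<fmiyc-repo-id>", "coco_near_voc", split="train")
```

Note that, per the Data Splits section, `train` is the only split each configuration exposes, even though it is intended for evaluation.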
@@ -100,29 +100,28 @@ coco_near_voc: Images from COCO, considered "near" OOD when VOC is the ID.
 oi_far_voc: Images from OpenImages, considered "far" OOD when VOC is the ID.
 oi_farther_bdd: Images from OpenImages, considered "farther" OOD when BDD is the ID.
 oi_near_voc: Images from OpenImages, considered "near" OOD when VOC is the ID.
-Dataset Creation
+## Dataset Creation
 The FMIYC dataset was manually curated and enriched. The process involved selecting images and annotations from existing benchmarks, primarily COCO and OpenImages. These selections were then organized into new evaluation splits based on semantic similarity to create the "near", "far", and "farther" OOD categories. For comprehensive details on the curation methodology, semantic distance calculation, and split creation, please refer to the associated research paper.
-Source Data
+## Source Data
 The images and initial annotations are sourced from:
 COCO (Common Objects in Context): Lin et al., 2014. https://cocodataset.org/
 OpenImages: Kuznetsova et al., 2020. https://storage.googleapis.com/openimages/web/index.html
 The FMIYC dataset creators do not claim ownership of the original images or annotations from COCO or OpenImages.
-Considerations for Using the Data
+## Considerations for Using the Data
 Social Impact and Bias
 The FMIYC dataset is a derivative work. As such, any biases present in the original COCO and OpenImages datasets (e.g., geographical, cultural, or object class distribution biases) may be propagated to this dataset. Users should be mindful of these potential biases when training models or interpreting results. The curation process for FMIYC focuses on semantic novelty for OOD evaluation and does not explicitly mitigate biases from the source datasets.
-Limitations
+## Limitations
 The "near", "far", and "farther" categorizations are based on specific semantic similarity metrics and In-Distribution reference points (VOC, BDD). These categorizations might vary if different metrics or reference datasets are used.
 The dataset's primary utility is for evaluating OOD generalization, not for training OOD detection models from scratch, due to its evaluation-focused splits.
-Disclaimers
+## Disclaimers
 The FMIYC dataset creators do not claim ownership of the original images or annotations from COCO or OpenImages. The contribution of FMIYC lies in the novel curation, categorization, and benchmarking methodology for OOD object detection. Users of the FMIYC dataset should also be aware of and adhere to the licenses and terms of use of the original source datasets (COCO and OpenImages).
-Additional Information
-Licensing Information
+## Additional Information and Licensing Information
 The FMIYC dataset annotations and curation scripts are licensed under CC BY 4.0.
 The images themselves are subject to the licenses of their original sources:
 COCO: Primarily Flickr images, various licenses. Refer to COCO website for details.
 OpenImages: Images have a variety of licenses, including CC BY 2.0. Refer to OpenImages website for details.
 Users must comply with the licensing terms of both FMIYC and the original image sources.
-Citation Information
+## Citation Information
 If you use the FMIYC dataset in your research, please cite the FMIYC paper:
 @misc{Montoya_FindMeIfYouCan_YYYY,
 author = {Montoya, Daniel and Bouguerra, Aymen and Gomez-Villa, Alexandra and Arnez, Fabio},
 