harpreetsahota committed
Commit a3d95e0 · verified · 1 Parent(s): 738aae8

Update README.md

Files changed (1):
  1. README.md +141 -74

README.md CHANGED
@@ -46,7 +46,7 @@ dataset_summary: '
 
  # Note: other available arguments include ''max_samples'', etc
 
- dataset = load_from_hub("harpreetsahota/sku110k_test")
 
 
  # Launch the App
@@ -60,10 +60,7 @@ dataset_summary: '
 
  # Dataset Card for harpreetsahota/sku110k_test
 
- <!-- Provide a quick summary of the dataset. -->
-
-
-
 
 
  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 2936 samples.
@@ -84,141 +81,211 @@ from fiftyone.utils.huggingface import load_from_hub
 
  # Load the dataset
  # Note: other available arguments include 'max_samples', etc
- dataset = load_from_hub("harpreetsahota/sku110k_test")
 
  # Launch the App
  session = fo.launch_app(dataset)
  ```
 
  ## Dataset Details
 
  ### Dataset Description
 
- <!-- Provide a longer summary of what this dataset is. -->
 
- - **Curated by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Language(s) (NLP):** en
- - **License:** [More Information Needed]
 
- ### Dataset Sources [optional]
-
- <!-- Provide the basic links for the dataset. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
 
  ## Uses
 
- <!-- Address questions around how the dataset is intended to be used. -->
-
  ### Direct Use
 
- <!-- This section describes suitable use cases for the dataset. -->
 
- [More Information Needed]
 
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
-
- [More Information Needed]
 
  ## Dataset Structure
 
- <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
 
- [More Information Needed]
 
- ## Dataset Creation
 
- ### Curation Rationale
 
- <!-- Motivation for the creation of this dataset. -->
 
- [More Information Needed]
 
- ### Source Data
 
- <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
 
- #### Data Collection and Processing
 
- <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
 
- [More Information Needed]
 
- #### Who are the source data producers?
 
- <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
 
- [More Information Needed]
 
- ### Annotations [optional]
 
- <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
 
- #### Annotation process
 
- <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
 
- [More Information Needed]
 
- #### Who are the annotators?
 
- <!-- This section describes the people or systems who created the annotations. -->
 
- [More Information Needed]
 
- #### Personal and Sensitive Information
 
- <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
 
- [More Information Needed]
 
- ## Bias, Risks, and Limitations
 
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
 
- [More Information Needed]
 
- ### Recommendations
 
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
 
- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
 
- ## Citation [optional]
 
- <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
 
- **BibTeX:**
 
- [More Information Needed]
 
- **APA:**
 
- [More Information Needed]
 
- ## Glossary [optional]
 
- <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
 
- [More Information Needed]
 
- ## More Information [optional]
 
- [More Information Needed]
 
- ## Dataset Card Authors [optional]
 
- [More Information Needed]
 
- ## Dataset Card Contact
 
- [More Information Needed]
@@ -46,7 +46,7 @@ dataset_summary: '
 
  # Note: other available arguments include ''max_samples'', etc
 
+ dataset = load_from_hub("Voxel51/sku110k_test")
 
 
  # Launch the App
@@ -60,10 +60,7 @@ dataset_summary: '
 
  # Dataset Card for harpreetsahota/sku110k_test
 
+ ![image](sku110k.gif)
 
 
  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 2936 samples.
@@ -84,141 +81,211 @@ from fiftyone.utils.huggingface import load_from_hub
 
  # Load the dataset
  # Note: other available arguments include 'max_samples', etc
+ dataset = load_from_hub("Voxel51/sku110k_test")
 
  # Launch the App
  session = fo.launch_app(dataset)
  ```
 
+ # Dataset Card for SKU-110K (test split)
+
  ## Dataset Details
 
  ### Dataset Description
 
+ The SKU-110K dataset is a large-scale benchmark for object detection in densely packed retail scenes. It consists of 11,762 images of retail shelves from thousands of supermarkets worldwide, encompassing diverse geographic locations including the United States, Europe, and East Asia. The dataset contains over 1.73 million bounding box annotations, with an average of approximately 147 objects per image. All images have been resized to a resolution of one million pixels.
 
+ The dataset addresses the challenge of precise detection in densely packed scenes, where objects are closely positioned, often overlapping, and typically oriented within a range of [-15°, 15°]. This makes it particularly valuable for developing and evaluating object detection algorithms for real-world retail applications, where traditional detection methods often struggle due to extreme object density and occlusion.
 
+ - **Curated by:** Eran Goldman, Roei Herzig, Aviv Eisenschtat, Jacob Goldberger, and Tal Hassner
+ - **Funded by:** Trax (based on license information)
+ - **Shared by:** Research team from Bar-Ilan University and Trax
+ - **Language(s) (NLP):** Not applicable (computer vision dataset)
+ - **License:** Academic and non-commercial use only (proprietary license by Trax)
 
+ ### Dataset Sources
 
+ - **Repository:** https://github.com/eg4000/SKU110K_CVPR19
+ - **Paper:** "Precise Detection in Densely Packed Scenes" - CVPR 2019
+ - **ArXiv:** https://arxiv.org/abs/1904.00853
 
  ## Uses
 
  ### Direct Use
 
+ The SKU-110K dataset is designed for the following use cases:
 
+ - **Object Detection Research:** Training and evaluating object detection models, particularly for densely packed scenes
+ - **Retail Analytics:** Developing algorithms for automated shelf monitoring, inventory management, and planogram compliance
+ - **Benchmark Evaluation:** Comparing performance of detection algorithms in challenging, high-density scenarios
+ - **Dense Object Detection:** Research on handling extreme object density, occlusion, and scale variation
+ - **Academic Research:** Educational purposes and non-commercial research projects
 
+ The dataset is particularly suitable for:
+ - Studying detection performance in scenes with 50-200+ objects per image
+ - Developing algorithms robust to varying lighting conditions, viewpoints, and scales
+ - Research on handling closely packed objects with minimal spacing
  ## Dataset Structure
 
+ The dataset is organized into three splits with CSV annotation files:
 
+ ### Split Statistics
 
+ | Split | Images | Annotations | Avg. Objects/Image |
+ |-------|--------|-------------|-------------------|
+ | Train | 8,233 | 1,208,482 | ~147 |
+ | Validation | 588 | 90,968 | ~155 |
+ | Test | 2,941 | 431,546 | ~147 |
+ | **Total** | **11,762** | **1,730,996** | **~147** |
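The per-split figures above are internally consistent; a minimal Python check:

```python
# Sanity-check the split statistics quoted in the table above:
# (images, annotations) per split.
splits = {
    "train": (8_233, 1_208_482),
    "validation": (588, 90_968),
    "test": (2_941, 431_546),
}

total_images = sum(images for images, _ in splits.values())
total_boxes = sum(boxes for _, boxes in splits.values())

assert total_images == 11_762
assert total_boxes == 1_730_996
# Average objects per image across the full dataset rounds to 147.
assert round(total_boxes / total_images) == 147
```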
+ ### Annotation Format
 
+ The CSV annotation files contain the following columns:
 
+ - `image_name`: Filename of the image (e.g., "test_0.jpg")
+ - `x1`: X-coordinate of the top-left corner of the bounding box (pixels)
+ - `y1`: Y-coordinate of the top-left corner of the bounding box (pixels)
+ - `x2`: X-coordinate of the bottom-right corner of the bounding box (pixels)
+ - `y2`: Y-coordinate of the bottom-right corner of the bounding box (pixels)
+ - `class`: Class label (all objects are labeled "object"; there are no fine-grained categories)
+ - `image_width`: Width of the image in pixels
+ - `image_height`: Height of the image in pixels
 
+ **Note:** Each annotation appears on a separate line in the CSV file, so images with multiple objects have multiple rows.
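A minimal sketch of reading this one-annotation-per-line format with Python's standard `csv` module. The sample rows are illustrative, not taken from the actual files, and the sketch assumes the CSVs are distributed without a header row; adjust to your local copy if it differs.

```python
import csv
import io
from collections import defaultdict

# Illustrative rows in the column order listed above:
# image_name, x1, y1, x2, y2, class, image_width, image_height
sample = io.StringIO(
    "test_0.jpg,208,537,422,814,object,2448,3264\n"
    "test_0.jpg,430,540,610,810,object,2448,3264\n"
    "test_1.jpg,12,88,140,260,object,1632,2448\n"
)

FIELDS = ["image_name", "x1", "y1", "x2", "y2", "class", "image_width", "image_height"]

# Group the per-line annotations by image, since one image spans many rows.
boxes_per_image = defaultdict(list)
for row in csv.DictReader(sample, fieldnames=FIELDS):
    boxes_per_image[row["image_name"]].append(
        (int(row["x1"]), int(row["y1"]), int(row["x2"]), int(row["y2"]))
    )

print(len(boxes_per_image["test_0.jpg"]))  # 2
```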
+ ### FiftyOne Dataset Structure
 
+ The dataset has been converted to FiftyOne format with the following enhancements:
 
+ #### Base Structure
+ - **Dataset Name:** `sku110k_test` (test split)
+ - **Sample Structure:** Each sample represents one image with associated detections
+ - **Image Path:** `SKU110K_fixed/images/{image_name}`
+ - **Detection Field:** `ground_truth` (FiftyOne Detections object)
 
+ #### Bounding Box Format
+ Bounding boxes are stored in FiftyOne's normalized format:
+ - `[x, y, width, height]` where all values are in range [0, 1]
+ - `x`: Normalized x-coordinate of top-left corner (x1 / image_width)
+ - `y`: Normalized y-coordinate of top-left corner (y1 / image_height)
+ - `width`: Normalized width ((x2 - x1) / image_width)
+ - `height`: Normalized height ((y2 - y1) / image_height)
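The pixel-to-normalized conversion above is easy to get wrong; here is a small pure-Python sketch (the helper name `to_fiftyone_box` is ours, not a FiftyOne API):

```python
def to_fiftyone_box(x1, y1, x2, y2, image_width, image_height):
    """Convert absolute corner coordinates (x1, y1, x2, y2) in pixels
    to FiftyOne's normalized [x, y, width, height] format."""
    return [
        x1 / image_width,
        y1 / image_height,
        (x2 - x1) / image_width,
        (y2 - y1) / image_height,
    ]

# Example: a 214x277-pixel box in a 2448x3264 image.
box = to_fiftyone_box(208, 537, 422, 814, 2448, 3264)
assert all(0.0 <= v <= 1.0 for v in box)
```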
+ #### Enriched Fields
 
+ The FiftyOne dataset includes the following enrichments:
 
+ 1. **Bounding Box Areas** (`area` field on each detection)
+ - Computed as: `width × height` (in normalized coordinates)
+ - Range: [0, 1] representing the proportion of image covered
 
+ 2. **Detection Counts** (`num_detections` field at sample level)
+ - Integer count of objects detected in each image
+ - Useful for filtering and analyzing image complexity
 
+ 3. **RADIO Embeddings** (`radio_embeddings` field at sample level)
+ - Global semantic features extracted using C-RADIO v3-h model
+ - High-dimensional vectors capturing visual semantics
+ - Enables similarity search and clustering
 
+ 4. **UMAP Visualization** (Brain key: `radio_viz`)
+ - 2D projection of RADIO embeddings for visualization
+ - Allows exploration of visual similarity patterns
+ - Interactive visualization in FiftyOne App
 
+ 5. **Attention Heatmaps** (`radio_heatmap` field at sample level)
+ - Spatial attention maps from C-RADIO v3-h model
+ - Generated with smoothing (sigma=0.51)
+ - Format: NCHW (channels first)
+ - Highlights salient regions in each image
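The first two enrichments are simple functions of the normalized ground-truth boxes; a pure-Python sketch with hypothetical box values:

```python
# Hypothetical normalized [x, y, width, height] boxes for one sample.
boxes = [
    [0.10, 0.20, 0.08, 0.09],
    [0.25, 0.20, 0.07, 0.10],
    [0.40, 0.21, 0.08, 0.09],
]

# Enrichment 1: per-detection area = width x height in normalized
# coordinates, i.e. the fraction of the image each box covers.
areas = [w * h for _, _, w, h in boxes]
assert all(0.0 <= a <= 1.0 for a in areas)

# Enrichment 2: sample-level detection count.
num_detections = len(boxes)
```

In FiftyOne itself, a sample-level field like `num_detections` can drive views such as `dataset.match(F("num_detections") > 150)` (with `from fiftyone import ViewField as F`) to isolate the densest shelf images.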
+ ## Dataset Creation
 
+ ### Curation Rationale
 
+ The SKU-110K dataset was created to address a critical gap in object detection research: the lack of large-scale datasets for densely packed scenes. While existing datasets like COCO and Pascal VOC contain object detection annotations, they typically feature relatively sparse scenes with well-separated objects. Real-world retail environments present unique challenges:
 
+ - **Extreme Density:** Shelves contain 50-200+ products in close proximity
+ - **Heavy Occlusion:** Objects frequently overlap and obscure one another
+ - **Scale Variation:** Products vary greatly in size within the same scene
+ - **Orientation Patterns:** Most objects are aligned within the [-15°, 15°] range
 
+ The dataset enables research on precise localization and detection algorithms capable of handling these challenging conditions, with applications in automated retail analytics, inventory management, and planogram compliance.
 
+ ### Source Data
 
+ #### Data Collection and Processing
 
+ - **Collection Method:** Images captured from thousands of supermarket stores worldwide
+ - **Geographic Diversity:** United States, Europe, and East Asia
+ - **Scene Variation:** Diverse scales, viewpoints, lighting conditions, and noise levels
+ - **Image Processing:** All images resized to one million pixels for consistency
+ - **Quality Control:** Images selected to represent challenging, densely packed scenarios
+ - **Annotation Tool:** Manual annotation using bounding box annotation software
+ - **Format:** CSV files with one annotation per line
 
+ The dataset focuses on "in-the-wild" conditions with natural variations in:
+ - Camera angles and distances
+ - Lighting (fluorescent, natural, mixed)
+ - Shelf arrangements and product placement
+ - Image quality and noise levels
 
+ #### Who are the source data producers?
+
+ The source images were captured from retail stores operated by various supermarket chains across multiple continents. The images represent real retail environments and were collected through Trax, a retail technology company specializing in computer vision solutions for in-store execution.
 
+ ### Annotations
 
+ #### Annotation process
 
+ - **Annotation Type:** Manual bounding box annotation
+ - **Annotation Guidelines:** Annotators were instructed to draw tight bounding boxes around each visible product on retail shelves
+ - **Class Labels:** All objects labeled uniformly as "object" (no product-level categorization)
+ - **Annotation Density:** Average of 147 bounding boxes per image, with some images containing 200+ annotations
+ - **Quality Assurance:** Manual review and validation process to ensure annotation accuracy
+ - **Tools Used:** Professional annotation tools for computer vision tasks
+ - **Completeness:** All visible products in each image were annotated
 
+ **Note:** The dataset does not include fine-grained product categories or SKU-level identification. All objects are labeled with a single "object" class, making this a class-agnostic detection task focused on localization precision rather than classification.
 
+ #### Who are the annotators?
 
+ The annotations were created by trained professional annotators working with the research team. Specific demographic information about the annotators is not publicly available. The annotation process was conducted with quality control measures to ensure consistency and accuracy across the large annotation volume (1.7M+ bounding boxes).
 
+ #### Personal and Sensitive Information
 
+ The dataset consists of images of retail shelf scenes containing packaged products. The images do not intentionally capture or focus on people. However, users should be aware that:
 
+ - Retail environments are public spaces where incidental capture of individuals may occur
+ - Product brands and packaging visible in images are proprietary to their respective manufacturers
+ - Store layouts and product arrangements may be considered proprietary information
 
+ The dataset is provided with restrictions on redistribution and commercial use to protect potential proprietary interests.
 
+ ## Citation
 
+ ### BibTeX
 
+ ```bibtex
+ @inproceedings{goldman2019dense,
+   author = {Eran Goldman and Roei Herzig and Aviv Eisenschtat and Jacob Goldberger and Tal Hassner},
+   title = {Precise Detection in Densely Packed Scenes},
+   booktitle = {Proc. Conf. Comput. Vision Pattern Recognition (CVPR)},
+   year = {2019}
+ }
+ ```
 
+ ### APA
 
+ Goldman, E., Herzig, R., Eisenschtat, A., Goldberger, J., & Hassner, T. (2019). Precise Detection in Densely Packed Scenes. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)* (pp. 5227-5236).
 
+ ## More Information
 
+ ### Additional Resources
 
+ - **GitHub Repository:** https://github.com/eg4000/SKU110K_CVPR19
+ - **ArXiv Paper:** https://arxiv.org/abs/1904.00853
+ - **FiftyOne Documentation:** https://docs.voxel51.com/
+ - **RADIO Model:** https://github.com/harpreetsahota204/NVLabs_CRADIOV3