Update README.md

README.md (CHANGED)

---
annotations_creators:
- machine-generated
language: en
license: mit
size_categories:
- n<1K
task_categories:
- object-detection
- image-segmentation
task_ids: []
pretty_name: Food Waste Dataset with FiftyOne
tags:
- fiftyone
- image
- object-detection
- food-waste
- segmentation
- nutrition
- sustainability
dataset_summary: 'A computer vision dataset containing 375 images of meals with detailed nutritional information, ingredient segmentation, and food waste measurements. The dataset includes before/after consumption data to study food waste patterns and nutritional content analysis.'
---

# Dataset Card for Food Waste Dataset

This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 375 samples focused on food waste analysis and nutritional content detection.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/613b0a62a14099d5afed7830/kD9jJKJqxKtxo9m8sendX.png)

## Installation

If you haven't already, install FiftyOne:

```bash
pip install -U fiftyone
```

## Usage

```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub

# Load the dataset
# Note: other available arguments include 'max_samples', etc.
dataset = load_from_hub("andandandand/food-waste-dataset")

# Launch the App
session = fo.launch_app(dataset)
```
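
If you only want a quick preview, the `max_samples` argument mentioned in the comment above limits how many samples are downloaded; a minimal sketch (the sample count here is arbitrary):

```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub

# Load a small preview of the dataset for quick exploration
preview = load_from_hub(
    "andandandand/food-waste-dataset",
    max_samples=25,  # arbitrary preview size
)

session = fo.launch_app(preview)
```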

## Dataset Details

### Dataset Description

This dataset contains detailed information about food waste, combining visual data with comprehensive nutritional measurements. Each sample includes an image of a meal along with ingredient-level nutritional information measured both before and after consumption, enabling food waste analysis and nutritional content detection.

The dataset has been enhanced with:

- **YOLO-E segmentation** for ingredient detection and segmentation
- **DINOv2 embeddings** for visual similarity analysis
- **Translated ingredient names** from German to English
- **Nutritional metadata** including calories, fats, proteins, carbohydrates, and salt content

- **Curated by:** L. Stroetmann, a la QUARTO, AI Service Center at HPI (Hasso Plattner Institute), Voxel51
- **Enhanced by:** FiftyOne computer vision pipeline
- **Language(s):** English (translated from German)
- **License:** MIT

### Dataset Sources

- **Original Repository:** [AI-ServicesBB/food-waste-dataset](https://huggingface.co/datasets/AI-ServicesBB/food-waste-dataset)
- **Processing Code:** Available in the accompanying Jupyter notebook
- **Enhanced Version:** Includes segmentation masks and embeddings
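
These enhancements are stored as regular FiftyOne fields, so they can be inspected directly after loading. A minimal sketch; the exact field names are documented in the Dataset Structure section below:

```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub

dataset = load_from_hub("andandandand/food-waste-dataset")

# The schema printout lists every field, including the YOLO-E
# segmentations, DINOv2 embeddings, and nutritional metadata
print(dataset)

# Inspect one sample to see the translated ingredient names and
# per-plate nutritional values alongside the image filepath
sample = dataset.first()
print(sample)
```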

## Uses

### Direct Use

This dataset is suitable for:

- **Food waste analysis** and sustainability research
- **Nutritional content detection** from images
- **Ingredient segmentation** and recognition
- **Computer vision model training** for food-related tasks
- **Multi-modal learning** combining visual and nutritional data
- **Food portion estimation** and consumption analysis
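
For example, the ingredient annotations can be exported for model training. A minimal sketch, assuming the `yoloe_segmentation` field documented in the Dataset Structure section below holds detections-style labels; the export directory is illustrative:

```python
import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub

dataset = load_from_hub("andandandand/food-waste-dataset")

# Export images and ingredient annotations in COCO format for training
# a detection/segmentation model on food-related classes
dataset.export(
    export_dir="/tmp/food-waste-coco",
    dataset_type=fo.types.COCODetectionDataset,
    label_field="yoloe_segmentation",
)
```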

### Out-of-Scope Use

This dataset should not be used for:

- Medical diagnosis or personalized dietary recommendations
- Commercial food recognition without proper validation
- Applications requiring real-time nutritional analysis without expert oversight
- Any use that could promote harmful eating behaviors

## Dataset Structure

The dataset contains 375 samples split into train and test sets, with each sample containing:

### Image Data

- **filepath**: Path to the meal image
- **metadata**: Image dimensions, format, and technical details

### Nutritional Information (Per Ingredient)

- **ingredient_name**: Name of each ingredient (translated to English)
- **article_number**: Unique identifier for ingredients
- **number_of_portions**: Portion count
- **weight_per_portion**: Weight per individual portion
- **weight_per_plate**: Total weight on plate
- **kcal_per_plate**, **kj_per_plate**: Caloric content
- **fat_per_plate**, **saturated_fat_per_plate**: Fat content
- **carbohydrates_per_plate**, **sugar_per_plate**: Carbohydrate content
- **protein_per_plate**: Protein content
- **salt_per_plate**: Salt content

### Before/After Consumption Measurements

- **weight_before/after**: Total meal weight
- **kcal_before/after**: Total calories
- **fat_before/after**: Total fat content
- **carbohydrates_before/after**: Total carbohydrates
- **protein_before/after**: Total protein
- **salt_before/after**: Total salt

### Food Waste Metrics

- **return_quantity**: Amount of food returned/wasted
- **return_percentage**: Percentage of food wasted

### Computer Vision Annotations

- **yoloe_segmentation**: Ingredient segmentation masks from YOLO-E
- **segment_embeddings**: DINOv2 embeddings for segmented regions
- **dinov2-image-embeddings**: Full image embeddings
- **similarity indices**: For content-based search and analysis
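
These fields can be queried with FiftyOne views and aggregations. A minimal sketch, assuming `return_percentage` is a numeric percentage and `yoloe_segmentation` is a detections-style label field:

```python
import fiftyone as fo
from fiftyone import ViewField as F
from fiftyone.utils.huggingface import load_from_hub

dataset = load_from_hub("andandandand/food-waste-dataset")

# Summary statistics for the food waste metric
print(dataset.bounds("return_percentage"))
print(dataset.mean("return_percentage"))

# Label histogram over the YOLO-E ingredient segmentations
print(dataset.count_values("yoloe_segmentation.detections.label"))

# View only the meals where more than half of the food was returned
high_waste = dataset.match(F("return_percentage") > 50)
session = fo.launch_app(high_waste)
```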

## Dataset Creation

The Google Colab notebook used to curate and produce the dataset is available here:

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/andandandand/practical-computer-vision-with-pytorch-mooc/blob/main/Food_Dataset_Curation_with_Fiftyone.ipynb)

### Curation Rationale

This dataset was created to support research in food waste reduction and nutritional analysis. By combining visual data with detailed nutritional measurements, it enables the development of computer vision systems that can:

- Automatically detect and quantify food waste
- Estimate nutritional content from images
- Analyze consumption patterns
- Support sustainability initiatives in food service

### Source Data

https://huggingface.co/datasets/AI-ServicesBB/food-waste-dataset

#### Data Collection and Processing

The original dataset was collected by L. Stroetmann, a la QUARTO, and the AI Service Center at HPI and contained:

- Images of meals in German food service settings
- Detailed nutritional information in German
- Before and after consumption measurements

Processing steps included:

1. **Translation**: German ingredient names and field names translated to English
2. **Segmentation**: YOLO-E model applied for ingredient detection
3. **Embeddings**: DINOv2 model used for visual feature extraction
4. **Similarity indexing**: Computed for both full images and segmented regions
5. **Metadata computation**: Image technical details extracted
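
Steps 3, 4, and 5 can be reproduced along these lines with FiftyOne; the zoo model name, embeddings field, and brain key below are illustrative choices, not necessarily the ones used in the curation notebook:

```python
import fiftyone.brain as fob
import fiftyone.zoo as foz
from fiftyone.utils.huggingface import load_from_hub

dataset = load_from_hub("andandandand/food-waste-dataset")

# Step 5: compute image metadata (dimensions, format, etc.)
dataset.compute_metadata()

# Step 3: DINOv2 embeddings for every image
dinov2 = foz.load_zoo_model("dinov2-vitl14-torch")
dataset.compute_embeddings(dinov2, embeddings_field="dinov2-image-embeddings")

# Step 4: similarity index over the image embeddings
fob.compute_similarity(
    dataset,
    embeddings="dinov2-image-embeddings",
    brain_key="img_sim",  # illustrative brain key
)
```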

#### Who are the source data producers?

The original data was produced by the AI Service Center at the Hasso Plattner Institute (HPI) as part of food waste research initiatives.

### Annotations

#### Annotation process

- **Ingredient Translation**: Manual mapping of 40+ German ingredient names to English equivalents
- **Segmentation**: Automated using a YOLO-E model applied to food ingredients
- **Embedding Generation**: Automated using the DINOv2 vision transformer
- **Quality Control**: Visual inspection of segmentation results

#### Who are the annotators?

- **Translation**: Manual annotation by the dataset curator
- **Segmentation**: YOLO-E model (yoloe-11s-seg.pt)
- **Embeddings**: DINOv2-ViT-L14 model
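
The segmentation step can be reproduced roughly as follows with the Ultralytics YOLOE API, prompting the checkpoint named above with ingredient names. This is a sketch under assumptions: the prompt list is an illustrative subset, and the exact prompting used during curation may differ:

```python
from ultralytics import YOLOE

import fiftyone as fo
from fiftyone.utils.huggingface import load_from_hub

dataset = load_from_hub("andandandand/food-waste-dataset")

# Text-promptable YOLO-E segmentation checkpoint named above
model = YOLOE("yoloe-11s-seg.pt")

# Prompt the model with ingredient names (illustrative subset of the ~40)
classes = ["rice", "potatoes", "green beans", "carrots", "meatballs"]
model.set_classes(classes, model.get_text_pe(classes))

# Segment one meal image from the dataset and display the result
sample = dataset.first()
results = model.predict(sample.filepath)
results[0].show()
```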

## Technical Details

### Ingredients Covered

The dataset covers 40+ food ingredients, including:

- Proteins: meatballs, fish fillet, chicken, beef, pork, sausages
- Carbohydrates: rice, potatoes, bread dumplings, spaetzle
- Vegetables: green beans, carrots, cabbage, cauliflower, peas
- Sauces and condiments: various gravies, mustard sauce, dressings
- Dairy: cream, vegetable-based cream alternatives

### Pre-computed Artifacts

The dataset ships with the following pre-computed artifacts:

- **Segmentation masks** with ingredient-level precision
- **Visual embeddings** enabling similarity search
- **UMAP visualization** for dataset exploration
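
The embedding map can be explored or recomputed with the FiftyOne Brain. A minimal sketch; the UMAP method requires `umap-learn`, and the brain key below is an illustrative name rather than the one shipped with the dataset:

```python
import fiftyone as fo
import fiftyone.brain as fob
from fiftyone.utils.huggingface import load_from_hub

dataset = load_from_hub("andandandand/food-waste-dataset")

# List any brain runs (similarity/visualization) included with the download
print(dataset.list_brain_runs())

# Recompute a 2D UMAP layout from the precomputed image embeddings
fob.compute_visualization(
    dataset,
    embeddings="dinov2-image-embeddings",
    method="umap",
    brain_key="dinov2_umap",  # illustrative brain key
)

# Open the App and use the Embeddings panel to explore the layout
session = fo.launch_app(dataset)
```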

## Bias, Risks, and Limitations

### Limitations

- **Cultural bias**: Dataset reflects a German food service context
- **Ingredient coverage**: Limited to ~40 common ingredients
- **Portion size**: Focused on institutional serving sizes
- **Image quality**: Consistent lighting/background conditions
- **Temporal scope**: Snapshot data, not a longitudinal study

### Risks

- **Nutritional accuracy**: Automated estimates should not replace professional dietary advice
- **Generalization**: Model performance may vary across food cultures and preparations
- **Privacy**: While anonymized, institutional food service data patterns might be identifiable

### Recommendations

Users should:

- Validate nutritional estimates with professional dietary knowledge
- Consider the cultural context; the dataset was collected in Germany
- Use appropriate evaluation metrics for food waste applications
- Acknowledge dataset limitations in publications and applications

## Citation

If you use this dataset, please cite both the original source and the enhanced version:

**Original Dataset:**

```bibtex
@dataset{hpi_food_waste_2024,
  title={Food Waste Dataset},
  author={Felix Boelter and Felix Venner},
  year={2024},
  url={https://huggingface.co/datasets/AI-ServicesBB/food-waste-dataset}
}
```

**Enhanced Version:**

```bibtex
@dataset{food_waste_fiftyone_2024,
  title={Food Waste Dataset with FiftyOne Enhancements},
  author={Felix Boelter and Felix Venner and Antonio Rueda-Toicen},
  year={2024},
  url={https://huggingface.co/datasets/andandandand/food-waste-dataset}
}
```

## More Information

For technical details about the processing pipeline, see the accompanying Google Colab notebook. The dataset supports a range of computer vision tasks and can be explored interactively in the FiftyOne App.

### Related Work

- FiftyOne: Open-source tool for dataset curation and model analysis
- YOLO-E: Open-vocabulary object detection and segmentation
- DINOv2: Self-supervised vision transformer for embeddings
- Food waste reduction and sustainability research

## Dataset Card Contact

Antonio Rueda-Toicen

For questions about the original dataset, please contact the AI Service Center at HPI.