# Hyperspectral Imaging for Quality Assessment of Processed Foods: A Case Study on Sugar Content in Apple Jam

<img width="4271" height="2484" alt="Picture1" src="https://github.com/user-attachments/assets/524db8f0-99a2-414a-85c5-cc3b3d959f6a" />

This repository accompanies our study on **non-destructive sugar content estimation** in apple jam using **VNIR hyperspectral imaging (HSI)** and machine learning. It includes a reproducible set of Jupyter notebooks covering preprocessing, dataset construction, and model training/evaluation with classical ML and deep learning.

---

## Dataset

The Apples_HSI dataset is available on Hugging Face: [issai/Apples_HSI](https://huggingface.co/datasets/issai/Apples_HSI).

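The dataset can be fetched locally with the `huggingface_hub` client; a minimal sketch (the target directory name below is an example, not part of the dataset):

```python
# Minimal sketch: download the Apples_HSI dataset with huggingface_hub.
# The local directory name is an example; adjust to your setup.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="issai/Apples_HSI",
    repo_type="dataset",
    local_dir="Apples_HSI",  # hypothetical target folder
)
print(f"Dataset downloaded to {local_dir}")
```
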
### Dataset structure

```text
Apples_HSI/
├── Catalogs/                                      # per-cultivar & sugar-ratio sessions
│   ├── apple_jam_{cultivar}_{sugar proportion}_{apple proportion}_{date}/   # e.g., apple_jam_gala_50_50_17_Dec
│   │   ├── {sample_id}/                           # numeric sample folders (e.g., 911, 912, …)
│   │   │   ├── capture/                           # raw camera outputs + references
│   │   │   │   ├── {sample_id}.raw                # raw hyperspectral cube
│   │   │   │   ├── {sample_id}.hdr                # header/metadata for the raw cube
│   │   │   │   ├── DARKREF_{sample_id}.raw        # dark reference (raw)
│   │   │   │   ├── DARKREF_{sample_id}.hdr
│   │   │   │   ├── WHITEREF_{sample_id}.raw       # white reference (raw)
│   │   │   │   └── WHITEREF_{sample_id}.hdr
│   │   │   ├── metadata/
│   │   │   │   └── {sample_id}.xml                # per-sample metadata/annotations
│   │   │   ├── results/                           # calibrated reflectance + previews
│   │   │   │   ├── REFLECTANCE_{sample_id}.dat    # ENVI-style reflectance cube
│   │   │   │   ├── REFLECTANCE_{sample_id}.hdr
│   │   │   │   ├── REFLECTANCE_{sample_id}.png    # reflectance preview
│   │   │   │   ├── RGBSCENE_{sample_id}.png       # RGB scene snapshot
│   │   │   │   ├── RGBVIEWFINDER_{sample_id}.png
│   │   │   │   └── RGBBACKGROUND_{sample_id}.png
│   │   │   ├── manifest.xml                       # per-sample manifest
│   │   │   ├── {sample_id}.png                    # sample preview image
│   │   │   └── .validated                         # empty marker file
│   │   └── …                                      # more samples
│   └── …                                          # more cultivar/ratio/date folders
│
├── .cache/                                        # service files (upload tool)
├── ._.cache
├── ._paths.rtf
├── .gitattributes                                 # LFS rules for large files
└── paths.rtf                                      # path list (RTF)
```

## Repository structure

This repository contains:

- **Pre-processing**: `1_preprocessing.ipynb` (HSI import, calibration, SAM masking, ROI crop, grid subdivision).
- **Dataset building**: `2_dataset preparation.ipynb` (train/val/test splits, sugar-concentration and apple-cultivar splits, extraction of average spectral vectors).
- **Model training & evaluation**:
  - `3_svm.ipynb` – SVM, scaling, hyperparameter search.
  - `4_xgboost.ipynb` – XGBoost, tuning & early stopping.
  - `5_resnet.ipynb` – 1D ResNet training loops, checkpoints, metrics.

## Preprocessing → Dataset → Models (How to Run)

### 1) **Preprocessing**

Inputs to set (near the bottom of the notebook):

```python
input_root = "path/to/input"     # root that contains the dataset folders (e.g., Apples_HSI/Catalogs)
output_root = "path/to/output"   # where the NPZ files will be written
paths_txt = "path/to/paths.txt"  # text file with relative paths to .hdr files (one per line)
```

Run all cells. The notebook:
- reads `REFLECTANCE_*.hdr` with `spectral.open_image`
- builds a SAM mask (reference pixel `(255, 247)`, threshold `0.19`)
- crops the ROI and saves `cropped_{ID}.npz` under `output_root/...`

Each NPZ contains: `cube` (cropped H×W×Bands array), `offset` (`y_min`, `x_min`), and `metadata` (JSON).

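A condensed sketch of these steps is shown below. It uses the reference pixel and threshold listed above but is an illustration, not the exact notebook code; in particular, the mask polarity, metadata fields, and output paths may differ.

```python
# Hedged sketch of the preprocessing steps described above; not the exact notebook code.
import json
import numpy as np
import spectral

hdr_path = "path/to/results/REFLECTANCE_911.hdr"   # example path
cube = spectral.open_image(hdr_path).load()        # H x W x Bands reflectance cube

# Spectral Angle Mapper (SAM) against a reference pixel (here assumed to be background).
ref = np.asarray(cube[255, 247, :]).ravel()
flat = np.asarray(cube).reshape(-1, cube.shape[-1])
cos = (flat @ ref) / (np.linalg.norm(flat, axis=1) * np.linalg.norm(ref) + 1e-12)
angles = np.arccos(np.clip(cos, -1.0, 1.0)).reshape(cube.shape[:2])
mask = angles > 0.19   # assumption: pixels dissimilar to the reference are the sample; adjust if reversed

# Crop to the mask's bounding box and save as NPZ.
ys, xs = np.where(mask)
y_min, y_max, x_min, x_max = ys.min(), ys.max(), xs.min(), xs.max()
cropped = np.asarray(cube)[y_min:y_max + 1, x_min:x_max + 1, :]
np.savez(
    "cropped_911.npz",
    cube=cropped,
    offset=np.array([y_min, x_min]),
    metadata=json.dumps({"source": hdr_path}),     # placeholder metadata
)
```
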
### 2) **Dataset building**

Run all cells. The notebook:
- loads each NPZ (`np.load(path)["cube"]`)
- extracts **mean spectra per patch** for grid sizes **1, ..., 5**
- creates tables with columns `band_0..band_(B-1)`, `apple_content`, `apple_type`
- writes splits per grid:
  - **apple-based:** `{g}x{g}_train_apple.csv`, `{g}x{g}_val_apple.csv`, `{g}x{g}_test_apple.csv`
  - **rule-based:** `{g}x{g}_train_rule.csv`, `{g}x{g}_val_rule.csv`, `{g}x{g}_test_rule.csv`

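For illustration, a rough sketch of the grid subdivision and per-patch mean-spectrum extraction for a single cube; the file names and label values here are placeholders, not the notebook's outputs.

```python
# Hedged sketch of grid subdivision + per-patch mean spectra; not the exact notebook code.
import numpy as np
import pandas as pd

def mean_spectra(cube: np.ndarray, grid: int) -> np.ndarray:
    """Split the cube into a grid x grid set of patches and return one mean spectrum per patch."""
    h, w, bands = cube.shape
    ys = np.linspace(0, h, grid + 1, dtype=int)
    xs = np.linspace(0, w, grid + 1, dtype=int)
    rows = []
    for i in range(grid):
        for j in range(grid):
            patch = cube[ys[i]:ys[i + 1], xs[j]:xs[j + 1], :]
            rows.append(patch.reshape(-1, bands).mean(axis=0))
    return np.vstack(rows)

cube = np.load("cropped_911.npz")["cube"]   # example NPZ produced by preprocessing
for g in range(1, 6):                       # grid sizes 1, ..., 5
    spectra = mean_spectra(cube, g)
    df = pd.DataFrame(spectra, columns=[f"band_{b}" for b in range(spectra.shape[1])])
    df["apple_content"] = 50                # placeholder label (apple proportion)
    df["apple_type"] = "gala"               # placeholder label (cultivar)
    df.to_csv(f"example_{g}x{g}.csv", index=False)
```
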
### 3) **Model training**

**Classical ML – `3_svm.ipynb`**

Run all cells. The notebook:
- loads pre-split CSVs (e.g., `{g}x{g}_train_apple.csv`, `{g}x{g}_test_apple.csv`)
- scales inputs and targets with **MinMaxScaler**
- fits **SVR** with hyperparameters `C=110`, `epsilon=0.2`, `gamma="scale"`
- reports **RMSE / MAE / R²** on Train/Test (targets inverse-transformed)

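A minimal sketch of this setup with scikit-learn; the grid size, file names, and the use of `apple_content` as the regression target are assumptions for illustration.

```python
# Hedged sketch of the SVR pipeline; hyperparameters follow the README, file names are examples.
import numpy as np
import pandas as pd
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVR

train = pd.read_csv("3x3_train_apple.csv")
test = pd.read_csv("3x3_test_apple.csv")
band_cols = [c for c in train.columns if c.startswith("band_")]

x_scaler, y_scaler = MinMaxScaler(), MinMaxScaler()
X_train = x_scaler.fit_transform(train[band_cols])
X_test = x_scaler.transform(test[band_cols])
y_train = y_scaler.fit_transform(train[["apple_content"]]).ravel()

model = SVR(C=110, epsilon=0.2, gamma="scale")
model.fit(X_train, y_train)

# Inverse-transform predictions before computing metrics.
y_pred = y_scaler.inverse_transform(model.predict(X_test).reshape(-1, 1)).ravel()
y_true = test["apple_content"].to_numpy()
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
print(f"RMSE={rmse:.3f}  MAE={mean_absolute_error(y_true, y_pred):.3f}  R2={r2_score(y_true, y_pred):.3f}")
```
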
**Classical ML – `4_xgboost.ipynb`**

Run all cells. The notebook:
- loads Train/Val/Test CSVs and scales inputs with **MinMaxScaler**
- builds a **DMatrix** and trains with
  `objective="reg:squarederror"`, `eval_metric="rmse"`, `max_depth=2`, `eta=0.15`, `subsample=0.8`, `colsample_bytree=1.0`, `lambda=2.0`, `alpha=0.1`, `seed=42`, `num_boost_round=400`, `early_stopping_rounds=40`
- evaluates and prints **RMSE / MAE / R²** (Train/Test)

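A corresponding sketch using XGBoost's native API with the parameters listed above; file names and the target column are again examples.

```python
# Hedged sketch of the XGBoost setup; parameters follow the README, file names are examples.
import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.preprocessing import MinMaxScaler

splits = {s: pd.read_csv(f"3x3_{s}_apple.csv") for s in ("train", "val", "test")}
band_cols = [c for c in splits["train"].columns if c.startswith("band_")]

x_scaler = MinMaxScaler().fit(splits["train"][band_cols])
dmat = {
    s: xgb.DMatrix(x_scaler.transform(df[band_cols]), label=df["apple_content"])
    for s, df in splits.items()
}

params = {
    "objective": "reg:squarederror", "eval_metric": "rmse",
    "max_depth": 2, "eta": 0.15, "subsample": 0.8, "colsample_bytree": 1.0,
    "lambda": 2.0, "alpha": 0.1, "seed": 42,
}
booster = xgb.train(
    params, dmat["train"], num_boost_round=400,
    evals=[(dmat["val"], "val")], early_stopping_rounds=40,
)

y_pred = booster.predict(dmat["test"])
y_true = splits["test"]["apple_content"].to_numpy()
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
print(f"RMSE={rmse:.3f}  MAE={mean_absolute_error(y_true, y_pred):.3f}  R2={r2_score(y_true, y_pred):.3f}")
```
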
**Deep model – `5_resnet.ipynb`**

Run all cells. The notebook:
- builds a **ResNet1D** and DataLoaders (`batch_size=16`)
- trains with **Adam** (`lr=1e-3`, `weight_decay=1e-4`), **epochs=150**, and **MAE** loss
- uses a target **MinMaxScaler** (predictions are inverse-transformed before computing metrics)
- applies early stopping on **Val MAE** and saves the best checkpoint to **`best_resnet1d_model.pth`**
- reports **RMSE / MAE / R²** on the Test set

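A compact sketch of such a training loop with early stopping on validation MAE; it assumes the notebook's `ResNet1D` model, DataLoaders, and fitted target scaler are already defined, and the patience value is an assumption.

```python
# Hedged sketch of the training loop; assumes `model`, `train_loader`, `val_loader`,
# and the fitted target MinMaxScaler `y_scaler` are defined as in the notebook.
import copy
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
criterion = torch.nn.L1Loss()   # MAE loss

best_val_mae, best_state, patience, bad_epochs = float("inf"), None, 20, 0  # patience is an assumption
for epoch in range(150):
    model.train()
    for x, y in train_loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

    # Validation MAE in the original target units (inverse-transform with the scaler).
    model.eval()
    preds, targets = [], []
    with torch.no_grad():
        for x, y in val_loader:
            preds.append(model(x.to(device)).cpu())
            targets.append(y)
    preds = y_scaler.inverse_transform(torch.cat(preds).numpy().reshape(-1, 1))
    targets = y_scaler.inverse_transform(torch.cat(targets).numpy().reshape(-1, 1))
    val_mae = float(np.abs(preds - targets).mean()) if (np := __import__("numpy")) else 0.0

    if val_mae < best_val_mae:
        best_val_mae, bad_epochs = val_mae, 0
        best_state = copy.deepcopy(model.state_dict())
        torch.save(best_state, "best_resnet1d_model.pth")
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break
```
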
## Citation

If you use the dataset, source code, or pre-trained models in your research, please cite our work:

Lissovoy, D., Zakeryanova, A., Orazbayev, R., Rakhimzhanova, T., Lewis, M., Varol, H. A., & Chan, M.-Y. (2025). Hyperspectral Imaging for Quality Assessment of Processed Foods: A Case Study on Sugar Content in Apple Jam. *Foods*, 14(21), 3585. https://doi.org/10.3390/foods14213585