# Hyperspectral Imaging for Quality Assessment of Processed Foods: A Case Study on Sugar Content in Apple Jam
<img width="4271" height="2484" alt="Picture1" src="https://github.com/user-attachments/assets/524db8f0-99a2-414a-85c5-cc3b3d959f6a" />

This repository accompanies our study on **non-destructive sugar content estimation** in apple jam using **VNIR hyperspectral imaging (HSI)** and machine learning. It includes a reproducible set of Jupyter notebooks covering preprocessing, dataset construction, and model training/evaluation with classical ML and deep learning.


---

## Dataset
The Apples_HSI dataset is available on Hugging Face:
[issai/Apples_HSI](https://huggingface.co/datasets/issai/Apples_HSI).

### Dataset structure

```text
Apples_HSI/
├── Catalogs/                                    # per-cultivar & sugar-ratio sessions
│   ├── apple_jam_{cultivar}_{sugar proportion}_{apple proportion}_{date}/   # e.g., apple_jam_gala_50_50_17_Dec
│   │   ├── {sample_id}/                         # numeric sample folders (e.g., 911, 912, …)
│   │   │   ├── capture/                         # raw camera outputs + references
│   │   │   │   ├── {sample_id}.raw              # raw hyperspectral cube
│   │   │   │   ├── {sample_id}.hdr              # header/metadata for the raw cube
│   │   │   │   ├── DARKREF_{sample_id}.raw      # dark reference (raw)
│   │   │   │   ├── DARKREF_{sample_id}.hdr
│   │   │   │   ├── WHITEREF_{sample_id}.raw     # white reference (raw)
│   │   │   │   └── WHITEREF_{sample_id}.hdr
│   │   │   ├── metadata/
│   │   │   │   └── {sample_id}.xml              # per-sample metadata/annotations
│   │   │   ├── results/                         # calibrated reflectance + previews
│   │   │   │   ├── REFLECTANCE_{sample_id}.dat  # ENVI-style reflectance cube
│   │   │   │   ├── REFLECTANCE_{sample_id}.hdr
│   │   │   │   ├── REFLECTANCE_{sample_id}.png  # reflectance preview
│   │   │   │   ├── RGBSCENE_{sample_id}.png     # RGB scene snapshot
│   │   │   │   ├── RGBVIEWFINDER_{sample_id}.png
│   │   │   │   └── RGBBACKGROUND_{sample_id}.png
│   │   │   ├── manifest.xml                     # per-sample manifest
│   │   │   ├── {sample_id}.png                  # sample preview image
│   │   │   └── .validated                       # empty marker file
│   │   └── …                                    # more samples
│   └── …                                        # more cultivar/ratio/date folders
│
├── .cache/                                      # service files (upload tool)
├── ._.cache
├── ._paths.rtf
├── .gitattributes                               # LFS rules for large files
└── paths.rtf                                    # path list (RTF)
```
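
For orientation, here is a minimal sketch of loading one sample's calibrated reflectance cube with the `spectral` package. The catalog path and sample ID below are illustrative placeholders taken from the examples in the tree above.

```python
import numpy as np
import spectral

# Illustrative path; substitute a real catalog folder and sample ID from the tree above.
hdr_path = "Apples_HSI/Catalogs/apple_jam_gala_50_50_17_Dec/911/results/REFLECTANCE_911.hdr"

img = spectral.open_image(hdr_path)              # ENVI header + .dat reflectance cube
cube = np.asarray(img.load(), dtype=np.float32)  # shape: (rows, cols, bands)
wavelengths = img.bands.centers                  # band-center wavelengths, if present in the header
print(cube.shape)
```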

## Repository structure
This repository contains:
- **Pre-processing**: `1_preprocessing.ipynb` (HSI import, calibration, SAM masking, ROI cropping, grid subdivision).
- **Dataset building**: `2_dataset preparation.ipynb` (train/val/test splits, splits by sugar concentration and apple cultivar, extraction of average spectral vectors).
- **Model training & evaluation**:
  - `3_svm.ipynb` – SVM, scaling, hyperparameter search.
  - `4_xgboost.ipynb` – XGBoost, tuning & early stopping.
  - `5_resnet.ipynb` – 1D ResNet training loops, checkpoints, metrics.



## Preprocessing → Dataset → Models (How to Run)

### 1) **Preprocessing**  

Inputs to set (near the bottom of the notebook):
```python
input_root = "path/to/input"     # root that contains the dataset folders (e.g., Apples_HSI/Catalogs)
output_root = "path/to/output"   # where the NPZ files will be written
paths_txt = "path/to/paths.txt"  # text file with relative paths to .hdr files (one per line)
```
- Run all cells. The notebook:
  - reads `REFLECTANCE_*.hdr` with `spectral.open_image`
  - builds a SAM mask (ref pixel `(255, 247)`, threshold `0.19`)
  - crops ROI and saves `cropped_{ID}.npz` under `output_root/...`

- Each NPZ contains: `cube` (cropped H×W×Bands), `offset` (`y_min`, `x_min`), `metadata` (JSON).
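
A minimal sketch of what the notebook does per sample, assuming the reference pixel is given in `(row, col)` order and that the SAM angle is thresholded directly in radians (both are assumptions about the notebook's conventions; the file names are placeholders):

```python
import json
import numpy as np
import spectral

img = spectral.open_image("REFLECTANCE_911.hdr")     # illustrative file name
cube = np.asarray(img.load(), dtype=np.float32)      # (H, W, B)

# Spectral Angle Mapper against the reference pixel used in the notebook.
ref = cube[255, 247, :]                              # assumed (row, col) order
dots = np.einsum("hwb,b->hw", cube, ref)
norms = np.linalg.norm(cube, axis=2) * np.linalg.norm(ref) + 1e-12
angle = np.arccos(np.clip(dots / norms, -1.0, 1.0))
mask = angle < 0.19                                  # SAM threshold from the notebook

# Crop the ROI to the mask's bounding box and save in the NPZ layout described above.
ys, xs = np.where(mask)
y_min, x_min = ys.min(), xs.min()
cropped = cube[y_min:ys.max() + 1, x_min:xs.max() + 1, :]
np.savez("cropped_911.npz",
         cube=cropped,
         offset=np.array([y_min, x_min]),
         metadata=json.dumps({"sample_id": "911"}))  # placeholder metadata
```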


### 2) **Dataset building**  

Run all cells. The notebook:
- loads each NPZ (`np.load(path)["cube"]`)
- extracts **mean spectra per patch** for grid sizes **1, ..., 5**
- creates tables with columns `band_0..band_(B-1)`, `apple_content`, `apple_type`
- writes splits per grid:
  - **apple-based:** `{g}x{g}_train_apple.csv`, `{g}x{g}_val_apple.csv`, `{g}x{g}_test_apple.csv`
  - **rule-based:** `{g}x{g}_train_rule.csv`, `{g}x{g}_val_rule.csv`, `{g}x{g}_test_rule.csv`
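
As a sketch of the patch-averaging step (the notebook's exact grid-splitting convention may differ; this simply tiles the cropped cube into a g-by-g grid, and the labels are placeholders):

```python
import numpy as np
import pandas as pd

def grid_mean_spectra(cube, g):
    """Tile the spatial extent into a g-by-g grid and return each cell's mean spectrum."""
    h, w, b = cube.shape
    ys = np.linspace(0, h, g + 1, dtype=int)
    xs = np.linspace(0, w, g + 1, dtype=int)
    rows = []
    for i in range(g):
        for j in range(g):
            patch = cube[ys[i]:ys[i + 1], xs[j]:xs[j + 1], :]
            rows.append(patch.reshape(-1, b).mean(axis=0))
    return np.stack(rows)                       # (g*g, bands)

# Illustrative: one cropped sample with placeholder labels.
cube = np.load("cropped_911.npz")["cube"]
for g in range(1, 6):
    spectra = grid_mean_spectra(cube, g)
    df = pd.DataFrame(spectra, columns=[f"band_{k}" for k in range(spectra.shape[1])])
    df["apple_content"] = 50                    # label taken from the catalog folder name
    df["apple_type"] = "gala"
```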


### 3) **Model training**  

Classical ML – `3_svm.ipynb`
Run all cells. The notebook:
- loads pre-split CSVs (e.g., `{g}x{g}_train_apple.csv`, `{g}x{g}_test_apple.csv`)
- scales inputs and targets with **MinMaxScaler**
- fits **SVR** with hyperparameters: `C=110`, `epsilon=0.2`, `gamma="scale"`
- reports **RMSE / MAE / R²** on Train/Test (targets inverse-transformed)
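
A sketch of this SVR pipeline with the hyperparameters listed above; the grid size, CSV names, and the choice of `apple_content` as the regression target are assumptions drawn from the dataset description:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

g = 3  # illustrative grid size
train = pd.read_csv(f"{g}x{g}_train_apple.csv")
test = pd.read_csv(f"{g}x{g}_test_apple.csv")
band_cols = [c for c in train.columns if c.startswith("band_")]

x_scaler, y_scaler = MinMaxScaler(), MinMaxScaler()
X_train = x_scaler.fit_transform(train[band_cols])
y_train = y_scaler.fit_transform(train[["apple_content"]]).ravel()
X_test = x_scaler.transform(test[band_cols])

model = SVR(C=110, epsilon=0.2, gamma="scale").fit(X_train, y_train)

# Inverse-transform predictions back to the original concentration scale before scoring.
y_pred = y_scaler.inverse_transform(model.predict(X_test).reshape(-1, 1)).ravel()
y_true = test["apple_content"].to_numpy()
rmse = mean_squared_error(y_true, y_pred) ** 0.5
print(f"RMSE={rmse:.3f}  MAE={mean_absolute_error(y_true, y_pred):.3f}  R2={r2_score(y_true, y_pred):.3f}")
```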

Classical ML – `4_xgboost.ipynb`
Run all cells. The notebook:
- loads Train/Val/Test CSVs and scales inputs with **MinMaxScaler**
- builds **DMatrix** and trains with:
    objective = "reg:squarederror", eval_metric = "rmse",
    max_depth = 2, eta = 0.15, subsample = 0.8, colsample_bytree = 1.0,
    lambda = 2.0, alpha = 0.1, seed = 42
    num_boost_round = 400, early_stopping_rounds = 40
- evaluates and prints **RMSE / MAE / R²** (Train/Test)
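
A sketch of the same setup using the `xgboost` learning API with the parameters listed above (the file names and target column are again assumptions):

```python
import pandas as pd
import xgboost as xgb
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

g = 3  # illustrative grid size
splits = {s: pd.read_csv(f"{g}x{g}_{s}_apple.csv") for s in ("train", "val", "test")}
band_cols = [c for c in splits["train"].columns if c.startswith("band_")]

scaler = MinMaxScaler().fit(splits["train"][band_cols])
dmat = {s: xgb.DMatrix(scaler.transform(df[band_cols]), label=df["apple_content"])
        for s, df in splits.items()}

params = {"objective": "reg:squarederror", "eval_metric": "rmse",
          "max_depth": 2, "eta": 0.15, "subsample": 0.8, "colsample_bytree": 1.0,
          "lambda": 2.0, "alpha": 0.1, "seed": 42}
booster = xgb.train(params, dmat["train"], num_boost_round=400,
                    evals=[(dmat["val"], "val")], early_stopping_rounds=40)

y_true = splits["test"]["apple_content"]
y_pred = booster.predict(dmat["test"])
rmse = mean_squared_error(y_true, y_pred) ** 0.5
print(f"RMSE={rmse:.3f}  MAE={mean_absolute_error(y_true, y_pred):.3f}  R2={r2_score(y_true, y_pred):.3f}")
```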

Deep model – `5_resnet.ipynb`
Run all cells. The notebook:
- builds a **ResNet1D** and DataLoaders (`batch_size=16`)
- trains with **Adam** (`lr=1e-3`, `weight_decay=1e-4`), **epochs=150**, **MAE** loss
- uses target **MinMaxScaler** (inverse-transforms predictions for metrics)
- early-stopping on **Val MAE**; saves best checkpoint to **`best_resnet1d_model.pth`**
- reports **RMSE / MAE / R²** on the Test set
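
The exact architecture lives in the notebook; below is only a minimal, self-contained sketch of a 1D residual regressor with the training settings listed above. The layer sizes, block count, band count, and the synthetic data are assumptions, not the notebook's implementation.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class ResBlock1D(nn.Module):
    """Two conv-BN-ReLU layers with an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)

class ResNet1D(nn.Module):
    """Small 1D ResNet regressor over a single spectral vector (1 channel x bands)."""
    def __init__(self, n_blocks=3, channels=32):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv1d(1, channels, kernel_size=7, padding=3),
                                  nn.BatchNorm1d(channels), nn.ReLU())
        self.blocks = nn.Sequential(*[ResBlock1D(channels) for _ in range(n_blocks)])
        self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(channels, 1))

    def forward(self, x):                        # x: (batch, 1, bands)
        return self.head(self.blocks(self.stem(x)))

# Synthetic stand-in for the scaled spectral CSVs, just to make the sketch runnable.
X, y = torch.randn(64, 1, 204), torch.rand(64, 1)
train_loader = DataLoader(TensorDataset(X, y), batch_size=16, shuffle=True)

model = ResNet1D()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
criterion = nn.L1Loss()                          # MAE loss, as in the notebook
for epoch in range(150):
    model.train()
    for xb, yb in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
    # In the notebook: compute MAE on the validation split here, early-stop on it,
    # and checkpoint the best weights, e.g.:
    # torch.save(model.state_dict(), "best_resnet1d_model.pth")
```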


## Citation

If you use the dataset, source code, or pre-trained models in your research, please cite our work:

Lissovoy, D., Zakeryanova, A., Orazbayev, R., Rakhimzhanova, T., Lewis, M., Varol, H. A., & Chan, M.-Y. (2025). Hyperspectral Imaging for Quality Assessment of Processed Foods: A Case Study on Sugar Content in Apple Jam. Foods, 14(21), 3585. https://doi.org/10.3390/foods14213585