---
license: mit
datasets:
- HakaiInstitute/kelp-seg-rgb-1024-1024
language:
- en
pipeline_tag: image-segmentation
tags:
- biology
---

# Kelp-RGB: Kelp Segmentation Model for RGB Drone Imagery

**Model Type:** ONNX Semantic Segmentation  
**Application:** Kelp forest detection in high-resolution RGB aerial imagery  
**Input:** 3-band RGB imagery (Red, Green, Blue)  
**Output:** Multi-class segmentation mask (background, giant kelp, bull kelp)

## Model Description

The Kelp-RGB model is a deep learning semantic segmentation model trained to detect and classify kelp forests by species in RGB drone imagery. Because it requires only standard 3-band RGB input, it works with imagery from consumer drones and cameras, supporting kelp mapping for marine habitat monitoring and research.

**Key Features:**
- Optimized for standard RGB imagery from drones
- ImageNet-pretrained normalization statistics
- Efficient ONNX format for cross-platform deployment
- Designed for high-resolution aerial photography (~3-7cm resolution)

## Model Details

- **Version:** 20250728
- **Input Channels:** 3 (RGB)
- **Input Size:** Dynamic tiling (recommended: 2048x2048 tiles)
- **Normalization:** Standard (ImageNet statistics)
- **Output:** Multi-class segmentation (0: background, 1: giant kelp, 2: bull kelp)
- **Format:** ONNX
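
To confirm these details locally, the input and output signatures of the ONNX file can be inspected with onnxruntime. This is a minimal sketch; the exact tensor names and symbolic dimension labels depend on how the model was exported:

```python
import onnxruntime as ort
from huggingface_hub import hf_hub_download

# Download the ONNX file (same repo id and filename as in the usage examples below)
model_path = hf_hub_download(repo_id="HakaiInstitute/kelp-rgb", filename="model.onnx")
session = ort.InferenceSession(model_path)

# Input: expect 3 channels with dynamic batch and spatial dimensions;
# dynamic dims may appear as symbolic names rather than integers.
inp = session.get_inputs()[0]
print("input :", inp.name, inp.type, inp.shape)

# Output: one channel per class (background, giant kelp, bull kelp)
out = session.get_outputs()[0]
print("output:", out.name, out.type, out.shape)
```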

### Normalization Parameters

The model expects input images to be normalized using ImageNet statistics:

```json
{
  "mean": [0.485, 0.456, 0.406],
  "std": [0.229, 0.224, 0.225],
  "max_pixel_value": 255.0
}
```
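
Concretely, each pixel is divided by `max_pixel_value` and then standardized per channel. A minimal numpy sketch of that step (the same preprocessing appears in the ONNX Runtime example below):

```python
import numpy as np

mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def normalize(rgb: np.ndarray) -> np.ndarray:
    """Normalize an HWC uint8 RGB image (values 0-255) with ImageNet statistics."""
    scaled = rgb.astype(np.float32) / 255.0  # divide by max_pixel_value
    return (scaled - mean) / std             # per-channel standardization
```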

## Usage

### 1. Using kelp-o-matic CLI (recommended)

For command-line usage:

```bash
# Install kelp-o-matic
pip install git+https://github.com/HakaiInstitute/kelp-o-matic@dev

# List available models
kom list-models

# Run kelp species segmentation on RGB drone imagery
kom segment \
    --model kelp-rgb \
    --input /path/to/rgb_drone_image.tif \
    --output /path/to/kelp_species_segmentation.tif \
    --batch-size 8 \
    --crop-size 2048 \
    --blur-kernel 5 \
    --morph-kernel 3

# Use specific model version
kom segment \
    --model kelp-rgb \
    --version 20250728 \
    --input image.tif \
    --output result.tif

# For systems with limited GPU memory, use a smaller batch size and tile size
kom segment \
    --model kelp-rgb \
    --input high_res_drone_image.tif \
    --output result.tif \
    --batch-size 4 \
    --crop-size 1024
```

### 2. Using kelp-o-matic Python API

For programmatic use, the kelp-o-matic Python API provides the same functionality:

```python
from kelp_o_matic import model_registry

# Load the model (automatically downloads if needed)
model = model_registry["kelp-rgb"]

# Process a large aerial image with automatic tiling
model.process(
    input_path="path/to/your/rgb_drone_image.tif",
    output_path="path/to/output/kelp_species_segmentation.tif",
    batch_size=8,  # Higher batch size for RGB
    crop_size=2048,
    blur_kernel_size=5,  # Post-processing median blur
    morph_kernel_size=3,  # Morphological operations
)

# For more control, use the predict method directly
import rasterio
import numpy as np

with rasterio.open("drone_image.tif") as src:
    # Read a 2048x2048 tile (3 bands: RGB); rasterio returns CHW layout
    tile = src.read(window=((0, 2048), (0, 2048)))  # Shape: (3, 2048, 2048)
    
    # Add batch dimension to get BCHW for the model
    batch = np.expand_dims(tile, axis=0)  # Shape: (1, 3, 2048, 2048)
    
    # Run inference (preprocessing handled automatically)
    predictions = model.predict(batch)
    
    # Post-process to get final segmentation
    segmentation = model.postprocess(predictions)
    # Result: 0=background, 1=giant kelp, 2=bull kelp
```

### 3. Direct ONNX Runtime Usage

```python
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from PIL import Image

# Download the model
model_path = hf_hub_download(repo_id="HakaiInstitute/kelp-rgb", filename="model.onnx")

# Load the model
session = ort.InferenceSession(model_path)

# ImageNet normalization parameters
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)

# Preprocess your RGB image
def preprocess(image):
    """
    Preprocess RGB image for model input
    image: numpy array of shape [height, width, 3] with pixel values 0-255
    """
    # Normalize to 0-1
    image = image.astype(np.float32) / 255.0
    
    # Apply ImageNet normalization
    image = (image - mean) / std
    
    # Reshape to model input format [batch, channels, height, width]
    image = np.transpose(image, (2, 0, 1))  # HWC to CHW
    image = np.expand_dims(image, axis=0)  # Add batch dimension
    
    return image

# Load and preprocess image
image = np.array(Image.open("drone_image.jpg"))
preprocessed = preprocess(image)

# Run inference
input_name = session.get_inputs()[0].name
output = session.run(None, {input_name: preprocessed})

# Postprocess to get class predictions
logits = output[0]  # Raw per-class scores (logits)
prediction = np.argmax(logits, axis=1).squeeze(0).astype(np.uint8)
# Result: 0=background, 1=giant kelp, 2=bull kelp
```

### 4. Using HuggingFace Hub Integration

```python
from huggingface_hub import hf_hub_download
import onnxruntime as ort

# Download and load model
model_path = hf_hub_download(
    repo_id="HakaiInstitute/kelp-rgb",
    filename="model.onnx",
    cache_dir="./models"
)

session = ort.InferenceSession(model_path)
# ... continue with preprocessing and inference as above
```

## Installation

### For kelp-o-matic usage:

```bash
# Via pip
pip install git+https://github.com/HakaiInstitute/kelp-o-matic@dev
```

### For direct ONNX usage:

```bash
pip install onnxruntime huggingface-hub numpy pillow
# For GPU support:
pip install onnxruntime-gpu
```

## Input Requirements

- **Image Format:** 3-band RGB raster (JPEG, PNG, GeoTIFF)
- **Band Order:** Red, Green, Blue
- **Pixel Values:** Standard 8-bit (0-255 range)
- **Spatial Resolution:** Optimized for high-resolution drone imagery (cm-level)
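
A small rasterio sketch can verify these requirements before running inference; `check_input` is an illustrative helper, not part of kelp-o-matic:

```python
import rasterio

def check_input(path: str) -> None:
    """Illustrative check that a raster matches the input requirements above."""
    with rasterio.open(path) as src:
        if src.count < 3:
            raise ValueError(f"Expected at least 3 bands (RGB), found {src.count}")
        if src.dtypes[0] != "uint8":
            print(f"Warning: expected 8-bit pixels, found {src.dtypes[0]}; rescale to 0-255 first")
        print(f"Size: {src.width}x{src.height}, CRS: {src.crs}, resolution: {src.res}")

check_input("drone_image.tif")
```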

## Output Format

- **Type:** Single-band raster with class labels
- **Values:** 
  - 0: Background (water, other features)
  - 1: *Macrocystis pyrifera* (Giant kelp)
  - 2: *Nereocystis luetkeana* (Bull kelp)
- **Format:** Matches input raster format and projection
- **Spatial Resolution:** Same as input

**Note:** The model outputs raw per-class scores (logits), but kelp-o-matic automatically applies argmax to convert these to the discrete class labels above.
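
kelp-o-matic writes this raster automatically. If you run the ONNX model directly (section 3) on a GeoTIFF, a sketch like the following saves the class mask with the input's coordinate reference system and geotransform; the placeholder `prediction` stands in for the real full-image class mask:

```python
import numpy as np
import rasterio

with rasterio.open("drone_image.tif") as src:
    profile = src.profile.copy()
    # Replace this placeholder with the real (H, W) uint8 class mask from the ONNX example
    prediction = np.zeros((src.height, src.width), dtype=np.uint8)

# Single-band uint8 output with the same size, CRS, and transform as the input
profile.update(count=1, dtype="uint8")

with rasterio.open("kelp_segmentation.tif", "w", **profile) as dst:
    dst.write(prediction, 1)
```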

## Performance Notes

- **Dynamic Tile Size:** Supports flexible tile sizes (recommended: 2048x2048 or 1024x1024)
- **Batch Size:** Start with 4 and increase (the examples above use 8) when GPU memory allows

## Large Image Processing

For processing large geospatial images, the kelp-o-matic package handles:

- **Automatic Tiling:** Splits large images into manageable tiles
- **Overlap Handling:** Uses overlapping tiles to avoid edge artifacts
- **Memory Management:** Processes tiles in batches to manage memory usage
- **Geospatial Metadata:** Preserves coordinate reference system and geotransforms
- **Post-processing:** Optional median filtering and morphological operations
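
For reference, a highly simplified sketch of overlapping-tile processing (this is not kelp-o-matic's implementation; `predict_tile` is a hypothetical callable that wraps preprocessing, inference, and argmax for one CHW tile):

```python
import rasterio
from rasterio.windows import Window

TILE, OVERLAP = 2048, 256  # illustrative values
STRIDE = TILE - OVERLAP

def tiled_inference(src_path, dst_path, predict_tile):
    """Run predict_tile over overlapping windows and write a single-band class mask."""
    with rasterio.open(src_path) as src:
        profile = src.profile.copy()
        profile.update(count=1, dtype="uint8")
        with rasterio.open(dst_path, "w", **profile) as dst:
            for row in range(0, src.height, STRIDE):
                for col in range(0, src.width, STRIDE):
                    width = min(TILE, src.width - col)
                    height = min(TILE, src.height - row)
                    window = Window(col, row, width, height)
                    tile = src.read(window=window)  # (3, height, width)
                    mask = predict_tile(tile)       # (height, width) class labels
                    # Later tiles overwrite the overlapping border of earlier ones,
                    # which reduces seams; the real implementation handles overlap more carefully.
                    dst.write(mask.astype("uint8"), 1, window=window)
```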

## Citation

If you use this model in your research, please cite:

```bibtex
@software{Denouden_Kelp-O-Matic,
  author = {Denouden, Taylor and Reshitnyk, Luba},
  doi = {10.5281/zenodo.7672166},
  title = {{Kelp-O-Matic}},
  url = {https://github.com/HakaiInstitute/kelp-o-matic}
}
```

## License

MIT License - see the [kelp-o-matic repository](https://github.com/HakaiInstitute/kelp-o-matic/blob/main/LICENSE) for details.

## Related Resources

- **Documentation:** [kelp-o-matic.readthedocs.io](https://kelp-o-matic.readthedocs.io)
- **Source Code:** [github.com/HakaiInstitute/kelp-o-matic](https://github.com/HakaiInstitute/kelp-o-matic)
- **Other Models:** Check the [Hakai Institute HuggingFace organization](https://huggingface.co/HakaiInstitute) for additional kelp segmentation models

## Contact

For questions or issues:
- Open an issue on the [GitHub repository](https://github.com/HakaiInstitute/kelp-o-matic/issues)
- Contact: [Hakai Institute](https://www.hakai.org)