---
license: mit
language:
  - ko
metrics:
  - accuracy
  - f1
base_model:
  - facebook/convnext-tiny-224
pipeline_tag: image-classification
tags:
  - multispectral
  - convnext
  - image-classification
  - remote-sensing
  - agriculture
  - xai
---

# ConvNext_Multi Model Card

## Model Details

ConvNext_Multi is a ConvNeXt-based image classification model that takes multispectral imagery as input and classifies crops and vegetation. It is designed to process 5-band (Blue, Green, Red, Near-Infrared, RedEdge) imagery captured by drones and satellites efficiently, making it well suited to high-resolution agricultural and environmental monitoring.

- **Developed by:** AI Research Team, MuhanRnd  
- **License:** MIT  
- **Base model:** facebook/convnext-tiny-224  
- **Languages:** Korean (model comments and documentation)  
- **Model type:** Image classification (multi-band input)  

## Uses

### Direct Use

- Growth-status classification of crops from multispectral imagery  
- Multispectral image classification with 5-band drone imagery as input

### Downstream Use

- Fine-tuning on similar multispectral datasets  
- Applicable to other environmental-monitoring classification problems beyond agriculture

### Out-of-Scope Use

- RGB-only (3-band) imagery (incompatible with the model's input structure)  
- Uncalibrated multi-band imagery (multispectral calibration is required first; see the preprocessing sketch below)  
- Tasks other than classification, such as object detection or segmentation  
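
As a rough illustration of the calibration note above, the sketch below scales raw band values to a reflectance-like [0, 1] range and stacks them into the [5, H, W] layout the model consumes. The function name, the 16-bit scale factor, and the band ordering are illustrative assumptions, not part of the released pipeline.

```python
import numpy as np

def stack_bands(band_arrays, scale=65535.0):
    """Scale raw digital numbers to [0, 1] and stack into a [5, H, W] array.

    band_arrays: five 2-D arrays in Blue, Green, Red, NIR, RedEdge order
    (band order and the 16-bit scale are assumptions for illustration).
    """
    bands = [np.clip(b.astype(np.float32) / scale, 0.0, 1.0) for b in band_arrays]
    return np.stack(bands, axis=0)
```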

## Bias, Risks, and Limitations

- The model was trained mainly on data from specific regions and crops, so performance may degrade in environments it has not seen  
- Sensitive to multispectral image quality, acquisition conditions, and preprocessing  
- Data bias may lead to overfitting on particular crops or backgrounds  
- Predictions should be used as supporting evidence only; final decisions should be combined with expert judgment

## How to Get Started

```python
from transformers import AutoModelForImageClassification, AutoFeatureExtractor
import torch

# Load the model and feature extractor
model = AutoModelForImageClassification.from_pretrained("MhRnd/ConvNext_Multi")
extractor = AutoFeatureExtractor.from_pretrained("MhRnd/ConvNext_Multi")

# Multi-band image tensor (e.g., [batch_size, 5, H, W])
inputs = extractor(multi_band_images, return_tensors="pt")

# Run inference
outputs = model(**inputs)
logits = outputs.logits
predicted_class = torch.argmax(logits, dim=1)
```
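
The facebook/convnext-tiny-224 backbone is pretrained on 3-channel RGB input, so a 5-band variant needs a widened patch-embedding layer. The model card does not describe how this was done for ConvNext_Multi; the sketch below is one plausible adaptation, with the label count (`num_labels=5`) and the initialization of the extra-band filters chosen purely for illustration.

```python
import torch
from transformers import ConvNextForImageClassification

# Start from the RGB checkpoint; num_labels=5 is a placeholder, not the real class count.
model = ConvNextForImageClassification.from_pretrained(
    "facebook/convnext-tiny-224", num_labels=5, ignore_mismatched_sizes=True
)

# Replace the 3-channel patch-embedding convolution with a 5-channel one.
old_conv = model.convnext.embeddings.patch_embeddings
new_conv = torch.nn.Conv2d(
    5, old_conv.out_channels, kernel_size=old_conv.kernel_size, stride=old_conv.stride
)
with torch.no_grad():
    new_conv.weight[:, :3] = old_conv.weight                            # reuse pretrained RGB filters
    new_conv.weight[:, 3:] = old_conv.weight.mean(dim=1, keepdim=True)  # mean-init NIR/RedEdge filters (assumed scheme)
    new_conv.bias.copy_(old_conv.bias)
model.convnext.embeddings.patch_embeddings = new_conv
model.convnext.embeddings.num_channels = 5  # keep the input-channel check consistent
model.config.num_channels = 5

# Dummy 5-band batch: [batch, bands, height, width]
pixel_values = torch.randn(2, 5, 224, 224)
predicted_class = model(pixel_values=pixel_values).logits.argmax(dim=-1)
```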

## Training Details

- **Training Data:**  
  - Multispectral (5-band) image dataset captured by drone and satellite  
  - Labels: major crop and growth-status classes  
- **Training Procedure:** (a minimal training-loop sketch with these settings follows below)  
  - Fine-tuning based on facebook/convnext-tiny-224  
  - Epochs: 2  
  - Batch size: 16  
  - Optimizer: AdamW  
  - Learning rate: 1e-05, with a step learning-rate scheduler  
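
A minimal fine-tuning loop sketch using the hyperparameters above. The `model` from the earlier 5-band sketch and a `train_loader` yielding `(pixel_values, labels)` batches of size 16 are assumed, and the step scheduler's `step_size` and `gamma` are placeholders since the card does not specify them.

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import StepLR

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device).train()

optimizer = AdamW(model.parameters(), lr=1e-5)
scheduler = StepLR(optimizer, step_size=1, gamma=0.1)  # step schedule; exact values assumed

for epoch in range(2):  # 2 epochs, as listed above
    for pixel_values, labels in train_loader:
        pixel_values, labels = pixel_values.to(device), labels.to(device)
        outputs = model(pixel_values=pixel_values, labels=labels)  # Hugging Face models return .loss when labels are given
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    scheduler.step()
```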

## Evaluation

- **Testing Data:** A separately held-out multispectral image set for validation  
- **Metrics:** Accuracy, Loss (an accuracy-evaluation sketch follows below)
- **Performance:**  
  - **Best performance (Epoch 2):**
    - Training loss: 1.3640
    - Training accuracy: 0.2783
    - Validation loss: 1.3898
    - Validation accuracy: 0.2069
  - **Last updated:** 2025-08-20 08:32:18
  - Accuracy: 90.0%   
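
A minimal accuracy-evaluation sketch over a held-out loader. `model`, `device`, and a `val_loader` analogous to the training loader are assumed; this is illustrative and not the script that produced the figures above.

```python
import torch

model.eval()
correct = total = 0
with torch.no_grad():
    for pixel_values, labels in val_loader:
        logits = model(pixel_values=pixel_values.to(device)).logits
        preds = logits.argmax(dim=-1).cpu()
        correct += (preds == labels).sum().item()
        total += labels.numel()
print(f"validation accuracy: {correct / total:.4f}")
```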

## Environmental Impact

- **Hardware:** NVIDIA RTX 3090 GPU  
- **Training Duration:** approximately 15 minutes
  
## Citation
```
@article{liu2022convnext,
  title={A ConvNet for the 2020s},
  author={Liu, Zhuang and Mao, Hanzi and Wu, Chao-Yuan and Feichtenhofer, Christoph and Darrell, Trevor and Xie, Saining},
  journal={arXiv preprint arXiv:2201.03545},
  year={2022}
}
```

## Glossary

- **Multispectral Imagery:** Imagery captured in multiple separated wavelength bands, used for tasks such as analyzing crop growth status  
- **ConvNeXt:** A convolutional neural network (CNN) with a modernized architecture  

## Model Card Authors

- AI Research Team, MuhanRnd 
- [email protected]