---
license: other
license_name: non-commercial
license_link: LICENSE
library_name: pytorch
tags:
- computer-vision
- object-detection
- drone-detection
- pytorch
- convnext
- security
- surveillance
- uav-detection
- aerial-vehicle
- defense
pipeline_tag: object-detection
model_type: object-detection
datasets:
- custom
language:
- en
metrics:
- precision
- recall
- map
widget:
- src: https://example.com/drone_sample.jpg
  example_title: "Drone Detection Sample"
model-index:
- name: HarpoonNet 1.2
  results:
  - task:
      type: object-detection
      name: Object Detection
    dataset:
      type: custom
      name: Multi-Domain Drone Dataset
      args: 109880 images
    metrics:
    - type: validation_loss
      value: 0.059270
      name: Validation Loss
    - type: parameters
      value: 50000000
      name: Total Parameters
base_model: microsoft/convnext-small-224
---

# HarpoonNet 1.2 - Advanced Drone Detection Model

📄 **License: Non-commercial use only.**

🔐 **Commercial licenses available upon request. Contact: [email protected]**





## 🛡️ Commercial Use Notice

⚠️ **This model requires explicit permission for commercial use.** ⚠️

- ✅ **FREE for**: Research, education, academic use, open-source projects
- ❌ **REQUIRES LICENSE for**: Commercial products, revenue-generating applications, proprietary systems
- 📧 **Contact**: [email protected] for commercial licensing

## Website

Check us out at chiliadresearch.com!

## Updates

- **July 8**: Fixed a bug that caused users to download the previous model (HarpoonNet 1.1) instead of HarpoonNet 1.2 with the new ConvNeXt backbone.
- **July 8**: Fixed a DataParallel issue so checkpoint keys no longer carry the `module.` prefix.

## 🎯 Model Description

HarpoonNet 1.2 is a state-of-the-art drone detection model built on a ConvNeXt-Small backbone with a proprietary Harpoon Core detection head. It was trained on a multi-domain dataset of 109,880+ images for robust drone detection across a wide range of scenarios.

## 🏗️ Architecture

- **Backbone**: ConvNeXt-Small (~50M parameters)
- **Detection Head**: Harpoon Core (~4.7M parameters)
- **Total Parameters**: ~54.7M
- **Input Size**: 544x544 pixels
- **Output**: Single-class detection (drone)
- **Anchors**: 3 anchor boxes per grid cell
- **Feature Map**: 17x17 grid (544 / 32 = 17); see the shape sketch below
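
The numbers above imply a compact output layout: a 17x17 grid with 3 anchors per cell and one class. As a rough sanity check, here is a minimal shape sketch that assumes a conventional anchor-based head emitting (x, y, w, h, objectness, class score) per anchor; the exact tensor layout of the Harpoon Core head may differ, so treat this as illustrative only.

```python
import torch

# Illustrative shape arithmetic only; the real output layout lives in harpoon_modular.py.
input_size = 544
stride = 32
grid = input_size // stride               # 17x17 feature map
num_anchors = 3
num_classes = 1
values_per_anchor = 4 + 1 + num_classes   # box (4) + objectness (1) + class scores
channels = num_anchors * values_per_anchor
print(grid, channels)                     # 17 18 -> e.g. a [batch, 18, 17, 17] prediction tensor

# With the model loaded (see Quick Start below):
# predictions = model(torch.zeros(1, 3, 544, 544))
# print(predictions.shape)
```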

## 📊 Model Performance

- **Training Dataset**: 109,880+ multi-domain drone images
- **Validation Loss**: 0.059270 (enhanced ConvNeXt training)
- **Inference Speed**: ~60 FPS on a modern GPU (see the benchmark sketch below)
- **Model Size**: ~122MB (PyTorch ConvNeXt-Small)
- **mAP@0.5**: 95%+
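
To get a comparable FPS number on your own hardware, a rough timing loop like the one below works; it assumes the model has already been loaded as shown in the Quick Start section and is only a sketch, not a rigorous benchmark.

```python
import time
import torch

def benchmark_fps(model, device="cuda", runs=100, warmup=10):
    """Rough single-image FPS estimate at the 544x544 input size."""
    model = model.to(device).eval()
    dummy = torch.randn(1, 3, 544, 544, device=device)
    with torch.no_grad():
        for _ in range(warmup):              # warm up kernels and caches
            model(dummy)
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.time()
        for _ in range(runs):
            model(dummy)
        if device == "cuda":
            torch.cuda.synchronize()
    return runs / (time.time() - start)

# print(f"{benchmark_fps(model):.1f} FPS")
```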

## 🚀 Quick Start

### Installation

```bash
pip install torch torchvision opencv-python pillow numpy
```

### Load Model

```python
import torch
from harpoon_modular import create_harpoon_net_12

# Load the HarpoonNet 1.2 ConvNeXt model
model = create_harpoon_net_12(num_classes=1, num_anchors=3, pretrained=False)
checkpoint = torch.load('pytorch_model.pth', map_location='cpu')

# Handle both full checkpoint and weights-only files
if 'model_state_dict' in checkpoint:
    model.load_state_dict(checkpoint['model_state_dict'])
else:
    model.load_state_dict(checkpoint)

model.eval()
print("🚀 HarpoonNet 1.2 ConvNeXt model loaded successfully!")
```
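
If you are loading a checkpoint that was saved while the model was wrapped in `torch.nn.DataParallel` (the situation behind the July 8 fix above), the state-dict keys can carry a `module.` prefix that makes `load_state_dict` fail. The published `pytorch_model.pth` should not need this, but a small, hedged workaround looks like:

```python
# Optional fallback: strip a leading "module." from checkpoint keys, which appears
# when weights were saved from a DataParallel-wrapped model.
state_dict = checkpoint.get('model_state_dict', checkpoint)
state_dict = {
    (k[len('module.'):] if k.startswith('module.') else k): v
    for k, v in state_dict.items()
}
model.load_state_dict(state_dict)
```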

### Inference

```python
import cv2
import torch
from torchvision import transforms
from PIL import Image

def preprocess_image(image_path):
    # Load and preprocess image
    img = cv2.imread(image_path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (544, 544))  # Model input resolution

    # Convert to tensor
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225])
    ])
    img_tensor = transform(Image.fromarray(img)).unsqueeze(0)
    return img_tensor

# Run inference
with torch.no_grad():
    img_tensor = preprocess_image('drone_image.jpg')
    predictions = model(img_tensor)
    detections = model.decode_predictions(predictions, confidence_threshold=0.85)  # Recommended high-precision threshold

# Process detections
for detection in detections[0]['boxes']:
    print(f"Drone detected at: {detection}")
```
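
To visualize the results, you can draw the boxes on the resized frame. The sketch below assumes `decode_predictions` returns boxes as `[x1, y1, x2, y2]` in the 544x544 input coordinate space; check `harpoon_modular.py` for the actual format and rescale to the original image size if needed.

```python
import cv2

# Hypothetical visualization assuming [x1, y1, x2, y2] boxes in 544x544 input coordinates.
frame = cv2.resize(cv2.imread('drone_image.jpg'), (544, 544))
for box in detections[0]['boxes']:
    x1, y1, x2, y2 = [int(v) for v in box]
    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)  # green box per detection
cv2.imwrite('drone_detections.jpg', frame)
```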

## 🎮 Real-time Detection

The model comes with ready-to-use webcam detection scripts:

### Basic Detection

```bash
python webcam_detection.py --model pytorch_model.pth --conf 0.85
```

### Advanced Tracking

```bash
python webcam_detection_harpoonnet12.py
```

`webcam_detection_harpoonnet12.py` provides enhanced detection for HarpoonNet 1.2. It is GPU-intensive; if your hardware cannot keep up, adapt the basic script above to your setup (a lighter-weight version is planned).

**Controls:**
- `q`: Quit
- `+/-`: Adjust confidence threshold
- `r`: Reset tracker (tracking mode)
- `d`: Toggle debug view

## 📁 Repository Contents

```
├── pytorch_model.pth         # Main model checkpoint
├── config.json               # Model configuration
├── training_history.json     # Training metrics and history
├── harpoon_modular.py        # Model architecture
├── config_multi_dataset.py   # Dataset configuration
├── LICENSE                   # Non-commercial license
└── README.md                 # This file
```

## 🔧 Model Configuration

- **Classes**: 1 (drone)
- **Confidence Threshold**: 0.85 (recommended for high precision)
- **NMS Threshold**: 0.4 (see the post-processing sketch below)
- **Input Resolution**: 544x544
- **Normalization**: ImageNet standard
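
If you need to apply confidence filtering and non-maximum suppression yourself (for example, when working with raw decoded boxes; check `harpoon_modular.py` for whether `decode_predictions` already does this), a standard `torchvision` pass with the thresholds above looks roughly like this:

```python
import torch
from torchvision.ops import nms

def postprocess(boxes, scores, conf_thresh=0.85, iou_thresh=0.4):
    """Filter by confidence, then suppress overlapping boxes with NMS.

    boxes:  float tensor of shape [N, 4] in (x1, y1, x2, y2) format
    scores: float tensor of shape [N]
    """
    keep = scores >= conf_thresh                # drop low-confidence detections
    boxes, scores = boxes[keep], scores[keep]
    keep_idx = nms(boxes, scores, iou_thresh)   # suppress boxes overlapping a higher-scoring box above IoU 0.4
    return boxes[keep_idx], scores[keep_idx]
```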

## 📈 Training Details

- **Dataset Size**: 109,880+ images from 8 datasets
- **Training Framework**: PyTorch
- **Optimizer**: AdamW with cosine annealing (see the sketch below)
- **Learning Rate**: Enhanced warmup and decay
- **Augmentations**: Advanced geometric and photometric
- **Validation Split**: Stratified sampling
- **Best Epoch**: 5 (validation loss: 0.059270)
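
For readers who want to fine-tune on their own data, the optimizer and schedule family named above can be set up as follows. The hyperparameter values here are illustrative placeholders, not the actual HarpoonNet 1.2 training configuration.

```python
import torch

# Illustrative fine-tuning setup: AdamW with cosine annealing, placeholder hyperparameters.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50)

num_epochs = 50
for epoch in range(num_epochs):
    # train_one_epoch(model, train_loader, optimizer)   # your own training loop goes here
    scheduler.step()
```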

## 🎯 Use Cases

### ✅ **Permitted (Non-Commercial)**
- **Academic Research**: Computer vision studies and publications
- **Educational Projects**: University coursework and learning
- **Open Source Projects**: Non-profit community tools
- **Personal Experimentation**: Hobby and learning projects

### 🔐 **Requires Commercial License**
- **Security Systems**: Commercial perimeter monitoring
- **Airport Security**: Professional UAV detection systems
- **Military Applications**: Defense and surveillance contracts
- **Enterprise Software**: Proprietary detection services
- **API Services**: Commercial drone detection APIs

## ⚡ Performance Tips

1. **GPU Acceleration**: Use CUDA for optimal performance
2. **Batch Processing**: Process multiple images per forward pass for higher throughput (see the sketch below)
3. **Confidence Tuning**: Use 0.85+ for high-precision applications
4. **Input Quality**: 544x544 resolution provides the best accuracy
5. **Lighting**: The model performs well across varied lighting conditions
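
A minimal batching sketch, reusing the `preprocess_image` helper from the Inference section and assuming `decode_predictions` returns one detection dict per image as in that example (file names below are just placeholders):

```python
import torch

image_paths = ['drone_1.jpg', 'drone_2.jpg', 'drone_3.jpg']  # example file names

# Stack preprocessed images into one batch and run a single forward pass.
batch = torch.cat([preprocess_image(p) for p in image_paths], dim=0)  # shape [3, 3, 544, 544]

with torch.no_grad():
    predictions = model(batch)
    detections = model.decode_predictions(predictions, confidence_threshold=0.85)

for path, det in zip(image_paths, detections):  # one detection dict per image
    print(f"{path}: {len(det['boxes'])} drone(s) detected")
```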

## 🛠️ Advanced Features

### ConvNeXt-Small Architecture
- **Modern CNN Design**: State-of-the-art computer vision backbone
- **Efficient Processing**: Optimized for accuracy and speed
- **Robust Detection**: Enhanced feature extraction capabilities

### ByteTrack Integration
- **Persistent Tracking**: Maintains object IDs across frames
- **Occlusion Handling**: Robust to temporary occlusions
- **Motion Prediction**: Kalman filter-based motion model
- **Track Management**: Automatic track creation and deletion

### Real-time Optimization
- **Enhanced Architecture**: Improved speed-accuracy trade-off
- **Memory Management**: Optimized memory footprint
- **Multiple Formats**: PyTorch, ONNX, TensorRT support (see the export sketch below)
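
For ONNX (and, downstream, TensorRT) deployment, an export sketch is shown below. It assumes the loaded model returns a single prediction tensor from its forward pass; if the forward pass returns a more complex structure, wrap it first. The file and tensor names are illustrative.

```python
import torch

# Export the loaded PyTorch model to ONNX; TensorRT can then consume the ONNX file.
dummy = torch.randn(1, 3, 544, 544)   # example input at the 544x544 input size
torch.onnx.export(
    model,                            # HarpoonNet 1.2 model in eval() mode
    dummy,
    "harpoonnet12.onnx",              # illustrative output file name
    input_names=["images"],
    output_names=["predictions"],
    dynamic_axes={"images": {0: "batch"}, "predictions": {0: "batch"}},
    opset_version=17,
)
```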

## 🏢 Commercial Licensing

For commercial use, we offer flexible licensing options:

- **Enterprise License**: Full commercial rights for internal use
- **OEM License**: Integration into commercial products
- **API License**: Commercial API service deployment
- **Custom Training**: Specialized model training services

**Contact**: [email protected] for pricing and terms.

## 📝 Citation

If you use HarpoonNet 1.2 in your research, please cite:

```bibtex
@misc{harpoonnet2025,
  title={HarpoonNet 1.2: Advanced Drone Detection with ConvNeXt Architecture},
  author={Christian Khoury},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/christiankhoury05/harpoon-1-2}
}
```

## 📄 License

This model is released under a **Custom Non-Commercial License**.

- ✅ **Non-commercial use**: Freely permitted
- ❌ **Commercial use**: Requires explicit written permission
- 📧 **Licensing**: Contact [email protected]

See the LICENSE file for complete terms.

## 🤝 Contributing

Contributions for non-commercial use are welcome! Please feel free to submit issues and enhancement requests.

## 📞 Contact

For questions, support, and commercial licensing:

- **Email**: [email protected]
- **Website**: chiliadresearch.com
- **GitHub**: [christiankhoury05](https://github.com/christiankhoury05)
- **Hugging Face**: [christiankhoury05](https://huggingface.co/christiankhoury05)

## 🔄 Model Updates

- **v1.2**: Current version with the 109k+ image dataset and ConvNeXt-Small backbone
- **v1.1**: Previous version with EfficientNet-B0 backbone
- **v1.0**: Initial release