HarpoonNet 1.2 - Advanced Drone Detection Model

📄 License: Non-commercial use only. 🔐 Commercial licenses available upon request. Contact: [email protected]


๐Ÿ›ก๏ธ Commercial Use Notice

โš ๏ธ This model requires explicit permission for commercial use. โš ๏ธ

  • ✅ FREE for: Research, education, academic use, open-source projects
  • ❌ REQUIRES LICENSE for: Commercial products, revenue-generating applications, proprietary systems
  • 📧 Contact: [email protected] for commercial licensing

Website

Check us out at chiliadresearch.com!

Updates

  • July 8: Fixed a bug causing users to download the wrong model (Harpoon 1.1) instead of Harpoon 1.2 with the new ConvNeXt backbone.
  • July 8: Fixed a DataParallel issue: no more module. prefix problems when loading checkpoints.

🎯 Model Description

HarpoonNet 1.2 is a state-of-the-art drone detection model built on a ConvNeXt-Small backbone with a proprietary Harpoon Core detection head. It was trained on a combined multi-dataset corpus of 109,880+ images for robust drone detection across varied scenarios.

๐Ÿ—๏ธ Architecture

  • Backbone: ConvNeXt-Small (~50M parameters)
  • Detection Head: Harpoon Core (~4.7M parameters)
  • Total Parameters: ~54.7M
  • Input Size: 544x544 pixels
  • Output: Single-class detection (drone)
  • Anchors: 3 anchor boxes per grid cell
  • Feature Map: 17x17 grid (544/32 = 17; see the shape check below)
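
A quick way to sanity-check these numbers is to push a dummy 544x544 tensor through the network and print whatever comes back; the exact output layout is defined in harpoon_modular, so the sketch below makes no assumptions beyond the input size:

import torch
from harpoon_modular import create_harpoon_net_12

# Build an untrained model just to inspect tensor shapes
model = create_harpoon_net_12(num_classes=1, num_anchors=3, pretrained=False).eval()
dummy = torch.randn(1, 3, 544, 544)  # one 544x544 RGB image

with torch.no_grad():
    out = model(dummy)

# The detection head works on a 17x17 grid with 3 anchors per cell,
# so expect 17x17 to appear in the spatial dimensions of the output
print(out.shape if torch.is_tensor(out) else [o.shape for o in out])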

📊 Model Performance

  • Training Dataset: 109,880+ multi-domain drone images
  • Validation Loss: 0.059270 (enhanced ConvNeXt training)
  • Inference Speed: ~60 FPS on a modern GPU
  • Model Size: ~122MB (PyTorch ConvNeXt-Small)
  • mAP@0.5: 95%+ (self-reported)

🚀 Quick Start

Installation

pip install torch torchvision opencv-python pillow numpy

Load Model

import torch
from harpoon_modular import create_harpoon_net_12

# Load the HarpoonNet 1.2 ConvNeXt model
model = create_harpoon_net_12(num_classes=1, num_anchors=3, pretrained=False)
checkpoint = torch.load('pytorch_model.pth', map_location='cpu')

# Handle both full checkpoint and weights-only files
if 'model_state_dict' in checkpoint:
    model.load_state_dict(checkpoint['model_state_dict'])
else:
    model.load_state_dict(checkpoint)

model.eval()
print("๐Ÿš€ HarpoonNet 1.2 ConvNeXt model loaded successfully!")

Inference

import cv2
import torch
from torchvision import transforms
from PIL import Image

def preprocess_image(image_path):
    # Load and preprocess image
    img = cv2.imread(image_path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (544, 544))  # Updated resolution
    
    # Convert to tensor
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], 
                           std=[0.229, 0.224, 0.225])
    ])
    
    img_tensor = transform(Image.fromarray(img)).unsqueeze(0)
    return img_tensor

# Run inference
with torch.no_grad():
    img_tensor = preprocess_image('drone_image.jpg')
    predictions = model(img_tensor)
    detections = model.decode_predictions(predictions, confidence_threshold=0.85)  # Higher threshold
    
    # Process detections
    for detection in detections[0]['boxes']:
        print(f"Drone detected at: {detection}")

🎮 Real-time Detection

The model comes with ready-to-use webcam detection scripts:

Basic Detection

python webcam_detection.py --model pytorch_model.pth --conf 0.85

Advanced Tracking

webcam_detection_harpoonnet12.py - Enhanced detection for HarpoonNet 1.2. A GPU is recommended to handle the load; on CPU-only machines you may want to write your own lighter-weight webcam loop (a minimal sketch follows after the controls below), and a more CPU-friendly script may be added later.

Controls:

  • q: Quit
  • +/-: Adjust confidence threshold
  • r: Reset tracker (tracking mode)
  • d: Toggle debug view
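
If you need a simple alternative to the bundled scripts, a minimal webcam loop could look like the sketch below; it loads the model the same way as the Quick Start section and reuses the same preprocessing:

import cv2
import torch
from PIL import Image
from torchvision import transforms
from harpoon_modular import create_harpoon_net_12

# Load the model as in the Quick Start section
model = create_harpoon_net_12(num_classes=1, num_anchors=3, pretrained=False)
checkpoint = torch.load('pytorch_model.pth', map_location='cpu')
model.load_state_dict(checkpoint.get('model_state_dict', checkpoint))
model.eval()

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(cv2.resize(frame, (544, 544)), cv2.COLOR_BGR2RGB)
    tensor = transform(Image.fromarray(rgb)).unsqueeze(0)
    with torch.no_grad():
        preds = model(tensor)
        dets = model.decode_predictions(preds, confidence_threshold=0.85)
    # Draw dets on `frame` here before displaying
    cv2.imshow('HarpoonNet 1.2', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()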

๐Ÿ“ Repository Contents

├── pytorch_model.pth          # Main model checkpoint
├── config.json               # Model configuration
├── training_history.json     # Training metrics and history
├── harpoon_modular.py        # Model architecture
├── config_multi_dataset.py   # Dataset configuration
├── LICENSE                   # Non-commercial license
└── README.md                 # This file

🔧 Model Configuration

  • Classes: 1 (drone)
  • Confidence Threshold: 0.85 (recommended for high precision)
  • NMS Threshold: 0.4 (both thresholds are applied as in the filtering sketch below)
  • Input Resolution: 544x544
  • Normalization: ImageNet standard
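
If you post-process raw outputs yourself, the two thresholds above map onto a standard confidence filter followed by non-maximum suppression. A minimal sketch using torchvision's NMS, assuming you have boxes as an (N, 4) tensor in (x1, y1, x2, y2) format and scores as an (N,) tensor:

import torch
from torchvision.ops import nms

def filter_detections(boxes, scores, conf_thresh=0.85, nms_thresh=0.4):
    # Confidence threshold from the configuration above
    keep = scores >= conf_thresh
    boxes, scores = boxes[keep], scores[keep]
    # IoU-based non-maximum suppression at the configured threshold
    keep = nms(boxes, scores, nms_thresh)
    return boxes[keep], scores[keep]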

📈 Training Details

  • Dataset Size: 109,880+ images from 8 datasets
  • Training Framework: PyTorch
  • Optimizer: AdamW with cosine annealing
  • Learning Rate Schedule: warmup followed by cosine decay (see the sketch after this list)
  • Augmentations: Advanced geometric and photometric
  • Validation Split: Stratified sampling
  • Best Epoch: 5 (validation loss: 0.059270)
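
The exact hyperparameters are not listed in this README, but the named setup (AdamW with warmup and cosine annealing) looks roughly like the sketch below; the learning rate, weight decay, and epoch counts are placeholders, not the values used to train HarpoonNet 1.2:

import torch
from harpoon_modular import create_harpoon_net_12

model = create_harpoon_net_12(num_classes=1, num_anchors=3, pretrained=False)

# Placeholder hyperparameters, for illustration only
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)
warmup = torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=0.1, total_iters=3)
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=47)
scheduler = torch.optim.lr_scheduler.SequentialLR(
    optimizer, schedulers=[warmup, cosine], milestones=[3]
)

# In the training loop: call optimizer.step() per batch and scheduler.step() per epoch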

🎯 Use Cases

✅ Permitted (Non-Commercial)

  • Academic Research: Computer vision studies and publications
  • Educational Projects: University coursework and learning
  • Open Source Projects: Non-profit community tools
  • Personal Experimentation: Hobby and learning projects

๐Ÿ” Requires Commercial License

  • Security Systems: Commercial perimeter monitoring
  • Airport Security: Professional UAV detection systems
  • Military Applications: Defense and surveillance contracts
  • Enterprise Software: Proprietary detection services
  • API Services: Commercial drone detection APIs

⚡ Performance Tips

  1. GPU Acceleration: Use CUDA for optimal performance
  2. Batch Processing: Process multiple images per forward pass for efficiency (see the batching sketch after this list)
  3. Confidence Tuning: Use 0.85+ for high precision applications
  4. Input Quality: 544x544 resolution provides best accuracy
  5. Lighting: Enhanced model performs well in various conditions
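
A minimal batching sketch, reusing the preprocess_image helper and loaded model from the Inference section; stacking tensors lets the model process several images in one forward pass:

import torch

image_paths = ['drone_1.jpg', 'drone_2.jpg', 'drone_3.jpg']  # example paths
batch = torch.cat([preprocess_image(p) for p in image_paths], dim=0)

with torch.no_grad():
    predictions = model(batch)
    detections = model.decode_predictions(predictions, confidence_threshold=0.85)

# Assuming decode_predictions returns one entry per image, as in the Inference example
for path, dets in zip(image_paths, detections):
    print(path, len(dets['boxes']), 'drone(s) detected')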

๐Ÿ› ๏ธ Advanced Features

ConvNeXt-Small Architecture

  • Modern CNN Design: State-of-the-art computer vision backbone
  • Efficient Processing: Optimized for accuracy and speed
  • Robust Detection: Enhanced feature extraction capabilities

ByteTrack Integration

  • Persistent Tracking: Maintains object IDs across frames
  • Occlusion Handling: Robust to temporary occlusions
  • Motion Prediction: Kalman filter-based motion model
  • Track Management: Automatic track creation and deletion

Real-time Optimization

  • Enhanced Architecture: Improved speed-accuracy trade-off
  • Memory Management: Optimized memory footprint
  • Multiple Formats: PyTorch, ONNX, TensorRT support (ONNX export sketched below)
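
The repository listing above only includes the PyTorch checkpoint; a conversion to ONNX would look roughly like the sketch below (the opset version, file name, and tensor names are assumptions, and custom ops in the detection head may require adjustments):

import torch

# `model` loaded and set to eval() as in the Quick Start section
dummy = torch.randn(1, 3, 544, 544)

torch.onnx.export(
    model, dummy, 'harpoonnet12.onnx',
    input_names=['images'], output_names=['predictions'],
    opset_version=17,
    dynamic_axes={'images': {0: 'batch'}},
)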

๐Ÿข Commercial Licensing

For commercial use, we offer flexible licensing options:

  • Enterprise License: Full commercial rights for internal use
  • OEM License: Integration into commercial products
  • API License: Commercial API service deployment
  • Custom Training: Specialized model training services

Contact: [email protected] for pricing and terms.

๐Ÿ“ Citation

If you use HarpoonNet 1.2 in your research, please cite:

@misc{harpoonnet2025,
  title={HarpoonNet 1.2: Advanced Drone Detection with ConvNeXt Architecture},
  author={Christian Khoury},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/christiankhoury05/harpoon-1-2}
}

📄 License

This model is released under a Custom Non-Commercial License.

  • ✅ Non-commercial use: Freely permitted
  • ❌ Commercial use: Requires explicit written permission
  • 📧 Licensing: Contact [email protected]

See LICENSE file for complete terms.

๐Ÿค Contributing

Contributions for non-commercial use are welcome! Please feel free to submit issues and enhancement requests.

📞 Contact

For questions, support, and commercial licensing, contact [email protected].

🔄 Model Updates

  • v1.2: Current version with 109k+ dataset, ConvNeXt-Small backbone
  • v1.1: Previous version with EfficientNet-B0 backbone
  • v1.0: Initial release