# Beans-Image-Classification-AI-Model
A fine-tuned image classification model trained on the Beans dataset with 3 classes: angular_leaf_spot, bean_rust, and healthy. This model is built using Hugging Face Transformers and the ViT (Vision Transformer) architecture and is suitable for educational use, plant disease classification tasks, and image classification experiments.
## Model Highlights
- Base Model: `google/vit-base-patch16-224-in21k`
- Fine-tuned on: the Beans dataset
- Classes: `angular_leaf_spot`, `bean_rust`, `healthy`
- Framework: Hugging Face Transformers + PyTorch
- Preprocessing: `AutoImageProcessor` from Transformers
## Intended Uses
- Educational tools for training and evaluation in agriculture and plant disease detection
- Benchmarking vision transformer models on small datasets
- Demonstrating fine-tuning workflows with Hugging Face
## Limitations
- Not suitable for real-world agricultural diagnosis without further domain validation
- Not robust to significant background noise or occlusion in images
- Trained on a small dataset; may not generalize beyond bean leaf diseases
## Input & Output
- Input: RGB image of a bean leaf (expected size 224 × 224); see the preprocessing sketch below
- Output: predicted class label: `angular_leaf_spot`, `bean_rust`, or `healthy`
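A minimal sketch of what the image processor produces for a single image and how a predicted class index maps back to a label (`example_leaf.jpg` is a placeholder file name):

```python
from PIL import Image
from transformers import AutoImageProcessor

processor = AutoImageProcessor.from_pretrained("AventIQ-AI/Beans-Image-Classification-AI-Model")

image = Image.open("example_leaf.jpg").convert("RGB")        # any bean-leaf photo
inputs = processor(images=image, return_tensors="pt")        # resize + normalize
print(inputs["pixel_values"].shape)                          # torch.Size([1, 3, 224, 224])
```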
## Training Details
| Attribute | Value |
|---|---|
| Base Model | `google/vit-base-patch16-224-in21k` |
| Dataset | Beans dataset (train/validation/test) |
| Task Type | Image Classification |
| Image Size | 224 × 224 |
| Epochs | 3 |
| Batch Size | 16 |
| Optimizer | AdamW |
| Loss Function | CrossEntropyLoss |
| Framework | PyTorch + Transformers |
| Hardware | CUDA-enabled GPU |
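The authors' training script is not included in this card. The sketch below shows one fine-tuning workflow consistent with the table above; the learning rate is an assumption (not stated in the card), and `Trainer` uses AdamW and cross-entropy loss by default, matching the listed optimizer and loss.

```python
import torch
from datasets import load_dataset
from transformers import (AutoImageProcessor, AutoModelForImageClassification,
                          Trainer, TrainingArguments)

base_model = "google/vit-base-patch16-224-in21k"
dataset = load_dataset("beans")
processor = AutoImageProcessor.from_pretrained(base_model)

labels = dataset["train"].features["labels"].names
model = AutoModelForImageClassification.from_pretrained(
    base_model,
    num_labels=len(labels),
    id2label={i: l for i, l in enumerate(labels)},
    label2id={l: i for i, l in enumerate(labels)},
)

def transform(batch):
    # Resize and normalize images to the 224x224 pixel_values expected by ViT
    inputs = processor(images=[img.convert("RGB") for img in batch["image"]],
                       return_tensors="pt")
    inputs["labels"] = batch["labels"]
    return inputs

dataset = dataset.with_transform(transform)

def collate_fn(examples):
    return {
        "pixel_values": torch.stack([e["pixel_values"] for e in examples]),
        "labels": torch.tensor([e["labels"] for e in examples]),
    }

args = TrainingArguments(
    output_dir="beans-vit-finetuned",
    num_train_epochs=3,                # from the table above
    per_device_train_batch_size=16,    # from the table above
    learning_rate=5e-5,                # assumption: not stated in the card
    remove_unused_columns=False,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["validation"],
                  data_collator=collate_fn)
trainer.train()
```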
## Evaluation Metrics
| Metric | Score |
|---|---|
| Accuracy | 0.98 |
| F1-Score | 0.99 |
| Precision | 0.98 |
| Recall | 0.99 |
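The scores above are as reported by the authors. The sketch below shows one way similar metrics could be recomputed on the Beans test split; the weighted averaging is an assumption, since the card does not state how the scores were aggregated.

```python
import torch
from datasets import load_dataset
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_name = "AventIQ-AI/Beans-Image-Classification-AI-Model"
processor = AutoImageProcessor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(model_name).eval()

test_set = load_dataset("beans", split="test")
preds, labels = [], []
for example in test_set:
    inputs = processor(images=example["image"], return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    preds.append(logits.argmax(dim=-1).item())
    labels.append(example["labels"])

accuracy = accuracy_score(labels, preds)
precision, recall, f1, _ = precision_recall_fscore_support(
    labels, preds, average="weighted"   # assumption: averaging method not stated
)
print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```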
## Usage
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import torch

model_name = "AventIQ-AI/Beans-Image-Classification-AI-Model"
processor = AutoImageProcessor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(model_name)
model.eval()

def predict(image_path):
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model(**inputs)
    preds = torch.argmax(outputs.logits, dim=1)
    return model.config.id2label[preds.item()]

# Example
print(predict("example_leaf.jpg"))
```
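For quick experiments, the same checkpoint can also be used through the high-level `pipeline` API (an alternative to the function above, not part of the original example):

```python
from transformers import pipeline

classifier = pipeline("image-classification",
                      model="AventIQ-AI/Beans-Image-Classification-AI-Model")
print(classifier("example_leaf.jpg"))  # list of {label, score} dicts
```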
## Quantization
Post-training static quantization was applied with PyTorch to reduce model size and accelerate inference on edge devices.
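The exact quantization recipe is not included in this card. As an illustration only, the sketch below applies PyTorch's dynamic quantization to the Linear layers; the static quantization mentioned above would additionally require calibration data and observer/qconfig setup.

```python
import torch
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained(
    "AventIQ-AI/Beans-Image-Classification-AI-Model"
)

# Illustrative only: dynamic quantization of Linear layers, not the authors' exact recipe
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
print(quantized)  # Linear layers replaced by dynamically quantized equivalents
```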
## Repository Structure
```
beans-vit-finetuned/
├── config.json               # Model architecture & config
├── pytorch_model.bin         # Model weights
├── preprocessor_config.json  # Image processor config
├── special_tokens_map.json   # Auto-generated; not critical for ViT
├── training_args.bin         # Training metadata
└── README.md                 # Model card
```
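If the checkpoint is available locally in the layout above, it can be loaded from the folder instead of the Hub (a minimal sketch; the relative path is an assumption):

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification

processor = AutoImageProcessor.from_pretrained("./beans-vit-finetuned")
model = AutoModelForImageClassification.from_pretrained("./beans-vit-finetuned")
```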
## Contributing
Open to improvements and feedback! Feel free to submit a pull request or open an issue if you find any bugs or want to enhance the model.