---
license: other
language:
- en
library_name: transformers
tags:
- image-classification
- deepfake-detection
- vit
- verichain
pipeline_tag: image-classification
---

# VeriChain Deepfake Detection Model - ViT

This repository contains the artifacts for a Vision Transformer (ViT) model fine-tuned for deepfake detection, developed as part of the VeriChain project.

The model classifies an image into one of three categories: **Real**, **AI-Generated**, or **Deepfake**.

## Repository Structure

The model artifacts in this repository are organized as follows:

- **/models/vit-deepfake-model/**: Contains the final fine-tuned PyTorch model files, ready to be loaded with the `transformers` library.
- **/models/onnx/**: Contains the model converted to the ONNX format, optimized for production deployment and inference (a minimal inference sketch follows this list).
- **/assets/**: Contains visual assets for documentation, such as the confusion matrix.

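
The ONNX export can be served without PyTorch, for example with `onnxruntime`. The sketch below is illustrative only: the file name `models/onnx/model.onnx`, the 224x224 input size, the 0.5 mean/std normalization, and the class order are assumptions, so check the exported files and the preprocessor config in this repository before relying on them.

```python
# Hypothetical sketch: running the ONNX export with onnxruntime.
# The file name, input size, normalization constants, and label order below
# are assumptions -- verify them against the files in /models/onnx/.
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from PIL import Image

# Download the ONNX graph from the repository (path assumed).
onnx_path = hf_hub_download(
    repo_id="einrafh/verichain-deepfake-models",
    filename="models/onnx/model.onnx",
)
session = ort.InferenceSession(onnx_path)

# Minimal ViT-style preprocessing: resize, scale to [0, 1], normalize, NCHW.
image = Image.open("path/to/your/image.jpg").convert("RGB").resize((224, 224))
pixels = np.asarray(image, dtype=np.float32) / 255.0
pixels = ((pixels - 0.5) / 0.5).transpose(2, 0, 1)[np.newaxis, ...]  # (1, 3, 224, 224)

# Run inference and report the arg-max over the three classes.
input_name = session.get_inputs()[0].name
logits = session.run(None, {input_name: pixels})[0]
print("Predicted class index:", int(np.argmax(logits, axis=-1)[0]))
```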
## How to Use (PyTorch Model)

You can use the fine-tuned PyTorch model directly with the `pipeline` function from the `transformers` library. Make sure to specify the correct `subfolder`.

```python
from transformers import pipeline
from PIL import Image

# Load the image-classification pipeline with the fine-tuned model.
# The 'subfolder' parameter points to the directory containing the model files.
classifier = pipeline(
    "image-classification",
    model="einrafh/verichain-deepfake-models",
    subfolder="models/vit-deepfake-model",
)

# Load an image to classify.
# Replace 'path/to/your/image.jpg' with an actual image file.
try:
    image = Image.open('path/to/your/image.jpg')
    results = classifier(image)
    print(results)
except FileNotFoundError:
    print("Please provide a valid path to an image file.")

# Example output:
# [{'label': 'Deepfake', 'score': 0.9985}, {'label': 'AI Generated', 'score': 0.0010}, {'label': 'Real', 'score': 0.0005}]
```
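
Depending on your installed `transformers` version, the `subfolder` argument may not be forwarded by `pipeline` itself. A more explicit route, sketched below, is to load the model and image processor directly (both `from_pretrained` calls accept `subfolder`) and pass the instantiated objects to the pipeline; treat it as an equivalent alternative rather than the canonical usage.

```python
# Alternative: load the model and image processor explicitly with `subfolder`,
# then hand the instantiated objects to `pipeline`.
from transformers import AutoImageProcessor, AutoModelForImageClassification, pipeline

repo_id = "einrafh/verichain-deepfake-models"
subfolder = "models/vit-deepfake-model"

image_processor = AutoImageProcessor.from_pretrained(repo_id, subfolder=subfolder)
model = AutoModelForImageClassification.from_pretrained(repo_id, subfolder=subfolder)

classifier = pipeline(
    "image-classification",
    model=model,
    image_processor=image_processor,
)
print(classifier("path/to/your/image.jpg"))  # accepts a file path, URL, or PIL image
```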
## Evaluation Results

The model was evaluated on a held-out test set of 2,000 images, achieving near-perfect performance.

| Metric               | Score      |
|----------------------|------------|
| **Test Accuracy**    | **0.9990** |
| **F1-Score (Macro)** | **0.9990** |
| Test Loss            | 0.0202     |

### Classification Report

| Class        | Precision | Recall | F1-Score |
|--------------|-----------|--------|----------|
| AI Generated | 1.0000    | 0.9970 | 0.9985   |
| Deepfake     | 0.9970    | 1.0000 | 0.9985   |
| Real         | 1.0000    | 1.0000 | 1.0000   |

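
For reference, the macro F1-score reported above is the unweighted mean of the per-class F1-scores:

$$
\mathrm{F1}_{\text{macro}} = \frac{0.9985 + 0.9985 + 1.0000}{3} \approx 0.9990
$$
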
### Confusion Matrix

The confusion matrix below shows the model's high precision and recall across all classes, with very few misclassifications.

![Confusion Matrix](./assets/confusion_matrix_vit.png)

## Citation

If you use this model in your work, please consider citing this repository.

```bibtex
@misc{verichain_model_2025,
  author       = {Muhammad Rafly Ash Shiddiqi},
  title        = {VeriChain Deepfake Detection Model - ViT},
  year         = {2025},
  publisher    = {Hugging Face},
  journal      = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/einrafh/verichain-deepfake-models}},
}
```

## License

Copyright (c) 2025 Muhammad Rafly Ash Shiddiqi.
|