Commit 210bdef (verified) by prithivMLmods · 1 Parent(s): ad8b85d

Update README.md

Files changed (1): README.md (+55, -2)

README.md:
base_model:
- google/vit-base-patch32-224-in21k
---

# **Deepfake-Detection-Exp-02-22**

Deepfake-Detection-Exp-02-22 is a ViT-based image classification model trained on a minimalist, high-quality dataset to distinguish deepfake images from real ones. It is fine-tuned from Google's **`google/vit-base-patch32-224-in21k`**.

Classification report:

![download (1).png](https://cdn-uploads.huggingface.co/production/uploads/6720824b15b6282a2464fc58/-25Oh3wureg_MI4nvjh7w.png)

# **Inference with Hugging Face Pipeline**
```python
from transformers import pipeline

# Load the model (device=0 runs on the first GPU; use device=-1 for CPU)
pipe = pipeline('image-classification', model="prithivMLmods/Deepfake-Detection-Exp-02-22", device=0)

# Predict on an image
result = pipe("path_to_image.jpg")
print(result)
```
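
The pipeline also accepts a list of images and returns one prediction list per input, which is convenient for screening many files at once. A minimal sketch, assuming the image paths below are placeholders for your own files:

```python
from transformers import pipeline

pipe = pipeline('image-classification', model="prithivMLmods/Deepfake-Detection-Exp-02-22", device=-1)

# Placeholder paths; replace with real image files
paths = ["image_01.jpg", "image_02.jpg", "image_03.jpg"]

# Each result is a list of {label, score} dicts sorted by score,
# so the first entry is the top prediction for that image
for path, preds in zip(paths, pipe(paths)):
    top = preds[0]
    print(f"{path}: {top['label']} (score: {top['score']:.3f})")
```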

# **Inference with PyTorch**
```python
from transformers import ViTForImageClassification, ViTImageProcessor
from PIL import Image
import torch

# Load the model and processor
model = ViTForImageClassification.from_pretrained("prithivMLmods/Deepfake-Detection-Exp-02-22")
processor = ViTImageProcessor.from_pretrained("prithivMLmods/Deepfake-Detection-Exp-02-22")

# Load and preprocess the image
image = Image.open("path_to_image.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

# Perform inference
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
    predicted_class = torch.argmax(logits, dim=1).item()

# Map class index to label
label = model.config.id2label[predicted_class]
print(f"Predicted Label: {label}")
```
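
If you also want a confidence score, the logits can be passed through a softmax. A minimal sketch that reuses `logits` and `model` from the PyTorch example above:

```python
import torch

# Convert logits to class probabilities
probs = torch.softmax(logits, dim=-1)[0]

# Print every label with its probability
for idx, p in enumerate(probs.tolist()):
    print(f"{model.config.id2label[idx]}: {p:.3f}")
```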

# **Limitations**
1. **Generalization Issues** – The model may not perform well on deepfake images produced by unseen or novel generation techniques.
2. **Dataset Bias** – The training data may not cover all variations of real and fake images, which can lead to biased predictions.
3. **Resolution Constraints** – Because the model builds on `vit-base-patch32-224-in21k`, inputs are resized to 224x224, so fine detail in high-resolution images may be lost.
4. **Adversarial Vulnerabilities** – The model may be susceptible to adversarial attacks designed to fool vision transformers.
5. **False Positives & False Negatives** – The model may occasionally misclassify real images as deepfakes and vice versa, so critical applications should keep a human in the loop (see the review-threshold sketch below).
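
One simple way to route uncertain predictions to human review is to act only when the top score clears a threshold. A minimal sketch, assuming the pipeline output format shown above; the threshold value is an arbitrary placeholder, not a tuned recommendation:

```python
from transformers import pipeline

pipe = pipeline('image-classification', model="prithivMLmods/Deepfake-Detection-Exp-02-22", device=-1)

def classify_or_flag(image_path, threshold=0.90):
    """Return the top label, or flag the image for human review if confidence is low."""
    top = pipe(image_path)[0]  # highest-scoring {label, score} entry
    if top["score"] < threshold:
        return "needs human review"
    return top["label"]

print(classify_or_flag("path_to_image.jpg"))
```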

# **Intended Use**
1. **Deepfake Detection** – Designed to identify deepfake images in media, on social platforms, and in forensic analysis.
2. **Research & Development** – Useful for researchers studying deepfake detection and improving ViT-based classification models.
3. **Content Moderation** – Can be integrated into platforms to detect and flag manipulated images.
4. **Security & Forensics** – Assists in cybersecurity applications where verifying the authenticity of images is crucial.
5. **Educational Purposes** – Can be used to train AI practitioners and students in computer vision and deepfake detection.