prithivMLmods committed
Commit 0d207f4 · verified · 1 Parent(s): c66723b

Update README.md

Files changed (1): README.md (+90, -1)

README.md CHANGED
---
license: apache-2.0
language:
- en
base_model:
- google/siglip2-base-patch16-224
pipeline_tag: image-classification
library_name: transformers
tags:
- gender
- fashion
- product
---

# **Fashion-Product-Gender**

> **Fashion-Product-Gender** is a vision model fine-tuned from **google/siglip2-base-patch16-224** using the **SiglipForImageClassification** architecture. It classifies fashion product images into one of five gender categories.

```py
Classification Report:
              precision    recall  f1-score   support
...
    accuracy                         0.7407     44072
   macro avg     0.5933    0.3438    0.3480     44072
weighted avg     0.7240    0.7407    0.7130     44072
```
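
A report in this format can be recomputed on your own labeled split with scikit-learn's `classification_report`. The following is a minimal sketch, assuming scikit-learn is installed and using hypothetical `y_true`/`y_pred` integer label arrays that are not part of this repository:

```python
from sklearn.metrics import classification_report

# Hypothetical ground-truth and predicted class ids (0-4), stand-ins for a
# real evaluation split; replace with your own arrays.
y_true = [0, 1, 2, 3, 4, 2, 4]
y_pred = [0, 1, 2, 4, 4, 2, 3]

label_names = ["Boys", "Girls", "Men", "Unisex", "Women"]

# labels=range(5) keeps every class in the report even if one is absent
# from a small sample; digits=4 matches the precision shown above.
print(classification_report(y_true, y_pred, labels=range(5),
                            target_names=label_names, digits=4))
```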

The model predicts one of the following gender categories for fashion products:

- **0:** Boys
- **1:** Girls
- **2:** Men
- **3:** Unisex
- **4:** Women

---

# **Run with Transformers 🤗**

```python
!pip install -q transformers torch pillow gradio
```

```python
import gradio as gr
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

# Load model and processor
model_name = "prithivMLmods/Fashion-Product-Gender"  # Replace with your actual model path
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# Label mapping
id2label = {
    0: "Boys",
    1: "Girls",
    2: "Men",
    3: "Unisex",
    4: "Women"
}

def classify_gender(image):
    """Predicts the gender category for a fashion product."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    predictions = {id2label[i]: round(probs[i], 3) for i in range(len(probs))}
    return predictions

# Gradio interface
iface = gr.Interface(
    fn=classify_gender,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Gender Prediction Scores"),
    title="Fashion-Product-Gender",
    description="Upload a fashion product image to predict the target gender category (Boys, Girls, Men, Unisex, Women)."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
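
For a one-off prediction without the Gradio UI, the same model and processor can be used directly. A minimal sketch, where `"product.jpg"` is a placeholder path for any local fashion product image:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/Fashion-Product-Gender"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# "product.jpg" is a placeholder; point it at any local product image.
image = Image.open("product.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Reuse the id2label mapping from the Gradio example above.
id2label = {0: "Boys", 1: "Girls", 2: "Men", 3: "Unisex", 4: "Women"}
pred_id = logits.argmax(dim=-1).item()
print(f"Predicted category: {id2label[pred_id]}")
```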

---

# **Intended Use**

This model is best suited for:

- **Fashion E-commerce tagging and search**
- **Personalized recommendations based on gender**
- **Catalog organization and gender-based filters** (see the batch-tagging sketch below)
- **Retail analytics and demographic insights**
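
As a concrete example of the catalog-tagging use case, here is a minimal batch-inference sketch; the directory name `catalog_images/` and the batching logic are illustrative assumptions, not part of this repository:

```python
import os
import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

model_name = "prithivMLmods/Fashion-Product-Gender"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)
id2label = {0: "Boys", 1: "Girls", 2: "Men", 3: "Unisex", 4: "Women"}

# "catalog_images/" is a hypothetical folder of product photos.
image_dir = "catalog_images"
paths = [os.path.join(image_dir, f) for f in sorted(os.listdir(image_dir))
         if f.lower().endswith((".jpg", ".jpeg", ".png"))]

batch_size = 16
for start in range(0, len(paths), batch_size):
    batch_paths = paths[start:start + batch_size]
    images = [Image.open(p).convert("RGB") for p in batch_paths]
    inputs = processor(images=images, return_tensors="pt")

    with torch.no_grad():
        preds = model(**inputs).logits.argmax(dim=-1).tolist()

    # One "path <tab> label" line per product, ready to feed a catalog filter.
    for path, pred in zip(batch_paths, preds):
        print(f"{path}\t{id2label[pred]}")
```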