Update README.md
@@ -10,9 +10,9 @@ tags:
 license: mit
 ---
 
-#
+# PHOENIX: Hierarchical Contrastive Learning for Patent Image Retrieval
 
-**
+**PHOENIX** is a domain-adapted CLIP/ViT-based model designed to improve **patent image retrieval**. It addresses the unique challenges of retrieving relevant technical drawings in patent documents, especially when searching for **semantically or hierarchically related images**, not just exact matches.
 
 This model is based on `openai/clip-vit-base-patch16` and fine-tuned using a **hierarchical multi-positive contrastive loss** that leverages **Locarno classification** — an international system used to categorize industrial designs.
 
@@ -72,7 +72,7 @@ You can now compare cosine similarity between embeddings to retrieve similar patents
 
 ## 🏆 Results
 
-Evaluated on the **DeepPatent2** dataset,
+Evaluated on the **DeepPatent2** dataset, PHOENIX shows significant gains in:
 - **Intra-category retrieval** (same subclass)
 - **Cross-category generalization** (related but distinct inventions)
 - **Low-parameter robustness**, making it suitable for real-time deployment
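The README text above names a **hierarchical multi-positive contrastive loss** built on Locarno labels. A minimal sketch of what such a loss can look like, assuming each batch item carries a top-level Locarno class and a subclass label; the weighting scheme (`w_sub`, `w_cls`), the temperature, and the function itself are illustrative assumptions, not the model's actual training code:

```python
import numpy as np

def hierarchical_multi_positive_loss(embs, cls_ids, sub_ids,
                                     temp=0.07, w_cls=0.5, w_sub=1.0):
    """Multi-positive contrastive loss with two hierarchy levels (sketch).

    embs:    (n, d) batch of image embeddings
    cls_ids: (n,) top-level Locarno class per item
    sub_ids: (n,) Locarno subclass per item
    Pairs sharing a subclass count as strong positives (w_sub);
    pairs sharing only the top-level class count as weak positives (w_cls).
    """
    z = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    sims = z @ z.T / temp
    np.fill_diagonal(sims, -np.inf)  # exclude self-pairs from the softmax
    # Row-wise log-softmax over the rest of the batch
    m = sims.max(axis=1, keepdims=True)
    log_prob = sims - (m + np.log(np.exp(sims - m).sum(axis=1, keepdims=True)))
    np.fill_diagonal(log_prob, 0.0)  # self term carries zero weight below
    # Positive-pair weights from the label hierarchy
    same_sub = sub_ids[:, None] == sub_ids[None, :]
    same_cls = cls_ids[:, None] == cls_ids[None, :]
    w = np.where(same_sub, w_sub, np.where(same_cls, w_cls, 0.0))
    np.fill_diagonal(w, 0.0)
    has_pos = w.sum(axis=1) > 0  # anchors with at least one positive
    per_anchor = -(w * log_prob).sum(axis=1)[has_pos] / w.sum(axis=1)[has_pos]
    return per_anchor.mean()
```

With a real batch, `embs` would come from the fine-tuned image encoder, and the two label arrays from the Locarno annotations of the training drawings.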
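The changed README also mentions comparing cosine similarity between embeddings to retrieve similar patents. A minimal sketch of that ranking step, assuming the image embeddings have already been extracted with the model (the arrays below are random stand-ins, not real patent embeddings):

```python
import numpy as np

def retrieve_top_k(query_emb, index_embs, k=5):
    """Rank indexed patent drawings by cosine similarity to a query.

    query_emb:  (d,) embedding of the query drawing
    index_embs: (n, d) embeddings of the indexed drawings
    Returns the indices of the k most similar drawings, best first.
    """
    # L2-normalize so the dot product equals cosine similarity
    q = query_emb / np.linalg.norm(query_emb)
    idx = index_embs / np.linalg.norm(index_embs, axis=1, keepdims=True)
    sims = idx @ q
    return np.argsort(-sims)[:k]

# Toy demo with random stand-in "embeddings"
rng = np.random.default_rng(0)
index = rng.normal(size=(100, 512))
query = index[42] + 0.01 * rng.normal(size=512)  # near-duplicate of item 42
print(retrieve_top_k(query, index, k=3))  # item 42 should rank first
```

In practice `index` would be precomputed once over the patent corpus, so each query costs only one matrix-vector product.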