## Dataset Summary

**ImageNet-Hard** is a new benchmark comprising 11,350 images collected from various existing ImageNet-scale benchmarks (ImageNet, ImageNet-V2, ImageNet-Sketch, ImageNet-C, ImageNet-R, ImageNet-ReaL, ImageNet-A, and ObjectNet).

This dataset is challenging for state-of-the-art vision models, as merely zooming in often fails to improve their ability to classify images correctly. Consequently, even the most advanced models, such as `CLIP-ViT-L/14@336px`, struggle to perform well on this dataset, achieving only `2.02%` accuracy.

| Model               | Accuracy (%) |
| ------------------- | ------------ |
| ResNet-18           | 11.11        |
| ResNet-50           | 14.91        |
| ViT-B/32            | 18.78        |
| VGG19               | 12.15        |
| AlexNet             | 7.30         |
| EfficientNet-B7     | 18.02        |
| EfficientNet-L2-Ns  | 38.79        |
| CLIP-ViT-L/14@224px | 2.11         |
| CLIP-ViT-L/14@336px | 2.30         |

**Evaluation Code**
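As a minimal sketch of how the accuracies above could be computed, the snippet below scores top-1 accuracy where a prediction counts as correct if it matches any of an image's ground-truth labels (images drawn from benchmarks like ImageNet-ReaL may carry multiple valid labels). The function name and the assumption of label *sets* are illustrative, not the dataset's official evaluation script:

```python
def top1_accuracy(predictions, label_sets):
    """Percentage of predictions that fall inside the image's label set.

    predictions: list of predicted class indices, one per image.
    label_sets:  list of sets of valid class indices, one per image.
    """
    correct = sum(
        1 for pred, labels in zip(predictions, label_sets) if pred in labels
    )
    return 100.0 * correct / len(predictions)


# Toy example: 2 of 3 predictions hit their label set.
preds = [3, 7, 1]
labels = [{3}, {2, 5}, {1, 9}]
print(f"{top1_accuracy(preds, labels):.2f}")  # 66.67
```

In practice the predictions would come from running a classifier over the dataset's images and the label sets from its annotation field; only the scoring logic is shown here.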