[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1YpIOjJFZ-Zaomg77ImeQHSqYBLB8T1Ce?usp=sharing)

This table compares `SpaceOm`, `SpaceQwen`, `SpaceThinker` and other leading open-source models on the SpaCE-10 benchmark. Top scores in each category are **bolded**.

| Model                  | EQ        | SQ        | SA        | OO        | OS        | EP        | FR        | SP        | Source   |
|------------------------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|----------|
| **SpaceOm**            | 32.47     | 24.81     | **47.63** | 50.00     | 32.52     | 9.12      | 37.04     | 25.00     | GPT Eval |
| SpaceThinker           | 32.73     | 24.81     | 47.26     | 50.33     | 33.63     | 9.25      | 37.54     | 26.25     | GPT Eval |
| SpaceQwen              | 31.19     | 25.89     | 41.61     | **51.98** | **35.18** | 10.97     | 36.54     | 22.50     | GPT Eval |
| Qwen2.5-VL-7B-Instruct | 32.70     | 31.00     | 41.30     | 32.10     | 27.60     | 15.40     | 26.30     | 27.50     | Table    |
| LLaVA-OneVision-7B     | **37.40** | 36.20     | 42.90     | 44.20     | 27.10     | 11.20     | **45.60** | 27.20     | Table    |
| VILA1.5-7B             | 30.20     | **38.60** | 39.90     | 44.10     | 16.50     | **35.10** | 30.10     | **37.60** | Table    |
| InternVL2.5-4B         | 34.30     | 34.40     | 43.60     | 44.60     | 16.10     | 30.10     | 33.70     | 36.70     | Table    |

**Legend:**
- EQ: Entity Quantification