Add Quick Start / Sample Usage section (#3)
Opened by nielsr (HF Staff)

README.md (updated)

---
language:
- en
- zh
license:
- mit
task_categories:
- question-answering
- image-text-to-text
tags:
- physics
- olympiad
- benchmark
- multimodal
- llm-evaluation
- science
---

<div align="center">

<p align="center" style="font-size:28px"><b>HiPhO: High School Physics Olympiad Benchmark</b></p>
<p align="center">
<a href="https://phyarena.github.io/">[Leaderboard]</a>
<a href="https://huggingface.co/datasets/SciYu/HiPhO">[Dataset]</a>
<a href="https://github.com/SciYu/HiPhO">[GitHub]</a>
<a href="https://huggingface.co/papers/2509.07894">[Paper]</a>
</p>

[](https://opensource.org/license/mit)
</div>

**New (Sep. 16):** We launched "[**PhyArena**](https://phyarena.github.io/)", a physics reasoning leaderboard that incorporates the HiPhO benchmark.

## Introduction

**HiPhO** (High School Physics Olympiad Benchmark) is the **first benchmark** specifically designed to evaluate the physical reasoning abilities of (M)LLMs on **real-world Physics Olympiads from 2024–2025**.

<div align="center">
<img src="intro/HiPhO_overview.png" alt="HiPhO overview: five rings" width="600"/>
</div>

### Key Features

1. **Up-to-date Coverage**: Includes 13 Olympiad exam papers from 2024–2025 across international and regional competitions.
2. **Mixed-modal Content**: Supports four modality types, spanning from text-only to diagram-based problems.
3. **Professional Evaluation**: Uses official marking schemes for answer-level and step-level grading.
4. **Human-level Comparison**: Maps model scores to medal levels (Gold/Silver/Bronze) and compares them with human performance.

## IPhO 2025 (Theory) Results

<div align="center">
<img src="intro/HiPhO_IPhO2025.png" alt="IPhO 2025 results" width="800"/>
</div>

- **Top-1 Human Score**: 29.2 / 30.0
- **Top-1 Model Score**: 22.7 / 29.4 (Gemini-2.5-Pro)
- **Gold Threshold**: 19.7
- **Silver Threshold**: 12.1
- **Bronze Threshold**: 7.2

> Although models like Gemini-2.5-Pro and GPT-5 achieved gold-level scores, they still fall noticeably short of the very best human contestants.
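
Mapping an exam score to a medal is a simple threshold comparison against the official cut-offs listed above. A minimal sketch follows; the function name is ours and the hard-coded defaults are the IPhO 2025 theory thresholds, so other exams in HiPhO would use their own official cut-offs.

```python
def medal_for_score(score: float,
                    gold: float = 19.7,
                    silver: float = 12.1,
                    bronze: float = 7.2) -> str:
    """Map an exam score to a medal level using official thresholds.

    Defaults are the IPhO 2025 theory cut-offs listed above; pass the
    thresholds of the exam you are evaluating for other Olympiads.
    """
    if score >= gold:
        return "Gold"
    if score >= silver:
        return "Silver"
    if score >= bronze:
        return "Bronze"
    return "No medal"


print(medal_for_score(22.7))  # Gemini-2.5-Pro's score -> "Gold"
print(medal_for_score(11.0))  # below the silver cut-off -> "Bronze"
```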

## Dataset Overview

<div align="center">
<img src="intro/HiPhO_statistics.png" alt="framework and stats" width="700"/>
</div>

HiPhO contains:
- **13 Physics Olympiads**
- **360 Problems**
- Categorized across:
  - **5 Physics Fields**: Mechanics, Electromagnetism, Thermodynamics, Optics, Modern Physics
  - **4 Modality Types**: Text-Only, Text+Illustration Figure, Text+Variable Figure, Text+Data Figure
  - **6 Answer Types**: Expression, Numerical Value, Multiple Choice, Equation, Open-Ended, Inequality

Evaluation is conducted using:
- **Answer-level and step-level scoring**, aligned with official marking schemes
- **Exam score** as the evaluation metric
- **Medal-based comparison**, using official thresholds for gold, silver, and bronze
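
The dataset can be loaded directly from the Hugging Face Hub with the `datasets` library. The sketch below is a minimal loading example and assumes the default configuration; check the dataset page for the exact config, split, and column names.

```python
from datasets import load_dataset

# Load HiPhO from the Hub (default configuration assumed; adjust if the
# repository defines multiple configs).
dataset = load_dataset("SciYu/HiPhO")

# Inspect the available splits, the schema, and one example problem.
print(dataset)
first_split = next(iter(dataset.values()))
print(first_split.features)
print(first_split[0])
```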

## Modality Categorization

<div align="center">
<img src="intro/HiPhO_modality.png" alt="modality examples" width="700"/>
</div>

- **Text-Only (TO)**: Pure text, no figures
- **Text+Illustration Figure (TI)**: Figures illustrate physical setups
- **Text+Variable Figure (TV)**: Figures define key variables or geometry
- **Text+Data Figure (TD)**: Figures show plots, data, or functions absent from text

> As models move from TO → TD, performance drops sharply, highlighting the challenges of visual reasoning.
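
If you want to slice the benchmark along these modality types, something like the following works with the `datasets` library. The column name `modality` and the label string are assumptions made for illustration; substitute the actual field names from the dataset schema.

```python
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("SciYu/HiPhO")
split = next(iter(dataset.values()))

# Count problems per modality type. The "modality" column is hypothetical;
# replace it with the real field name from split.features.
print(Counter(example["modality"] for example in split))

# Keep only text-only problems, e.g. for evaluating text-only LLMs.
text_only = split.filter(lambda ex: ex["modality"] == "Text-Only")
print(len(text_only))
```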

## Main Results

<div align="center">
<img src="intro/HiPhO_main_results.png" alt="main results medal table" width="700"/>
</div>

- **Closed-source reasoning MLLMs** lead the benchmark, earning **6–12 gold medals** (Top 5: Gemini-2.5-Pro, Gemini-2.5-Flash-Thinking, GPT-5, o3, Grok-4)
- **Open-source MLLMs** mostly score at or below the **bronze** level
- **Open-source LLMs** demonstrate **stronger reasoning** and generally outperform open-source MLLMs

## Quick Start

### Install Python Packages
First, create a conda environment and install the relevant Python packages:
```bash
conda create -n pae python==3.10
conda activate pae

git clone https://github.com/amazon-science/PAE
cd PAE

# Install PAE
pip install -e .

# Install LLaVA
git clone https://github.com/haotian-liu/LLaVA.git
cd LLaVA
pip install -e .
pip install -e ".[train]"
pip install flash-attn==2.5.9.post1 --no-build-isolation
```

### Install Chrome
Install Google Chrome and ChromeDriver version 125.0.6422.141 to reproduce our results:
```bash
sudo apt-get update
wget --no-verbose -O /tmp/chrome.deb https://dl.google.com/linux/chrome/deb/pool/main/g/google-chrome-stable/google-chrome-stable_125.0.6422.141-1_amd64.deb \
  && apt install -y /tmp/chrome.deb \
  && rm /tmp/chrome.deb

wget -O /tmp/chromedriver.zip https://storage.googleapis.com/chrome-for-testing-public/125.0.6422.141/linux64/chromedriver-linux64.zip
cd /tmp
unzip /tmp/chromedriver.zip
mv chromedriver-linux64/chromedriver /usr/local/bin
rm /tmp/chromedriver.zip
rm -r chromedriver-linux64
export PATH=$PATH:/usr/local/bin
```
You can then verify that Google Chrome and ChromeDriver were installed successfully:
```bash
google-chrome --version
# Google Chrome 125.0.6422.141
chromedriver --version
# ChromeDriver 125.0.6422.141
```

### Play with the Model Yourself
```python
import os
from types import SimpleNamespace

import torch
from accelerate import Accelerator
from tqdm import tqdm

import pae
from pae.models import LlavaAgent, ClaudeAgent
from pae.environment.webgym import BatchedWebEnv
from llava.model.language_model.llava_mistral import LlavaMistralForCausalLM

# ============= Instantiate the agent =============
config_dict = {
    "use_lora": False,
    "use_q4": False,        # the 34B model is quantized to 4-bit; set to True if you are using the 34B model
    "use_anyres": False,
    "temperature": 1.0,
    "max_new_tokens": 512,
    "train_vision": False,
    "num_beams": 1,
}
config = SimpleNamespace(**config_dict)

accelerator = Accelerator()
agent = LlavaAgent(policy_lm="yifeizhou/pae-llava-7b",  # alternative models: "yifeizhou/pae-llava-7b-webarena", "yifeizhou/pae-llava-34b"
                   device=accelerator.device,
                   accelerator=accelerator,
                   config=config)

# ============= Instantiate the environment =============
test_tasks = [{"web_name": "Google Map",
               "id": "0",
               "ques": "Locate a parking lot near the Brooklyn Bridge that opens 24 hours. Review the user comments about it.",
               "web": "https://www.google.com/maps/"}]
save_path = "xxx"  # replace with the directory where downloads and outputs should be written

test_env = BatchedWebEnv(tasks=test_tasks,
                         do_eval=False,
                         download_dir=os.path.join(save_path, 'test_driver', 'download'),
                         output_dir=os.path.join(save_path, 'test_driver', 'output'),
                         batch_size=1,
                         max_iter=10)

# For inspecting the images and actions afterwards
image_histories = []   # stores the history of the paths of images
action_histories = []  # stores the history of actions

results = test_env.reset()
image_histories.append(results[0][0]["image"])

observations = [r[0] for r in results]
actions = agent.get_action(observations)
action_histories.append(actions[0])
dones = None

for _ in tqdm(range(3)):
    if dones is not None and all(dones):
        break
    results = test_env.step(actions)
    image_histories.append(results[0][0]["image"])
    observations = [r[0] for r in results]
    actions = agent.get_action(observations)
    action_histories.append(actions[0])
    dones = [r[2] for r in results]

print("Done!")
print("image_histories: ", image_histories)
print("action_histories: ", action_histories)
```

## Download

- Dataset & Annotations: [https://huggingface.co/datasets/SciYu/HiPhO](https://huggingface.co/datasets/SciYu/HiPhO)
- GitHub Repository: [https://github.com/SciYu/HiPhO](https://github.com/SciYu/HiPhO)
- Paper: [https://arxiv.org/abs/2509.07894](https://arxiv.org/abs/2509.07894)
- Contact: *[email protected]*
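
To download the raw repository contents (problems, figures, and annotations) rather than loading them through `datasets`, a small sketch with `huggingface_hub` is shown below; the local target directory is just a placeholder.

```python
from huggingface_hub import snapshot_download

# Download the full dataset repository to a local folder.
# "./HiPhO" is a placeholder path; point it wherever you like.
local_path = snapshot_download(repo_id="SciYu/HiPhO",
                               repo_type="dataset",
                               local_dir="./HiPhO")
print("Dataset files downloaded to:", local_path)
```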

## Citation

```bibtex
@article{hipho2025,
  title={HiPhO: How Far Are (M)LLMs from Humans in the Latest High School Physics Olympiad Benchmark?},
  author={Yu, Fangchen and Wan, Haiyuan and Cheng, Qianjia and Zhang, Yuchen and Chen, Jiacheng and Han, Fujun and Wu, Yulun and Yao, Junchi and Hu, Ruilizhen and Ding, Ning and Cheng, Yu and Chen, Tao and Bai, Lei and Zhou, Dongzhan and Luo, Yun and Cui, Ganqu and Ye, Peng},
  journal={arXiv preprint arXiv:2509.07894},
  year={2025}
}
```