Datasets: cognitive-metrology-lab/MathBode

Add task category, paper/code links, and evaluation usage #1
by nielsr (HF Staff) - opened

README.md CHANGED
@@ -1,5 +1,8 @@
 ---
 license: apache-2.0
+size_categories:
+- 10K<n<100K
+pretty_name: MathBode
 tags:
 - math
 - reasoning
@@ -7,9 +10,8 @@ tags:
 - bode-plot
 - llm-evaluation
 - scientific-question-answering
-
-
-- "10K<n<100K"
+task_categories:
+- text-generation
 configs:
 - config_name: linear_solve
   data_files: linear_solve.parquet
@@ -25,6 +27,8 @@ configs:
 
 # MathBode: A Dynamic Benchmark for Mathematical Reasoning
 
+[Paper](https://huggingface.co/papers/2509.23143) | [Code](https://github.com/charleslwang/MathBode-Eval)
+
 <!-- Updated: 2025-09-22 -->
 
 **MathBode** is a benchmark designed to evaluate the dynamic reasoning capabilities of large language models (LLMs). Instead of testing static accuracy on fixed problems, it treats parametric math problems as dynamic systems: it sinusoidally varies a parameter and measures the model's response in terms of **gain** (amplitude tracking) and **phase** (reasoning lag), analogous to a Bode plot in control theory.
@@ -49,7 +53,9 @@ The dataset is provided in a single Parquet file and contains the following colu
 | `ground_truth` | The correct numerical answer for the given prompt. |
 | `symbolic_baseline_answer` | The answer from a perfect symbolic solver (identical to `ground_truth`). |
 
-## Usage
+## Sample Usage
+
+### Load the dataset
 
 You can load the dataset easily using the `datasets` library:
 
@@ -63,6 +69,20 @@ dataset = load_dataset("cognitive-metrology-lab/MathBode")
 print(dataset['train'][0])
 ```
 
+### Run Evaluations
+
+To run evaluations using the associated code, follow the instructions in the [GitHub repository](https://github.com/charleslwang/MathBode-Eval). A quick smoke test can be run as follows:
+
+```bash
+# First, clone the repository and install dependencies
+# git clone https://github.com/charleslwang/MathBode-Eval.git
+# cd MathBode-Eval
+# pip install -r requirements.txt
+
+# Then, run the smoke test
+CONFIG=SMOKE ./run_matrix.sh
+```
+
 ## Citing
 
 If you use this dataset, please cite our work.
@@ -76,4 +96,4 @@ If you use this dataset, please cite our work.
 primaryClass = {cs.AI},
 url = {https://arxiv.org/abs/2509.23143}
 }
-```
+```
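Beyond the default load shown in the new Sample Usage section, the card's YAML lists per-task configs (for example `linear_solve`), and the column table documents `ground_truth` and `symbolic_baseline_answer`. A minimal sketch of loading one config and spot-checking those columns, assuming only the repo id, config name, split, and column names from the card (the spot-check itself is illustrative):

```python
# Minimal sketch: load one config from the card's YAML and inspect the
# documented answer columns. Repo id, config name, split, and column names
# come from the README; everything else here is illustrative.
from datasets import load_dataset

ds = load_dataset("cognitive-metrology-lab/MathBode", "linear_solve", split="train")
print(ds[0]["ground_truth"], ds[0]["symbolic_baseline_answer"])

# The card says the symbolic baseline is identical to the ground truth,
# so the two columns should agree row by row.
sample = ds.select(range(min(100, len(ds))))
mismatches = sum(r["ground_truth"] != r["symbolic_baseline_answer"] for r in sample)
print(f"mismatches in sample: {mismatches}")
```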
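The updated description summarizes model behaviour as gain and phase of its answers relative to a sinusoidally swept parameter. The reference implementation is the MathBode-Eval pipeline invoked above; purely as an illustration of that measurement (variable names and toy data below are assumptions, not the dataset schema), gain and phase at the drive frequency can be estimated by least-squares fitting a sinusoid to the exact answers and to the model's answers and comparing the two fits:

```python
# Illustration only: Bode-style gain/phase estimation for a sinusoidally
# swept parameter. Names and toy data are assumptions for this sketch; the
# dataset's actual evaluation code lives in the MathBode-Eval repository.
import numpy as np

def fit_sinusoid(t, y, omega):
    """Least-squares fit y ~ a*sin(omega*t) + b*cos(omega*t) + c; return (amplitude, phase)."""
    X = np.column_stack([np.sin(omega * t), np.cos(omega * t), np.ones_like(t)])
    (a, b, c), *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a)

def gain_and_phase(t, exact, model, omega):
    """Amplitude ratio (gain) and phase lag of model answers relative to exact answers."""
    amp_ref, ph_ref = fit_sinusoid(t, np.asarray(exact, dtype=float), omega)
    amp_out, ph_out = fit_sinusoid(t, np.asarray(model, dtype=float), omega)
    gain = amp_out / amp_ref                          # 1.0 = perfect amplitude tracking
    lag = np.angle(np.exp(1j * (ph_out - ph_ref)))    # wrapped to (-pi, pi]; 0 = no lag
    return gain, lag

# Toy check: a "model" that attenuates the drive and answers with a small lag.
t = np.linspace(0.0, 1.0, 64)
omega = 2 * np.pi * 3.0
exact = 2.0 + np.sin(omega * t)
model = 2.0 + 0.9 * np.sin(omega * t - 0.3)
print(gain_and_phase(t, exact, model, omega))         # approximately (0.9, -0.3)
```

A gain close to 1 with a phase close to 0 corresponds to the "amplitude tracking" and "reasoning lag" terminology used in the card.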