LukasHug committed on
Commit 47d9ef1 · 1 Parent(s): f082004
Files changed (6)
  1. README.md +190 -28
  2. app.py +257 -3
  3. packages.txt +1 -0
  4. requirements.txt +5 -1
  5. tests.py +0 -17
  6. verifiablerewardsforscalablelogicalreasoning.py +259 -48
README.md CHANGED
@@ -1,50 +1,212 @@
 ---
 title: VerifiableRewardsForScalableLogicalReasoning
- datasets:
- -
- tags:
- - evaluate
- - metric
- description: "TODO: add a description here"
 sdk: gradio
- sdk_version: 3.19.1
 app_file: app.py
 pinned: false
 ---

- # Metric Card for VerifiableRewardsForScalableLogicalReasoning
-
- ***Module Card Instructions:*** *Fill out the following subsections. Feel free to take a look at existing metric cards if you'd like examples.*

 ## Metric Description
- *Give a brief overview of this metric, including what task(s) it is usually used for, if any.*

- ## How to Use
- *Give general statement of how to use the metric*

- *Provide simplest possible example for using the metric*

- ### Inputs
- *List all input arguments in the format below*
- - **input_field** *(type): Definition of input, with explanation if necessary. State any default value(s).*

- ### Output Values

- *Explain what this metric outputs and provide an example of what the metric output looks like. Modules should return a dictionary with one or multiple key-value pairs, e.g. {"bleu" : 6.02}*

- *State the range of possible values that the metric's output can take, as well as what in that range is considered good. For example: "This metric can take on any value between 0 and 100, inclusive. Higher scores are better."*

- #### Values from Popular Papers
- *Give examples, preferrably with links to leaderboards or publications, to papers that have reported this metric, along with the values they have reported.*

- ### Examples
- *Give code examples of the metric being used. Try to include examples that clear up any potential ambiguity left from the metric description above. If possible, provide a range of examples that show both typical and atypical results, as well as examples where a variety of input parameters are passed.*

- ## Limitations and Bias
- *Note any known limitations or biases that the metric has, with links and references if possible.*

 ## Citation
- *Cite the source where this metric was introduced.*

 ## Further References
- *Add any useful further references.*
 ---
 title: VerifiableRewardsForScalableLogicalReasoning
+ datasets: []
+ colorFrom: blue
+ colorTo: red
 sdk: gradio
+ sdk_version: 5.34.2
 app_file: app.py
 pinned: false
+ tags:
+ - evaluate
+ - reward
+ - reasoning
+ - metric
+ description: >-
+   VerifiableRewardsForScalableLogicalReasoning is a metric for evaluating logical reasoning in AI systems by providing verifiable rewards. It computes rewards through symbolic execution of candidate solutions against validation programs, enabling automatic, transparent, and reproducible evaluation of AI systems.
 ---

+ # Metric Card for Symbolic Judge: Verifiable Rewards for Scalable Logical Reasoning

 ## Metric Description
+ Verifiable Rewards for Scalable Logical Reasoning (**SLR**) introduces a **symbolic judge** that provides verifiable rewards for logical reasoning tasks.
+ To check whether a given task is solved, the symbolic judge evaluates a candidate solution (i.e., a logic rule, typically generated by a language model) against an **executable validation program** that encodes the task's background knowledge and labeled examples.
+ Evaluations performed by the symbolic judge are fully verifiable and grounded in formal logic, ensuring an automatic, transparent, and reproducible standard for evaluation and reward in both supervised and reinforcement learning settings.
+
+ ### How it Works
+ - **Input:** The symbolic judge takes as input a candidate hypothesis (logic rule) and an executable validation program containing background knowledge and examples.
+ - **Execution:** The candidate rule is executed against the validation program using a Prolog interpreter (see the sketch after this list).
+ - **Correctness Criteria:** The rule is considered correct if it entails all positive examples and rejects all negative examples.
+ - **Metrics:** The symbolic judge computes a range of evaluation metrics (detailed below).
+
+ **Note:** A local Prolog interpreter is required to execute validation programs.
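+
+ For illustration, the judge internally renames the example predicates to `pos`/`neg`, appends a small check harness plus the candidate rule, and runs the result with SWI-Prolog. A sketch of the generated program for a unary task with the default predicates (mirroring the metric's internal `_evaluate_with_prolog`) is:
+
+ ```python
+ # Sketch only: the harness the symbolic judge generates before calling swipl
+ check_harness = """
+ check(X1) :- pos(X1), eastbound(X1).     % positive example covered
+ check(X1) :- neg(X1), \\+ eastbound(X1). % negative example rejected
+ check_all :- forall((pos(X1); neg(X1)), check(X1)).
+ """
+ ```
+
+ The partial score is the number of examples for which `check/1` succeeds divided by the total number of examples.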
+ ---
+
+ ### Inputs
+ - **predictions** (`list` of `str`): Each prediction should be a Prolog rule like "eastbound(T) :- Body."
+ - **references** (`list` of `dict`): Each reference should contain (a minimal entry is sketched after this list):
+   - **validation_program** (`str`): Prolog program with background knowledge and examples
+   - **evaluation_config** (`dict`, optional): Configuration with:
+     - **positive_predicate** (`str`): Predicate identifying positive examples (default: "eastbound")
+     - **negative_predicate** (`str`): Predicate identifying negative examples (default: "westbound")
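+
+ For instance, a minimal well-formed reference entry (with placeholder facts) looks like:
+
+ ```python
+ reference = {
+     "validation_program": "eastbound(train0).\nwestbound(train1).\n% ... background facts ...",
+     "evaluation_config": {
+         "positive_predicate": "eastbound",
+         "negative_predicate": "westbound",
+     },
+ }
+ ```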
+
+ ### Metrics & Output Values
+ - **accuracy** (`float`): Proportion of predictions that correctly classify all examples (0.0 to 1.0)
+ - **partial_score** (`float`): Average proportion of correctly classified examples (0.0 to 1.0)
+ - **syntax_score** (`float`): Proportion of rules with valid Prolog syntax (0.0 to 1.0)
+ - **detailed_results** (`list` of `dict`): Per-example results with:
+   - **is_correct** (`bool`): Whether the rule correctly classifies all examples
+   - **partial_score** (`float`): Proportion of correctly classified examples
+   - **syntax_valid** (`bool`): Whether the rule has valid syntax
+   - **error** (`str`, optional): Any error messages from Prolog evaluation
+   - **exec_time1** (`float`, optional): Execution time of the evaluation in seconds
+
+ For example, a rule that classifies 9 of 10 examples correctly contributes a partial_score of 0.9 but counts as incorrect for accuracy, which requires all examples to be classified correctly.
+ ---
+ ## How to Use with the datasets library
+ ```python
+ from evaluate import load
+ from datasets import load_dataset
+
+ # Load the symbolic judge for logical reasoning
+ symbolic_judge = load("AIML-TUDA/VerifiableRewardsForScalableLogicalReasoning")
+
+ # Load the AIML-TUDA/SLR-Bench dataset
+ ds = load_dataset('AIML-TUDA/SLR-Bench', 'v1-All')
+ ds_test = ds['test'][:5]
+
+ # Prepare the predictions and references
+ rules = ds_test['ground-truth rule']
+ references = [{'validation_program': p,
+                'evaluation_config': {
+                    "positive_predicate": "eastbound",
+                    "negative_predicate": "westbound"
+                }
+               } for p in ds_test['validation program']]
+
+ # Compute the evaluation
+ results = symbolic_judge.compute(predictions=rules, references=references)
+ print(results)
+ ```
+
+ ### Outputs
+
+ ```python
+ {'accuracy': 1.0,
+  'partial_score': 1.0,
+  'syntax_score': 1.0,
+  'detailed_results': [{'is_correct': True, 'partial_score': 1.0, 'syntax_valid': True, 'error': None, 'exec_time1': 0.014362812042236328},
+                       {'is_correct': True, 'partial_score': 1.0, 'syntax_valid': True, 'error': None, 'exec_time1': 0.012364625930786133},
+                       {'is_correct': True, 'partial_score': 1.0, 'syntax_valid': True, 'error': None, 'exec_time1': 0.012273550033569336},
+                       {'is_correct': True, 'partial_score': 1.0, 'syntax_valid': True, 'error': None, 'exec_time1': 0.012486696243286133},
+                       {'is_correct': True, 'partial_score': 1.0, 'syntax_valid': True, 'error': None, 'exec_time1': 0.012131929397583008}]}
+ ```
+
+ ---
+
+ ## Examples
+
+ ### Example 1: Evaluating a Single Rule
+
+ ```python
+ from evaluate import load
+
+ symbolic_judge = load("AIML-TUDA/VerifiableRewardsForScalableLogicalReasoning")
+
+ validation_program = """
+ eastbound(train0).
+ has_car(train0, car0_1).
+ car_num(car0_1, 1).
+ car_color(car0_1, white).
+ car_len(car0_1, short).
+ has_wall(car0_1, full).
+
+ westbound(train1).
+ has_car(train1, car1_1).
+ car_num(car1_1, 1).
+ car_color(car1_1, yellow).
+ car_len(car1_1, short).
+ has_wall(car1_1, full).
+ """
+
+ predicted_rule = "eastbound(Train):- has_car(Train, Car1), car_color(Car1, white)."
+
+ results = symbolic_judge.compute(
+     predictions=[predicted_rule],
+     references=[{"validation_program": validation_program,
+                  "evaluation_config": {
+                      "positive_predicate": "eastbound",
+                      "negative_predicate": "westbound"
+                  }}]
+ )
+
+ print(results)
+ ```
+
+ ### Output Example 1
+
+ ```python
+ {'accuracy': 1.0,
+  'partial_score': 1.0,
+  'syntax_score': 1.0,
+  'detailed_results': [
+      {'is_correct': True,
+       'partial_score': 1.0,
+       'syntax_valid': True,
+       'error': None,
+       'exec_time1': 0.012056350708007812}]
+ }
+ ```
+
+ ### Example 2: Evaluating Multiple Rules
+
+ ```python
+ correct_rule = "eastbound(Train):- has_car(Train, Car1), car_color(Car1, white)."
+ incorrect_rule = "eastbound(Train):- has_car(Train, Car1), car_color(Car1, green)."
+
+ results = symbolic_judge.compute(
+     predictions=[correct_rule, incorrect_rule],
+     references=[
+         {"validation_program": validation_program},
+         {"validation_program": validation_program}
+     ]
+ )
+
+ print(results)
+ ```
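+
+ With the validation program from Example 1, the first rule solves the task while the second covers no positive example (though it still rejects the negative one, which earns partial credit), so one would expect output along these lines (timings omitted):
+
+ ```python
+ {'accuracy': 0.5, 'partial_score': 0.75, 'syntax_score': 1.0,
+  'detailed_results': [{'is_correct': True, 'partial_score': 1.0, 'syntax_valid': True, 'error': None},
+                       {'is_correct': False, 'partial_score': 0.5, 'syntax_valid': True, 'error': None}]}
+ ```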
+
+ ### Example 3: Custom Evaluation Configuration
+
+ ```python
+ validation_program = """
+ % Custom problem
+ parent(john, mary).
+ parent(john, bob).
+ parent(alice, bob).
+ parent(susan, alice).
+
+ % Examples
+ grandparent(susan, bob).
+ not_grandparent(john, alice).
+ """
+
+ rule = "grandparent(X, Y) :- parent(X, Z), parent(Z, Y)."
+
+ results = symbolic_judge.compute(
+     predictions=[rule],
+     references=[{
+         "validation_program": validation_program,
+         "evaluation_config": {
+             "positive_predicate": "grandparent",
+             "negative_predicate": "not_grandparent"
+         }
+     }]
+ )
+ ```
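+
+ Since `parent(susan, alice)` and `parent(alice, bob)` hold, the rule entails the positive example `grandparent(susan, bob)` and does not derive the negative example `not_grandparent(john, alice)`, so `accuracy`, `partial_score`, and `syntax_score` should all come out as 1.0 here.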

 ## Citation
+
+ ```
+ @misc{helff2025slrautomatedsynthesisframework,
+     title={SLR: An Automated Synthesis Framework for Scalable Logical Reasoning},
+     author={Lukas Helff and Ahmad Omar and Felix Friedrich and Wolfgang Stammer and Antonia Wüst and Tim Woydt and Rupert Mitchell and Patrick Schramowski and Kristian Kersting},
+     year={2025},
+     eprint={2506.15787},
+     archivePrefix={arXiv},
+     primaryClass={cs.AI},
+     url={https://arxiv.org/abs/2506.15787},
+ }
+ ```

 ## Further References
+
+ - [SLR-Bench Dataset](https://huggingface.co/datasets/AIML-TUDA/SLR-Bench)
+ - [SLR-Bench GitHub Repository](https://github.com/AIML-TUDA/SLR)
app.py CHANGED
@@ -1,6 +1,260 @@
 import evaluate
- from evaluate.utils import launch_gradio_widget


- module = evaluate.load("LukasHug/verifiablerewardsforscalablelogicalreasoning")
- launch_gradio_widget(module)
 import evaluate
+ import gradio as gr
+ import os


+ # Create your own Gradio interface instead of using the built-in widget
+ def create_interface(module):
+     def evaluate_fn(prediction, references, pos_pred, neg_pred):
+         # Process a single prediction instead of multiple
+         pred = prediction.strip()
+
+         # Create reference with evaluation_config at the top level
+         ref = {
+             "validation_program": references.strip(),
+             "evaluation_config": {
+                 "positive_predicate": pos_pred,
+                 "negative_predicate": neg_pred
+             }
+         }
+
+         # Use a list with a single prediction and reference
+         results = module.compute(predictions=[pred], references=[ref])
+
+         # Extract the error message from detailed_results if it exists
+         error_msg = ""
+         if results["detailed_results"] and len(results["detailed_results"]) > 0:
+             error = results["detailed_results"][0].get("error")
+             if error:
+                 error_msg = f"Error: {error}"
+
+         return (
+             f"Accuracy score: {results['accuracy']:.4f}",
+             f"Partial score: {results['partial_score']:.4f}",
+             f"Syntax score: {results['syntax_score']:.4f}",
+             error_msg
+         )
+
+     # Helper function to load example data
+     def load_example(example):
+         return (
+             example["rule"],
+             example["validation"],
+             example["pos_pred"],
+             example["neg_pred"]
+         )
+
+     # Read README.md content; drop the YAML front matter and keep everything
+     # from the metric card heading onwards
+     readme_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "README.md")
+     with open(readme_path, 'r') as f:
+         readme_content = f.read()
+     readme_content = '# Metric Card ' + readme_content.split('# Metric Card ')[1]
+
+     # Define examples for quick use
+     example_train = {
+         "description": "Basic Train Problem",
+         "validation": """eastbound(train0).
+ has_car(train0, car0_1).
+ car_num(car0_1, 1).
+ car_color(car0_1, white).
+ car_len(car0_1, short).
+ has_wall(car0_1, full).
+
+ westbound(train1).
+ has_car(train1, car1_1).
+ car_num(car1_1, 1).
+ car_color(car1_1, yellow).
+ car_len(car1_1, short).
+ has_wall(car1_1, full).
+ """,
+         "rule": "eastbound(Train):- has_car(Train, Car1), car_color(Car1, white).",
+         "pos_pred": "eastbound",
+         "neg_pred": "westbound"
+     }
+
+     example_family = {
+         "description": "Family Relationships",
+         "validation": """% Custom problem
+ parent(john, mary).
+ parent(john, bob).
+ parent(alice, bob).
+ parent(susan, alice).
+
+ % Examples
+ grandparent(susan, bob).
+ not_grandparent(john, alice).""",
+         "rule": "grandparent(X, Y) :- parent(X, Z), parent(Z, Y).",
+         "pos_pred": "grandparent",
+         "neg_pred": "not_grandparent"
+     }
+
+     with gr.Blocks(title="Symbolic Judge") as demo:
+         with gr.Tab("Evaluation"):
+             gr.Markdown("# Symbolic Judge: Verifiable Rewards for Scalable Logical Reasoning")
+             gr.Markdown("""
+             Verifiable Rewards for Scalable Logical Reasoning (**SLR**) introduces a **symbolic judge** that provides verifiable rewards for logical reasoning tasks.
+             To check whether a given task is solved, the symbolic judge evaluates a candidate solution (i.e., a logic rule, typically generated by a language model) against an **executable validation program** that encodes the task's background knowledge and labeled examples.
+             Evaluations performed by the symbolic judge are fully verifiable and grounded in formal logic, ensuring an automatic, transparent, and reproducible standard for evaluation and reward in both supervised and reinforcement learning settings.
+             ### How it Works
+             - **Input:** The symbolic judge takes as input a candidate hypothesis (logic rule) and an executable validation program containing background knowledge and examples.
+             - **Execution:** The candidate rule is executed against the validation program using a Prolog interpreter.
+             - **Correctness Criteria:** The rule is considered correct if it entails all positive examples and rejects all negative examples.
+             - **Metrics:** The symbolic judge computes a range of evaluation metrics (detailed below).
+             - **Usage:** see the **Documentation tab** for details on how to use the symbolic judge in your own projects.
+
+             **Note:** A local Prolog interpreter is required to execute validation programs.
+             """)
+
+             with gr.Row():
+                 with gr.Column(scale=1):
+                     with gr.Group():
+                         gr.Markdown("### Model Output")
+                         prediction_input = gr.Textbox(
+                             label="Candidate hypothesis to be evaluated (rule predicted by the model)",
+                             placeholder="eastbound(T) :- has_car(T, C), short(C), open(C).",
+                             lines=5
+                         )
+
+                     with gr.Group():
+                         gr.Markdown("### Validation Program")
+
+                         references_input = gr.Textbox(
+                             label="The validation program contains background knowledge and examples for testing",
+                             placeholder="% Background knowledge\ncar(car_1). car(car_2).\nlong(car_2). short(car_1).\nopen(car_1). closed(car_2).\n\n% Examples\neastbound(train_1).\nwestbound(train_2).\n\n% Train configurations\nhas_car(train_1, car_1).\nhas_car(train_2, car_2).",
+                             lines=12
+                         )
+
+                         with gr.Row():
+                             pos_pred_input = gr.Textbox(
+                                 label="Positive Validation Examples",
+                                 value="eastbound",
+                                 placeholder="eastbound",
+                                 info="The predicate name identifying positive examples in the validation program"
+                             )
+                             neg_pred_input = gr.Textbox(
+                                 label="Negative Validation Examples",
+                                 value="westbound",
+                                 placeholder="westbound",
+                                 info="The predicate name identifying negative examples in the validation program"
+                             )
+
+                     eval_button = gr.Button("Evaluate", variant="primary")
+
+                 with gr.Column(scale=1):
+                     with gr.Group():
+                         gr.Markdown("### Evaluation Metrics")
+                         with gr.Group():
+                             accuracy_output = gr.Textbox(
+                                 label="Overall Accuracy",
+                                 info="Proportion of hypotheses that solve the tasks",
+                                 container=True
+                             )
+                             partial_score_output = gr.Textbox(
+                                 label="Partial Score",
+                                 info="Proportion of examples that are correctly classified in the tasks",
+                                 container=True
+                             )
+                             syntax_score_output = gr.Textbox(
+                                 label="Syntax Score",
+                                 info="Proportion of syntactically valid hypotheses",
+                                 container=True
+                             )
+                             error_output = gr.Textbox(
+                                 label="Syntax Details",
+                                 info="Error messages for syntax errors or execution failures",
+                                 container=True,
+                             )
+                     gr.Markdown("Note: This interface evaluates a single hypothesis at a time. Use the Python API for batch processing.")
+
+             # Examples available for quick loading
+             examples = [
+                 ["Train Problem", example_train],
+                 ["Family Relationships", example_family]
+             ]
+
+             with gr.Accordion("Example Logical Reasoning Tasks", open=True):
+                 example_radio = gr.Radio([ex[0] for ex in examples], label="Select an example", value="Train Problem")
+                 # Show preview of selected example
+                 with gr.Row():
+                     with gr.Column():
+                         gr.Markdown("### Selected Example Preview")
+                         example_description = gr.Markdown("**Description**: " + example_train["description"])
+                 with gr.Row():
+                     with gr.Column():
+                         gr.Markdown("#### Candidate Hypothesis")
+                         example_rule = gr.Code(value=example_train["rule"])
+                     with gr.Column():
+                         gr.Markdown("#### Validation Program")
+                         example_validation = gr.Code(value=example_train["validation"])
+
+                 with gr.Row():
+                     with gr.Column():
+                         gr.Markdown("#### Validation Examples")
+                         example_predicates = gr.Markdown(f"""
+                         **Positive Examples**: `{example_train["pos_pred"]}`
+                         **Negative Examples**: `{example_train["neg_pred"]}`
+                         """)
+
+                 # Load button for the selected example
+                 load_button = gr.Button("Load Selected Example", variant="secondary")
+             gr.Markdown("### Citation")
+
+             gr.Markdown("""
+             If you use Symbolic Judge in your work, please cite:
+             ```
+             @misc{helff2025slrautomatedsynthesisframework,
+                 title={SLR: An Automated Synthesis Framework for Scalable Logical Reasoning},
+                 author={Lukas Helff and Ahmad Omar and Felix Friedrich and Wolfgang Stammer and Antonia Wüst and Tim Woydt and Rupert Mitchell and Patrick Schramowski and Kristian Kersting},
+                 year={2025},
+                 eprint={2506.15787},
+                 archivePrefix={arXiv},
+                 primaryClass={cs.AI},
+                 url={https://arxiv.org/abs/2506.15787},
+             }
+             ```
+             """)
+
+             # Set up event handlers for the example selection
+             def update_example_preview(selection):
+                 selected_example = next((ex[1] for ex in examples if ex[0] == selection), example_train)
+                 return (
+                     "**Description**: " + selected_example["description"],
+                     selected_example["rule"],
+                     selected_example["validation"],
+                     f"""
+                     **Positive Examples**: `{selected_example["pos_pred"]}`
+                     **Negative Examples**: `{selected_example["neg_pred"]}`
+                     """
+                 )
+
+             example_radio.change(
+                 fn=update_example_preview,
+                 inputs=[example_radio],
+                 outputs=[example_description, example_rule, example_validation, example_predicates]
+             )
+
+             # Event handler for the load button
+             def load_selected_example(selection):
+                 selected_example = next((ex[1] for ex in examples if ex[0] == selection), example_train)
+                 return load_example(selected_example)
+
+             load_button.click(
+                 fn=load_selected_example,
+                 inputs=[example_radio],
+                 outputs=[prediction_input, references_input, pos_pred_input, neg_pred_input]
+             )
+
+             # Set up the evaluate button
+             eval_button.click(
+                 fn=evaluate_fn,
+                 inputs=[prediction_input, references_input, pos_pred_input, neg_pred_input],
+                 outputs=[accuracy_output, partial_score_output, syntax_score_output, error_output]
+             )
+
+         with gr.Tab("Documentation"):
+             gr.Markdown(readme_content)
+
+     return demo
+
+
+ # Load the metric module from the Hub and launch the interface
+ module = evaluate.load("AIML-TUDA/VerifiableRewardsForScalableLogicalReasoning")
+ create_interface(module).launch()
packages.txt ADDED
@@ -0,0 +1 @@
+ swi-prolog
requirements.txt CHANGED
@@ -1 +1,5 @@
- git+https://github.com/huggingface/evaluate@main
+ git+https://github.com/huggingface/evaluate@main
+ evaluate
+ gradio>=5.34.2
+ datasets
+ tqdm
tests.py DELETED
@@ -1,17 +0,0 @@
- test_cases = [
-     {
-         "predictions": [0, 0],
-         "references": [1, 1],
-         "result": {"metric_score": 0}
-     },
-     {
-         "predictions": [1, 1],
-         "references": [1, 1],
-         "result": {"metric_score": 1}
-     },
-     {
-         "predictions": [1, 0],
-         "references": [1, 1],
-         "result": {"metric_score": 0.5}
-     }
- ]
verifiablerewardsforscalablelogicalreasoning.py CHANGED
@@ -11,85 +11,296 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
- """TODO: Add a description here."""

 import evaluate
 import datasets


- # TODO: Add BibTeX citation
 _CITATION = """\
- @InProceedings{huggingface:module,
- title = {A great new module},
- authors={huggingface, Inc.},
- year={2020}
 }
 """

- # TODO: Add description of the module here
 _DESCRIPTION = """\
- This new module is designed to solve this great ML task and is crafted with a lot of care.
- """


- # TODO: Add description of the arguments of the module here
 _KWARGS_DESCRIPTION = """
- Calculates how good are predictions given some references, using certain scores
 Args:
-     predictions: list of predictions to score. Each predictions
-         should be a string with tokens separated by spaces.
-     references: list of reference for each prediction. Each
-         reference should be a string with tokens separated by spaces.
 Returns:
-     accuracy: description of the first score,
-     another_score: description of the second score,
- Examples:
-     Examples should be written in doctest format, and should illustrate how
-     to use the function.
-
-     >>> my_new_module = evaluate.load("my_new_module")
-     >>> results = my_new_module.compute(references=[0, 1], predictions=[0, 1])
-     >>> print(results)
-     {'accuracy': 1.0}
 """

- # TODO: Define external resources urls if needed
- BAD_WORDS_URL = "http://url/to/external/resource/bad_words.txt"


 @evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
 class VerifiableRewardsForScalableLogicalReasoning(evaluate.Metric):
-     """TODO: Short description of my evaluation module."""

     def _info(self):
-         # TODO: Specifies the evaluate.EvaluationModuleInfo object
         return evaluate.MetricInfo(
-             # This is the description that will appear on the modules page.
-             module_type="metric",
             description=_DESCRIPTION,
             citation=_CITATION,
             inputs_description=_KWARGS_DESCRIPTION,
-             # This defines the format of each prediction and reference
             features=datasets.Features({
-                 'predictions': datasets.Value('int64'),
-                 'references': datasets.Value('int64'),
             }),
-             # Homepage of the module for documentation
-             homepage="http://module.homepage",
-             # Additional links to the codebase or references
-             codebase_urls=["http://github.com/path/to/codebase/of/new_module"],
-             reference_urls=["http://path.to.reference.url/new_module"]
         )

     def _download_and_prepare(self, dl_manager):
-         """Optional: download external resources useful to compute the scores"""
-         # TODO: Download external resources if needed
-         pass
-
-     def _compute(self, predictions, references):
-         """Returns the scores"""
-         # TODO: Compute the different scores of the module
-         accuracy = sum(i == j for i, j in zip(predictions, references)) / len(predictions)
         return {
             "accuracy": accuracy,
-         }
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
+ """Symbolic Judge: Verifiable Rewards for Logical Reasoning at Scale"""

+ import os
+ import subprocess
+ import tempfile
 import evaluate
+ import logging
 import datasets
+ from tqdm import tqdm
+ import time
+ import multiprocessing as mp
+ import re

+ logger = logging.getLogger(__name__)


 _CITATION = """\
+ @misc{helff2025slrautomatedsynthesisframework,
+       title={SLR: An Automated Synthesis Framework for Scalable Logical Reasoning},
+       author={Lukas Helff and Ahmad Omar and Felix Friedrich and Wolfgang Stammer and Antonia Wüst and Tim Woydt and Rupert Mitchell and Patrick Schramowski and Kristian Kersting},
+       year={2025},
+       eprint={2506.15787},
+       archivePrefix={arXiv},
+       primaryClass={cs.AI},
+       url={https://arxiv.org/abs/2506.15787},
 }
 """


 _DESCRIPTION = """\
+ Verifiable Rewards for Scalable Logical Reasoning (SLR) introduces a symbolic judge that provides verifiable rewards for logical reasoning tasks.
+ To check whether a given task is solved, the symbolic judge evaluates a candidate solution (i.e., a logic rule, typically generated by a language model) using an executable validation program that encodes the task's background knowledge and labeled examples.
+ Evaluations performed by the symbolic judge are fully verifiable and grounded in formal logic, ensuring an automatic, transparent, and reproducible standard for evaluation and reward in both supervised and reinforcement learning settings.
+
+ How it Works:
+ - Input: The symbolic judge takes as input a candidate hypothesis (logic rule) and an executable validation program containing background knowledge and examples.
+ - Execution: The candidate rule is executed against the validation program using a Prolog interpreter.
+ - Correctness Criteria: The rule is considered correct if it entails all positive examples and rejects all negative examples.
+ - Metrics: The symbolic judge computes a range of evaluation metrics (detailed below).
+ - Usage: see the Documentation tab for details on how to use the symbolic judge in your own projects.
+
+ Example usage:
+     from evaluate import load
+
+     symbolic_judge = load("AIML-TUDA/VerifiableRewardsForScalableLogicalReasoning")
+
+     validation_program = \"\"\"
+     eastbound(train0).
+     has_car(train0, car0_1).
+     car_num(car0_1, 1).
+     car_color(car0_1, white).
+     car_len(car0_1, short).
+     has_wall(car0_1, full).
+
+     westbound(train1).
+     has_car(train1, car1_1).
+     car_num(car1_1, 1).
+     car_color(car1_1, yellow).
+     car_len(car1_1, short).
+     has_wall(car1_1, full).
+     \"\"\"
+
+     predicted_rule = "eastbound(Train):- has_car(Train, Car1), car_color(Car1, white)."
+
+     results = symbolic_judge.compute(
+         predictions=[predicted_rule],
+         references=[{"validation_program": validation_program,
+                      "evaluation_config": {
+                          "positive_predicate": "eastbound",
+                          "negative_predicate": "westbound"
+                      }}]
+     )
+
+ Note: A local Prolog interpreter is required to execute validation programs.
 """


 _KWARGS_DESCRIPTION = """
 Args:
+     predictions (`list` of `str`): Each prediction should be a Prolog rule like "pred(T) :- Body."
+     references (`list` of `dict`): Each reference should contain:
+         - 'validation_program' (`str`): Background knowledge in Prolog syntax
+         - 'evaluation_config' (`dict`, optional): Configuration of predicates to use for evaluation.
+           Defines: positive_predicate and negative_predicate
 Returns:
+     accuracy (`float`): The proportion of predictions that correctly solve all examples. Value is between 0 and 1.
+     partial_score (`float`): Average proportion of correctly classified examples across all predictions. Value is between 0 and 1.
+     syntax_score (`float`): Proportion of rules with valid syntax. Value is between 0 and 1.
+     detailed_results (`list` of `dict`): Per-example results including correctness, partial score, execution time, and any errors encountered.
 """

+
+ def _evaluate_with_prolog(rule_to_evaluate, validation_program, eval_config, timeout=5):
+     """
+     Evaluates a predicted rule against the validation program using Prolog.
+     """
+     # Extract configuration
+     positive_pred = eval_config.get("positive_predicate", "eastbound")
+     negative_pred = eval_config.get("negative_predicate", "westbound")
+     # The rule must mention the positive predicate, otherwise it cannot be checked
+     if positive_pred not in rule_to_evaluate:
+         logger.warning(f"Rule '{rule_to_evaluate}' does not contain positive predicate '{positive_pred}'")
+         return {
+             "is_correct": False,
+             "partial_score": 0.0,
+             "syntax_valid": False,
+             "error": f"Invalid Syntax: Rule does not contain positive predicate '{positive_pred}'"
+         }
+
+     pos_examples = re.findall(rf'{positive_pred}\(([^)]+)\)', validation_program)
+     neg_examples = re.findall(rf'{negative_pred}\(([^)]+)\)', validation_program)
+
+     # Determine arity by counting commas in the first example plus 1
+     arity = 1  # default to unary
+     if pos_examples:
+         arity = pos_examples[0].count(',') + 1
+     elif neg_examples:
+         arity = neg_examples[0].count(',') + 1
+
+     # Create variables based on arity
+     vars = ", ".join([f"X{i}" for i in range(1, arity + 1)])
+
+     symbolic_judge = f"""
+ % Dynamic evaluation predicates
+ check({vars}) :- pos({vars}), {positive_pred}({vars}). % positive covered
+ check({vars}) :- neg({vars}), \\+ {positive_pred}({vars}). % negative rejected
+
+ % Count successful checks
+ check_count(Count) :-
+     setof(({vars}), ((pos({vars}); neg({vars})), check({vars})), CorrectExamples),
+     length(CorrectExamples, Count).
+
+ check_all :- forall((pos({vars});neg({vars})), check({vars})).
+ """
+     # Rename the labeled example predicates to pos/neg so that the positive predicate
+     # is only defined by the candidate rule, then append the judge harness and the rule
+     validation_program = re.sub(rf'\b{positive_pred}\b', 'pos', validation_program)
+     validation_program = re.sub(rf'\b{negative_pred}\b', 'neg', validation_program)
+
+     pos_negs = validation_program.count("pos(") + validation_program.count("neg(")
+     validation_program = '\n'.join(sorted(validation_program.splitlines()))
+     full_program = validation_program + "\n\n" + symbolic_judge + "\n\n" + rule_to_evaluate + "\n\n"
+
+     with tempfile.NamedTemporaryFile(suffix='.pl', mode='w', delete=False) as f:
+         f.write(full_program)
+         temp_file = f.name
+
+     result = None  # defined up front so the except handler below can inspect it safely
+     try:
+         eval_start_time = time.time()
+         # Execute the Prolog program
+         cmd = ['swipl', '-s', temp_file, '-g', 'check_count(Count), writeln(Count)', '-t', 'halt']
+         result = subprocess.run(
+             cmd,
+             stdout=subprocess.PIPE,
+             stderr=subprocess.PIPE,
+             timeout=timeout,
+             text=True
+         )
+         # Extract the partial score from the output
+         partial_score = int(result.stdout.strip()) / pos_negs if pos_negs > 0 else 0.0
+
+         is_correct = partial_score == 1.0
+
+         error = result.stderr if result.stderr else None
+         t1 = time.time()
+
+         return {
+             "is_correct": is_correct,
+             "partial_score": partial_score,
+             "syntax_valid": True,
+             "error": error,
+             "exec_time1": t1 - eval_start_time,
+         }
+
+     except subprocess.TimeoutExpired:
+         logger.warning(f"Evaluation timed out after {timeout} seconds for rule: {rule_to_evaluate}...")
+         return {"is_correct": False, "partial_score": 0.0, "syntax_valid": False,
+                 "error": f"Evaluation timed out after {timeout} seconds"}
+     except Exception as e:
+         logger.warning(f"Error evaluating rule '{rule_to_evaluate}' returns: '{result.stdout.strip() if result else 'No error message'}'")
+         error_message = 'Invalid Syntax: exit with ' + str(e)
+         return {"is_correct": False, "partial_score": 0.0, "syntax_valid": False, "error": error_message}
+     finally:
+         if os.path.exists(temp_file):
+             os.remove(temp_file)
+
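+ # Illustration (hypothetical binary task): for examples such as grandparent(susan, bob),
+ # the arity detection above yields vars == "X1, X2", so the generated harness contains
+ #     check(X1, X2) :- pos(X1, X2), grandparent(X1, X2).
+ #     check(X1, X2) :- neg(X1, X2), \+ grandparent(X1, X2).
+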

 @evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
 class VerifiableRewardsForScalableLogicalReasoning(evaluate.Metric):
+     def __init__(self, config_name=None, **kwargs):
+         """
+         Initializes the symbolic-judge metric.
+
+         Args:
+             config_name (str, optional): Name of the configuration to use.
+             **kwargs: Additional keyword arguments.
+         """
+         super().__init__(config_name=config_name, **kwargs)
+         self.config_name = config_name or "default"
+         self._info = self._info()
+         self._download_and_prepare(dl_manager=None)

     def _info(self):
         return evaluate.MetricInfo(
             description=_DESCRIPTION,
             citation=_CITATION,
             inputs_description=_KWARGS_DESCRIPTION,
             features=datasets.Features({
+                 'predictions': datasets.Value('string'),
+                 'references': {
+                     'validation_program': datasets.Value('string'),
+                     'evaluation_config': {
+                         'positive_predicate': datasets.Value('string'),
+                         'negative_predicate': datasets.Value('string')
+                     }
+                 },
             }),
+             codebase_urls=["https://github.com/AIML-TUDA/SLR-Bench"],
+             reference_urls=["https://huggingface.co/datasets/AIML-TUDA/SLR-Bench"]
         )

     def _download_and_prepare(self, dl_manager):
+         """Checks if SWI-Prolog is installed or warns the user."""
+         try:
+             subprocess.run(
+                 ["swipl", "--version"],
+                 stdout=subprocess.PIPE,
+                 stderr=subprocess.PIPE,
+                 check=True
+             )
+         except (subprocess.CalledProcessError, FileNotFoundError):
+             logger.warning(
+                 "SWI-Prolog not found. Please install it:\n"
+                 "Ubuntu/Debian: sudo apt-get install swi-prolog\n"
+                 "macOS: brew install swi-prolog\n"
+                 "Windows: download from https://www.swi-prolog.org/download/stable"
+             )
+
+     def _compute(self, predictions: list, references: list):
+         """Calculates the accuracy of predictions using Prolog for evaluation, with multiprocessing."""
+         if not isinstance(predictions, list):
+             predictions = [predictions]
+
+         if len(predictions) != len(references):
+             raise ValueError(
+                 f"Number of predictions ({len(predictions)}) and references ({len(references)}) don't match")
+
+         # Prepare evaluation inputs
+         eval_inputs = []
+         for i, (prediction, reference) in enumerate(zip(predictions, references)):
+             validation_program = reference.get("validation_program", reference.get("validation program"))
+
+             # Fall back to the default train predicates if no evaluation_config is given
+             eval_config = reference.get("evaluation_config", {
+                 "positive_predicate": "eastbound",
+                 "negative_predicate": "westbound"
+             })
+
+             if not validation_program:
+                 raise ValueError(f"Example {i} does not contain a validation program field")
+
+             # Make sure the prediction is a properly terminated rule
+             if not prediction.strip().endswith('.'):
+                 prediction = prediction.strip() + '.'
+
+             eval_inputs.append((prediction, validation_program, eval_config))
+
+         # Process evaluations in parallel
+         num_cpus = max(1, mp.cpu_count() - 1)  # Leave one CPU free
+         with mp.Pool(processes=num_cpus) as pool:
+             results = list(tqdm(
+                 pool.starmap(_evaluate_with_prolog, eval_inputs),
+                 total=len(eval_inputs),
+                 desc="Evaluating rules (parallel)"
+             ))
+         # Sequential alternative (no multiprocessing), kept for debugging:
+         # results = []
+         # for prediction, validation_program, eval_config in tqdm(eval_inputs, total=len(predictions), desc="Evaluating rules"):
+         #     results.append(_evaluate_with_prolog(prediction, validation_program, eval_config))
+
+         # Calculate metrics
+         partial_scores = [result["partial_score"] for result in results]
+         correct_count = sum(1 for result in results if result["is_correct"])
+         syntax_valid_count = sum(1 for result in results if result["syntax_valid"])
+
+         accuracy = correct_count / len(predictions) if predictions else 0
+         partial_score = sum(partial_scores) / len(predictions) if partial_scores else 0
+         syntax_score = syntax_valid_count / len(predictions) if predictions else 0
+
         return {
             "accuracy": accuracy,
+             "partial_score": partial_score,
+             "syntax_score": syntax_score,
+             "detailed_results": results
+         }