Kevew/nemo_finetuned_minif2f

Model Description

This model is fine-tuned on tactic states from the Mathlib4 dataset and leverages inference data from the MiniF2F Kimina-Prover-8B dataset. It builds on the NVIDIA OpenReasoning-Nemotron-7B base model.

Performance

Task: MiniF2F
Dataset: MiniF2F benchmark
Metric: accuracy (pass@32)
Value: 39.6%

Usage

You can load the model using the Hugging Face Transformers library and apply it to tasks involving tactic state prediction and theorem proving.
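
Below is a minimal loading sketch using Transformers. The prompt format is an assumption (the exact tactic-state template used during fine-tuning is not documented here), so adapt it to your own Lean goals.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kevew/nemo_finetuned_minif2f"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",
)

# Hypothetical Lean 4 goal; replace with the actual tactic state you want completed.
prompt = (
    "Complete the following Lean 4 proof:\n"
    "theorem add_comm_example (a b : Nat) : a + b = b + a := by\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens (the proposed tactic/proof continuation).
completion = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(completion)
```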


For more details, check out the datasets and base model referenced above.

Model size: 7.62B parameters · Tensor type: BF16 · Format: Safetensors

Model tree for Kevew/nemo_finetuned_minif2f

Base model: Qwen/Qwen2.5-7B (this model is one of its fine-tunes)

Datasets used to train Kevew/nemo_finetuned_minif2f: the Mathlib4 tactic-state data and the MiniF2F Kimina-Prover-8B inference data described above.