Fin-O1
Fin-o1-14B is a fine-tuned version of Qwen3-14B, designed to improve performance on financial reasoning tasks. The model was trained with SFT and RL on TheFinAI/Fino1_Reasoning_Path_FinQA. See our paper arxiv.org/abs/2502.08127 for more details.
Model: Fin-o1-14B
Base model: Qwen3-14B (Fin-o1-8B is based on Qwen3-8B)
Training data: TheFinAI/FinCoT, derived from the FinQA, TATQA, DocMath-Eval, BizBench-QA, Econ-Logic, and DocFinQA datasets, to enhance performance on tasks such as financial mathematical reasoning

Training configuration:
GPU: [e.g., 8xA100]
Batch size: [e.g., 16]
Learning rate: [e.g., 2e-5]
Epochs: [e.g., 3]
Optimizer: [e.g., AdamW, LAMB]
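
As a point of reference, below is a minimal supervised fine-tuning sketch built from the configuration above using Hugging Face's Trainer, substituting the bracketed example values. The FinCoT split and field name ("text"), the sequence length, and the use of plain causal-LM SFT are illustrative assumptions, not the authors' exact recipe.

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "Qwen/Qwen3-14B"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# The split and field name "text" are assumptions; adjust to FinCoT's actual schema.
dataset = load_dataset("TheFinAI/FinCoT", split="train")
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=2048),
    remove_columns=dataset.column_names,
)

args = TrainingArguments(
    output_dir="fin-o1-sft",
    per_device_train_batch_size=16,  # batch size: e.g., 16
    learning_rate=2e-5,              # learning rate: e.g., 2e-5
    num_train_epochs=3,              # epochs: e.g., 3
    optim="adamw_torch",             # optimizer: e.g., AdamW
    bf16=True,
)

Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()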
To use Fin-o1-14B with Hugging Face's transformers library:
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model weights from the Hugging Face Hub
model_name = "TheFinAI/Fin-o1-14B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenize the prompt and generate up to 200 new tokens
input_text = "What is the result of 3-5?"
inputs = tokenizer(input_text, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
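
Note that full-precision weights for a 14B-parameter model rarely fit on a single consumer GPU; passing torch_dtype="auto" and device_map="auto" to from_pretrained (the latter requires the accelerate package) lets transformers choose a reduced dtype and spread the weights across available devices.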
If you use this model in your research, please cite:
@article{qian2025fino1,
  title={Fino1: On the Transferability of Reasoning Enhanced LLMs to Finance},
  author={Qian, Lingfei and Zhou, Weipeng and Wang, Yan and Peng, Xueqing and Huang, Jimin and Xie, Qianqian},
  journal={arXiv preprint arXiv:2502.08127},
  year={2025}
}