Model Sources

Paper: https://arxiv.org/abs/2506.05700

Uses

To use RKEFino1-14B with Hugging Face's transformers library:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "YanAdjeNole/RKEFino1-14B"

# Load the tokenizer and model weights from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenize a sample prompt
input_text = "What is the result of 3-5?"
inputs = tokenizer(input_text, return_tensors="pt")

# Generate up to 200 new tokens and decode the response
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
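
For faster inference, the model can also be loaded in half precision on a GPU. The snippet below is a minimal sketch, assuming a CUDA-capable GPU and the accelerate package are available for device_map="auto".

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "YanAdjeNole/RKEFino1-14B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# device_map="auto" requires the accelerate package and places the weights
# on the available GPU(s); float16 matches the released F16 weights
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

input_text = "What is the result of 3-5?"
# Move the tokenized inputs onto the same device as the model
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))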

Citation

If you use this model, please cite our paper:

@misc{wang2025rkefino1regulationknowledgeenhancedlarge,
      title={RKEFino1: A Regulation Knowledge-Enhanced Large Language Model}, 
      author={Yan Wang and Yueru He and Ruoyu Xiang and Jeff Zhao},
      year={2025},
      eprint={2506.05700},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.05700}, 
}

Model size: 14.8B parameters (Safetensors, F16)

Model tree for YanAdjeNole/RKEFino1-14B

Base model: Qwen/Qwen2.5-14B