raat-ir0.7-d5k-0.5mix1.0

This model was fine-tuned with the Divide-Then-Align (DTA) method proposed in the paper *Divide-Then-Align: Honest Alignment based on the Knowledge Boundary of RAG*. Training started from RAAT, a RAFT-style model built on Llama2-7B-base.
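A minimal usage sketch for querying the checkpoint in a RAG-style setup, assuming the standard `transformers` API. The prompt template below is illustrative only, not necessarily the exact format used during DTA training; the repo id is taken from this card's collection.

```python
def build_rag_prompt(question, passages):
    """Concatenate retrieved passages with the question, RAG-style.

    Illustrative template; the paper's exact prompt format may differ.
    """
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"


def generate_answer(question, passages, model_id="Itandy/raat_ir0.7_d5k_0.5mix1.0"):
    """Load the checkpoint and generate an answer (or an abstention)."""
    # Heavy dependencies imported lazily so the prompt helper stays lightweight.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tok(build_rag_prompt(question, passages), return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=64)
    # Decode only the newly generated tokens, skipping the prompt.
    return tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
```

A DTA-aligned model is expected to answer from the retrieved context when it can, and abstain (e.g. respond that it does not know) when the question falls outside both the retrieved passages and its parametric knowledge.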

Training Configuration

  • training data size: 5k
  • idk_ratio: 0.7
  • coe_cls: 0.5
  • coe_sft: 1.0

Evaluation

For reference, we report this checkpoint's results on the same test set used in the original paper; the metrics differ slightly from the published numbers.

| Model | OQ Acc | AQ Rec | AQ Prec | AQ F1 | RH DR | RH CUR | AbQ ARec | AbQ APrec | AbQ AF1 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| raat-ir0.7-d5k-0.5mix1.0 | 63.5 | 70.1 | 60.4 | 64.9 | 75.8 | 60.8 | 50.7 | 73.4 | 60.0 |

Citation

```bibtex
@misc{sun2025dividethenalignhonestalignmentbased,
      title={Divide-Then-Align: Honest Alignment based on the Knowledge Boundary of RAG},
      author={Xin Sun and Jianan Xie and Zhongqi Chen and Qiang Liu and Shu Wu and Yuehe Chen and Bowen Song and Weiqiang Wang and Zilei Wang and Liang Wang},
      year={2025},
      eprint={2505.20871},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.20871},
}
```