Model Details

Model Description

This LoRA adapter is trained on top of the AQuilt model and should be loaded into AQuilt when performing Self-Inspection.

Model Sources

How to use

AQuilt_Eval_lora is a LoRA weight checkpoint that must be used in conjunction with AQuilt. Its sole purpose is to perform Self-Inspection on the synthetic data produced by AQuilt.

Please refer to https://github.com/Krueske/AQuilt for an example invocation script:

CUDA_VISIBLE_DEVICES=0 python ./dataGen.py \
  --model_path /path/to/AQuilt \
  --eval_lora_path /path/to/AQuilt_eval_lora \
  --eval true \
  --input_file input.txt \
  --output_file output.json \
  --task_type "natural language inference" \
  --language "en" \
  --task_predix "" \
  --num_gen_per_text 1 \
  --temperature 0.7 \
  --top_p 0.95 \
  --seed 42

In the command above, --eval_lora_path should point to the locally downloaded AQuilt_Eval_lora checkpoint. To inspect the data synthesized by AQuilt, supply this path and set the --eval flag to true.
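If you prefer to load the adapter directly instead of going through the repository script, the following is a minimal Python sketch using transformers and peft. The paths are placeholders, and the prompt format and generation settings are assumptions; the actual Self-Inspection prompt template is defined in the AQuilt repository.

# Minimal sketch (not the official entry point): attach AQuilt_Eval_lora to AQuilt with PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_path = "/path/to/AQuilt"            # local AQuilt checkpoint
eval_lora_path = "/path/to/AQuilt_eval_lora"   # this adapter, downloaded locally

tokenizer = AutoTokenizer.from_pretrained(base_model_path)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_path, torch_dtype="auto", device_map="auto"
)

# Load the Self-Inspection LoRA weights on top of the base AQuilt model.
model = PeftModel.from_pretrained(base_model, eval_lora_path)
model.eval()

# Placeholder input: in practice, pass the synthesized sample in the prompt
# format expected by the Self-Inspection step of dataGen.py.
prompt = "..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))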

Training Details

This LoRA is trained on top of the AQuilt model.

Training Data

We built a Self-Inspection training dataset of about 14k examples: https://huggingface.co/datasets/xiapk7/AQuilt_trainingset (Self-Inspection-Trainingset subset).
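For reference, a minimal sketch for loading this subset with the datasets library; the configuration name "Self-Inspection-Trainingset" is taken from the dataset card and may need to be adjusted to the exact config/split names on the Hub.

# Sketch: load the Self-Inspection training data with the datasets library.
# The config name below is an assumption taken from the dataset card.
from datasets import load_dataset

ds = load_dataset("xiapk7/AQuilt_trainingset", "Self-Inspection-Trainingset")
print(ds)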

Training hyperparameters:

We use the following hyperparameters; a configuration sketch in PEFT terms follows the list:

  • LoRA rank (r): 64
  • LoRA scaling factor (alpha): 4
  • LoRA dropout: 0
  • Optimizer: AdamW
  • Learning rate scheduler: cosine
  • Max. learning rate: 1e-04
  • Min. learning rate: 0
  • Weight decay: 0.1
  • Dropout: 0
  • Effective batch size: 16
  • Epochs: 2
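For illustration, the hyperparameters above can be expressed as a PEFT LoraConfig plus transformers TrainingArguments. This is a sketch under stated assumptions, not the original training script; the target_modules and the per-device batch size / gradient accumulation split are assumptions.

# Sketch: the listed hyperparameters in PEFT / transformers terms (assumed setup).
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=64,                      # LoRA rank
    lora_alpha=4,              # LoRA scaling factor (alpha)
    lora_dropout=0.0,          # LoRA dropout
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
)

training_args = TrainingArguments(
    output_dir="./aquilt_eval_lora",
    optim="adamw_torch",               # AdamW
    learning_rate=1e-4,                # max. learning rate
    lr_scheduler_type="cosine",        # cosine decay toward the min. learning rate of 0
    weight_decay=0.1,
    num_train_epochs=2,
    per_device_train_batch_size=4,     # 4 x 4 gradient accumulation = effective batch size 16 (assumption)
    gradient_accumulation_steps=4,
)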

πŸ“œ Citation

If you find this model useful, please cite:

@misc{ke2025aquiltweavinglogicselfinspection,
      title={AQuilt: Weaving Logic and Self-Inspection into Low-Cost, High-Relevance Data Synthesis for Specialist LLMs}, 
      author={Xiaopeng Ke and Hexuan Deng and Xuebo Liu and Jun Rao and Zhenxi Song and Jun Yu and Min Zhang},
      year={2025},
      eprint={2507.18584},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.18584}, 
}
Model tree for xiapk7/AQuilt_Eval_lora

Base model: Qwen/Qwen2.5-7B, finetuned as xiapk7/AQuilt; this LoRA adapter is trained on top of xiapk7/AQuilt.

Dataset used to train xiapk7/AQuilt_Eval_lora: https://huggingface.co/datasets/xiapk7/AQuilt_trainingset (Self-Inspection-Trainingset subset)