---
library_name: transformers
tags:
- llama-factory
license: apache-2.0
datasets:
- kxdw2580/catgirl-dataset
language:
- zh
base_model:
- deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
new_version: kxdw2580/DeepSeek-R1-0528-Qwen3-8B-catgirl-v2.5
---
We have released the updated v2-qwen dataset, designed to evaluate the performance advantages of large-scale models.
To address limitations of previous model iterations, we adopted a hybrid fine-tuning approach that combines v2-common with other v2-qwen subsets. This significantly reduced redundant reasoning and hallucinations in routine responses, and improvements were also observed in non-reasoning mode.
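For illustration, here is a minimal sketch of such a dataset mixture using the `datasets` library. The subset names come from the description above, but the exact configuration layout of kxdw2580/catgirl-dataset is an assumption; adjust `data_dir` to match the actual repository structure.

```python
# Minimal sketch: build a hybrid fine-tuning mixture from dataset subsets.
# Assumes the subsets live in directories named after the splits described
# above; the actual layout of kxdw2580/catgirl-dataset may differ.
from datasets import load_dataset, concatenate_datasets

common = load_dataset(
    "kxdw2580/catgirl-dataset", data_dir="v2-common", split="train"
)
qwen = load_dataset(
    "kxdw2580/catgirl-dataset", data_dir="v2-qwen", split="train"
)

# Combine the general-purpose subset with the model-specific one and
# shuffle so the two sources are interleaved during training.
mixed = concatenate_datasets([common, qwen]).shuffle(seed=42)
print(mixed)
```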
Additionally, LoRA with bitsandbytes 8-bit quantization was employed during fine-tuning to accelerate training, so the model's performance may be somewhat compromised compared to full-precision training.
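Since the card's tags indicate training was done with LLaMA-Factory, the actual configuration is not reproduced here. The snippet below is only a hedged sketch of what LoRA fine-tuning on top of an 8-bit bitsandbytes-quantized base model looks like with `peft` and `transformers`; hyperparameters such as `r`, `lora_alpha`, and `target_modules` are illustrative assumptions, not the values used for this release.

```python
# Sketch only: LoRA on an 8-bit quantized base model (peft + bitsandbytes).
# The released model was trained with LLaMA-Factory; these hyperparameters
# are illustrative assumptions, not the actual training configuration.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,                          # assumed rank
    lora_alpha=32,                 # assumed scaling factor
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```

The 8-bit base weights stay frozen while only the small LoRA adapters are updated, which is what trades a modest amount of precision for much lower memory use and faster training.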