---
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- peft
- lora
- federated-learning
- flower
datasets:
- vicgalle/alpaca-gpt4
---

# FlowerTune LoRA Model

This is a LoRA adapter for Qwen/Qwen2.5-1.5B-Instruct, fine-tuned with the Flower federated learning framework on a general NLP dataset. Rather than training on a centrally collected corpus, the adapter weights were learned through distributed (federated) training on vicgalle/alpaca-gpt4.

## Training Details

- Dataset: vicgalle/alpaca-gpt4
- Training method: Federated LoRA fine-tuning with FlowerTune
- Framework: Flower

## Links

- FlowerTune Homepage: [https://huggingface.co/zjudai/FlowerTune](https://huggingface.co/zjudai/FlowerTune)
- FlowerTune Collection: [https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439](https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439)