FlowerTune LoRA Model

This is a LoRA adapter for Qwen/Qwen2.5-1.5B-Instruct, fine-tuned with the Flower federated learning framework on a general NLP dataset.

Training Details

  • Dataset: vicgalle/alpaca-gpt4
  • Training method: Federated LoRA fine-tuning with FlowerTune
  • Framework: Flower

The adapter was trained on vicgalle/alpaca-gpt4 across distributed clients: each client fine-tunes only the low-rank LoRA weights locally, and the server aggregates those adapter updates rather than the full base model.


