FlowerTune LoRA Model

This is a LoRA adapter for TinyLlama/TinyLlama-1.1B-Chat-v1.0, fine-tuned with the Flower federated learning framework on a general NLP dataset (vicgalle/alpaca-gpt4).

Training Details

  • Dataset: vicgalle/alpaca-gpt4
  • Training method: Federated LoRA fine-tuning with FlowerTune
  • Framework: Flower
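In the federated setup above, each client fine-tunes the LoRA weights on its local data and a server aggregates the resulting updates. As a toy illustration of the server-side aggregation step (a plain-NumPy FedAvg sketch, not the actual FlowerTune implementation), the size-weighted averaging looks like:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Size-weighted average of per-client weight lists (FedAvg).

    client_weights: list of clients, each a list of NumPy arrays
                    (e.g. the LoRA matrices of every adapted layer).
    client_sizes:   number of training examples each client holds.
    """
    total = float(sum(client_sizes))
    return [
        sum(w * (n / total) for w, n in zip(layer_group, client_sizes))
        for layer_group in zip(*client_weights)
    ]

# Two hypothetical clients with one LoRA matrix each:
a = [np.zeros((2, 2))]       # client holding 1 example
b = [np.full((2, 2), 4.0)]   # client holding 3 examples
avg = fedavg([a, b], [1, 3]) # each entry: 0 * 0.25 + 4.0 * 0.75 = 3.0
```

Weighting by client dataset size means clients with more examples pull the global adapter further toward their local optimum, which is the standard FedAvg behavior.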

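For inference, the adapter can be attached to the base model with the Hugging Face `transformers` and `peft` libraries. A minimal sketch (repo ids taken from this card; calling the function downloads the base weights from the Hub):

```python
def load_flowertune_adapter(
    base_id="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    adapter_id="zjudai/flowertune-general-nlp-lora-tinyllama-1.1b-chat-v1.0",
):
    """Return (model, tokenizer) with the LoRA adapter attached.

    Imports are kept inside the function because calling it pulls
    the ~1.1B-parameter base model from the Hugging Face Hub.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(base_id)
    model = PeftModel.from_pretrained(base, adapter_id)  # attach LoRA weights
    return model, tokenizer
```

If a standalone model is preferred, `model.merge_and_unload()` folds the LoRA weights into the base model after loading.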

