FlowerTune LoRA Model

This is a LoRA adapter for meta-llama/Llama-3.1-8B-Instruct, fine-tuned with the Flower federated learning framework on a general NLP dataset.

Training Details

  • Dataset: vicgalle/alpaca-gpt4
  • Training method: Federated LoRA fine-tuning with FlowerTune
  • Framework: Flower

The adapter was trained on vicgalle/alpaca-gpt4 through distributed (federated) learning to improve the base model's performance on general NLP tasks. This repository contains only the LoRA adapter weights, not the full base model.
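Since this repository holds only adapter weights, the base model must be loaded first and the adapter applied on top. The sketch below assumes the `transformers` and `peft` libraries and access to the gated base model; it is a minimal illustration, not an officially tested snippet.

```python
# Hedged sketch: attaching this LoRA adapter to the base model with PEFT.
BASE_MODEL = "meta-llama/Llama-3.1-8B-Instruct"
ADAPTER = "zjudai/flowertune-general-nlp-lora-llama-3.1-8b-instruct"

def load_model():
    # Imports are deferred so the sketch itself stays dependency-free;
    # calling this requires `transformers`, `peft`, and (gated) access
    # to the base model weights.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")
    model = PeftModel.from_pretrained(base, ADAPTER)  # overlay LoRA weights
    return tokenizer, model
```

Calling `load_model()` downloads the base model (roughly 16 GB in bf16) and the much smaller adapter, so inference hardware requirements are the same as for the base model.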

