---
base_model: meta-llama/Llama-3.2-1B-Instruct
tags:
- peft
- lora
- federated-learning
- flower
datasets:
- vicgalle/alpaca-gpt4
---

# FlowerTune LoRA Model

This is a LoRA adapter for meta-llama/Llama-3.2-1B-Instruct, fine-tuned with the Flower federated learning framework on a general NLP dataset.

## Training Details

- Dataset: vicgalle/alpaca-gpt4
- Training method: Federated LoRA fine-tuning with FlowerTune
- Framework: Flower

During federated training, only the lightweight LoRA adapter weights are updated and exchanged between clients, so the base model's parameters remain frozen while the adapter learns from the vicgalle/alpaca-gpt4 instruction data distributed across clients.
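To use the adapter, load the base model and attach the adapter weights with `peft`. A minimal sketch, assuming the `transformers` and `peft` packages are installed; `adapter_id` is a placeholder for this adapter's Hugging Face repo id, which is not stated in this card:

```python
def load_flowertune_adapter(
    adapter_id: str,
    base_model: str = "meta-llama/Llama-3.2-1B-Instruct",
):
    """Load the frozen base model and attach the LoRA adapter on top.

    `adapter_id` is hypothetical here -- replace it with the actual
    repo id of this adapter on the Hugging Face Hub.
    """
    # Imports are deferred so the function is cheap to define even
    # when the optional dependencies are not installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForCausalLM.from_pretrained(base_model)
    # Wrap the base model with the LoRA adapter weights.
    model = PeftModel.from_pretrained(model, adapter_id)
    return model, tokenizer
```

The adapter can also be merged into the base weights with `model.merge_and_unload()` if a standalone model is preferred for inference.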

## Links
- FlowerTune Homepage: [https://huggingface.co/zjudai/FlowerTune](https://huggingface.co/zjudai/FlowerTune)
- FlowerTune Collection: [https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439](https://huggingface.co/collections/zjudai/flowertune-lora-collection-67ecd5d0dae6145cbf798439)