Neroism8422 committed on
Commit 6bfae43 · verified · 1 Parent(s): 296eb20

Add LoRA merge configuration file; LlamaFactory can reproduce it.

Files changed (1)
  1. merge_lora.yaml +14 -0
merge_lora.yaml ADDED
@@ -0,0 +1,14 @@
+ ### Note: DO NOT use quantized model or quantization_bit when merging lora adapters
+
+ ### model
+ # model_name_or_path: Qwen/Qwen2.5-VL-7B-Instruct
+ model_name_or_path: llava-hf/llama3-llava-next-8b-hf
+ adapter_name_or_path: saves/mol-instruct-llava3-next/checkpoint-4000
+ template: llava_next
+ trust_remote_code: true
+
+ ### export
+ export_dir: saves/merged_models/mol-instruct-llava3-next-checkpoint-4000
+ export_size: 5
+ export_device: cpu # choices: [cpu, auto]
+ export_legacy_format: false
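
To reproduce the merge, a config like this is typically passed to LLaMA-Factory's export command (a usage sketch, assuming LLaMA-Factory is installed and the adapter checkpoint path above exists locally):

    # merge the LoRA adapter into the base model and write the result to export_dir
    llamafactory-cli export merge_lora.yaml

The merged model is saved under export_dir; export_size sets the maximum shard size (in GB) of the exported checkpoint, and export_device: cpu keeps the merge off the GPU.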