Svngoku committed
Commit ac15039 · verified · 1 Parent(s): 5c66500

Update README.md

Files changed (1):
  1. README.md (+1, -1)
README.md CHANGED
@@ -25,7 +25,7 @@ This model is a fine-tuned version of the Qwen3-VL-8B-Instruct model using the U
 
 ## Training
 
-The model was trained for 200 steps with a batch size of 8 (2 per device with 4 gradient accumulation steps). LoRA adapters were used for parameter efficient finetuning, targeting both vision and language layers, as well as attention and MLP modules.
+The model was trained for 4 epochs with a batch size of 8 (4 per device with 8 gradient accumulation steps). LoRA adapters were used for parameter efficient finetuning, targeting both vision and language layers, as well as attention and MLP modules.
 
 
 ## Dataset
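
The updated line describes the fine-tuning configuration: 4 epochs, a per-device batch size of 4 with 8 gradient accumulation steps (32 samples per optimizer step per device), and LoRA adapters on both the vision and language layers, covering attention and MLP modules. The truncated hunk context suggests the run used the Unsloth library; the sketch below instead uses plain Hugging Face PEFT and Transformers to illustrate the same hyperparameters. The model class, LoRA rank/alpha, learning rate, and target module names are illustrative assumptions, not values recorded in this commit.

```python
# Minimal sketch of the LoRA setup described in the updated README line.
# Hyperparameters not stated in the diff (rank, alpha, learning rate,
# target module names, model class) are assumptions for illustration.
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor, TrainingArguments
from peft import LoraConfig, get_peft_model

model_id = "Qwen/Qwen3-VL-8B-Instruct"  # base model named in the README

# Auto class assumed for this vision-language model; check the model card.
model = AutoModelForVision2Seq.from_pretrained(model_id, torch_dtype=torch.bfloat16)
processor = AutoProcessor.from_pretrained(model_id)

# LoRA on attention and MLP projections; the diff says both vision and
# language layers are targeted, so no module subset is excluded here.
# (Module names are the usual Qwen-style projection names — verify against
# model.named_modules() for the actual checkpoint.)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Training schedule from the updated line: 4 epochs, 4 samples per device,
# 8 gradient accumulation steps (4 x 8 = 32 samples per optimizer step).
training_args = TrainingArguments(
    output_dir="qwen3-vl-8b-lora",
    num_train_epochs=4,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,  # assumed; not stated in the diff
    logging_steps=10,
    bf16=True,
)
```

These arguments would then be passed to a trainer along with the dataset described in the following section; the trainer wiring is omitted here since the commit does not record it.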