---
library_name: transformers
tags:
- generated_from_trainer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
base_model: /mnt/shared/tp1-an1/alex/Magistral/merged
chat_template: mistral_v7_tekken
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: true
processor_type: AutoProcessor
image_size: 512
image_resize_algorithm: bilinear
skip_prepare_dataset: true
remove_unused_columns: false # leave columns in place as they are needed to handle image embeddings during training
sample_packing: false # not yet supported with multimodal
unfrozen_parameters:
- .*multi_modal_projector.*
- .*lm_head.*
datasets:
  - path: /mnt/shared/tp1-an1/alex/FFM_training/vision_dialogue_dataset-0527.jsonl
    type: chat_template
    field_messages: messages
    roles_to_train: ['assistant']
    train_on_eos: turn
dataset_prepared_path: ./vision_dataprep/
val_set_size: 0
output_dir: ./placeholder_add_vision/
shuffle_merged_datasets: true
sequence_len: 4096
eval_sample_packing: false
pad_to_sequence_len: true
wandb_project: TP1_2025_05
wandb_entity:
wandb_watch:
wandb_name: Mistral-24B-SFT-250611
use_tensorboard: true
gradient_accumulation_steps: 1
micro_batch_size: 4
num_epochs: 2
optimizer: adamw_torch_fused
lr_scheduler: cosine
learning_rate: 1e-5
max_grad_norm: 1.0
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-8
bf16: true
tf32: false
logging_steps: 1
flash_attention: true
xformers_attention: false
sdp_attention: false
warmup_ratio: 0.05
saves_per_epoch: 1
weight_decay: 0
fsdp:
- full_shard
- auto_wrap
fsdp_config:
  fsdp_limit_all_gathers: true
  fsdp_sync_module_states: true
  fsdp_offload_params: false
  fsdp_use_orig_params: true
  fsdp_cpu_ram_efficient_loading: true
  fsdp_activation_checkpointing: true
  fsdp_transformer_layer_cls_to_wrap: MistralDecoderLayer,PixtralAttentionLayer
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
seed: 42
auto_resume_from_checkpoints: true
```
</details><br>
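The config above fine-tunes a merged Magistral checkpoint with the `mistral_v7_tekken` chat template and 512 px bilinear image resizing, updating only the multimodal projector and `lm_head` (see `unfrozen_parameters`). Below is a minimal inference sketch with the resulting checkpoint, assuming it loads through the standard `transformers` auto classes; the repo id is a hypothetical placeholder.

```python
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

# Hypothetical placeholder: point this at the actual merged checkpoint.
model_id = "your-org/add_vision"

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# One user turn mixing an image and a text prompt, in the processor's
# chat-template message format.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/sample.png"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)

# Strip the prompt tokens before decoding the generated continuation.
print(processor.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```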
# placeholder_add_vision/
This model was fine-tuned from the base checkpoint at `/mnt/shared/tp1-an1/alex/Magistral/merged` on the /mnt/shared/tp1-an1/alex/FFM_training/vision_dialogue_dataset-0527.jsonl dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
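Per the `unfrozen_parameters` patterns in the config, only the multimodal projector and `lm_head` weights were trainable; the rest of the merged model stayed frozen. A rough sketch of that regex-based freezing pattern (an illustration of the idea, not axolotl's exact implementation):

```python
import re

# Regexes copied from the config's `unfrozen_parameters` list.
UNFROZEN_PATTERNS = [r".*multi_modal_projector.*", r".*lm_head.*"]

def freeze_except(model, patterns=UNFROZEN_PATTERNS):
    """Freeze every parameter, then re-enable those matching a pattern."""
    for name, param in model.named_parameters():
        param.requires_grad = any(re.fullmatch(p, name) for p in patterns)
```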
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: adamw_torch_fused with betas=(0.9, 0.95) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 802
- training_steps: 16051
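For reference, the derived values above follow from the config: total_train_batch_size = micro_batch_size (4) × num_devices (16) × gradient_accumulation_steps (1) = 64, and the 802 warmup steps are warmup_ratio (0.05) × 16051 training steps, rounded down.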
### Training results
### Framework versions
- Transformers 4.52.3
- Pytorch 2.6.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1