
Built with Axolotl

See axolotl config

axolotl version: 0.10.0.dev0

base_model: /mnt/shared/tp1-an1/alex/Magistral/merged
chat_template: mistral_v7_tekken

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: true

processor_type: AutoProcessor
image_size: 512
image_resize_algorithm: bilinear

skip_prepare_dataset: true
remove_unused_columns: false  # leave columns in place as they are needed to handle image embeddings during training
sample_packing: false  # not yet supported with multimodal


unfrozen_parameters:
  - .*multi_modal_projector.*
  - .*lm_head.*

datasets:
  - path: /mnt/shared/tp1-an1/alex/FFM_training/vision_dialogue_dataset-0527.jsonl
    type: chat_template
    field_messages: messages
    roles_to_train: ['assistant']
    train_on_eos: turn
      
dataset_prepared_path: ./vision_dataprep/
val_set_size: 0
output_dir: ./placeholder_add_vision/
shuffle_merged_datasets: true

sequence_len: 4096
eval_sample_packing: false
pad_to_sequence_len: true

wandb_project: TP1_2025_05
wandb_entity:
wandb_watch:
wandb_name: Mistral-24B-SFT-250611
use_tensorboard: true

gradient_accumulation_steps: 1
micro_batch_size: 4
num_epochs: 2
optimizer: adamw_torch_fused
lr_scheduler: cosine
learning_rate: 1e-5
max_grad_norm: 1.0

adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-8

bf16: true
tf32: false

logging_steps: 1
flash_attention: true
xformers_attention: false
sdp_attention: false

warmup_ratio: 0.05
saves_per_epoch: 1
weight_decay: 0

fsdp:
  - full_shard
  - auto_wrap
fsdp_config:
  fsdp_limit_all_gathers: true
  fsdp_sync_module_states: true
  fsdp_offload_params: false
  fsdp_use_orig_params: true
  fsdp_cpu_ram_efficient_loading: true
  fsdp_activation_checkpointing: true
  fsdp_transformer_layer_cls_to_wrap: MistralDecoderLayer,PixtralAttentionLayer
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP

seed: 42
auto_resume_from_checkpoints: true
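
The datasets entry expects a JSONL file in which every record carries a messages list (field_messages: messages), and only assistant turns contribute to the loss (roles_to_train: ['assistant'], with train_on_eos: turn keeping the end-of-turn token trainable for those turns). Training is typically launched with the Axolotl CLI, e.g. axolotl train config.yaml or accelerate launch -m axolotl.cli.train config.yaml, depending on the installed version. Below is a minimal sanity-check sketch for such a file; how images are referenced inside a turn depends on the Axolotl version and processor, so the image handling is deliberately left unchecked and the script is an assumption, not part of the original setup.

# Minimal sketch: sanity-check the chat JSONL before training.
# Assumption: each record has a "messages" list of {"role", "content"} dicts;
# the multimodal content schema (image references) is version-dependent and not validated here.
import json

DATASET_PATH = "/mnt/shared/tp1-an1/alex/FFM_training/vision_dialogue_dataset-0527.jsonl"
VALID_ROLES = {"system", "user", "assistant"}

with open(DATASET_PATH) as f:
    for line_no, line in enumerate(f, start=1):
        record = json.loads(line)
        messages = record["messages"]  # matches field_messages: messages
        assert isinstance(messages, list) and messages, f"line {line_no}: empty messages"
        for turn in messages:
            assert turn["role"] in VALID_ROLES, f"line {line_no}: unexpected role {turn['role']!r}"
        # roles_to_train: ['assistant'] -> at least one assistant turn must exist
        assert any(t["role"] == "assistant" for t in messages), f"line {line_no}: no assistant turn"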
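
The unfrozen_parameters patterns freeze the entire model except the multimodal projector and the language-model head, so effectively only those components are updated during this run. The snippet below only illustrates which kinds of parameter names the regexes cover; the parameter names are hypothetical examples, and the exact matching call Axolotl uses internally is not asserted here.

# Illustration only: which example parameter names the unfrozen_parameters regexes cover.
import re

patterns = [r".*multi_modal_projector.*", r".*lm_head.*"]
example_params = [
    "language_model.model.layers.0.self_attn.q_proj.weight",     # frozen
    "vision_tower.transformer.layers.0.attention.q_proj.weight",  # frozen
    "multi_modal_projector.linear_1.weight",                      # trainable
    "lm_head.weight",                                             # trainable
]
for name in example_params:
    trainable = any(re.search(p, name) for p in patterns)
    print(f"{'trainable' if trainable else 'frozen':>9}  {name}")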

placeholder_add_vision/

This model is a fine-tuned version of the merged Magistral checkpoint at /mnt/shared/tp1-an1/alex/Magistral/merged, trained on the /mnt/shared/tp1-an1/alex/FFM_training/vision_dialogue_dataset-0527.jsonl dataset.

Model description

More information needed

Intended uses & limitations

More information needed
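
In the absence of a documented usage section, here is a minimal inference sketch, assuming the exported checkpoint in placeholder_add_vision/ loads through the standard Transformers image-text-to-text interface; the AutoModel class, chat-message layout, and image path below are assumptions, not taken from the original card.

# Hypothetical inference sketch; model class and message format are assumptions.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForImageTextToText

CKPT = "./placeholder_add_vision/"  # output_dir from the config above

processor = AutoProcessor.from_pretrained(CKPT)
model = AutoModelForImageTextToText.from_pretrained(
    CKPT, torch_dtype=torch.bfloat16, device_map="auto"
)

image = Image.open("example.jpg")  # placeholder image
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ]}
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))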

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 16
  • total_train_batch_size: 64
  • total_eval_batch_size: 64
  • optimizer: AdamW (torch fused) with betas=(0.9, 0.95) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 802
  • training_steps: 16051
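
The derived totals above follow arithmetically from the per-device settings in the config; a quick check (not part of the original card):

# Arithmetic check of the derived hyperparameters.
micro_batch_size = 4
gradient_accumulation_steps = 1
num_devices = 16
total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 64

training_steps = 16051
warmup_ratio = 0.05
warmup_steps = int(training_steps * warmup_ratio)
print(warmup_steps)  # int(802.55) = 802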

Training results

Framework versions

  • Transformers 4.52.3
  • Pytorch 2.6.0+cu126
  • Datasets 3.6.0
  • Tokenizers 0.21.1