---
library_name: transformers
license: apache-2.0
base_model: Dans-DiscountModels/Mistral-Small-3.1-24B-Base-2503-hf-DanChat
tags:
- axolotl
- generated_from_trainer
datasets:
- Dans-DiscountModels/dpe-130l-m-24b-32k
model-index:
- name: 24b-ms-dans-personality-engine-v1.3.0L-TestArticle-1
  results: []
---

[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)
See axolotl config

axolotl version: `0.10.0.dev0`
```yaml
base_model: Dans-DiscountModels/Mistral-Small-3.1-24B-Base-2503-hf-DanChat
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code:

# wandb configuration
wandb_project: 24b-ms-dans-personality-engine
wandb_watch:
wandb_run_id: V1.3.0L-1-3 # V{Version}-{Run Number}-{Attempt Number}
wandb_log_model:

# push checkpoints to hub
hub_model_id: Dans-DiscountModels/24b-ms-dans-personality-engine-v1.3.0L-TestArticle-1
# how to push checkpoints to hub
# https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/trainer#transformers.TrainingArguments.hub_strategy
hub_strategy: "every_save"
# Whether to use hf `use_auth_token` for loading datasets. Useful for fetching private datasets
# Required to be true when used in combination with `push_dataset_to_hub`
hf_use_auth_token: true

# where to save the finished model to
output_dir: ./24b-ms-dans-personality-engine

save_safetensors: true

datasets:
  - path: Dans-DiscountModels/dpe-130l-m-24b-32k
    split: train
    ds_type: parquet
    type:

test_datasets:
  - path: Dans-DiscountModels/dpe-130l-m-24b-32k
    split: validation
    ds_type: parquet
    type:

plugins:
  - axolotl.integrations.liger.LigerPlugin
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin

liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: false
cut_cross_entropy: true

load_in_8bit: false
load_in_4bit: false
strict: false

adapter:
lora_model_dir:

dataset_prepared_path: ./24b-ms-dans-personality-engine-data

sequence_len: 32768
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true

gradient_checkpointing: true

gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 1

optimizer: ademamix_8bit
optim_args: "beta1=0.9,beta2=0.999,beta3=0.999,alpha=5"

lr_scheduler: rex
learning_rate: 0.0000012
cosine_min_lr_ratio: 0.1

max_grad_norm: 0.001

train_on_inputs: false
group_by_length: false

bf16: true
fp16: false
tf32: false

early_stopping_patience:
resume_from_checkpoint:
auto_resume_from_checkpoints: false
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_ratio: 0.1

evals_per_epoch: 10
eval_table_size:
eval_max_new_tokens:

saves_per_epoch: 4
save_total_limit: 1

debug: false
deepspeed: deepspeed_configs/zero3_bf16.json
fsdp:
fsdp_config:

special_tokens:
```
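For orientation, the batch-size arithmetic implied by this config (`micro_batch_size: 1`, `gradient_accumulation_steps: 4`, `sequence_len: 32768` with sample packing) works out as in the sketch below. The 8-GPU device count is taken from the hyperparameter summary further down; the script is only an illustrative calculation, not part of the training code.

```python
# Illustrative sketch: effective batch size implied by the Axolotl config above.
# Assumption: 8 devices, as reported in the training hyperparameters section.

micro_batch_size = 1             # sequences per GPU per forward/backward pass
gradient_accumulation_steps = 4  # passes accumulated before each optimizer step
num_devices = 8                  # from the reported training environment
sequence_len = 32768             # packed context length per sequence

global_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
tokens_per_step = global_batch_size * sequence_len  # upper bound with full packing

print(global_batch_size)  # 32, matching total_train_batch_size below
print(tokens_per_step)    # at most 1,048,576 tokens per optimizer step
```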

# 24b-ms-dans-personality-engine-v1.3.0L-TestArticle-1

This model is a fine-tuned version of [Dans-DiscountModels/Mistral-Small-3.1-24B-Base-2503-hf-DanChat](https://huggingface.co/Dans-DiscountModels/Mistral-Small-3.1-24B-Base-2503-hf-DanChat) on the Dans-DiscountModels/dpe-130l-m-24b-32k dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3214

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1.2e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: ademamix_8bit with args beta1=0.9, beta2=0.999, beta3=0.999, alpha=5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 48
- num_epochs: 1.0

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4826        | 0.0021 | 1    | 1.4263          |
| 1.4024        | 0.1012 | 49   | 1.3709          |
| 1.4655        | 0.2024 | 98   | 1.3545          |
| 1.576         | 0.3036 | 147  | 1.3459          |
| 1.3687        | 0.4047 | 196  | 1.3396          |
| 1.4367        | 0.5059 | 245  | 1.3346          |
| 1.3409        | 0.6071 | 294  | 1.3304          |
| 1.4442        | 0.7083 | 343  | 1.3270          |
| 1.4049        | 0.8095 | 392  | 1.3242          |
| 1.5044        | 0.9107 | 441  | 1.3214          |

### Framework versions

- Transformers 4.51.3
- Pytorch 2.4.1+cu121
- Datasets 3.5.0
- Tokenizers 0.21.1
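The card does not yet document inference. As a starting point, the sketch below loads the checkpoint with the standard `transformers` auto classes; the bf16 dtype, `device_map="auto"`, and the generation settings are assumptions for illustration, not settings verified against this repository.

```python
# Minimal loading/inference sketch (assumptions: bf16 weights on a CUDA device,
# standard AutoModelForCausalLM/AutoTokenizer entry points for this repo).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Dans-DiscountModels/24b-ms-dans-personality-engine-v1.3.0L-TestArticle-1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # training ran in bf16 per the config above
    device_map="auto",
)

prompt = "Hello, could you introduce yourself?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```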