---
library_name: transformers
license: apache-2.0
base_model: Dans-DiscountModels/Mistral-Small-3.1-24B-Base-2503-hf-DanChat
tags:
- axolotl
- generated_from_trainer
datasets:
- Dans-DiscountModels/pretokenization-test-6
model-index:
- name: 24b-ms-dans-personality-engine-v1.3.0-TestArticle-1
  results: []
---

[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.10.0.dev0`
```yaml
base_model: Dans-DiscountModels/Mistral-Small-3.1-24B-Base-2503-hf-DanChat
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

trust_remote_code:

# wandb configuration
wandb_project: 24b-ms-dans-personality-engine
wandb_watch:
wandb_run_id: V1.3.0-1-5 # V{Version}-{Run Number}-{Attempt Number}
wandb_log_model:

# push checkpoints to hub
hub_model_id: Dans-DiscountModels/24b-ms-dans-personality-engine-v1.3.0-TestArticle-1
# how to push checkpoints to hub
# https://huggingface.co/docs/transformers/v4.31.0/en/main_classes/trainer#transformers.TrainingArguments.hub_strategy
hub_strategy: "every_save"
# Whether to use hf `use_auth_token` for loading datasets. Useful for fetching private datasets
# Required to be true when used in combination with `push_dataset_to_hub`
hf_use_auth_token: true

# where to save the finished model to
output_dir: ./24b-ms-dans-personality-engine

save_safetensors: true

datasets:
  - path: Dans-DiscountModels/pretokenization-test-6
    ds_type: parquet
    type:

plugins:
  - axolotl.integrations.liger.LigerPlugin
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin

liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: false
cut_cross_entropy: true

load_in_8bit: false
load_in_4bit: false
strict: false

adapter:
lora_model_dir:

dataset_prepared_path: ./24b-ms-dans-personality-engine
val_set_size: 0.0

sequence_len: 33000
sample_packing: true
eval_sample_packing: true
pad_to_sequence_len: true

gradient_checkpointing: true
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 2

optimizer: ademamix_8bit
optim_args: "beta1=0.9,beta2=0.999,beta3=0.999,alpha=5"

lr_scheduler: rex
learning_rate: 0.000001
cosine_min_lr_ratio:

max_grad_norm: 0.001

train_on_inputs: false
group_by_length: false

bf16: true
fp16: false
tf32: false

early_stopping_patience:

resume_from_checkpoint:
auto_resume_from_checkpoints: false

local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_ratio: 0.1

evals_per_epoch: 24
eval_table_size:
eval_max_new_tokens:

saves_per_epoch: 4
save_total_limit: 1

debug: false
deepspeed: deepspeed_configs/zero3_bf16.json
fsdp:
fsdp_config:

special_tokens:
```

</details>
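
For reference, the effective global batch size reported in the hyperparameters below follows directly from the config above. A minimal sketch, not part of the original card: the config filename is hypothetical, and the device count of 8 is taken from the auto-generated hyperparameter list further down, since world size is not stored in the config.

```python
# Derive the effective global batch size from the Axolotl config above.
# The filename is hypothetical; num_devices comes from the hyperparameter
# list below ("num_devices: 8").
import yaml

with open("24b-ms-dans-personality-engine-v1.3.0.yaml") as f:
    cfg = yaml.safe_load(f)

num_devices = 8
global_batch = cfg["micro_batch_size"] * cfg["gradient_accumulation_steps"] * num_devices
print(global_batch)  # 1 * 4 * 8 = 32, matching total_train_batch_size below
```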

# 24b-ms-dans-personality-engine-v1.3.0-TestArticle-1

This model is a fine-tuned version of [Dans-DiscountModels/Mistral-Small-3.1-24B-Base-2503-hf-DanChat](https://huggingface.co/Dans-DiscountModels/Mistral-Small-3.1-24B-Base-2503-hf-DanChat) on the Dans-DiscountModels/pretokenization-test-6 dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: ademamix_8bit with args: beta1=0.9, beta2=0.999, beta3=0.999, alpha=5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 338
- num_epochs: 2.0

### Training results

### Framework versions

- Transformers 4.51.3
- PyTorch 2.4.1+cu121
- Datasets 3.5.1
- Tokenizers 0.21.1
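
The card does not yet include a usage example. Below is a minimal inference sketch, assuming the pushed checkpoint loads with the AutoClasses named in the config (`AutoModelForCausalLM` / `AutoTokenizer`) and that hardware with enough memory for a 24B-parameter model in bf16 is available.

```python
# Minimal usage sketch, not an official example from the card.
# Assumes the checkpoint loads with the AutoClasses listed in the Axolotl
# config and that a GPU (or GPUs) with enough memory for 24B bf16 weights
# is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Dans-DiscountModels/24b-ms-dans-personality-engine-v1.3.0-TestArticle-1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # training used bf16 per the config
    device_map="auto",
)

prompt = "Write a short scene set in a rain-soaked city."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

If the tokenizer ships a chat template (the DanChat base model name suggests one), `tokenizer.apply_chat_template` may be a better entry point than a raw prompt string.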