---
library_name: transformers
base_model: NewEden/Hamanasu-KTO-4B
tags:
- axolotl
- generated_from_trainer
datasets:
- PocketDoc/Dans-Prosemaxx-Cowriter-3-S
- PocketDoc/Dans-Prosemaxx-Adventure
- PocketDoc/Dans-Failuremaxx-Adventure-3
- PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-2
- PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-3
- PocketDoc/Dans-Prosemaxx-InstructWriter-Continue-2
- PocketDoc/Dans-Prosemaxx-Instructwriter-Long
model-index:
- name: Hamanasu-4B-Adventure-Final-Hopefully
  results: []
---

[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.8.0.dev0`
```yaml
base_model: NewEden/Hamanasu-KTO-4B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

hub_model_id: NewEden/Hamanasu-4B-Adventure-Final-Hopefully
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true

plugins:
  - axolotl.integrations.liger.LigerPlugin
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: false
cut_cross_entropy: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: PocketDoc/Dans-Prosemaxx-Cowriter-3-S
    type: dan-chat-advanced
  - path: PocketDoc/Dans-Prosemaxx-Adventure
    type: dan-chat-advanced
  - path: PocketDoc/Dans-Failuremaxx-Adventure-3
    type: dan-chat-advanced
  - path: PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-2
    type: dan-chat-advanced
  - path: PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-3
    type: dan-chat-advanced
  - path: PocketDoc/Dans-Prosemaxx-InstructWriter-Continue-2
    type: dan-chat-advanced
  - path: PocketDoc/Dans-Prosemaxx-Instructwriter-Long
    type: dan-chat-advanced
shuffle_merged_datasets: true
dataset_prepared_path: prepared_data
val_set_size: 0.01
output_dir: ./adventure

sequence_len: 32768
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: true

wandb_project: tavbussy
wandb_entity:
wandb_watch:
wandb_name: adventure-attempt-02
wandb_log_model:

gradient_accumulation_steps: 1
micro_batch_size: 4
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 5e-6
max_grad_norm: 0.1

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 25
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 2
debug:
deepspeed: ./deepspeed_configs/zero3_bf16.json
weight_decay: 0.02
fsdp:
fsdp_config:

special_tokens:
  pad_token: <|finetune_right_pad_id|>
```

</details><br>
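As an aside on the `special_tokens` entry above: the sketch below (illustrative, not part of the training code) shows the equivalent tokenizer-level setting, assuming the base tokenizer is reachable on the Hub and already includes `<|finetune_right_pad_id|>` in its vocabulary.

```python
# Illustrative sketch only: mirrors special_tokens.pad_token from the config above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NewEden/Hamanasu-KTO-4B")
tokenizer.pad_token = "<|finetune_right_pad_id|>"  # same value as in the YAML config
# If the string is already a token in the vocabulary, pad_token_id resolves to its id.
print(tokenizer.pad_token, tokenizer.pad_token_id)
```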

# Hamanasu-4B-Adventure-Final-Hopefully

This model is a fine-tuned version of [NewEden/Hamanasu-KTO-4B](https://huggingface.co/NewEden/Hamanasu-KTO-4B) on the PocketDoc/Dans-Prosemaxx-Cowriter-3-S, PocketDoc/Dans-Prosemaxx-Adventure, PocketDoc/Dans-Failuremaxx-Adventure-3, PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-2, PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-3, PocketDoc/Dans-Prosemaxx-InstructWriter-Continue-2, and PocketDoc/Dans-Prosemaxx-Instructwriter-Long datasets.
It achieves the following results on the evaluation set:
- Loss: 2.4143

## Model description

More information needed

## Intended uses & limitations

More information needed. A minimal example-usage sketch is included at the end of this card.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 16 (micro batch size 4 × 4 devices × gradient accumulation 1)
- total_eval_batch_size: 16
- optimizer: adamw_bnb_8bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 25
- num_epochs: 4.0

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.5668        | 0.0068 | 1    | 2.5806          |
| 2.455         | 0.2534 | 37   | 2.4450          |
| 2.4115        | 0.5068 | 74   | 2.4323          |
| 2.3298        | 0.7603 | 111  | 2.4223          |
| 2.322         | 1.0137 | 148  | 2.4178          |
| 2.2661        | 1.2671 | 185  | 2.4178          |
| 2.2482        | 1.5205 | 222  | 2.4155          |
| 2.3707        | 1.7740 | 259  | 2.4115          |
| 2.293         | 2.0274 | 296  | 2.4132          |
| 2.3085        | 2.2808 | 333  | 2.4137          |
| 2.1902        | 2.5342 | 370  | 2.4123          |
| 2.216         | 2.7877 | 407  | 2.4112          |
| 2.3081        | 3.0411 | 444  | 2.4123          |
| 2.1989        | 3.2945 | 481  | 2.4142          |
| 2.2527        | 3.5479 | 518  | 2.4142          |
| 2.2419        | 3.8014 | 555  | 2.4143          |

### Framework versions

- Transformers 4.50.0
- Pytorch 2.5.1+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
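## Example usage (sketch)

A minimal inference sketch, not an official usage guide. It assumes the checkpoint is published under the `hub_model_id` from the config above (`NewEden/Hamanasu-4B-Adventure-Final-Hopefully`), that the tokenizer ships with a chat template, and that a GPU with bf16 support is available; adjust to your setup.

```python
# Hedged example: load the fine-tuned checkpoint and generate a short continuation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "NewEden/Hamanasu-4B-Adventure-Final-Hopefully"  # hub_model_id from the config
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # training ran in bf16
    device_map="auto",           # requires the `accelerate` package
)

messages = [
    {"role": "user", "content": "You stand at the mouth of a dark cave. What happens next?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0, input_ids.shape[-1]:], skip_special_tokens=True))
```

Sampling settings (temperature, max tokens) are placeholders; the card does not specify recommended generation parameters.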