---
library_name: peft
license: other
base_model: unsloth/Qwen2.5-3B-Instruct
tags:
- generated_from_trainer
datasets:
- ICEPVP8977/Uncensored_Small_Test_Time_Compute
model-index:
- name: outputs/mymodel
  results: []
---

[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.8.0.dev0`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-3B-Instruct
bf16: auto
dataset_processes: 32
per_device_train_batch_size: 1
datasets:
- message_property_mappings:
    content: content
    role: role
  path: ICEPVP8977/Uncensored_Small_Test_Time_Compute
  type: alpaca
  trust_remote_code: false
gradient_accumulation_steps: 1
gradient_checkpointing: true
learning_rate: 0.0002
lisa_layers_attribute: model.layers
load_best_model_at_end: false
load_in_4bit: true
load_in_8bit: false
lora_alpha: 16
lora_dropout: 0.05
lora_r: 8
lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
loraplus_lr_embedding: 1.0e-06
lr_scheduler: cosine
max_prompt_len: 512
mean_resizing_embeddings: false
micro_batch_size: 8
num_epochs: 1.0
optimizer: paged_adamw_8bit
output_dir: ./outputs/mymodel
pretrain_multipack_attn: true
pretrain_multipack_buffer_size: 10000
qlora_sharded_model_loading: false
ray_num_workers: 1
resources_per_worker:
  GPU: 1
sample_packing_bin_size: 200
sample_packing_group_size: 100000
save_only_model: false
save_safetensors: true
sequence_len: 4096
shuffle_merged_datasets: true
skip_prepare_dataset: false
strict: false
train_on_inputs: false
trl:
  log_completions: false
  ref_model_mixup_alpha: 0.9
  ref_model_sync_steps: 64
  sync_ref_model: false
  use_vllm: false
  vllm_device: auto
  vllm_dtype: auto
  vllm_gpu_memory_utilization: 0.9
use_ray: false
val_set_size: 0.0
weight_decay: 0.0
```

</details><br>
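For reference, the QLoRA setup in the config above (4-bit base model, LoRA with `r=8`, `lora_alpha=16`, `lora_dropout=0.05` on all attention and MLP projections) corresponds roughly to the following `transformers`/`peft` setup. This is a minimal sketch of an equivalent configuration, not the code path Axolotl runs internally.

```python
# Illustrative sketch of the QLoRA configuration used for this adapter.
# Assumption: this mirrors the Axolotl config above, not Axolotl's internals.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "unsloth/Qwen2.5-3B-Instruct"

# load_in_4bit: true, bf16: auto
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)
# Enables gradient checkpointing and prepares the quantized model for training
model = prepare_model_for_kbit_training(model)

# LoRA hyperparameters taken from the config: r=8, alpha=16, dropout=0.05
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "v_proj", "k_proj", "o_proj",
        "gate_proj", "down_proj", "up_proj",
    ],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```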

# outputs/mymodel

This model is a fine-tuned version of [unsloth/Qwen2.5-3B-Instruct](https://huggingface.co/unsloth/Qwen2.5-3B-Instruct) on the ICEPVP8977/Uncensored_Small_Test_Time_Compute dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: PAGED_ADAMW_8BIT with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 17
- num_epochs: 1.0

### Training results

### Framework versions

- PEFT 0.14.0
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
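Since this is a PEFT LoRA adapter rather than a full model, it is loaded on top of the base model for inference. The sketch below assumes the adapter is available locally at `./outputs/mymodel` (the `output_dir` from the config above); adjust the path, or replace it with the adapter's Hub repo id if it has been uploaded.

```python
# Minimal inference sketch for the LoRA adapter.
# Assumption: the adapter lives at "./outputs/mymodel"; adjust as needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "unsloth/Qwen2.5-3B-Instruct"
adapter_path = "./outputs/mymodel"  # assumed local path or Hub repo id

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_path)
model.eval()

# Qwen2.5-Instruct uses a chat template, so format the prompt accordingly
messages = [{"role": "user", "content": "Hello, who are you?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```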