---
pipeline_tag: text-generation
license: apache-2.0
language:
- en
tags:
- T3Q-ko-solar-sft-v3.0
- kyujinpy/KoCommercial-NoSSL
base_model: chihoonlee10/T3Q-ko-solar-dpo-v3.0
datasets:
- davidkim205/ko_common_gen
model-index:
- name: T3Q-ko-solar-sft-v3.0
  results: []
---

Update @ 2024.03.25

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f22e4076fedc4fd11e978f/MoTedec_ZL8GM2MmGyAPs.png)

## T3Q-ko-solar-sft-v3.0

This model is an SFT fine-tuned version of chihoonlee10/T3Q-ko-solar-dpo-v3.0.

**Model Developers** Chihoon Lee (chlee10), T3Q

## Training hyperparameters

The following hyperparameters were used during training:

```python
# Hyperparameters for the dataset and number of training epochs
batch_size = 16
num_epochs = 1
micro_batch = 1
gradient_accumulation_steps = batch_size // micro_batch

# Hyperparameters for the training method
cutoff_len = 4096
lr_scheduler = 'cosine'
warmup_ratio = 0.06  # warmup_steps = 100
learning_rate = 5e-5
optimizer = 'paged_adamw_32bit'
weight_decay = 0.01
max_grad_norm = 1.0

# LoRA config
lora_r = 16
lora_alpha = 16
lora_dropout = 0.05
lora_target_modules = ["k_proj", "v_proj", "gate_proj", "down_proj", "up_proj"]

# Options for the inputs produced by the tokenizer
train_on_inputs = False
add_eos_token = False

# NEFTune params
neftune_noise_alpha = 5
```
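
For reference, below is a minimal sketch of how these hyperparameters could be wired together with Hugging Face `transformers` and `peft`. This is an illustration, not the authors' released training script: the base model name and LoRA settings come from this card, while `train_dataset` is a hypothetical placeholder for a pre-tokenized dataset (sequences truncated to `cutoff_len = 4096`, with prompt tokens masked from the loss since `train_on_inputs = False`).

```python
# Minimal sketch, assuming a pre-tokenized `train_dataset` prepared elsewhere.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model

base_model = "chihoonlee10/T3Q-ko-solar-dpo-v3.0"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)

# LoRA adapters on the attention/MLP projections listed in this card.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["k_proj", "v_proj", "gate_proj", "down_proj", "up_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

args = TrainingArguments(
    output_dir="T3Q-ko-solar-sft-v3.0",
    num_train_epochs=1,              # num_epochs
    per_device_train_batch_size=1,   # micro_batch
    gradient_accumulation_steps=16,  # batch_size // micro_batch
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.06,
    weight_decay=0.01,
    max_grad_norm=1.0,
    optim="paged_adamw_32bit",       # requires bitsandbytes
    neftune_noise_alpha=5,           # NEFTune (transformers >= 4.35)
    bf16=True,
)

# `train_dataset` is hypothetical: a tokenized causal-LM dataset with
# sequences truncated to cutoff_len=4096 and prompt tokens set to -100
# in the labels (the train_on_inputs=False convention).
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```

Note that the NEFTune noise is applied through `TrainingArguments.neftune_noise_alpha`, matching the `neftune_noise_alpha = 5` setting above, and `optim="paged_adamw_32bit"` corresponds to the listed optimizer and requires `bitsandbytes` to be installed.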