---
library_name: transformers
language:
- en
license: apache-2.0
base_model: pszemraj/tFINE-850m-24x24-v0.5-instruct-L1
tags:
- instruct
datasets:
- pszemraj/infinity-instruct-7m-T2T_en
pipeline_tag: text2text-generation
---

# tFINE-850m-24x24-instruct-L2

This model is a fine-tuned version of [pszemraj/tFINE-850m-24x24-v0.5-instruct-L1](https://huggingface.co/pszemraj/tFINE-850m-24x24-v0.5-instruct-L1) on the pszemraj/infinity-instruct-7m-T2T_en dataset (config `deduped-L2`). It achieves the following results on the evaluation set:

- Loss: 1.2542
- Num input tokens seen: 750,938,410

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 3.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 17868
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: paged AdEMAMix (`OptimizerNames.PAGED_ADEMAMIX`) with no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1.0
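
For reference, the list above maps onto `transformers.TrainingArguments` roughly as follows. This is a hedged sketch rather than the actual training script: `output_dir` is a placeholder, and the `optim` string is assumed to be the value behind `OptimizerNames.PAGED_ADEMAMIX` (which requires `bitsandbytes`).

```python
# Sketch: the reported hyperparameters expressed as TrainingArguments.
# Values come from the card; output_dir and the optim string are assumptions.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="tFINE-850m-24x24-instruct-L2",  # placeholder
    learning_rate=3.5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=17868,
    gradient_accumulation_steps=4,   # 32 * 4 = 128 total train batch size
    optim="paged_ademamix",          # assumed value of OptimizerNames.PAGED_ADEMAMIX
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=1.0,
)
```

Note that `per_device_train_batch_size=32` with `gradient_accumulation_steps=4` reproduces the reported total_train_batch_size of 128 on a single device.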
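
## Usage

Since the pipeline tag is `text2text-generation`, the model can be loaded with the standard `transformers` pipeline. A minimal sketch, assuming the checkpoint is published under the repo id matching this card's title (`pszemraj/tFINE-850m-24x24-instruct-L2`); the prompt is illustrative only.

```python
# Minimal inference sketch -- the model id is inferred from the card title.
from transformers import pipeline

model_id = "pszemraj/tFINE-850m-24x24-instruct-L2"  # assumed hub repo id
generator = pipeline("text2text-generation", model=model_id)

prompt = "Explain the difference between a list and a tuple in Python."
result = generator(prompt, max_new_tokens=256, do_sample=False)
print(result[0]["generated_text"])
```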