---
license: cc-by-nc-4.0
base_model: EleutherAI/polyglot-ko-12.8b
tags:
- generated_from_trainer
model-index:
- name: gollm-12.8b-instruct-v2.3
  results: []
---

# gollm-12.8b-instruct-v2.3

This model is a fine-tuned version of [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) on a custom mixed dataset.

## Model description

- No-context template

```
아래는 작업을 설명하는 질문어와 추가 컨텍스트를 제공하는 맥락이 함께 제공됩니다. 요청을 적절히 완료하는 답변을 작성하세요.

### 질문:
{instruction}

### 답변:

```

- With-context template

```
아래는 작업을 설명하는 질문어와 추가 컨텍스트를 제공하는 맥락이 함께 제공됩니다. 요청을 적절히 완료하는 답변을 작성하세요.

### 맥락:
{input}

### 질문:
{instruction}

### 답변:

```

## Intended uses & limitations

More information needed

## Training and evaluation data

- self-introduction (20 samples)
- High-quality reasoning dataset from private documents, QAs generated by Claude AI (1.3k samples)
- EverythingLM-v2 (0.9k samples)
- KoCoT (2k samples)
- Private MRC dataset, answers generated by GPT-4 (32k samples)

The original MRC data contain ~12k question-answer pairs with context; augmentation is applied to produce 20k samples with triplet contexts (1 correct context out of 3).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- saved_checkpoint_at_epoch: 1 (condition: loss < 0.3)

### Framework versions

- Transformers 4.32.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
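
### Hyperparameter configuration sketch

The block below is a minimal sketch of how the hyperparameters listed above could be expressed as 🤗 `TrainingArguments`. It is a reconstruction for illustration only: the original training script is not part of this card, and the output directory, checkpoint-saving logic, and dataset preparation are omitted or placeholders.

```python
# Sketch: mapping the listed hyperparameters onto transformers.TrainingArguments.
# This is NOT the original training script; output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gollm-12.8b-instruct-v2.3",  # placeholder output directory
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,   # 2 x 8 = total train batch size 16 (single device)
    num_train_epochs=8,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```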
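
## Prompting example

The snippet below is a minimal usage sketch showing how the templates from the Model description section can be filled in and passed to the model with `transformers`. The model ID, generation parameters, and the exact whitespace layout of the template strings are assumptions inferred from this card, not part of the original training code.

```python
# Minimal inference sketch (assumption: standard causal-LM loading with transformers;
# generation parameters and the exact template whitespace are illustrative).
import torch
from typing import Optional
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "gollm-12.8b-instruct-v2.3"  # hypothetical repo id / local path

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

PROMPT_NO_CONTEXT = (
    "아래는 작업을 설명하는 질문어와 추가 컨텍스트를 제공하는 맥락이 함께 제공됩니다. "
    "요청을 적절히 완료하는 답변을 작성하세요.\n\n"
    "### 질문:\n{instruction}\n\n### 답변:\n"
)

PROMPT_WITH_CONTEXT = (
    "아래는 작업을 설명하는 질문어와 추가 컨텍스트를 제공하는 맥락이 함께 제공됩니다. "
    "요청을 적절히 완료하는 답변을 작성하세요.\n\n"
    "### 맥락:\n{input}\n\n### 질문:\n{instruction}\n\n### 답변:\n"
)

def generate(instruction: str, context: Optional[str] = None, max_new_tokens: int = 256) -> str:
    # Pick the template that matches whether extra context is supplied.
    if context:
        prompt = PROMPT_WITH_CONTEXT.format(input=context, instruction=instruction)
    else:
        prompt = PROMPT_NO_CONTEXT.format(instruction=instruction)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
        eos_token_id=tokenizer.eos_token_id,
    )
    # Return only the newly generated answer, dropping the prompt tokens.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

print(generate("한국의 수도는 어디인가요?"))
```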