---
license: apache-2.0
datasets:
- HuggingFaceFW/fineweb-2
language:
- is
base_model:
- HuggingFaceTB/SmolLM2-360M-Instruct
pipeline_tag: text-generation
library_name: transformers
---

This is a SmolLM2-360M-Instruct model fine-tuned on the Icelandic portion of FineWeb-2. It is intended for my own research and has not yet been evaluated more broadly.

Training:
- 1 epoch
- Learning rate: 5e-4
- LR scheduler: cosine
- Warmup ratio: 0.05
- Batch size: 1 per device
- 8 A100 (80 GB) GPUs
- Gradient accumulation steps: 32
- Effective batch size: 256 (1 × 8 × 32)
- Max. context length: 8192 tokens
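For context, here is a minimal sketch of how the hyperparameters above might map onto a `transformers` `Trainer` run. The model and dataset IDs come from the card's metadata; the FineWeb-2 config name (`isl_Latn`), the output directory, and the bf16 setting are assumptions on my part, and this is not the exact training script used for this model.

```python
# Sketch only: maps the hyperparameter list above onto transformers' Trainer.
# Dataset and base-model IDs are from the card; everything marked "assumed"
# is an illustration, not a statement about the actual training setup.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "HuggingFaceTB/SmolLM2-360M-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Icelandic subset of FineWeb-2; the config name "isl_Latn" is assumed.
dataset = load_dataset("HuggingFaceFW/fineweb-2", name="isl_Latn", split="train")

def tokenize(batch):
    # Truncate to the card's stated max context length of 8192 tokens.
    return tokenizer(batch["text"], truncation=True, max_length=8192)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="smollm2-360m-icelandic",  # hypothetical output path
    num_train_epochs=1,
    learning_rate=5e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    per_device_train_batch_size=1,   # launched across 8 GPUs via accelerate/torchrun
    gradient_accumulation_steps=32,  # 1 * 8 GPUs * 32 = effective batch size 256
    bf16=True,                       # assumed; A100s support bf16
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    # Causal LM collator: copies input_ids to labels (mlm=False).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```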
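To try the model, a standard `transformers` text-generation pipeline call works. The repository ID below is a placeholder, since this card does not state the final model name; substitute the actual repo.

```python
# Minimal inference sketch. The repo ID is a placeholder, not the real name.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="your-username/smollm2-360m-icelandic",  # placeholder repo ID
)

prompt = "Ísland er"  # "Iceland is"
output = generator(prompt, max_new_tokens=50, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```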