---
license: mit
base_model: BramVanroy/fietje-2b
tags:
- trl
- fietje
- alignment-handbook
- sft
datasets:
- BramVanroy/ultrachat_200k_dutch
- BramVanroy/no_robots_dutch
- BramVanroy/belebele_dutch
model-index:
- name: fietje-2b-instruct
results: []
pipeline_tag: text-generation
inference: false
language:
- nl
---
# Fietje 2B Instruct

An open and efficient LLM for Dutch

👱‍♀️ Base version - 🤖 Instruct version (this one) - 💬 Chat version - 🚀 GGUF of instruct model
Fietje is an adapted version of microsoft/phi-2, tailored to Dutch text generation by training on 28B tokens. It is small and efficient with a size of 2.7 billion parameters, while performing almost on par with more powerful Dutch LLMs of twice its size, such as GEITje 7B Ultra.

A thorough description of the creation and evaluation of Fietje, as well as usage examples, is available in this GitHub repository.
## Intended uses & limitations
The same limitations as phi-2, and LLMs in general, apply here. LLMs hallucinate, make mistakes, and should not be trusted. Use at your own risk!
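A minimal usage sketch with the `transformers` library is given below. It assumes the model lives at the repo id `BramVanroy/fietje-2b-instruct` (inferred from the model name and base-model namespace, so treat it as an assumption) and that the tokenizer ships a chat template:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, inferred from the model name and base-model namespace.
model_id = "BramVanroy/fietje-2b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Wat is de hoofdstad van Nederland?"}]

# Apply the tokenizer's chat template and generate a reply.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Given the caveats above, verify anything factual in the output.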
## Training and evaluation data
Fietje 2B Instruct was fine-tuned from the base model on the following datasets. The number of training samples per dataset is given in parentheses, totalling 201,579 samples. A loading sketch follows the list.
- BramVanroy/ultrachat_200k_dutch: gpt-4-1106-preview; multi-turn; fully generated (192,598)
- BramVanroy/no_robots_dutch: gpt-4-1106-preview; prompts translated, answers generated; some items have system messages (8,181)
- BramVanroy/belebele_dutch: Dutch portion of belebele, formatted into SFT format (800)
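As an illustration only (this was not part of the training pipeline), the mixture can be inspected with the `datasets` library:

```python
from datasets import load_dataset

# Illustration only: peek at each SFT dataset in the mixture.
for name in [
    "BramVanroy/ultrachat_200k_dutch",
    "BramVanroy/no_robots_dutch",
    "BramVanroy/belebele_dutch",
]:
    ds = load_dataset(name)  # a DatasetDict with all available splits
    print(name, {split: len(d) for split, d in ds.items()})
```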
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training; a rough `TrainingArguments` equivalent is sketched after the list:
- learning_rate: 6e-05
- train_batch_size: 42
- eval_batch_size: 42
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- total_train_batch_size: 672
- total_eval_batch_size: 672
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-07
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
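For reference, the hyperparameters above map onto Transformers `TrainingArguments` roughly as follows. This is a sketch, not the original training script (the run used the alignment-handbook/TRL tooling); `output_dir` and `bf16` are assumptions. With 16 GPUs, the per-device batch size of 42 yields the reported total of 672.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="fietje-2b-instruct",  # placeholder path, not from the card
    learning_rate=6e-5,
    per_device_train_batch_size=42,   # x 16 GPUs = 672 total
    per_device_eval_batch_size=42,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3.0,
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-7,
    bf16=True,                        # assumption: mixed-precision training
)
```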
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9325        | 1.0   | 178  | 0.9060          |
| 0.8687        | 2.0   | 356  | 0.8850          |
| 0.8385        | 3.0   | 534  | 0.8818          |
## Framework versions
- Transformers 4.39.1
- PyTorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2