This uses my tiny dataset to train a larger variant of the Viking model family.

This LoRA was trained from the 1000B-token checkpoint.

Uploaded model

  • Developed by: mpasila
  • License: apache-2.0
  • Finetuned from model: LumiOpen/Viking-33B

This Llama-architecture model was trained 2x faster with Unsloth and Hugging Face's TRL library.

