This is a fine-tuned version of the Gemma3-12B-Instruct model, adapted for answering questions about Latvian legislation. The model was fine-tuned on a dataset of approximately 15,000 question–answer pairs sourced from the LVportals.lv archive.

Currently, we provide only LoRA adapter files.

How to Use

To use this model, install the unsloth library and download the LoRA adapter files from this repository.

Assume your working directory is the root folder and that it contains a subfolder called lora where the adapters are stored.
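As a sketch, the adapter files can be fetched programmatically with the huggingface_hub library (installed alongside unsloth); the local_dir name here is an assumption chosen to match the lora folder above:

```python
# Sketch: download the LoRA adapter files into a local "lora" folder.
# snapshot_download is part of the huggingface_hub library.
from huggingface_hub import snapshot_download

REPO_ID = "AiLab-IMCS-UL/Gemma3-12B-Instruct-LVportals-15K"

def download_adapters(local_dir: str = "lora") -> str:
    """Fetch the adapter files from the Hub and return the local path."""
    return snapshot_download(repo_id=REPO_ID, local_dir=local_dir)
```

Calling download_adapters() once is enough; subsequent loads read from the local folder.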

In the root folder, create and run a Python script:

    import os
    os.environ['WANDB_DISABLED'] = 'true'  # disable Weights & Biases logging

    from unsloth import FastModel
    from unsloth.chat_templates import get_chat_template

    max_seq_length = 2048
    dtype = None          # auto-detect; set torch.float16 / torch.bfloat16 to force
    load_in_4bit = True   # 4-bit quantization to reduce GPU memory use

    model, tokenizer = FastModel.from_pretrained(
        model_name='lora',  # folder containing the LoRA adapter files
        max_seq_length=max_seq_length,
        dtype=dtype,
        load_in_4bit=load_in_4bit,
    )

    tokenizer = get_chat_template(
        tokenizer,
        chat_template="gemma-3",
    )

    prompt_style = """<start_of_turn>user
    {}<end_of_turn>
    <start_of_turn>model
    {}"""

    FastModel.for_inference(model)  # Unsloth has 2x faster inference!

    # "What is written in Section 594 of the Civil Law?"
    prompt = "Kas ir rakstīts civillikuma 594. pantā?"

    inputs = tokenizer([prompt_style.format(prompt, "")], return_tensors="pt").to("cuda")

    outputs = model.generate(
        input_ids=inputs.input_ids,
        attention_mask=inputs.attention_mask,
        max_new_tokens=2048,
        use_cache=True,
    )
    response = tokenizer.batch_decode(outputs)

    # Keep only the model's turn and strip the end-of-turn marker
    answer = response[0].split('<start_of_turn>model\n')[-1].replace('<end_of_turn>', '')

    print(answer)
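The final split/replace step can be checked without a GPU. The decoded string below is a hypothetical example of what tokenizer.batch_decode returns under the Gemma 3 chat template:

```python
# Hypothetical decoded output, shaped like a Gemma 3 conversation transcript.
decoded = (
    "<bos><start_of_turn>user\n"
    "Kas ir rakstīts civillikuma 594. pantā?<end_of_turn>\n"
    "<start_of_turn>model\n"
    "Civillikuma 594. pants nosaka kārtību, kādā ...<end_of_turn>"
)

# Same parsing as in the script: keep the text after the model marker,
# then drop the end-of-turn token.
answer = decoded.split("<start_of_turn>model\n")[-1].replace("<end_of_turn>", "")
print(answer)
```

Splitting on the last occurrence of the model marker keeps this robust even if the user turn happens to mention the marker string.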

Citation

The data preparation, fine-tuning process, and comprehensive evaluation are described in more detail in:

Artis Pauniņš. Evaluation and Adaptation of Large Language Models for Question-Answering on Legislation. Master’s Thesis. University of Latvia, 2025.

