Fablia-Qwen3-Format1 Merged Model

This repository contains the merged version of the Fablia-Qwen3-Format1-WithName model, in which the LoRA adapter has been merged into the base model.

Model Details

  • Base Model: Qwen/Qwen3-1.7B
  • LoRA Adapter: Volko76/Fablia-Qwen3-1.7B-LoRa-Format1-WithName
  • Merged Model: standalone merged weights (1.72B parameters, float16), ready for inference without the adapter

Usage

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the merged model
tokenizer = AutoTokenizer.from_pretrained("Volko76/Fablia-Qwen3-1.7B-Format1-WithName", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Volko76/Fablia-Qwen3-1.7B-Format1-WithName",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)

# Generate text
prompt = "P: Bonjour cher ami, que fais-tu ?\nL:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)

Original Components

Model Architecture

This model is based on the Qwen3 architecture, with task-specific weight updates applied through LoRA fine-tuning and then merged into the base weights.
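The merge itself is simple arithmetic: for each adapted layer, the low-rank update B·A, scaled by alpha/r, is added to the frozen weight matrix, so the merged model needs no extra matmuls at inference time. A minimal numeric sketch (hypothetical shapes, not this model's real dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes for one adapted linear layer.
d_out, d_in, r, alpha = 8, 8, 2, 4
W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((r, d_in))       # LoRA down-projection
B = rng.standard_normal((d_out, r))      # LoRA up-projection

scale = alpha / r
W_merged = W + scale * (B @ A)           # fold the adapter into the base weight

# The merged layer reproduces base + adapter output exactly.
x = rng.standard_normal(d_in)
y_adapter = W @ x + scale * (B @ (A @ x))
y_merged = W_merged @ x
assert np.allclose(y_adapter, y_merged)
```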

