# Fablia-Qwen3-Format2 Merged Model
This repository contains the merged version of the Fablia-Qwen3-Format2 model, where the LoRA adapter has been merged with the base model.
## Model Details
- Base Model: Qwen/Qwen3-1.7B
- LoRA Adapter: Volko76/Fablia-Qwen3-1.7B-LoRa-Format1-WithName
- Merged Model: full-precision weights with the adapter already merged in, ready for direct inference
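
The merge step itself is not included in this repo. For reference, here is a minimal sketch of how such a merge is typically produced with `peft`'s `merge_and_unload()`, assuming the repo IDs listed above; the output directory name is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model, then apply the LoRA adapter on top of it
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-1.7B", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "Volko76/Fablia-Qwen3-1.7B-LoRa-Format1-WithName")

# Fold the LoRA deltas into the base weights and drop the adapter wrappers
merged = model.merge_and_unload()
merged.save_pretrained("fablia-merged")  # illustrative output path
```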
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the merged model
tokenizer = AutoTokenizer.from_pretrained("Volko76/Fablia-Qwen3-1.7B-Format1-WithName", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Volko76/Fablia-Qwen3-1.7B-Format1-WithName",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

# Generate text. Example prompt in the P:/L: dialogue format
# (French: "Hello dear friend, what are you doing?")
prompt = "P: Bonjour cher ami, que fais-tu ?\nL:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
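
By default `generate` decodes greedily. If outputs come out repetitive or flat, standard sampling parameters can be passed instead; the values below are common starting points, not settings tuned for this model:

```python
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,          # sample instead of greedy decoding
    temperature=0.7,         # illustrative value, not tuned for this model
    top_p=0.9,               # nucleus sampling cutoff
    repetition_penalty=1.1,  # mildly discourage repeated tokens
)
```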
## Model Architecture
This model is based on the Qwen3 architecture. The changes introduced by LoRA fine-tuning have been merged directly into the base weights, so no adapter needs to be loaded at inference time.