Introduction

This model specializes in the Text-to-SQL task. It is fine-tuned from the 4-bit quantized version of Qwen2.5-Coder-32B-Instruct, and achieves an EX score of 63.33 and an R-VES score of 60.02 on the BIRD leaderboard.

Quick start

To use this model, start your prompt with the database schema as CREATE TABLE statements, followed by your natural-language question on a line prefixed with --. Make sure the prompt ends with SELECT so the model can complete the query for you.

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import PeftModel

model_name = "unsloth/Qwen2.5-Coder-32B-Instruct-bnb-4bit"
adapter_name = "onekq-ai/OneSQL-v0.1-Qwen-32B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.padding_side = "left"
# device_map="auto" already places the base model on the available GPU(s),
# so no extra .to("cuda") call is needed
base_model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_name)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer, return_full_text=False)

prompt = """
CREATE TABLE students (
    id INTEGER PRIMARY KEY,
    name TEXT,
    age INTEGER,
    grade TEXT
);

-- Find the three youngest students
SELECT """

# Wrap the prompt in the Qwen chat template; allow enough tokens for the completion
result = generator(f"<|im_start|>system\nYou are a SQL expert. Return code only.<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n", max_new_tokens=256)[0]
print(result["generated_text"])

The model response is the rest of the query, without the leading SELECT:

* FROM students ORDER BY age ASC LIMIT 3
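Since the model returns only the completion, prepend SELECT before executing it. Here is a minimal sketch of that step, using an in-memory SQLite database with made-up sample rows (the table contents are assumptions for illustration):

```python
import sqlite3

# Model output, as returned without the leading SELECT
completion = "* FROM students ORDER BY age ASC LIMIT 3"
query = "SELECT " + completion

# Build a throwaway database matching the schema from the prompt
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT, age INTEGER, grade TEXT)"
)
conn.executemany(
    "INSERT INTO students VALUES (?, ?, ?, ?)",
    [
        (1, "Ada", 19, "A"),
        (2, "Ben", 17, "B"),
        (3, "Cal", 21, "A"),
        (4, "Dee", 16, "C"),
    ],
)

# The three youngest students come back first
rows = conn.execute(query).fetchall()
print(rows)
```

Running the reassembled query returns the three rows with the smallest age values, confirming the completion is valid SQL for the given schema.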
Model tree for onekq-ai/OneSQL-v0.1-Qwen-32B

Base model: Qwen/Qwen2.5-32B. This model is a fine-tune of that base, and a quantized variant of this model is also available.