🧠 SQL Chat – Phi-3-mini SQL Assistant
Model ID: saadkhi/SQL_Chat_finetuned_model
Base model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
Model type: LoRA (merged)
Task: Natural language → SQL query generation + conversational SQL assistance
Language: English
License: Apache 2.0
This model is a fine-tuned version of Phi-3-mini-4k-instruct (4-bit quantized), specialized in understanding natural-language questions about databases and generating clean, correct SQL queries.
✨ Key Features
- Strikes a good balance between model size, speed, and SQL generation quality
- Works well with common database dialects (PostgreSQL, MySQL, SQLite, SQL Server, etc.)
- Can explain queries, suggest improvements, and handle follow-up questions
- Fast inference even on consumer hardware, especially with 4-bit quantization (see the loading sketch after this list)
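For constrained GPUs, the model can be loaded in 4-bit with bitsandbytes. The sketch below is an assumption about a typical setup, not an officially documented configuration; the NF4 quantization type and bfloat16 compute dtype are common defaults and may need adjusting for your hardware.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

model_id = "saadkhi/SQL_Chat_finetuned_model"

# 4-bit NF4 quantization config (common defaults, assumed; tune for your hardware)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
```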
🎯 Intended Use & Capabilities
Best for:
- Converting natural language questions → SQL queries
- Helping beginners learn SQL through explanations
- Quick prototyping of SQL queries in development
- Building SQL chat interfaces / tools / assistants
- Educational purposes
Limitations / Not recommended for:
- Extremely complex analytical/business intelligence queries
- Real-time query optimization advice
- Very database-specific or proprietary SQL extensions
- Production systems without human review (always validate generated SQL; see the minimal validation sketch after this list)
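As one way to act on the review caveat above, here is a minimal, illustrative guard, not part of this model or its training, that accepts only single SELECT statements and dry-runs them against an in-memory SQLite copy of the schema. The function `validate_select` and the `schema` DDL are hypothetical names introduced for this sketch.

```python
import sqlite3

def validate_select(sql: str, schema_ddl: str) -> bool:
    """Reject anything that is not a single SELECT, then dry-run it
    against an in-memory copy of the schema (illustrative only)."""
    stripped = sql.strip().rstrip(";")
    # Only allow a single SELECT statement, no stacked queries
    if not stripped.lower().startswith("select") or ";" in stripped:
        return False
    conn = sqlite3.connect(":memory:")
    try:
        conn.executescript(schema_ddl)                   # recreate tables without data
        conn.execute(f"EXPLAIN QUERY PLAN {stripped}")   # parse and plan without executing
        return True
    except sqlite3.Error:
        return False
    finally:
        conn.close()

# Example usage with a hypothetical schema
schema = "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);"
print(validate_select("SELECT name FROM customers", schema))   # True
print(validate_select("DROP TABLE customers", schema))         # False
```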
🛠️ Quick Start (merged LoRA version)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "saadkhi/SQL_Chat_finetuned_model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Build the prompt with the chat template (recommended for instruct models)
prompt = "Show all customers who placed more than 5 orders in 2024"
messages = [{"role": "user", "content": prompt}]

inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Greedy decoding keeps SQL output deterministic
outputs = model.generate(
    inputs,
    max_new_tokens=180,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
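The decode above includes the prompt text as well as the answer. If you only want the model's SQL output, you can slice off the input tokens before decoding; this is a general transformers pattern, not specific to this model:

```python
# Decode only the newly generated tokens (everything after the prompt)
generated = outputs[0][inputs.shape[-1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))
```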