# STL Phone Summarizer

A conversational LLM for summarizing phone specifications into concise, appealing descriptions for e-commerce.

- Model: LoRA fine-tuned Llama-3.2
- Repo: `masabhuq/stl_phone_summarizer`
## Installation

```bash
pip install unsloth torch
```
## Usage

### 1. Load Model and Tokenizer

```python
from unsloth import FastLanguageModel
from unsloth.chat_templates import get_chat_template

model, tokenizer = FastLanguageModel.from_pretrained(
    "masabhuq/stl_phone_summarizer",
    max_seq_length=2048,
    dtype=None,         # Auto-detect (bfloat16 if supported)
    load_in_4bit=True,  # 4-bit quantization for memory efficiency
)
FastLanguageModel.for_inference(model)  # Switch to optimized inference mode
```
### 2. Apply the Chat Template

```python
tokenizer = get_chat_template(
    tokenizer,
    chat_template="llama-3.2",
    map_eos_token=True,  # Map the template's end-of-turn token to EOS
)
```
### 3. Prepare the Input

```python
system_prompt = (
    "You are an expert at summarizing phone specifications into short, appealing key descriptions for an e-commerce site. "
    "Always output in exactly this format:\n"
    "Display: [concise display summary]\n"
    "Processor: [processor name]\n"
    "Camera: [camera highlights]\n"
    "Battery: [battery capacity and charging]\n"
    "Others: [comma-separated unique features]. "
    "Focus on desirable aspects like high refresh rates, zoom capabilities, fast charging, and unique features such as water resistance or special sensors. "
    "Do not include complicated keywords that don't make sense on their own. "
    "Do not include words that are too technical to understand for someone who is not highly tech savvy. "
    "Output should be within 280 characters. Don't include anything like IPDC or IP64 or any such features in the result. Words starting with IP are not to be considered a display feature."
)

specs = "Build: Glass front (Gorilla Glass 5), silicone polymer back (eco leather), plastic frame\nWeight: 178 g ..."

prompt = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": specs},
]

formatted_prompt = tokenizer.apply_chat_template(
    prompt,
    tokenize=False,
    add_generation_prompt=True,  # Append the assistant header so the model starts its reply
)
```
### 4. Tokenize and Generate

```python
import torch

inputs = tokenizer(formatted_prompt, return_tensors="pt").to("cuda")  # Assumes a CUDA GPU
outputs = model.generate(
    **inputs,
    max_new_tokens=256,  # Upper bound on generated tokens
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
```
### 5. Post-process Output

```python
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=False)

# The decoded string contains the full chat transcript; the summary is the
# last paragraph, terminated by the <|eot_id|> token.
paragraphs = generated_text.strip().split("\n\n")
last_paragraph = paragraphs[-1]
clean_last_paragraph = last_paragraph.split("<|eot_id|>")[0].strip()
print(clean_last_paragraph)
```
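Because the model emits the fixed `Display:`/`Processor:`/`Camera:`/`Battery:`/`Others:` layout, the summary is easy to split into fields. A minimal parsing sketch (the helper name and regex are illustrative, not part of the repo):

```python
import re

def parse_summary(summary: str) -> dict:
    """Split the fixed-format summary into a {field: value} dict."""
    fields = ["Display", "Processor", "Camera", "Battery", "Others"]
    parsed = {}
    for field in fields:
        # Capture everything after "Field:" up to the next known field label
        # (or the end of the string, for the last field).
        match = re.search(
            rf"{field}:\s*(.*?)(?=\n(?:{'|'.join(fields)}):|$)",
            summary,
            re.DOTALL,
        )
        if match:
            parsed[field] = match.group(1).strip()
    return parsed

print(parse_summary(clean_last_paragraph))
```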
### 6. Clean Up

Free GPU memory after inference:

```python
model.cpu()
torch.cuda.empty_cache()
```
## Hardware Requirements

- GPU: CUDA-compatible GPU with ~4-6 GB VRAM for 4-bit inference (a quick availability check follows this list).
- CPU: Optional, for offloading the model after inference (`model.cpu()`).
- RAM: ~8 GB system RAM for smooth operation with dataset processing.
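To confirm the GPU meets these requirements before loading, a quick check using standard PyTorch APIs (the 6 GB threshold simply mirrors the figure above):

```python
import torch

if torch.cuda.is_available():
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"GPU: {torch.cuda.get_device_name(0)}, {total_gb:.1f} GB VRAM")
    if total_gb < 6:
        print("Warning: under 6 GB VRAM; 4-bit inference may still fit, but monitor usage.")
else:
    print("No CUDA GPU detected; this model card assumes GPU inference.")
```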
## Notes

- Chat Template: The tokenizer is uploaded without a chat template. Always apply the template at runtime as shown above.
- System Prompt: Adjust the system prompt for your use case.
- Output Format: The model is trained to output in a strict format for easy parsing.
- Memory Management: Use `model.cpu()` and `torch.cuda.empty_cache()` to free GPU memory after inference, especially on low-VRAM GPUs.
- Inference Parameters: Adjust `temperature` and `top_p` for more or less creative outputs, and `max_new_tokens` for longer or shorter summaries; see the sketch after this list.
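For example, if you want reproducible summaries rather than creative variation, sampling can be disabled entirely (a minimal sketch reusing `inputs` from step 4):

```python
# Greedy decoding: deterministic output for a given input.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=False,  # Ignores temperature/top_p; always picks the top token
)
```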
## Model Details

- Base Model: `unsloth/Llama-3.2-3B-Instruct-bnb-4bit`
- Fine-Tuning: LoRA adapters with rank `r=16`, targeting modules `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]`.
- Quantization: 4-bit for memory efficiency (~4-6 GB VRAM).
- Training Data: A dataset of phone specifications (`specs`) paired with concise summaries (`output`) in the format shown above.
- Training Setup: Fine-tuned with `trl.SFTTrainer`, using `train_on_responses_only` to restrict the loss to assistant responses, and the Llama-3.2 chat template for single-turn interactions; a sketch of the adapter setup follows this list.
- Output Constraints: Summaries are limited to 280 characters, focusing on user-friendly features and avoiding technical terms like "IP68" or "IPDC".
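For reference, adapters with this configuration are typically attached in Unsloth roughly as follows. This is a sketch, not the repo's actual training script; values other than `r` and `target_modules` (e.g., `lora_alpha`, `lora_dropout`, `bias`) are assumptions based on common defaults:

```python
from unsloth import FastLanguageModel

base_model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/Llama-3.2-3B-Instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters matching the configuration described above.
model = FastLanguageModel.get_peft_model(
    base_model,
    r=16,  # LoRA rank, as stated in Model Details
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,     # assumption: common default
    lora_dropout=0.0,  # assumption
    bias="none",       # assumption
)
```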
## Dataset

The model was trained on a custom dataset (`specs_list.json`) containing pairs of detailed phone specifications and their corresponding summaries. Each entry includes:

- `specs`: Detailed technical specs (e.g., display size, chipset, camera details).
- `output`: A concise summary in the format:

```
Display: [summary]
Processor: [name]
Camera: [highlights]
Battery: [capacity and charging]
Others: [features]
```

The dataset emphasizes consumer-friendly features like high refresh rates, fast charging, and water resistance, avoiding overly technical terms.
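For illustration, an entry in `specs_list.json` would look like the following. The field values here are invented placeholders matching the schema above, not rows from the actual dataset, and the assumption that the JSON root is a list is inferred from the filename:

```python
import json

# Hypothetical entry, matching the schema described above.
example_entry = {
    "specs": "Display: 6.7\" AMOLED, 120Hz\nChipset: ...\nBattery: 5000 mAh, 67W wired\n...",
    "output": (
        "Display: Large 6.7\" AMOLED with smooth 120Hz\n"
        "Processor: ...\n"
        "Camera: ...\n"
        "Battery: 5000 mAh with 67W fast charging\n"
        "Others: ..."
    ),
}

with open("specs_list.json") as f:
    dataset = json.load(f)  # assumption: a list of {"specs": ..., "output": ...} dicts
```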
## License

This model is licensed under the Apache 2.0 License. See the `LICENSE` file in the repository for details.
## Citation

If you use this model, please cite the repository:

```bibtex
@misc{stl_phone_summarizer,
  author       = {masabhuq},
  title        = {STL Phone Summarizer: A Fine-Tuned Llama-3.2 Model for Phone Specification Summaries},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/masabhuq/stl_phone_summarizer}}
}
```