# Gemma 3 1B E-commerce Product Similarity Model
This model is a fine-tuned version of unsloth/gemma-3-1b-pt-unsloth-bnb-4bit for e-commerce product similarity tasks.
## Model Details

- Base Model: unsloth/gemma-3-1b-pt-unsloth-bnb-4bit (a 4-bit quantization of google/gemma-3-1b-pt)
- Training Method: LoRA (Low-Rank Adaptation) with Unsloth
- Training Dataset: jiteshsureka/retail-ecomm-products
- Training Steps: 60
- Training Date: 2025-08-07
## Usage
This model is designed to compare product similarity and provide scores between 0.0 and 1.0.
```python
from unsloth import FastLanguageModel
import torch

# Load the fine-tuned model in 4-bit
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="jiteshsureka/gemma-3-1b-ecomm",
    max_seq_length=2048,
    dtype=None,          # auto-detect (e.g. bfloat16 on supported GPUs)
    load_in_4bit=True,
)

# Enable Unsloth's optimized inference mode
FastLanguageModel.for_inference(model)

# Example usage: build the prompt with Gemma's chat-turn format
prompt = '''<bos><start_of_turn>user
Compare these two products and rate their similarity from 0.0 to 1.0:
Product 1: Your product name
Description: Your product description
Category: Your category
Product 2: Another product name
Description: Another product description
Category: Another category
How similar are these products?<end_of_turn>
<start_of_turn>model
'''

inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    temperature=0.7,
    do_sample=True,      # temperature only takes effect with sampling enabled
)
# Decoding the full sequence would include the prompt; slice it off
# so the response contains only the model's reply.
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
```
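Because the generated reply is free text rather than a bare number, extracting the similarity score takes a small parsing step. A minimal helper (hypothetical, not part of this model card) might look like:

```python
import re
from typing import Optional

def extract_similarity_score(response: str) -> Optional[float]:
    """Pull the first 0.0-1.0 style float out of the model's reply.

    The exact reply format is not guaranteed, so this is a best-effort
    parse; returns None when no score-like number is found.
    """
    match = re.search(r"\b([01](?:\.\d+)?)\b", response)
    if match is None:
        return None
    score = float(match.group(1))
    return score if 0.0 <= score <= 1.0 else None

# A reply such as "The similarity is 0.85." yields 0.85.
```

Clamping to the 0.0–1.0 range guards against the model emitting stray numbers (token counts, list indices) elsewhere in its reply.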
## Training Configuration
- LoRA Rank: 16
- LoRA Alpha: 32
- Learning Rate: 2e-4
- Batch Size: 1
- Gradient Accumulation: 4
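The exact training script is not included in this card, but the hyperparameters above would map onto Unsloth's standard LoRA workflow roughly as follows. This is a sketch based on Unsloth's documented API, continuing from the `model` loaded earlier; the `target_modules` list is an assumption, not taken from the card:

```python
# Configuration sketch only -- the actual training script is not published.
from unsloth import FastLanguageModel
from transformers import TrainingArguments

model = FastLanguageModel.get_peft_model(
    model,
    r=16,              # LoRA rank, as listed above
    lora_alpha=32,     # LoRA alpha, as listed above
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
)

training_args = TrainingArguments(
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,   # effective batch size = 1 * 4 = 4
    learning_rate=2e-4,
    max_steps=60,
    output_dir="outputs",
)
```

With gradient accumulation of 4 and a per-device batch size of 1, each optimizer step sees an effective batch of 4 examples, so 60 steps cover roughly 240 training examples.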
## License
This model is released under the Apache 2.0 license.
## Model Tree for jiteshsureka/gemma-3-1b-ecomm

- Base model: google/gemma-3-1b-pt
- Quantized: unsloth/gemma-3-1b-pt-unsloth-bnb-4bit