# SmolLM-90M-Instruct-Pruned

A pruned version of [HuggingFaceTB/SmolLM-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM-135M-Instruct), reduced from 135M parameters to approximately 90M for faster inference and lower memory usage, while maintaining reasonable performance on instruction-style tasks.
## What's Inside
- Base: [HuggingFaceTB/SmolLM-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM-135M-Instruct)
- Parameters: ~90M
- Pruning method: structured pruning (e.g., attention heads, MLP layers) using PyTorch/NVIDIA pruning tools; a sketch follows this list.
- Vocabulary, tokenizer, and training objectives remain identical to the base model.
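The exact pruning recipe for this checkpoint is not published here; the following is a minimal sketch of structured magnitude pruning with `torch.nn.utils.prune`, assuming Llama-style module names (the `"mlp"` substring) and an illustrative 30% pruning amount:

```python
import torch
import torch.nn.utils.prune as prune
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM-135M-Instruct")

# Zero out 30% of the rows (output channels) of each MLP projection,
# ranked by L2 norm; dim=0 selects whole rows, i.e. structured pruning.
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Linear) and "mlp" in name:
        prune.ln_structured(module, name="weight", amount=0.3, n=2, dim=0)
        prune.remove(module, "weight")  # bake the mask into the weights
```

Note that `ln_structured` only zeroes the selected structures; actually shrinking the checkpoint to ~90M parameters requires rebuilding each layer without the zeroed rows, or using tooling (such as NVIDIA's pruning utilities mentioned above) that performs that step.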
## Intended Use
This model is optimized for:
- Low-latency applications
- Edge deployments
- Instruction-following tasks with compact models
- Use in environments with limited VRAM or compute (see the half-precision loading sketch below)
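For memory-constrained deployments, the model can also be loaded in half precision. A sketch, assuming the placeholder repo name used in this card; `device_map="auto"` requires the `accelerate` package:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "your-username/SmolLM-90M-Instruct-Pruned",
    torch_dtype=torch.float16,  # ~2 bytes/parameter: roughly 180 MB for ~90M params
    device_map="auto",          # place weights on GPU if one is available
)
```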
## Example Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# The pruned model keeps the base model's vocabulary and tokenizer,
# so the tokenizer can be loaded from the base repository.
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM-135M-Instruct")
model = AutoModelForCausalLM.from_pretrained("your-username/SmolLM-90M-Instruct-Pruned")

prompt = "Explain quantum computing to a 10-year-old."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
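Since this is an instruction-tuned model, prompts generally work better wrapped in the tokenizer's chat template. A sketch reusing the `tokenizer` and `model` loaded above, assuming the pruned checkpoint inherits the base model's chat template:

```python
messages = [
    {"role": "user", "content": "Explain quantum computing to a 10-year-old."},
]
# apply_chat_template wraps the message in the model's expected
# special tokens and appends the assistant turn marker.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```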