Pingsz committed
Commit 6d07ea7 · verified · 1 Parent(s): f2a9fdf

Create README.md

---
license: apache-2.0
tags:
- transformers
- smollm
- pruned-model
- instruct
- small-llm
- text-generation
model_creator: HuggingFaceTB
base_model: HuggingFaceTB/SmolLM-135M-Instruct
model_name: SmolLM-90M-Instruct-Pruned
pipeline_tag: text-generation
language:
- en
---

# SmolLM-90M-Instruct-Pruned 🧠💡

A **pruned** version of [`HuggingFaceTB/SmolLM-135M-Instruct`](https://huggingface.co/HuggingFaceTB/SmolLM-135M-Instruct), reduced from **135M** parameters to approximately **90M** for faster inference and lower memory usage, while maintaining reasonable performance on instruction-style tasks.

## 🔧 What’s Inside

- Base: `SmolLM-135M-Instruct`
- Parameters: **~90M**
- Pruning method: structured pruning (e.g., removing attention heads and MLP channels) using PyTorch/NVIDIA pruning tools *(customize if needed)*; a minimal sketch follows this list.
- Vocabulary, tokenizer, and training objectives remain **identical** to the base model.
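
For reference, here is a minimal sketch of structured pruning on a single MLP projection with `torch.nn.utils.prune`. It is illustrative only, not the exact recipe used to produce this checkpoint, and it assumes the Llama-style module layout that SmolLM uses (`model.model.layers[i].mlp.up_proj`):

```python
# Illustrative structured pruning sketch — NOT the exact procedure used
# for this model. Module path assumes SmolLM's Llama-style layout.
import torch.nn.utils.prune as prune
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM-135M-Instruct")

# L2-structured pruning: zero the 30% lowest-norm output channels
# (rows, dim=0) of the first block's MLP up-projection.
layer = model.model.layers[0].mlp.up_proj
prune.ln_structured(layer, name="weight", amount=0.3, n=2, dim=0)
prune.remove(layer, "weight")  # bake the mask into the weight tensor

# Note: masking only zeroes weights; actually reaching ~90M parameters
# requires slicing out the zeroed rows/heads and resizing the layers.
```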

## 🚀 Intended Use

This model is optimized for:

- **Low-latency applications**
- **Edge deployments**
- **Instruction-following tasks** with compact models
- Environments with **limited VRAM or compute** (see the half-precision sketch below)
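
For memory-constrained deployments, loading the weights in half precision roughly halves their footprint. A minimal sketch (fp16 support on your hardware is assumed; verify output quality for your task):

```python
import torch
from transformers import AutoModelForCausalLM

# Half-precision load for limited-VRAM environments (a sketch).
model = AutoModelForCausalLM.from_pretrained(
    "your-username/SmolLM-90M-Instruct-Pruned",
    torch_dtype=torch.float16,
)
```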

### Example Use

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# The tokenizer is unchanged from the base model; replace the repo id
# below with wherever the pruned weights are hosted.
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM-135M-Instruct")
model = AutoModelForCausalLM.from_pretrained("your-username/SmolLM-90M-Instruct-Pruned")

prompt = "Explain quantum computing to a 10-year-old."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
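
Since the base model is instruction-tuned, prompts formatted with its chat template may give better results. A short sketch, assuming this pruned variant inherits the base tokenizer's template:

```python
# Optional: format the prompt with the instruct chat template
# (assumes the base tokenizer's template is preserved).
messages = [{"role": "user", "content": "Explain quantum computing to a 10-year-old."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```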