---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
library_name: peft
pipeline_tag: summarization
tags:
- base_model:adapter:unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
- lora
- transformers
- trl
- unsloth
license: apache-2.0
language:
- en
---

# STL Phone Summarizer

A conversational LLM for summarizing phone specifications into concise, appealing descriptions for e-commerce.

**Model:** LoRA fine-tuned Llama-3.2
**Repo:** [`masabhuq/stl_phone_summarizer`](https://huggingface.co/masabhuq/stl_phone_summarizer)

---

## Installation

```bash
pip install unsloth torch
```

---

## Usage

### 1. Load Model and Tokenizer

```python
from unsloth import FastLanguageModel
from unsloth.chat_templates import get_chat_template

model, tokenizer = FastLanguageModel.from_pretrained(
    "masabhuq/stl_phone_summarizer",
    max_seq_length=2048,
    dtype=None,         # auto-detect (bfloat16 if supported)
    load_in_4bit=True,  # 4-bit quantization for memory efficiency
)
FastLanguageModel.for_inference(model)
```

### 2. Apply the Chat Template

```python
tokenizer = get_chat_template(
    tokenizer,
    chat_template="llama-3.2",
    map_eos_token=True,
)
```

### 3. Prepare the Input

```python
system_prompt = (
    "You are an expert at summarizing phone specifications into short, appealing key descriptions for an e-commerce site. "
    "Always output in exactly this format:\n"
    "Display: [concise display summary]\n"
    "Processor: [processor name]\n"
    "Camera: [camera highlights]\n"
    "Battery: [battery capacity and charging]\n"
    "Others: [comma-separated unique features]. "
    "Focus on desirable aspects like high refresh rates, zoom capabilities, fast charging, and unique features such as water resistance or special sensors. "
    "Do not include complicated keywords that don't make sense on their own. "
    "Do not include words that are too technical to understand for someone who is not highly tech savvy. "
    "Output should be within 280 characters. Don't include anything like IPDC or IP64 or any such features in the result. Words starting with IP are not to be considered display features."
)

specs = "Build: Glass front (Gorilla Glass 5), silicone polymer back (eco leather), plastic frame\nWeight: 178 g ..."

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": specs},
]

formatted_prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
```

### 4. Tokenize and Generate

```python
import torch

inputs = tokenizer(formatted_prompt, return_tensors="pt").to("cuda")
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
```

### 5. Post-process Output

```python
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=False)
# Extract the last paragraph and clean up
paragraphs = generated_text.strip().split("\n\n")
last_paragraph = paragraphs[-1]
clean_last_paragraph = last_paragraph.split("<|eot_id|>")[0].strip()
print(clean_last_paragraph)
```

### 6. Clean Up

Free GPU memory after inference:

```python
model.cpu()
torch.cuda.empty_cache()
```

---

## Hardware Requirements

- **GPU:** CUDA-compatible GPU with ~4-6 GB VRAM for 4-bit inference (see the VRAM check after this list).
- **CPU:** optional, used to offload the model after inference (`model.cpu()`).
- **RAM:** ~8 GB system RAM for smooth operation, including dataset processing.
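
To check whether your GPU has enough memory for 4-bit inference, a quick probe (plain PyTorch, nothing model-specific):

```python
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB VRAM")
else:
    print("No CUDA GPU detected.")
```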

---

## Notes

- **Chat Template:** The tokenizer is uploaded without a chat template, so always apply the template at runtime as shown above.
- **System Prompt:** Adjust the system prompt to fit your use case.
- **Output Format:** The model is trained to emit a strict, fixed format so results are easy to parse; see the parsing sketch after this list.
- **Memory Management:** Use `model.cpu()` and `torch.cuda.empty_cache()` to free GPU memory after inference, especially on low-VRAM GPUs.
- **Inference Parameters:** Adjust `temperature` and `top_p` for more or less creative outputs, and `max_new_tokens` for longer or shorter summaries.
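
Because the output format is fixed, the summary can be parsed mechanically. Here is a minimal sketch, assuming the five labeled fields from the system prompt and the `clean_last_paragraph` variable from step 5; `parse_summary` is an illustrative helper, not part of this repo:

```python
import re

def parse_summary(text: str) -> dict:
    """Split a 'Field: value' summary into a dict (hypothetical helper)."""
    fields = {}
    for line in text.splitlines():
        match = re.match(r"(Display|Processor|Camera|Battery|Others):\s*(.+)", line)
        if match:
            fields[match.group(1)] = match.group(2).strip()
    return fields

summary = parse_summary(clean_last_paragraph)
print(summary.get("Battery"))  # battery line, if present
```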

---

## Model Details

- **Base Model**: `unsloth/Llama-3.2-3B-Instruct-bnb-4bit`
- **Fine-Tuning**: LoRA adapters with rank `r=16`, targeting the modules `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]`.
- **Quantization**: 4-bit for memory efficiency (~4-6 GB VRAM).
- **Training Data**: A dataset of phone specifications (`specs`) paired with concise summaries (`output`) in the format shown above.
- **Training Setup**: Fine-tuned with `trl.SFTTrainer`, using `train_on_responses_only` so that loss is computed only on assistant responses, with the Llama-3.2 chat template for single-turn interactions; see the sketch after this list.
- **Output Constraints**: Summaries are limited to 280 characters, focus on user-friendly features, and avoid technical terms like "IP68" or "IPDC".
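
The exact training script is not published in this card. As a rough sketch, the setup described above is typically wired together with Unsloth and TRL as below; `lora_alpha`, the `TrainingArguments` values, and the `dataset` variable are illustrative assumptions, and depending on your `trl` version some arguments belong on `SFTConfig` instead of `SFTTrainer`:

```python
from unsloth import FastLanguageModel
from unsloth.chat_templates import train_on_responses_only
from trl import SFTTrainer
from transformers import TrainingArguments

# Attach LoRA adapters matching the configuration listed above
# (base model and tokenizer loaded as in step 1, without for_inference).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,   # assumption: not stated in this card
    lora_dropout=0,
    bias="none",
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,      # assumption: chat-rendered examples in a "text" field
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,  # illustrative
        num_train_epochs=1,             # illustrative
        logging_steps=10,
    ),
)

# Mask prompt tokens so loss is computed only on assistant responses.
trainer = train_on_responses_only(
    trainer,
    instruction_part="<|start_header_id|>user<|end_header_id|>\n\n",
    response_part="<|start_header_id|>assistant<|end_header_id|>\n\n",
)
trainer.train()
```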

---

## Dataset

The model was trained on a custom dataset (`specs_list.json`) containing pairs of detailed phone specifications and their corresponding summaries. Each entry includes:

- `specs`: Detailed technical specs (e.g., display size, chipset, camera details).
- `output`: A concise summary in the format:

  ```
  Display: [summary]
  Processor: [name]
  Camera: [highlights]
  Battery: [capacity and charging]
  Others: [features]
  ```

The dataset emphasizes consumer-friendly features like high refresh rates, fast charging, and water resistance, avoiding overly technical terms.
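
The dataset file itself is not published with this card. As a sketch of how entries of that shape could be rendered into training text (assuming `specs_list.json` is a JSON array of objects with `specs` and `output` keys, and reusing `system_prompt` and `tokenizer` from the Usage section):

```python
import json

# Load the spec/summary pairs (structure assumed from the description above).
with open("specs_list.json") as f:
    records = json.load(f)

# Shape each pair into the single-turn chat layout used for fine-tuning.
conversations = [
    [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": record["specs"]},
        {"role": "assistant", "content": record["output"]},
    ]
    for record in records
]

# Render each conversation with the Llama-3.2 chat template.
texts = [tokenizer.apply_chat_template(c, tokenize=False) for c in conversations]
```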

---

## License

This model is licensed under the [Apache 2.0 License](LICENSE). See the `LICENSE` file in the repository for details.

---

## Citation

If you use this model, please cite the repository:

```bibtex
@misc{stl_phone_summarizer,
  author       = {masabhuq},
  title        = {STL Phone Summarizer: A Fine-Tuned Llama-3.2 Model for Phone Specification Summaries},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/masabhuq/stl_phone_summarizer}}
}
```