---
license: mit
language: en
base_model: Qwen/Qwen3-0.6B-Base
tags:
- qwen
- flask
- code-generation
- question-answering
- lora
- peft
datasets:
- custom-flask-qa
---

# Qwen3-0.6B-Flask-Expert

## Model Description

This model is a fine-tuned version of `Qwen/Qwen3-0.6B-Base`, adapted to serve as a specialized question-and-answer assistant for the Python Flask web framework.

The model was trained on a high-quality custom dataset generated by parsing the official Flask source code and documentation. It has been instruction-tuned to answer developer-style questions, explain complex concepts with step-by-step reasoning, and recognize when a question falls outside its scope of knowledge.

This project was developed as part of an internship and demonstrates a full fine-tuning pipeline, from data creation through evaluation and deployment.

## Intended Use

The primary intended use of this model is to act as a helpful assistant for developers working with Flask. It can be used for:

- Answering technical questions about Flask's API and internal mechanisms.
- Providing explanations for core concepts (e.g., the application context, blueprints).
- Assisting with debugging common errors and understanding framework behavior.
- Powering a chatbot or an integrated help tool within a developer environment.

## How to Use

You can use this model directly with the `transformers` text-generation pipeline. Make sure to use the prompt format shown below for the best results.

```python
from transformers import pipeline
import torch

# Replace with your Hugging Face username and model name
model_name = "your-hf-username/qwen3-0.6B-flask-expert"

# Load the pipeline
pipe = pipeline(
    "text-generation",
    model=model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Use the Alpaca prompt format
question = "How does Flask's `g` object facilitate the sharing of request-specific data?"
prompt = f"""### Instruction:
{question}

### Response:
"""

# Generate the answer; for more factual answers, use a low temperature
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.1, top_p=0.95)
answer = outputs[0]["generated_text"].split("### Response:")[1].strip()

print(f"Question: {question}")
print(f"Answer: {answer}")
```
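
If the repository is published as a LoRA adapter rather than merged weights, the adapter can also be loaded on top of the base model with `peft`. This is a minimal sketch under that assumption (the repo id is a placeholder):

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Placeholder adapter repo id; replace with the actual repository
adapter_id = "your-hf-username/qwen3-0.6B-flask-expert"

# Loads the base model recorded in the adapter config, then applies the LoRA weights
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

# Optionally merge the adapter into the base weights for faster inference
model = model.merge_and_unload()
```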

## Training Data

The model was trained on a custom dataset of 600 high-quality Q&A pairs created specifically for this task. The data generation process involved:

1. Cloning the official Flask GitHub repository.
2. Parsing all `.py` source files and `.rst` documentation files into meaningful chunks (classes, functions, paragraphs); see the sketch at the end of this section.
3. Using the Gemini API with a series of advanced prompts to generate a diverse set of questions, including:
   - Conceptual and chain-of-thought explanations.
   - Adversarial and edge-case scenarios.
   - Beginner and senior developer-level personas.
   - "Guardrail" prompts to teach the model to refuse off-topic questions.

The final dataset was manually reviewed and curated to ensure quality and factual accuracy.
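
As an illustration of the parsing step, the sketch below splits a Python source file into top-level class and function chunks with the standard `ast` module. It is a simplified reconstruction, not the exact script used for this model; the checkout path is a placeholder.

```python
import ast
from pathlib import Path

def chunk_python_file(path: Path) -> list[str]:
    """Split a .py file into top-level class/function source chunks."""
    source = path.read_text(encoding="utf-8")
    tree = ast.parse(source)
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef)):
            # get_source_segment recovers the exact source text of the node
            chunks.append(ast.get_source_segment(source, node))
    return chunks

# Example: chunk every .py file in a cloned Flask checkout (path is hypothetical)
for py_file in Path("flask/src/flask").rglob("*.py"):
    for chunk in chunk_python_file(py_file):
        print(chunk[:80], "...")
```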

## Training Procedure

The model was fine-tuned using Parameter-Efficient Fine-Tuning (PEFT) with the LoRA (Low-Rank Adaptation) method; a configuration sketch follows the list below.

- **Frameworks:** `transformers`, `peft`, `bitsandbytes`, `trl`
- **Hardware:** a single NVIDIA RTX 3060 with 6 GB VRAM
- **Quantization:** the base model was loaded in 4-bit (NF4) to fit within the available memory
- **LoRA configuration:**
  - `r` (rank): 16
  - `lora_alpha`: 32
  - `target_modules`: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`
- **Hyperparameters:**
  - `learning_rate`: 2e-4
  - `num_train_epochs`: 2
  - `per_device_train_batch_size`: 2
  - `gradient_accumulation_steps`: 8 (effective batch size of 16)
  - `optimizer`: paged_adamw_32bit
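
The sketch below reconstructs this setup from the values listed above. It is an approximation, not the exact training script: the dataset file, output directory, and `lora_dropout` are assumptions, and the `SFTTrainer` call may need minor adjustment depending on your `trl` version.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from trl import SFTTrainer

# 4-bit NF4 quantization so the base model fits in limited VRAM
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-0.6B-Base",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapter targeting all attention and MLP projections
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_dropout=0.05,  # assumption: not stated in this card
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="qwen3-0.6B-flask-expert",  # placeholder
    learning_rate=2e-4,
    num_train_epochs=2,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,  # effective batch size of 16
    optim="paged_adamw_32bit",
)

# Placeholder file: the custom Flask Q&A pairs in Alpaca-formatted JSON
dataset = load_dataset("json", data_files="flask_qa_alpaca.json", split="train")

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    peft_config=lora_config,
)
trainer.train()
```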

## Evaluation Results

The fine-tuned model showed significant improvement over the base model's few-shot performance on a held-out test set. The most notable gain was a +123.3% increase in the ROUGE-2 score, indicating a much-improved ability to generate correct technical phrases. A sketch of the scoring setup follows the table.

| Metric  | Baseline (Untrained) | Fine-Tuned Model | Improvement |
|---------|----------------------|------------------|-------------|
| ROUGE-1 | 0.306                | 0.382            | +24.7%      |
| ROUGE-2 | 0.067                | 0.149            | +123.3%     |
| ROUGE-L | 0.162                | 0.240            | +48.1%      |
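
Scores like these can be reproduced with Hugging Face's `evaluate` library. A minimal sketch, assuming lists of generated and reference answers from the held-out set (the example strings are placeholders):

```python
import evaluate

# Placeholder pairs; in practice these come from the held-out test split
predictions = ["The g object stores data for the duration of an application context."]
references = ["Flask's g object is a namespace for sharing request-specific data."]

rouge = evaluate.load("rouge")
scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
```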

## Limitations & Bias

- This is a 0.6B-parameter model, and its knowledge is limited to the data it was trained on. It can still hallucinate or provide factually incorrect answers, especially for complex or nuanced questions it has not seen before.
- The model's answers should be considered a helpful starting point, not a definitive source of truth. Always verify critical information against the official Flask documentation.
- The training data is derived from the Flask codebase and documentation, so it may reflect the biases and conventions of that source material.