Update README.md
readme update based on the code
README.md
CHANGED
@@ -1,199 +1,254 @@
```diff
 ---
 ---
- #
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
- ### Training Data
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
- [More Information Needed]
- ### Training Procedure
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- #### Preprocessing [optional]
- [More Information Needed]
- #### Training Hyperparameters
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
- #### Speeds, Sizes, Times [optional]
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
- [More Information Needed]
- ## Evaluation
- <!-- This section describes the evaluation protocols and provides the results. -->
- ### Testing Data, Factors & Metrics
- #### Testing Data
- <!-- This should link to a Dataset Card if possible. -->
- [More Information Needed]
- #### Factors
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
- [More Information Needed]
- #### Metrics
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
- [More Information Needed]
- ### Results
- [More Information Needed]
- #### Summary
- ## Model Examination [optional]
- <!-- Relevant interpretability work for the model goes here -->
- [More Information Needed]
- ## Environmental Impact
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
- ## Technical Specifications [optional]
- ### Model Architecture and Objective
- [More Information Needed]
- ### Compute Infrastructure
- [More Information Needed]
- #### Hardware
- [More Information Needed]
- #### Software
- [More Information Needed]
- ## Citation [optional]
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
- **BibTeX:**
- [More Information Needed]
- **APA:**
- [More Information Needed]
- ##
```
---
language: fa
base_model: Qwen/Qwen2.5-14B-Instruct
datasets:
- safora/PersianSciQA-Extractive
tags:
- qwen
- question-answering
- persian
- farsi
- qlora
- scientific-documents
license: apache-2.0
---
# PersianSciQA-Qwen2.5-14B: A QLoRA Fine-Tuned Model for Scientific Extractive QA in Persian

## Model Description

This repository contains the **PersianSciQA-Qwen2.5-14B** model, a fine-tuned version of `Qwen/Qwen2.5-14B-Instruct` specialized for **extractive question answering on scientific texts in the Persian language**.

The model was trained with QLoRA for parameter-efficient fine-tuning. Its primary function is to analyze a given scientific `context` and answer a `question` based **solely** on the information within that context.

A key feature of its training is the strict instruction to output the exact phrase `CANNOT_ANSWER` if the context does not contain the information required to answer the question. This makes the model a reliable tool for closed-domain, evidence-based QA tasks.

## How to Use

To use this model, you must follow the specific prompt template it was trained on. The prompt enforces the model's role as a scientific assistant and its strict answering policy.

Here is a complete example using the `transformers` library:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Set the model ID
model_id = "safora/PersianSciQA-Qwen2.5-14B"

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# 1. Define the prompt template (MUST match the training format)
prompt_template = (
    'شما یک دستیار متخصص در زمینه اسناد علمی هستید. وظیفه شما این است که به سوال پرسیده شده، **فقط و فقط** بر اساس متن زمینه (Context) ارائه شده پاسخ دهید. پاسخ شما باید دقیق و خلاصه باشد.\n\n'
    '**دستورالعمل مهم:** اگر اطلاعات لازم برای پاسخ دادن به سوال در متن زمینه وجود ندارد، باید **دقیقا** عبارت "CANNOT_ANSWER" را به عنوان پاسخ بنویسید و هیچ توضیح اضافهای ندهید.\n\n'
    '**زمینه (Context):**\n---\n{context}\n---\n\n'
    '**سوال (Question):**\n{question}\n\n'
    '**پاسخ (Answer):** '
)

# 2. Provide your context and question
context = "سلولهای خورشیدی پروسکایت به دلیل هزینه تولید پایین و بازدهی بالا، به عنوان یک فناوری نوظهور مورد توجه قرار گرفتهاند. بازدهی آزمایشگاهی این سلولها به بیش از ۲۵ درصد رسیده است، اما پایداری طولانیمدت آنها همچنان یک چالش اصلی محسوب میشود."
question = "بازدهی سلولهای خورشیدی پروسکایت در آزمایشگاه چقدر است؟"
# Example of a question that cannot be answered from the context:
# question = "این سلول ها اولین بار در چه سالی ساخته شدند؟"

# 3. Format the prompt
prompt = prompt_template.format(context=context, question=question)

# 4. Generate the response
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
generation_output = model.generate(
    **inputs,
    max_new_tokens=128,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id
)

# Decode the full sequence; the answer is the text generated after the prompt
response = tokenizer.decode(generation_output[0], skip_special_tokens=True)
answer = response.split("**پاسخ (Answer):**")[-1].strip()
print(answer)
# Expected output: به بیش از ۲۵ درصد رسیده است
# For the unanswerable question, expected output: CANNOT_ANSWER
```
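If the full `bfloat16` checkpoint does not fit on your GPU, the model can likely also be loaded with 4-bit quantization. This is a configuration sketch, not part of the original instructions: it assumes the `bitsandbytes` package is installed and mirrors the NF4/bfloat16 settings used during training.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "safora/PersianSciQA-Qwen2.5-14B"

# 4-bit NF4 quantization with bfloat16 compute, matching the training setup.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```

The rest of the usage example (prompt template, generation, decoding) is unchanged.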
## Training Details

### Model

The base model is `Qwen/Qwen2.5-14B-Instruct`, a highly capable instruction-tuned large language model.
### Dataset

The model was fine-tuned on the `safora/PersianSciQA-Extractive` dataset, which contains triplets of (`context`, `question`, `model_answer`) derived from Persian scientific documents. The dataset is split into:

- **Train:** used for training the model.
- **Validation:** used for evaluating the model during training epochs.
- **Test:** a held-out set reserved for final model evaluation.
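As a sketch of how one such triplet can be rendered into a supervised training example: the field names (`context`, `question`, `model_answer`) and the exact target formatting below are assumptions based on the description above; the prompt text is the template from the "How to Use" section.

```python
# Hypothetical helper: render one (context, question, model_answer) record
# into a full supervised training string. Field names are assumed from the
# dataset description; the prompt matches the inference template.
PROMPT_TEMPLATE = (
    'شما یک دستیار متخصص در زمینه اسناد علمی هستید. وظیفه شما این است که به سوال پرسیده شده، **فقط و فقط** بر اساس متن زمینه (Context) ارائه شده پاسخ دهید. پاسخ شما باید دقیق و خلاصه باشد.\n\n'
    '**دستورالعمل مهم:** اگر اطلاعات لازم برای پاسخ دادن به سوال در متن زمینه وجود ندارد، باید **دقیقا** عبارت "CANNOT_ANSWER" را به عنوان پاسخ بنویسید و هیچ توضیح اضافهای ندهید.\n\n'
    '**زمینه (Context):**\n---\n{context}\n---\n\n'
    '**سوال (Question):**\n{question}\n\n'
    '**پاسخ (Answer):** '
)

def to_training_example(record: dict) -> str:
    """Concatenate the formatted prompt and the reference answer."""
    prompt = PROMPT_TEMPLATE.format(
        context=record["context"], question=record["question"]
    )
    return prompt + record["model_answer"]

example = to_training_example({
    "context": "متن زمینه نمونه",
    "question": "سوال نمونه؟",
    "model_answer": "CANNOT_ANSWER",
})
print(example.endswith("CANNOT_ANSWER"))  # True
```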
### Fine-Tuning Procedure

The model was fine-tuned using QLoRA (Quantized Low-Rank Adaptation), which significantly reduces memory usage while maintaining high performance. Training was performed with the `trl` and `peft` libraries.
### Hyperparameters

The following key hyperparameters were used during training:

| Parameter | Value |
|---|---|
| **LoRA configuration** | |
| r (rank) | 16 |
| lora_alpha | 32 |
| lora_dropout | 0.05 |
| target_modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| **Training arguments** | |
| learning_rate | 2e-5 |
| optimizer | paged_adamw_32bit |
| lr_scheduler_type | cosine |
| num_train_epochs | 1 |
| per_device_train_batch_size | 1 |
| gradient_accumulation_steps | 8 |
| effective_batch_size | 8 |
| quantization | 4-bit (nf4) |
| compute_dtype | bfloat16 |
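The settings in the table above can be expressed as `peft` and `bitsandbytes` configuration objects roughly as follows. This is a minimal sketch assuming standard `LoraConfig` / `BitsAndBytesConfig` usage; the actual training script is not included in this card.

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization with bfloat16 compute, as listed in the table.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA adapters on all attention and MLP projection layers.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    bias="none",
    task_type="CAUSAL_LM",
)
```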
## Evaluation

The model's performance has not yet been formally evaluated on the held-out test split. The test split of `safora/PersianSciQA-Extractive`, containing 1049 samples, is available for this purpose. Community contributions to evaluate and benchmark this model are welcome.
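As a starting point, a simple scorer could combine exact match with an abstention-accuracy check on unanswerable questions (where the reference is `CANNOT_ANSWER`). This is a hypothetical helper, not an official evaluation protocol for this model:

```python
# Hypothetical scorer: exact match after whitespace normalization, plus
# abstention accuracy on unanswerable questions (reference == "CANNOT_ANSWER").
def exact_match(prediction: str, reference: str) -> bool:
    return " ".join(prediction.split()) == " ".join(reference.split())

def score(predictions: list[str], references: list[str]) -> dict:
    em = sum(exact_match(p, r) for p, r in zip(predictions, references))
    unanswerable = [(p, r) for p, r in zip(predictions, references)
                    if r == "CANNOT_ANSWER"]
    abstained = sum(p.strip() == "CANNOT_ANSWER" for p, _ in unanswerable)
    return {
        "exact_match": em / len(references),
        "abstention_accuracy": (
            abstained / len(unanswerable) if unanswerable else None
        ),
    }

results = score(["CANNOT_ANSWER", "wrong"], ["CANNOT_ANSWER", "right"])
print(results)  # {'exact_match': 0.5, 'abstention_accuracy': 1.0}
```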
## Citation

If you use this model in your research or work, please cite it as follows:

```bibtex
@misc{persiansciqa_qwen2.5_14b,
  author       = {Jolfaei, Safora},
  title        = {PersianSciQA-Qwen2.5-14B: A QLoRA Fine-Tuned Model for Scientific Extractive QA in Persian},
  year         = {2025},
  publisher    = {Hugging Face},
  journal      = {Hugging Face Model Hub},
  howpublished = {\url{https://huggingface.co/safora/PersianSciQA-Qwen2.5-14B}}
}
```