---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
library_name: peft
tags:
- llama-3
- lora
- qlora
- domain-adaptation
- text-generation
- turkish
- tr
- epdk
- mevzuat
- ogulcanakca
language: tr
license: llama3
---
 
# Model Card for ogulcanakca/llama3-8b-epdk-domain-adapter-v1

## Model Description

This repository contains a PEFT LoRA adapter fine-tuned on the `meta-llama/Meta-Llama-3-8B-Instruct` base model for **domain adaptation**. The adapter was trained on a Turkish dataset comprising various documents related to the Turkish Energy Market Regulatory Authority (EPDK), such as legislation, laws, communiqués, and board decisions.

The goal is to make the base Llama 3 8B Instruct model more familiar with the language, terminology, style, and concepts specific to the Turkish energy market regulation domain. Loaded on top of the base model, this adapter should improve the model's handling of texts from this domain and help it generate more consistent domain-related outputs.

**Note:** This repository does not contain a full model, only LoRA weights that must be applied to the base model. The adapter was trained with 4-bit quantization (QLoRA) via the PEFT library.

* **Developed by:** ogulcanakca
* **Model type:** Transformer-based causal language model (Llama 3) + PEFT LoRA adapter
* **Language(s) (NLP):** Turkish (tr)
* **License:** Llama 3 Community License ([https://llama.meta.com/llama3/license/](https://llama.meta.com/llama3/license/))
* **Finetuned from model:** `meta-llama/Meta-Llama-3-8B-Instruct`

### Model Sources

* **Repository:** [https://huggingface.co/ogulcanakca/llama3-8b-epdk-domain-adapter-v1](https://huggingface.co/ogulcanakca/llama3-8b-epdk-domain-adapter-v1)
* **Base Model:** [https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
* **Dataset Used (Raw Cleaned):** [https://huggingface.co/datasets/ogulcanakca/epdk_elektrik_piyasasi_mevzuat](https://huggingface.co/datasets/ogulcanakca/epdk_elektrik_piyasasi_mevzuat)
 
## Uses

### Direct Use

This LoRA adapter is used by loading it onto the `meta-llama/Meta-Llama-3-8B-Instruct` base model; the combined model is then expected to understand, summarize, and answer prompts about EPDK legislation more consistently. See the code example in the "How to Get Started with the Model" section below.

**Note:** Because this adapter was trained for only a very short trial run (200 steps), the domain adaptation effect is limited. More extensive training and evaluation are recommended before using it for critical tasks.

### Downstream Use

This adapter could serve as a better starting point for further fine-tuning on more specific tasks related to EPDK regulations (e.g., a specialized Q&A model, text classification, or summarization).

### Out-of-Scope Use

* Not suitable for use as a general-purpose chatbot (the base Instruct model is better for that).
* Not expected to show significant improvement over the base model on topics outside Turkish energy market regulations.
* May generate incorrect, incomplete, or outdated information; it must not be used for legal or financial advice.
* Should not be used to generate harmful, unethical, discriminatory, or biased content.

## Bias, Risks, and Limitations

* This adapter inherits the potential biases and risks of the base `meta-llama/Meta-Llama-3-8B-Instruct` model.
* The training data (EPDK documents) covers a specific time range and may contain outdated information; text generated by the model may not reflect current regulations.
* The model may generate incorrect information ("hallucinations"), especially on topics that are under-represented or contradictory in the training data.
* Residual OCR errors and imperfections from the cleaning phase of the training data may affect performance.
* **Limited training:** This adapter was trained for only **200 steps**, which is very short for domain adaptation. The adaptation effect is therefore minimal, and the model's knowledge and stylistic alignment within this domain may not be significantly improved. This version should be treated as a pipeline test.

### Recommendations

* Always verify information generated by the model against reliable sources.
* Be aware of the model's limitations and avoid using it for critical applications (e.g., legal interpretation).
* For more reliable results, train further with significantly more steps/epochs.

## How to Get Started with the Model
 
You need the `transformers`, `peft`, `accelerate`, and `bitsandbytes` libraries to use this LoRA adapter. The code below loads the base model with 4-bit quantization and applies the adapter:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Ensure you are logged in to Hugging Face if the repo is private or requires Llama access
# from huggingface_hub import login
# login(token="YOUR_HF_TOKEN")  # or use `huggingface-cli login`

# Base model ID
base_model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
# This adapter's Hub ID
adapter_id = "ogulcanakca/llama3-8b-epdk-domain-adapter-v1"

# 4-bit quantization config (matches the QLoRA training setup)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # use bfloat16 if the GPU supports it
)

# Load the base model quantized
print(f"Loading base model: {base_model_id}")
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Load the tokenizer
print("Loading tokenizer...")
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Load the LoRA adapter onto the base model
print(f"Loading adapter: {adapter_id}")
model = PeftModel.from_pretrained(base_model, adapter_id)
print("Adapter loaded successfully.")
model.eval()  # set model to evaluation mode

# Inference example ("What are EPDK's main duties in the electricity market?")
prompt = "EPDK'nın elektrik piyasasındaki temel görevleri nelerdir?"
messages = [
    {"role": "system", "content": "You are a helpful assistant knowledgeable about Turkish energy market regulations."},
    {"role": "user", "content": prompt},
]

# Build the Llama 3 Instruct chat prompt
print("Generating response...")
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Terminator IDs to stop generation
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

# Generate text
with torch.no_grad():
    outputs = model.generate(
        input_ids,
        max_new_tokens=512,  # max new tokens to generate
        eos_token_id=terminators,
        do_sample=True,      # sample for more varied outputs
        temperature=0.6,
        top_p=0.9,
    )

# Decode only the newly generated tokens
response_ids = outputs[0][input_ids.shape[-1]:]
response_text = tokenizer.decode(response_ids, skip_special_tokens=True)

print("\nModel Output:")
print(response_text)
```

## Training Details

### Training Data

The model was fine-tuned on cleaned text from the `ogulcanakca/epdk_elektrik_piyasasi_mevzuat` Hugging Face dataset, which was derived from roughly 3,300 Turkish documents in various formats (PDF, DOCX, etc.) related to the Turkish Energy Market Regulatory Authority (EPDK), via text extraction, OCR, and basic NLP cleaning.

Dataset link (raw cleaned): [https://huggingface.co/datasets/ogulcanakca/epdk_elektrik_piyasasi_mevzuat](https://huggingface.co/datasets/ogulcanakca/epdk_elektrik_piyasasi_mevzuat)

Before training, the texts were chunked with the `meta-llama/Meta-Llama-3-8B-Instruct` tokenizer into segments of **2048 tokens** with an overlap of **200 tokens**; the resulting `domain_adaptation_data.jsonl` file (or the equivalent `Dataset` object) containing 31,271 chunks was used for training.
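For illustration, this chunking step can be reproduced with a sketch like the following (the exact preprocessing script is not part of this repo; the window and stride values mirror the numbers above):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

def chunk_text(text: str, max_tokens: int = 2048, overlap: int = 200):
    """Split a document into overlapping token windows and decode each back to text."""
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    stride = max_tokens - overlap  # advance 1848 tokens per window
    for start in range(0, max(len(ids) - overlap, 1), stride):
        yield tokenizer.decode(ids[start : start + max_tokens])
```
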
### Training Procedure

* **Fine-tuning type:** Domain adaptation (causal LM objective).
* **Technique:** QLoRA (4-bit NormalFloat quantization + Low-Rank Adaptation) using the PEFT library.
* **Libraries:** `transformers`, `peft`, `accelerate`, `bitsandbytes`, `datasets`.

#### Preprocessing [optional]

The cleaning steps mentioned above (whitespace normalization, header/footer removal, etc.) and tokenizer-based chunking were applied. `DataCollatorForLanguageModeling` was used during training; a minimal sketch of that setup follows.
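A minimal sketch, assuming the tokenizer from the quickstart code above (with `mlm=False`, the labels are the input ids and the model applies the causal shift internally):

```python
from transformers import DataCollatorForLanguageModeling

# Causal LM objective: no masked-language-modeling, padding masked out of the loss
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
```
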
 
#### Training Hyperparameters

* **base_model:** `meta-llama/Meta-Llama-3-8B-Instruct`
* **quantization:** 4-bit (NF4, compute_dtype=bfloat16)
* **lora_r:** 16
* **lora_alpha:** 32
* **lora_dropout:** 0.05
* **lora_target_modules:** `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]`
* **learning_rate:** 2e-4
* **batch_size:** 1 (per device)
* **gradient_accumulation_steps:** 8 (effective batch size: 8)
* **optimizer:** Paged AdamW (8-bit)
* **lr_scheduler_type:** cosine
* **warmup_ratio:** 0.05
* **max_steps:** 200 (short trial run)
* **seq_length:** 2048
* **precision:** bf16 (mixed precision)
* **gradient_checkpointing:** True
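Reconstructed from the values above, the corresponding `peft`/`transformers` configuration would look roughly like this (a sketch; `output_dir` and anything not listed above are assumptions):

```python
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="llama3-epdk-adapter",  # hypothetical output path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,     # effective batch size 8
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    max_steps=200,
    bf16=True,
    gradient_checkpointing=True,
    optim="paged_adamw_8bit",          # Paged AdamW (8-bit)
)
```
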
#### Speeds, Sizes, Times [optional]

* Training ran on a single GPU in Kaggle's free tier (likely a T4 or P100; the exact type was not logged).
* The 200-step run took approximately **8.5 hours**. Flash Attention 2 could not be used.
* Final training loss: ~2.151
* The saved LoRA adapter is small relative to the base model (typically tens to hundreds of MB).

## Evaluation

No formal evaluation metrics were computed for this initial 200-step trial run; performance was observed only via the training loss. After more extensive training, standard language-modeling metrics (such as perplexity) or performance on domain-specific downstream tasks could be evaluated, as sketched below.
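If a held-out split of the chunked corpus were available, perplexity could be estimated along these lines (a minimal sketch; `eval_texts` is a placeholder for such a split):

```python
import math
import torch

def perplexity(model, tokenizer, texts, max_length=2048):
    """Approximate token-weighted perplexity of `model` over `texts`."""
    model.eval()
    total_nll, total_tokens = 0.0, 0
    for text in texts:
        enc = tokenizer(text, return_tensors="pt",
                        truncation=True, max_length=max_length).to(model.device)
        with torch.no_grad():
            out = model(**enc, labels=enc["input_ids"])
        n = enc["input_ids"].numel()
        total_nll += out.loss.item() * n  # loss is the mean NLL per token
        total_tokens += n
    return math.exp(total_nll / total_tokens)

# ppl = perplexity(model, tokenizer, eval_texts)  # `eval_texts`: held-out chunks
```
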
### Testing Data, Factors & Metrics

#### Testing Data

N/A

#### Factors

N/A

#### Metrics

N/A

### Results

N/A

#### Summary

The short 200-step run demonstrated that the fine-tuning pipeline works but was insufficient for significant domain adaptation; only a slight decrease in training loss was observed.

## Model Examination [optional]

[More Information Needed]

## Environmental Impact

* **Hardware Type:** Kaggle GPU (likely a T4 or P100)
* **Hours used:** ~8.5 hours (for 200 steps)
* **Cloud Provider:** Kaggle (on Google Cloud infrastructure)
* **Compute Region:** Unknown (managed by Kaggle)
* **Carbon Emitted:** Could be estimated with the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute), but an accurate figure requires GPU power-consumption data that is hard to obtain for Kaggle's free tier. A rough back-of-envelope estimate is sketched below.
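As a rough illustration only (the 70 W figure is the NVIDIA T4's rated TDP, and the grid carbon intensity is an assumed generic value, since the compute region is unknown):

```python
# All inputs except `hours` are assumptions, not measurements.
gpu_power_kw = 0.070        # assumed: NVIDIA T4 rated TDP of 70 W
hours = 8.5                 # from the training run above
grid_kgco2_per_kwh = 0.45   # assumed average grid carbon intensity

energy_kwh = gpu_power_kw * hours               # ≈ 0.6 kWh
emissions_kg = energy_kwh * grid_kgco2_per_kwh  # ≈ 0.27 kg CO2eq
print(f"~{emissions_kg:.2f} kg CO2eq")
```
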
 
 
 
 
## Technical Specifications [optional]

### Model Architecture and Objective

The base model uses the Llama 3 architecture (a decoder-only Transformer). The adapter adds low-rank LoRA matrices to the attention and MLP projection layers without changing the base model weights, and it was trained with a causal language modeling objective for domain adaptation.
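Because only the LoRA matrices are trained, the trainable fraction of parameters is tiny. This can be checked once the adapter is loaded (a sketch continuing from the quickstart code; the printed figures are roughly what r=16 on the seven target modules of an 8B model works out to):

```python
from peft import PeftModel

# Load with is_trainable=True so the LoRA weights are counted as trainable
model = PeftModel.from_pretrained(base_model, adapter_id, is_trainable=True)
model.print_trainable_parameters()
# roughly: trainable params ~41.9M || all params ~8.07B || trainable% ~0.52
```
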
### Compute Infrastructure

#### Hardware

Kaggle Notebook environment with a single NVIDIA GPU (likely a T4 or P100).

#### Software

* PyTorch (`torch==2.5.1` in the Kaggle environment used)
* Transformers (`transformers`)
* PEFT (`peft==0.15.2`)
* Accelerate (`accelerate`)
* BitsAndBytes (`bitsandbytes`)
* Datasets (`datasets`)
* Python 3.10/3.11 (Kaggle default)

## Citation [optional]

Please cite the base Llama 3 model and the relevant PEFT/LoRA work:

* **Llama 3:** [https://ai.meta.com/blog/meta-llama-3/](https://ai.meta.com/blog/meta-llama-3/)
* **PEFT:** [https://github.com/huggingface/peft](https://github.com/huggingface/peft)
* **LoRA:** Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L., & Chen, W. (2021). LoRA: Low-Rank Adaptation of Large Language Models. arXiv preprint arXiv:2106.09685.

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

ogulcanakca

## Model Card Contact

ogulcanakca (Hugging Face user)

### Framework versions

* PEFT 0.15.2
* Transformers [More Information Needed; check with `!pip show transformers`]
* PyTorch [More Information Needed; check with `!pip show torch`]
* Datasets [More Information Needed; check with `!pip show datasets`]
* Tokenizers [More Information Needed; check with `!pip show tokenizers`]