- health
- medical
- nlp
---

MT5-small is fine-tuned on a large corpus from the Nepali Health Question-Answering Dataset.

### Training Procedure

The model was trained for 30 epochs with the following training parameters:

- Learning rate: 2e-4
- Batch size: 2
- Gradient accumulation steps: 8
- FP16 (mixed-precision training): disabled
- Optimizer: AdamW with weight decay

The training loss decreased consistently over training, indicating that the model was learning.
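With a per-device batch size of 2 and 8 gradient-accumulation steps, the optimizer effectively sees a larger batch per update; a quick sketch of the arithmetic (assuming a single device):

```python
# Effective batch size under gradient accumulation (assumption: one device)
per_device_batch_size = 2
gradient_accumulation_steps = 8

# Gradients from 8 consecutive micro-batches are summed before each
# optimizer step, so one update covers 2 * 8 = 16 examples.
effective_batch_size = per_device_batch_size * gradient_accumulation_steps
print(effective_batch_size)  # 16
```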
## Use Case

```python
# Install dependencies first (e.g. in a notebook or shell):
#   pip install transformers sentencepiece

import torch
from transformers import MT5ForConditionalGeneration, AutoTokenizer

model_id = "Chhabi/mt5-small-finetuned-Nepali-Health-50k-2"

# Load the fine-tuned model and its tokenizer
model = MT5ForConditionalGeneration.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)

# Use a GPU when available instead of hard-coding "cuda"
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# Query (roughly: "I feel very tired and my nose is running. I also have a
# sore throat and a slight headache. What is happening to me?")
query = "म धेरै थकित महसुस गर्छु र मेरो नाक बगिरहेको छ। साथै, मलाई घाँटी दुखेको छ र अलि टाउको दुखेको छ। मलाई के भइरहेको छ?"
input_text = f"answer: {query}"
inputs = tokenizer(input_text, return_tensors="pt", max_length=256, truncation=True).to(device)

generated_ids = model.generate(
    **inputs,
    max_length=512,
    min_length=256,
    length_penalty=3.0,
    num_beams=10,
    top_p=0.95,
    top_k=100,
    do_sample=True,
    temperature=0.7,
    num_return_sequences=3,
    no_repeat_ngram_size=4,
)

# Decode the first candidate and drop any leftover sentinel tokens (<extra_id_*>)
generated_response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
filtered_tokens = [token for token in generated_response.split(" ") if not token.startswith("<extra_id_")]
print(" ".join(filtered_tokens))
```
## Evaluation

### BLEU score

