---
language: ar
license: apache-2.0
library_name: peft
base_model: CAMeL-Lab/bert-base-arabic-camelbert-mix
tags:
  - arabic
  - dialect-classification
  - lora
---

# HammaLoRACAMeLBert

Arabic dialect classification model: a LoRA fine-tune of CAMeL-Lab/bert-base-arabic-camelbert-mix covering seven dialects, with the complete training metrics reported below.

## Training Metrics

### Full Training History

| Epoch | Train Loss | Eval Loss | Train Accuracy | Eval Accuracy | F1 | Precision | Recall |
|---|---|---|---|---|---|---|---|
| 1 | 2.0154 | 1.87199 | 0.315684 | 0.314607 | 0.278725 | 0.273851 | 0.315684 |
| 2 | 1.4809 | 1.09191 | 0.617445 | 0.629775 | 0.595584 | 0.620764 | 0.617445 |
| 3 | 1.0622 | 0.928158 | 0.679133 | 0.687079 | 0.676854 | 0.687955 | 0.679133 |
| 4 | 0.9443 | 0.82644 | 0.714286 | 0.711236 | 0.710725 | 0.716035 | 0.714286 |
| 5 | 0.8663 | 0.753623 | 0.745754 | 0.740449 | 0.746578 | 0.751243 | 0.745754 |
| 6 | 0.811 | 0.710841 | 0.763299 | 0.751685 | 0.764064 | 0.771564 | 0.763299 |
| 7 | 0.7637 | 0.661208 | 0.77741 | 0.76573 | 0.778244 | 0.782777 | 0.77741 |
| 8 | 0.7277 | 0.636298 | 0.783904 | 0.770225 | 0.785828 | 0.794191 | 0.783904 |
| 9 | 0.7061 | 0.616007 | 0.789461 | 0.769101 | 0.791083 | 0.797592 | 0.789461 |
| 10 | 0.6889 | 0.594658 | 0.798264 | 0.775843 | 0.799 | 0.802585 | 0.798264 |
| 11 | 0.6729 | 0.58317 | 0.801823 | 0.783146 | 0.802991 | 0.807269 | 0.801823 |
| 12 | 0.6591 | 0.58294 | 0.801886 | 0.780337 | 0.803151 | 0.809606 | 0.801886 |
| 13 | 0.6515 | 0.570984 | 0.807255 | 0.782022 | 0.808294 | 0.812656 | 0.807255 |
| 14 | 0.6435 | 0.563709 | 0.809441 | 0.783146 | 0.81018 | 0.813134 | 0.809441 |
| 15 | 0.64 | 0.562957 | 0.808816 | 0.783708 | 0.809795 | 0.813538 | 0.808816 |

## Label Mapping

```python
{0: 'Egypt', 1: 'Iraq', 2: 'Lebanon', 3: 'Morocco', 4: 'Saudi_Arabia', 5: 'Sudan', 6: 'Tunisia'}
```
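Because the repository is published as a PEFT/LoRA adapter on top of CAMeL-Lab/bert-base-arabic-camelbert-mix, this mapping can also be used to decode predictions when the adapter is loaded explicitly. The snippet below is a minimal sketch, not the exact training setup; it assumes the adapter carries the 7-way classification head and that the tokenizer comes from the base checkpoint.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

BASE = "CAMeL-Lab/bert-base-arabic-camelbert-mix"
ADAPTER = "Hamma-16/HammaLoRACAMeLBert"
ID2LABEL = {0: 'Egypt', 1: 'Iraq', 2: 'Lebanon', 3: 'Morocco',
            4: 'Saudi_Arabia', 5: 'Sudan', 6: 'Tunisia'}

# Assumption: the tokenizer is the base CAMeLBERT tokenizer.
tokenizer = AutoTokenizer.from_pretrained(BASE)

# Base model with a 7-way head; the LoRA adapter (and any saved head weights)
# is layered on top via PEFT.
base_model = AutoModelForSequenceClassification.from_pretrained(BASE, num_labels=7)
model = PeftModel.from_pretrained(base_model, ADAPTER)
model.eval()

inputs = tokenizer("ุดู„ูˆู†ูƒ ุงู„ูŠูˆู…ุŸ", return_tensors="pt")  # "How are you today?"
with torch.no_grad():
    logits = model(**inputs).logits
print(ID2LABEL[logits.argmax(dim=-1).item()])
```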

## Usage Example

```python
import torch
from transformers import pipeline

# Load the classifier directly from the Hub. Since the repository is published
# as a LoRA adapter (library_name: peft), the `peft` package should be installed.
classifier = pipeline(
    "text-classification",
    model="Hamma-16/HammaLoRACAMeLBert",
    device="cuda" if torch.cuda.is_available() else "cpu",
)

sample_text = "ุดู„ูˆู†ูƒ ุงู„ูŠูˆู…ุŸ"  # "How are you today?"
result = classifier(sample_text)
print(f"Text: {sample_text}")
print(f"Predicted: {result[0]['label']} (confidence: {result[0]['score']:.1%})")
```