---
language: ar
license: apache-2.0
library_name: peft
base_model: CAMeL-Lab/bert-base-arabic-camelbert-mix
tags:
  - arabic
  - dialect-classification
  - lora
---

# HammaLoRACAMeLBert

Arabic dialect classification model, trained as a LoRA adapter on `CAMeL-Lab/bert-base-arabic-camelbert-mix`, with complete per-epoch training metrics.

## Training Metrics

Full training history:

| Epoch | Train loss | Eval loss | Train accuracy | Eval accuracy | F1 | Precision | Recall |
|-------|-----------|-----------|----------------|---------------|----|-----------|--------|
| 1  | 1.04802  | 1.05081  | 0.637238 | 0.63764  | 0.6275   | 0.63901  | 0.637238 |
| 2  | 0.77121  | 0.781513 | 0.736139 | 0.735955 | 0.613924 | 0.630747 | 0.624063 |
| 3  | 0.620521 | 0.659047 | 0.788899 | 0.779775 | 0.791235 | 0.799805 | 0.788899 |
| 4  | 0.573822 | 0.644417 | 0.800762 | 0.776404 | 0.67889  | 0.688134 | 0.680569 |
| 5  | 0.49636  | 0.587736 | 0.824675 | 0.791573 | 0.826167 | 0.833785 | 0.824675 |
| 6  | 0.476073 | 0.589205 | 0.833292 | 0.792697 | 0.709791 | 0.724241 | 0.708417 |
| 7  | 0.429335 | 0.547112 | 0.849525 | 0.808427 | 0.850117 | 0.853328 | 0.849525 |
| 8  | 0.412983 | 0.544361 | 0.855519 | 0.805056 | 0.732394 | 0.747566 | 0.729645 |
| 9  | 0.388498 | 0.551354 | 0.863511 | 0.808989 | 0.863979 | 0.869072 | 0.863511 |
| 10 | 0.359349 | 0.52705  | 0.877185 | 0.81573  | 0.74493  | 0.757555 | 0.743194 |
| 11 | 0.34684  | 0.538555 | 0.879683 | 0.810674 | 0.880776 | 0.884118 | 0.879683 |
| 12 | 0.337791 | 0.538365 | 0.883304 | 0.807865 | 0.749058 | 0.760792 | 0.747378 |
| 13 | 0.328992 | 0.53789  | 0.886301 | 0.81236  | 0.887004 | 0.889841 | 0.886301 |
| 14 | 0.322374 | 0.536085 | 0.889423 | 0.81573  | 0.758437 | 0.767501 | 0.756868 |
| 15 | 0.320124 | 0.53761  | 0.890485 | 0.814607 | 0.891161 | 0.89363  | 0.890485 |
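For reference, a minimal sketch of a `compute_metrics` function that produces this kind of accuracy/F1/precision/recall report with the Hugging Face `Trainer`. The `weighted` averaging choice is an assumption and may not match how the numbers above were computed.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair as passed by the Trainer.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```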

## Label Mapping

```python
{0: 'Egypt', 1: 'Iraq', 2: 'Lebanon', 3: 'Morocco', 4: 'Saudi_Arabia', 5: 'Sudan', 6: 'Tunisia'}
```
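If the checkpoint's config does not embed this mapping, pipeline outputs may come back as generic `LABEL_i` names. A small hypothetical helper (assuming that fallback) translates them to dialect names:

```python
id2label = {0: 'Egypt', 1: 'Iraq', 2: 'Lebanon', 3: 'Morocco',
            4: 'Saudi_Arabia', 5: 'Sudan', 6: 'Tunisia'}

def label_name(pipeline_label: str) -> str:
    # Pipelines fall back to names like "LABEL_3" when the model config
    # has no id2label mapping; translate those via the dictionary above.
    if pipeline_label.startswith("LABEL_"):
        return id2label[int(pipeline_label.split("_")[-1])]
    return pipeline_label
```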

## Usage Example

```python
import torch
from transformers import pipeline

# Load the classifier; falls back to CPU when no GPU is available.
classifier = pipeline(
    "text-classification",
    model="Hamma-16/HammaLoRACAMeLBert",
    device="cuda" if torch.cuda.is_available() else "cpu",
)

sample_text = "ุดู„ูˆู†ูƒ ุงู„ูŠูˆู…ุŸ"
result = classifier(sample_text)
print(f"Text: {sample_text}")
print(f"Predicted: {result[0]['label']} (confidence: {result[0]['score']:.1%})")
```