HammaLoRAMarBert

Arabic dialect classification model: a LoRA adapter fine-tuned on MARBERT, covering seven country-level dialects, with full training metrics.

Training Metrics

Full Training History

| Epoch | Train Loss | Eval Loss | Train Accuracy | Eval Accuracy | F1 | Precision | Recall |
|-------|------------|-----------|----------------|---------------|----|-----------|--------|
| 1 | 1.01756 | 1.0054 | 0.70748 | 0.717978 | 0.693725 | 0.706778 | 0.70748 |
| 2 | 0.762952 | 0.75223 | 0.771853 | 0.78764 | 0.771604 | 0.778861 | 0.771853 |
| 3 | 0.650689 | 0.648891 | 0.796329 | 0.803371 | 0.797666 | 0.801681 | 0.796329 |
| 4 | 0.622925 | 0.626332 | 0.801449 | 0.811798 | 0.801765 | 0.80837 | 0.801449 |
| 5 | 0.576898 | 0.588152 | 0.809815 | 0.812921 | 0.810793 | 0.814344 | 0.809815 |
| 6 | 0.567929 | 0.60128 | 0.814623 | 0.810674 | 0.816486 | 0.823517 | 0.814623 |
| 7 | 0.556496 | 0.58585 | 0.818244 | 0.820225 | 0.818915 | 0.822701 | 0.818244 |
| 8 | 0.54978 | 0.592384 | 0.821054 | 0.820225 | 0.82197 | 0.82844 | 0.821054 |
| 9 | 0.543711 | 0.587352 | 0.824301 | 0.816854 | 0.826151 | 0.83428 | 0.824301 |
| 10 | 0.51674 | 0.565089 | 0.830607 | 0.818539 | 0.831944 | 0.83726 | 0.830607 |
| 11 | 0.520477 | 0.580509 | 0.830669 | 0.819663 | 0.832265 | 0.837997 | 0.830669 |
| 12 | 0.507471 | 0.563466 | 0.833729 | 0.82809 | 0.834758 | 0.839029 | 0.833729 |
| 13 | 0.498436 | 0.557207 | 0.834603 | 0.825281 | 0.835891 | 0.840618 | 0.834603 |
| 14 | 0.496213 | 0.551106 | 0.836289 | 0.828652 | 0.837213 | 0.840592 | 0.836289 |
| 15 | 0.493182 | 0.549526 | 0.836414 | 0.826404 | 0.837405 | 0.840693 | 0.836414 |
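To inspect the curves visually, the history can be plotted directly. A minimal sketch with matplotlib, transcribing the two loss columns from the table above:

```python
import matplotlib.pyplot as plt

# Loss columns transcribed from the training history table above
epochs = list(range(1, 16))
train_loss = [1.01756, 0.762952, 0.650689, 0.622925, 0.576898,
              0.567929, 0.556496, 0.54978, 0.543711, 0.51674,
              0.520477, 0.507471, 0.498436, 0.496213, 0.493182]
eval_loss = [1.0054, 0.75223, 0.648891, 0.626332, 0.588152,
             0.60128, 0.58585, 0.592384, 0.587352, 0.565089,
             0.580509, 0.563466, 0.557207, 0.551106, 0.549526]

plt.plot(epochs, train_loss, marker="o", label="train loss")
plt.plot(epochs, eval_loss, marker="o", label="eval loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plt.title("HammaLoRAMarBert training curves")
plt.show()
```

Train and eval loss decline together through epoch 15, consistent with the accuracy columns continuing to improve.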

Label Mapping:

```python
{0: 'Egypt', 1: 'Iraq', 2: 'Lebanon', 3: 'Morocco', 4: 'Saudi_Arabia', 5: 'Sudan', 6: 'Tunisia'}
```
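For code that works with raw model outputs rather than the pipeline shown below, this mapping converts a predicted class index back into a dialect name. A minimal sketch, where the logits tensor is a hypothetical stand-in for the model's output:

```python
import torch

id2label = {0: 'Egypt', 1: 'Iraq', 2: 'Lebanon', 3: 'Morocco',
            4: 'Saudi_Arabia', 5: 'Sudan', 6: 'Tunisia'}

# Hypothetical logits for one input over the 7 dialect classes
logits = torch.tensor([0.1, 0.3, 2.4, 0.2, 0.5, 0.1, 0.4])
pred_id = int(torch.argmax(logits))
print(id2label[pred_id])  # Lebanon
```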

Usage Example:

```python
import torch
from transformers import pipeline

# Load the dialect classification pipeline, using GPU when available
classifier = pipeline(
    "text-classification",
    model="Hamma-16/HammaLoRAMarBert",
    device="cuda" if torch.cuda.is_available() else "cpu",
)

sample_text = "ุดู„ูˆู†ูƒ ุงู„ูŠูˆู…ุŸ"  # "How are you today?"
result = classifier(sample_text)
print(f"Text: {sample_text}")
print(f"Predicted: {result[0]['label']} (confidence: {result[0]['score']:.1%})")
```
Model tree for Hamma-16/HammaLoRAMarBERT-v3

Base model: UBC-NLP/MARBERT
This model is an adapter (one of 3) trained on the base model.
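Because the model is a LoRA adapter on UBC-NLP/MARBERT, it can also be loaded explicitly with the PEFT library rather than through the pipeline. A minimal sketch, assuming the adapter repo is compatible with a 7-label sequence-classification head on the base model:

```python
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the MARBERT base with a 7-class head, then attach the LoRA adapter.
# Assumes the adapter weights match this head configuration.
base = AutoModelForSequenceClassification.from_pretrained(
    "UBC-NLP/MARBERT", num_labels=7
)
model = PeftModel.from_pretrained(base, "Hamma-16/HammaLoRAMarBert")
tokenizer = AutoTokenizer.from_pretrained("UBC-NLP/MARBERT")

inputs = tokenizer("ุดู„ูˆู†ูƒ ุงู„ูŠูˆู…ุŸ", return_tensors="pt")
logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # predicted class index; decode via the label mapping above
```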