
SpanMarker

This is a SpanMarker model trained on the imvladikon/nemo_corpus dataset that can be used for Named Entity Recognition in Hebrew. It uses xlm-roberta-base as the underlying encoder.

Model Details

Model Description

  • Model Type: SpanMarker
  • Encoder: xlm-roberta-base
  • Maximum Sequence Length: 512 tokens
  • Maximum Entity Length: 100 words
  • Training Dataset: imvladikon/nemo_corpus

Model Sources

  • Repository: SpanMarker on GitHub (https://github.com/tomaarsen/SpanMarkerNER)

Model Labels

| Label | Examples |
|-------|----------|
| ANG | "יידיש", "אנגלית", "גרמנית" |
| DUC | "סובארו", "מרצדס", "דינמיט" |
| EVE | "מצדה", "הצהרת בלפור", "ה שואה" |
| FAC | "ברזילי", "תל - ה שומר", "כלא עזה" |
| GPE | "שפרעם", "רצועת עזה", "ה שטחים" |
| LOC | "חאן יונס", "גיבאליה", "שייח רדואן" |
| ORG | "ה ארץ", "מרחב ה גליל", "כך" |
| PER | "נימר חוסיין", "איברהים נימר חוסיין", "רמי רהב" |
| WOA | "ה ארץ", "קדיש", "קיטש ו מוות" |

Evaluation

Metrics

Entity-level precision, recall, and F1 on the test split:

| Label | Precision | Recall | F1 |
|-------|-----------|--------|------|
| all | 0.7913 | 0.7607 | 0.7757 |
| ANG | 0.0 | 0.0 | 0.0 |
| DUC | 0.0 | 0.0 | 0.0 |
| FAC | 0.3571 | 0.4545 | 0.4 |
| GPE | 0.7817 | 0.7897 | 0.7857 |
| LOC | 0.5263 | 0.4878 | 0.5063 |
| ORG | 0.7854 | 0.7623 | 0.7736 |
| PER | 0.8725 | 0.8202 | 0.8456 |
| WOA | 0.0 | 0.0 | 0.0 |
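
These numbers can be recomputed with the span_marker Trainer. Below is a minimal sketch, assuming the dataset's test split exposes the usual tokens and ner_tags columns (the column names are not confirmed here):

from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer

# Load the trained model and the evaluation data
model = SpanMarkerModel.from_pretrained("iahlt/span-marker-xlm-roberta-base-nemo-mt-he")
dataset = load_dataset("imvladikon/nemo_corpus")

# span_marker's Trainer wraps transformers.Trainer and computes
# entity-level precision, recall, and F1 during evaluation
trainer = Trainer(model=model, eval_dataset=dataset["test"])
metrics = trainer.evaluate()
print(metrics)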

Uses

Direct Use for Inference
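
First install the SpanMarker library if needed: `pip install span_marker`. The model can then be downloaded from the Hub and used for inference directly: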

from span_marker import SpanMarkerModel

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("iahlt/span-marker-xlm-roberta-base-nemo-mt-he")
# Run inference. Note that the input follows the morpheme-level segmentation of the
# NEMO corpus: prefixes such as the definite article "ה" appear as separate tokens.
entities = model.predict("גרוסבורד נהג לבדו ב ה מכונית, ב דרכו מ ה עיר מיניאפוליס ב אינדיאנה ל נמל ה תעופה של היא.")
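
predict returns a list with one dictionary per detected entity. A minimal sketch of inspecting the output, assuming the standard SpanMarker result keys (span, label, score, char_start_index, char_end_index):

# Each prediction is a dict; the keys used below are the standard SpanMarker output fields
for entity in entities:
    print(entity["span"], entity["label"], round(entity["score"], 3))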

Training Details

Training Set Metrics

| Training set | Min | Median | Max |
|--------------|-----|--------|-----|
| Sentence length | 0 | 25.7252 | 117 |
| Entities per sentence | 0 | 1.2722 | 20 |

Training Hyperparameters

  • learning_rate: 1e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 4
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 2
  • mixed_precision_training: Native AMP
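
A minimal sketch of a training run with these hyperparameters, assuming the imvladikon/nemo_corpus splits expose tokens and ner_tags columns (the column names are not confirmed here):

from datasets import load_dataset
from transformers import TrainingArguments
from span_marker import SpanMarkerModel, Trainer

dataset = load_dataset("imvladikon/nemo_corpus")
# Assumption: the label names are stored on the ner_tags feature
labels = dataset["train"].features["ner_tags"].feature.names

# Initialize a SpanMarker model on top of the xlm-roberta-base encoder
model = SpanMarkerModel.from_pretrained(
    "xlm-roberta-base",
    labels=labels,
    model_max_length=512,
    entity_max_length=100,
)

args = TrainingArguments(
    output_dir="span-marker-xlm-roberta-base-nemo-mt-he",
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=2,
    num_train_epochs=2,
    warmup_ratio=0.1,
    fp16=True,  # Native AMP mixed precision
    seed=42,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()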

Training Results

| Epoch | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy |
|-------|------|-----------------|----------------------|-------------------|---------------|---------------------|
| 0.4393 | 1000 | 0.0083 | 0.7632 | 0.5812 | 0.6598 | 0.9477 |
| 0.8785 | 2000 | 0.0056 | 0.8366 | 0.6774 | 0.7486 | 0.9609 |
| 1.3178 | 3000 | 0.0052 | 0.8322 | 0.7655 | 0.7975 | 0.9714 |
| 1.7571 | 4000 | 0.0053 | 0.8008 | 0.7735 | 0.7870 | 0.9712 |

Evaluation Results

Per-label metrics on the validation split:

| Label | Precision | Recall | F1 | Support |
|-------|-----------|--------|------|---------|
| ANG | 0.0 | 0.0 | 0.0 | 3 |
| DUC | 0.0 | 0.0 | 0.0 | 2 |
| EVE | 0.0 | 0.0 | 0.0 | 12 |
| FAC | 0.333333 | 0.0833333 | 0.133333 | 12 |
| GPE | 0.887931 | 0.85124 | 0.869198 | 121 |
| LOC | 0.703704 | 0.678571 | 0.690909 | 28 |
| ORG | 0.719298 | 0.689076 | 0.703863 | 119 |
| PER | 0.889447 | 0.917098 | 0.903061 | 193 |
| WOA | 0.0 | 0.0 | 0.0 | 9 |

Overall validation metrics:

  • Loss: 0.00522302
  • Precision: 0.832244
  • Recall: 0.765531
  • F1: 0.797495
  • Accuracy: 0.971418
  • Runtime: 34.3336 s (23.505 samples/s, 11.767 steps/s)
  • Epoch: 2

Test Results

Per-label metrics on the test split:

| Label | Precision | Recall | F1 | Support |
|-------|-----------|--------|------|---------|
| ANG | 0.0 | 0.0 | 0.0 | 1 |
| DUC | 0.0 | 0.0 | 0.0 | 3 |
| FAC | 0.357143 | 0.454545 | 0.4 | 11 |
| GPE | 0.781726 | 0.789744 | 0.785714 | 195 |
| LOC | 0.526316 | 0.487805 | 0.506329 | 41 |
| ORG | 0.785354 | 0.762255 | 0.773632 | 408 |
| PER | 0.87251 | 0.820225 | 0.84556 | 267 |
| WOA | 0.0 | 0.0 | 0.0 | 6 |

Overall test metrics:

  • Loss: 0.00604774
  • Precision: 0.791295
  • Recall: 0.76073
  • F1: 0.775711
  • Accuracy: 0.964642
  • Runtime: 49.5152 s (23.286 samples/s, 11.653 steps/s)
  • Epoch: 2

Framework Versions

  • Python: 3.10.12
  • SpanMarker: 1.5.0
  • Transformers: 4.35.2
  • PyTorch: 2.1.0+cu118
  • Datasets: 2.15.0
  • Tokenizers: 0.15.0

Citation

@article{10.1162/tacl_a_00404,
    author = {Bareket, Dan and Tsarfaty, Reut},
    title = "{Neural Modeling for Named Entities and Morphology (NEMO2)}",
    journal = {Transactions of the Association for Computational Linguistics},
    volume = {9},
    pages = {909-928},
    year = {2021},
    month = {09},
    abstract = "{Named Entity Recognition (NER) is a fundamental NLP task, commonly formulated as classification over a sequence of tokens. Morphologically rich languages (MRLs) pose a challenge to this basic formulation, as the boundaries of named entities do not necessarily coincide with token boundaries, rather, they respect morphological boundaries. To address NER in MRLs we then need to answer two fundamental questions, namely, what are the basic units to be labeled, and how can these units be detected and classified in realistic settings (i.e., where no gold morphology is available). We empirically investigate these questions on a novel NER benchmark, with parallel token-level and morpheme-level NER annotations, which we develop for Modern Hebrew, a morphologically rich-and-ambiguous language. Our results show that explicitly modeling morphological boundaries leads to improved NER performance, and that a novel hybrid architecture, in which NER precedes and prunes morphological decomposition, greatly outperforms the standard pipeline, where morphological decomposition strictly precedes NER, setting a new performance bar for both Hebrew NER and Hebrew morphological decomposition tasks.}",
    issn = {2307-387X},
    doi = {10.1162/tacl_a_00404},
    url = {https://doi.org/10.1162/tacl\_a\_00404},
    eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl\_a\_00404/1962472/tacl\_a\_00404.pdf},
}