---
tags:
  - setfit
  - sentence-transformers
  - text-classification
  - generated_from_setfit_trainer
widget:
  - text: >-
      46 Abs. 2 BGG zum Beispiel die Schuldneranweisung gemäss den Bestimmungen
      zum Schutz der ehelichen Gemeinschaft (Art. 177 ZGB; BGE 134 III 667), die
      Einsprache gegen die Ausstellung einer Erbenbescheinigung (Art. 559 Abs. 1
      ZGB; Urteil 5A_162/2007 vom 16. Juli 2007 E. 5.2) oder das Inventar über
      das Kindesvermögen (Art. 318 Abs. 2 ZGB; Urteil 5A_169/2007 vom 21. Juni
      2007 E. 3).
  - text: >-
      Im OP der Kinderklinik der MHH werden pro Jahr zwischen 1500 und 2000
      Operationen durchgeführt.
  - text: Die Bindungen sollten anfangs in Fahrtrichtung zeigen.
  - text: Raumausstatter gesucht, Recklinghausen
  - text: Mehr Leistung durch Selbstgespräche
pipeline_tag: text-classification
library_name: setfit
inference: false
license: mit
datasets:
  - mbley/german-webtext-quality-classification-dataset
language:
  - de
base_model:
  - distilbert/distilbert-base-german-cased
---

Bootstrapping a Sentence-Level Corpus Quality Classifier for Web Text using Active Learning (RANLP25)

A multi-label sentence classifier, trained with active learning, that predicts high- or low-quality labels for German web text.

Training and evaluation code: https://github.com/maximilian-bley/german-webtext-quality-classification

Model Details

Labels

  • 0=Sentence Boundary: Sentence boundary errors occur when the start or end of a sentence is malformed, i.e., the sentence begins with a lower-case letter or an atypical character, or lacks a proper terminal punctuation mark (e.g., period, exclamation mark, or question mark).

  • 1=Grammar Mistake: Grammar mistakes cover grammatical errors such as incorrect articles, cases, or word order, as well as the incorrect use or omission of words. Random-looking sequences of words, usually strings of nouns, are also tagged. In most cases where this label applies, the sentence's comprehensibility or message is impaired.

  • 2=Spelling Anomaly: A spelling anomaly is tagged when a word does not conform to German spelling. This includes typos and incorrect capitalization (e.g., all-caps words or lower-case nouns). Spelling anomalies are irregularities that occur within the word boundary, here meaning the text between two whitespace characters. Stray individual letters and nonsensical word fragments are also tagged.

  • 3=Punctuation Error: Punctuation errors are tagged if a punctuation symbol is placed incorrectly or is missing where it is required. This includes comma errors, missing quotation marks or parentheses, periods instead of question marks, and incorrect or missing dashes or hyphens.

  • 4=Non-linguistic Content: Non-linguistic content includes all kinds of encoding errors, language-atypical occurrences of numbers and characters (e.g., random sequences of characters or letters), code remnants, URLs, hashtags, and emoticons.

  • 5=Letter Spacing: Letter spacings are deliberately inserted spaces between the characters of a word.

  • 6=Clean: Assigned if none of the other labels apply.
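
The label indices can be mapped back to these names when post-processing predictions. A minimal lookup sketch (the names are taken from the list above; LABEL_NAMES is a helper defined here, not something shipped with the model):

# Index-to-name lookup for the seven labels described above.
# The model itself only returns indices; the names come from this README.
LABEL_NAMES = {
    0: "Sentence Boundary",
    1: "Grammar Mistake",
    2: "Spelling Anomaly",
    3: "Punctuation Error",
    4: "Non-linguistic Content",
    5: "Letter Spacing",
    6: "Clean",
}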

Results

Per-class scores (labels 0–6 as defined above) and macro, micro, and sample averages:

            0     1     2     3     4     5     6     macro  micro  sample
F1          0.93  0.86  0.60  0.51  0.84  0.73  0.87  0.76   0.83   0.82
Precision   0.91  0.91  0.74  0.44  0.86  0.94  0.82  0.80   0.84   0.83
Recall      0.96  0.82  0.50  0.60  0.82  0.60  0.93  0.75   0.82   0.83

Subset accuracy: 0.67
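
The macro, micro, and sample values are the standard multi-label averaging modes. A minimal sketch of how such scores can be reproduced with scikit-learn, assuming y_true and y_pred are multi-hot arrays of shape (n_sentences, 7) (the arrays below are dummy data, not the actual evaluation set):

import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Dummy multi-hot label matrices, only to make the snippet runnable.
y_true = np.array([[0, 0, 1, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1]])
y_pred = np.array([[0, 0, 1, 1, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1]])

per_class_f1 = f1_score(y_true, y_pred, average=None, zero_division=0)
macro_f1 = f1_score(y_true, y_pred, average="macro", zero_division=0)
micro_f1 = f1_score(y_true, y_pred, average="micro", zero_division=0)
sample_f1 = f1_score(y_true, y_pred, average="samples", zero_division=0)
subset_acc = accuracy_score(y_true, y_pred)  # exact-match ("subset") accuracy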

Model Description

  • Model Type: SetFit
  • Classification head: a SetFitHead instance
  • Maximum Sequence Length: 512 tokens
  • Number of Classes: 7
  • Language: German

Model Sources

  • Repository: https://github.com/maximilian-bley/german-webtext-quality-classification
  • Paper: Bootstrapping a Sentence-Level Corpus Quality Classifier for Web Text using Active Learning (RANLP 2025)

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference.

from setfit import SetFitModel

# Download from the 🤗 Hub ("setfit_model_id" is a placeholder for this model's Hub id)
model = SetFitModel.from_pretrained("setfit_model_id")
# Run inference
preds = model("在 Greding 出 口 离 开 A9 高 速 公 路 。")
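
For a multi-label SetFit model the prediction is typically a multi-hot vector over the seven classes. A hedged sketch of decoding it into label names, reusing the LABEL_NAMES helper sketched in the Labels section above (the exact output type, torch tensor or numpy array, depends on the SetFit version):

import torch

# preds is expected to be a multi-hot row over the 7 labels (shape (7,) or (1, 7)).
row = torch.as_tensor(preds).reshape(-1, 7)[0]
active_labels = [LABEL_NAMES[i] for i, flag in enumerate(row.tolist()) if flag]
print(active_labels)  # e.g. ["Letter Spacing"] for the letter-spaced sentence above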

Training Details

Training Hyperparameters

  • batch_size: (16, 32)
  • num_epochs: (2, 32)
  • max_steps: -1
  • sampling_strategy: oversampling
  • body_learning_rate: (2e-05, 1e-05)
  • head_learning_rate: 0.01
  • loss: CoSENTLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: True
  • use_amp: False
  • warmup_proportion: 0.1
  • l2_weight: 0.01
  • max_length: 512
  • seed: 13579
  • eval_max_steps: -1
  • load_best_model_at_end: False
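
These values map onto SetFit's TrainingArguments. Below is a rough, hypothetical reconstruction of such a configuration; the actual training script and active-learning loop live in the linked GitHub repository, and the dummy dataset, column names, and head settings used here are assumptions:

from datasets import Dataset
from sentence_transformers.losses import CoSENTLoss
from setfit import SetFitModel, Trainer, TrainingArguments

# Tiny dummy dataset with assumed "text" / "label" columns, only to make the
# sketch self-contained; labels are illustrative, not annotations from the real dataset.
train_ds = Dataset.from_dict({
    "text": ["Mehr Leistung durch Selbstgespräche", "ein satz ohne punkt"],
    "label": [[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]],
})

# Assumption: a differentiable SetFitHead with 7 outputs and a multi-label strategy.
model = SetFitModel.from_pretrained(
    "distilbert/distilbert-base-german-cased",
    multi_target_strategy="one-vs-rest",
    use_differentiable_head=True,
    head_params={"out_features": 7},
)

args = TrainingArguments(
    batch_size=(16, 32),               # (embedding phase, classifier phase)
    num_epochs=(2, 32),
    sampling_strategy="oversampling",
    body_learning_rate=(2e-5, 1e-5),
    head_learning_rate=0.01,
    loss=CoSENTLoss,
    margin=0.25,
    end_to_end=True,
    use_amp=False,
    warmup_proportion=0.1,
    l2_weight=0.01,
    max_length=512,
    seed=13579,
    load_best_model_at_end=False,
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=train_ds)
trainer.train()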

Framework Versions

  • Python: 3.10.4
  • SetFit: 1.1.2
  • Sentence Transformers: 4.0.2
  • Transformers: 4.51.1
  • PyTorch: 2.6.0+cu126
  • Datasets: 3.5.0
  • Tokenizers: 0.21.1
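
To reproduce this environment, the library versions listed above can be pinned at install time (the PyTorch/CUDA wheel depends on your platform and is omitted here):

pip install "setfit==1.1.2" "sentence-transformers==4.0.2" "transformers==4.51.1" "datasets==3.5.0" "tokenizers==0.21.1"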

Citation

BibTeX

@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}