SetFit with FacebookAI/xlm-roberta-base

This is a SetFit model for text classification. More specifically, it detects checkable claims in sentences extracted from Finnish news articles. In this model, a checkable claim is a claim about quantity, prediction, correlation/causation, or laws/rules of operation. Non-claims include claims about personal experience, other kinds of claims, and sentences that contain no checkable claim.

This SetFit model uses FacebookAI/xlm-roberta-base as the Sentence Transformer embedding model.

The model has been trained using an efficient few-shot learning technique that involves two steps (a minimal code sketch follows the list):

  1. Fine-tuning a Sentence Transformer with contrastive learning.
  2. Training a classification head with features from the fine-tuned Sentence Transformer.
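
Both steps run inside a single trainer.train() call in the SetFit library. The snippet below is a minimal sketch of that flow, assuming the standard SetFit Trainer API; the placeholder dataset, sentences, and hyperparameter values are illustrative only and are not the data or settings used for this model.

from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder training data: binary labels, where 1 is assumed to mark a checkable claim.
train_dataset = Dataset.from_dict({
    "text": [
        "a sentence containing a checkable claim",
        "another sentence containing a checkable claim",
        "a sentence without a checkable claim",
        "another sentence without a checkable claim",
    ],
    "label": [1, 1, 0, 0],
})

# Initialise a SetFit model from the XLM-RoBERTa base checkpoint.
model = SetFitModel.from_pretrained("FacebookAI/xlm-roberta-base")

# trainer.train() first fine-tunes the Sentence Transformer body with a
# contrastive (cosine-similarity) objective on generated sentence pairs (step 1),
# then fits the classification head on the resulting embeddings (step 2).
args = TrainingArguments(batch_size=16, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()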

Model Details

Model Description

  • Model Type: SetFit
  • Sentence Transformer body: FacebookAI/xlm-roberta-base
  • Classification task: binary (checkable claim vs. non-claim)
  • Language: Finnish
  • Model size: 278M parameters (F32, safetensors)

Model Sources

Model Labels

Label 0 examples:

  • 'Uusi maakuntajohtaja haluaa tehdä määrätietoisesti avointa elinkeinopolitiikkaa.'
  • '– Verot eivät saa nousta.'
  • 'Enemmänkin ulkoinen apu, sikäli kun sitä tarvitaan, on luonteeltaan täydentävää, läsnäoloa.'

Label 1 examples:

  • '– Seuraava laskumarkkina voi olla edessä, kun keskuspankit alkavat vähentää likviditeettiä pois markkinoilta, Rothovius tuumaa.'
  • 'Hallituspuolueista puolestaan huomautettiin, että sotea on yritetty saadaan kuntoon 15 vuotta, josta kokoomus on ollut hallitusvastuussa valtaosan.'

Evaluation

Metrics

Label   Metric (10-fold cross-validated)
all     0.84

Uses

Direct Use for Inference

First install the SetFit library:

pip install setfit

Then you can load this model and run inference:

from setfit import SetFitModel

# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("HY-Aalto-DIME/Finn-claim-detect-xlm-roberta-CF2")
# Run inference
preds = model("Toinen pulma on se, lopahtaako harrastajien into.")
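
The call returns the predicted label id(s) for the input. Continuing with the model object loaded above, several sentences can be classified at once and class probabilities can be requested (a sketch assuming the standard SetFit API; the example sentences are quoted from elsewhere in this card):

# Batch inference over a list of sentences.
sentences = [
    "Toinen pulma on se, lopahtaako harrastajien into.",
    "– Verot eivät saa nousta.",
]
preds = model(sentences)                # one predicted label id per sentence
probs = model.predict_proba(sentences)  # one row of class probabilities per sentence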

Training Details

Training Set Metrics

Training set   Min   Median    Max
Word count     1     11.0767   29

Label   Training Sample Count
0       727
1       329

Training Hyperparameters

  • batch_size: (16, 16)
  • num_epochs: (4, 4)
  • max_steps: -1
  • sampling_strategy: oversampling
  • num_iterations: 6
  • body_learning_rate: (2e-05, 1e-05)
  • head_learning_rate: 0.01
  • loss: CosineSimilarityLoss
  • distance_metric: cosine_distance
  • margin: 0.25
  • end_to_end: False
  • use_amp: False
  • warmup_proportion: 0.1
  • seed: 42
  • eval_max_steps: -1
  • load_best_model_at_end: False
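
For reference, the sketch below shows how the hyperparameters listed above would map onto setfit.TrainingArguments (an assumption based on the standard SetFit API; distance_metric is omitted because cosine_distance is the default and only applies to triplet-style losses):

from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(16, 16),                 # (embedding fine-tuning, classifier training)
    num_epochs=(4, 4),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=6,
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)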

Framework Versions

  • Python: 3.11.9
  • SetFit: 1.0.3
  • Sentence Transformers: 3.2.0
  • Transformers: 4.44.0
  • PyTorch: 2.4.0+cu124
  • Datasets: 2.21.0
  • Tokenizers: 0.19.1
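
To approximate this environment, the listed package versions can be pinned at install time (a sketch; PyTorch builds are platform-specific, so torch 2.4.0 is typically installed separately following the instructions on pytorch.org):

pip install setfit==1.0.3 sentence-transformers==3.2.0 transformers==4.44.0 datasets==2.21.0 tokenizers==0.19.1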
