# ModernBERT Environment Claims Classifier
This model is a fine-tuned version of `answerdotai/ModernBERT-base`, trained on the QuotaClimat Frugal AI Challenge dataset with data augmentation applied during training.
## Training Details

The model was trained using the following configuration:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ModernBERT-envclaims-v0",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=3,
    bf16=True,
    optim="adamw_torch_fused",
    # Logging & evaluation
    logging_strategy="steps",
    logging_steps=100,
    eval_strategy="epoch",
    save_strategy="epoch",
    save_total_limit=2,
    load_best_model_at_end=True,
    metric_for_best_model="f1",
    # Training optimization
    weight_decay=0.01,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    # Hub parameters
    push_to_hub=True,
    hub_strategy="every_save",
)
```
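
These arguments are consumed by a `Trainer`. The card does not show that part of the setup, so the following is only a minimal sketch under stated assumptions: the dataset variables `train_ds` / `eval_ds` (pre-tokenized splits), the macro averaging of the F1 metric, and the metric computation itself are assumptions, not taken from the original training script.

```python
# Minimal sketch of how these TrainingArguments could plug into a Trainer.
# train_ds / eval_ds and the macro-F1 choice are assumptions, not the
# author's confirmed setup.
import numpy as np
import evaluate
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer

tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "answerdotai/ModernBERT-base",
    num_labels=8,  # eight claim categories listed below
)

f1_metric = evaluate.load("f1")

def compute_metrics(eval_pred):
    # Produces the "f1" value referenced by metric_for_best_model.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return f1_metric.compute(predictions=preds, references=labels, average="macro")

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_ds,   # assumed: pre-tokenized training split
    eval_dataset=eval_ds,     # assumed: pre-tokenized evaluation split
    processing_class=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```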
## Model Performance

The model achieved an F1 score of 0.745 on the evaluation set.
## Usage

You can use this model directly with the Hugging Face Transformers library:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="camillebrl/ModernBERT-envclaims-v1",
)

text = "Your claim here"
class_predicted = classifier(text)
print(class_predicted)
```
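
The pipeline returns a list with one dictionary per input text, each containing the predicted `label` and its `score`, for example `[{'label': 'not_relevant', 'score': 0.98}]` (the label and score shown here are illustrative, not actual model output).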
The model classifies texts into the following categories:
- Label 0: not_relevant
- Label 1: not_happening
- Label 2: not_human
- Label 3: not_bad
- Label 4: solutions_harmful_unnecessary
- Label 5: science_unreliable
- Label 6: proponents_biased
- Label 7: fossil_fuels_needed
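
If you want to confirm this mapping programmatically, the checkpoint's config exposes it. This assumes the `id2label` mapping was saved with the fine-tuned model; if it was not, the generic `LABEL_0` … `LABEL_7` names are returned instead.

```python
# Inspect the label mapping stored in the checkpoint's config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("camillebrl/ModernBERT-envclaims-v1")
for idx, name in sorted(config.id2label.items()):
    print(idx, name)
```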