DeBERTa v3 base - Natural Language Inference

Model overview

This model is trained for the Natural Language Inference (NLI) task. It takes two sentences as input (a premise and a hypothesis) and predicts the relationship between them by assigning one of three labels: "entailment," "neutral," or "contradiction." It is based on microsoft/deberta-v3-base, fine-tuned on the nyu-mll/multi_nli dataset, and returns a score for each of the three labels.
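
As a quick illustration of that input/output contract, the premise/hypothesis pair can also be passed through the standard transformers text-classification pipeline. This is a minimal sketch (the example sentences are made up for illustration) complementing the fuller example in the Usage section below:

from transformers import pipeline

# Load the fine-tuned model into a standard text-classification pipeline.
classifier = pipeline("text-classification", model="chincyk/deberta-v3-base-nli")

# A premise/hypothesis pair is passed as a text/text_pair dict.
result = classifier({"text": "A soccer game with multiple males playing.",
                     "text_pair": "Some men are playing a sport."})
print(result)  # prints the top label and its score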

Results

After fine-tuning on the dataset, the model achieved the following results:

  • Loss: 0.276
  • Accuracy: 0.899
  • F1-Score: 0.899

These metrics were evaluated on the validation_mismatched split of the dataset.
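
These numbers could be reproduced along the following lines. This is a sketch, assuming the datasets and evaluate libraries; it runs one example at a time for clarity, assumes the model's label ids align with MultiNLI's (0 = entailment, 1 = neutral, 2 = contradiction), and uses weighted F1 averaging as an assumption, since the card does not state which averaging was used:

import torch
import evaluate
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "chincyk/deberta-v3-base-nli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).eval()

# validation_mismatched contains genres not seen during training.
dataset = load_dataset("nyu-mll/multi_nli", split="validation_mismatched")

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")

preds, refs = [], []
for example in dataset:
    inputs = tokenizer(example["premise"], example["hypothesis"],
                       truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    preds.append(int(logits.argmax(dim=-1)))
    refs.append(example["label"])

print(accuracy.compute(predictions=preds, references=refs))
print(f1.compute(predictions=preds, references=refs, average="weighted"))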

Usage

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "chincyk/deberta-v3-base-nli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Encode the premise/hypothesis pair as a single sequence-pair input.
premise = "The flight arrived on time at the airport."
hypothesis = "The flight was delayed by several hours."
inputs = tokenizer(premise, hypothesis, return_tensors='pt')

# Run a forward pass without tracking gradients (inference only).
with torch.no_grad():
    outputs = model(**inputs)

# Convert the raw logits into a probability distribution over the three labels.
logits = outputs.logits
probs = torch.softmax(logits, dim=-1).squeeze()

# Map each class index to its label name and print the scores.
id2label = model.config.id2label

for i, prob in enumerate(probs):
    print(f"{id2label[i]}: {prob:.4f}")
Model details

  • Model size: 184M parameters
  • Tensor type: F32 (stored as safetensors)