base_model: sentence-transformers/all-MiniLM-L12-v2
library_name: sentence-transformers
metrics:
  - pearson_cosine
  - spearman_cosine
  - pearson_manhattan
  - spearman_manhattan
  - pearson_euclidean
  - spearman_euclidean
  - pearson_dot
  - spearman_dot
  - pearson_max
  - spearman_max
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:100000
  - loss:CosineSimilarityLoss
widget:
  - source_sentence: >-
      NIPA personal income includes pension contributions by employers in the
      year income is earned , and benefits paid at retirement are not a
      component of NIPA income .
    sentences:
      - >-
        While not the only makeup of income , NIPA is one of the more well known
        income distinctions .
      - >-
        Les temples de karnak et de Louxor ont été démolis pour faire place à
        des projets de construction en Cisjordanie .
      - >-
        Les restaurants sont tenus à des règles strictes pour contenir leur
        odeur .
  - source_sentence: >-
      right right you know the one that 's one reason we bought a house here in
      Plano we were hoping you know well the school district 's gonna be good
      you know for resale value and so on and so forth but
    sentences:
      - We moved to Plano because we thought the school district was good .
      - These and those .
      - >-
        L' obsession a suscité une suggestion que tous étaient des boucs
        émissaires de la guerre .
  - source_sentence: >-
      Dans l' homme invisible , le talentueux dixième narrateur doit surmonter
      non seulement les différentes idéologies qui lui sont présentées comme
      masques ou subversions d' identité , mais aussi les différents rôles et
      prescriptions pour le leadership que sa propre race lui souhaite de
      réaliser .
    sentences:
      - '" We ''re too uptight now ! " Said Tommy'
      - Le talentueux dixième narrateur doit surmonter les idéologies .
      - >-
        Saddam is not taking advantage of the current Arab love towards the
        United States
  - source_sentence: >-
      Les lacunes trouvées au cours de la surveillance en cours ou au moyen d'
      évaluations distinctes devraient être communiquées à l' individu
      responsable de la fonction et à au moins un niveau de gestion au-dessus de
      cet individu .
    sentences:
      - L' économie diminuera également si les conditions du marché changent .
      - The Watergate comparison wasn 't just for Democratic bashing .
      - Il n' y a pas lieu de signaler les lacunes .
  - source_sentence: >-
      it looks fertile and it it um i mean it rains enough they have the climate
      and the rain and if not it 's like i 've been to Saint Thomas and it just
      starts from the ocean up
    sentences:
      - Il n' a jamais triché .
      - They don 't know how to do it .
      - >-
        They have the rain and the climate so I imagine the lands would be
        fertile .
model-index:
  - name: SentenceTransformer based on sentence-transformers/all-MiniLM-L12-v2
    results:
      - task:
          type: semantic-similarity
          name: Semantic Similarity
        dataset:
          name: snli dev
          type: snli-dev
        metrics:
          - type: pearson_cosine
            value: 0.3725313255221131
            name: Pearson Cosine
          - type: spearman_cosine
            value: 0.3729470854776107
            name: Spearman Cosine
          - type: pearson_manhattan
            value: 0.3650227128515394
            name: Pearson Manhattan
          - type: spearman_manhattan
            value: 0.37250760289182383
            name: Spearman Manhattan
          - type: pearson_euclidean
            value: 0.36567325497563746
            name: Pearson Euclidean
          - type: spearman_euclidean
            value: 0.37294699995093694
            name: Spearman Euclidean
          - type: pearson_dot
            value: 0.3725313190046259
            name: Pearson Dot
          - type: spearman_dot
            value: 0.3729474276296007
            name: Spearman Dot
          - type: pearson_max
            value: 0.3725313255221131
            name: Pearson Max
          - type: spearman_max
            value: 0.3729474276296007
            name: Spearman Max

SentenceTransformer based on sentence-transformers/all-MiniLM-L12-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L12-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L12-v2
  • Maximum Sequence Length: 128 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: https://sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers
  • Hugging Face: https://huggingface.co/models?library=sentence-transformers

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
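
For orientation, the sketch below assembles the same three modules by hand with the sentence-transformers models API. It is illustrative only; loading the published checkpoint as shown in the Usage section is the normal route.

from sentence_transformers import SentenceTransformer, models

# Transformer module: BertModel backbone with max_seq_length 128
word_embedding_model = models.Transformer(
    "sentence-transformers/all-MiniLM-L12-v2", max_seq_length=128
)
# Mean pooling over token embeddings (word embedding dimension 384)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True,
)
# L2 normalization, so cosine similarity equals the dot product
normalize = models.Normalize()

model = SentenceTransformer(modules=[word_embedding_model, pooling_model, normalize])
print(model)  # should mirror the architecture listing above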

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("cherifkhalifah/finetuned-snli-MiniLM-L12-v2-100k-en-fr")
# Run inference
sentences = [
    "it looks fertile and it it um i mean it rains enough they have the climate and the rain and if not it 's like i 've been to Saint Thomas and it just starts from the ocean up",
    'They have the rain and the climate so I imagine the lands would be fertile .',
    "They don 't know how to do it .",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
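
The same embeddings can be reused for the semantic-search use case mentioned above. The snippet below is a small sketch: the query and corpus strings are made up for illustration, and model.similarity applies this model's cosine similarity function.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("cherifkhalifah/finetuned-snli-MiniLM-L12-v2-100k-en-fr")

# Hypothetical corpus and query for illustration
corpus = [
    "They have the rain and the climate so I imagine the lands would be fertile .",
    "They don 't know how to do it .",
    "Les restaurants sont tenus à des règles strictes pour contenir leur odeur .",
]
query = "The climate and the rain make the land fertile ."

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# Cosine scores between the query and each corpus sentence, shape [1, 3]
scores = model.similarity(query_embedding, corpus_embeddings)
best = scores.argmax().item()
print(corpus[best], scores[0, best].item())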

Evaluation

Metrics

Semantic Similarity

Metric Value
pearson_cosine 0.3725
spearman_cosine 0.3729
pearson_manhattan 0.365
spearman_manhattan 0.3725
pearson_euclidean 0.3657
spearman_euclidean 0.3729
pearson_dot 0.3725
spearman_dot 0.3729
pearson_max 0.3725
spearman_max 0.3729
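
Metrics with these names are produced by the library's EmbeddingSimilarityEvaluator, here run on the snli-dev split. Below is a minimal sketch of running such an evaluation; the pairs and gold scores are placeholders, not the actual development data.

from sentence_transformers import SentenceTransformer, SimilarityFunction
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("cherifkhalifah/finetuned-snli-MiniLM-L12-v2-100k-en-fr")

# Placeholder (sentence1, sentence2, gold score) triples
sentences1 = ["And he sounded sincere .", "Natalia M' a regardé ."]
sentences2 = ["He sounded sincere .", "Il n' a jamais triché ."]
gold_scores = [1.0, 0.0]

evaluator = EmbeddingSimilarityEvaluator(
    sentences1, sentences2, gold_scores,
    main_similarity=SimilarityFunction.COSINE,
    name="snli-dev",
)
results = evaluator(model)
print(results)  # keys such as snli-dev_pearson_cosine, snli-dev_spearman_cosine, ...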

Training Details

Training Dataset

Unnamed Dataset

  • Size: 100,000 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 1000 samples:
    • sentence_0: string; min: 5 tokens, mean: 35.27 tokens, max: 128 tokens
    • sentence_1: string; min: 5 tokens, mean: 18.46 tokens, max: 66 tokens
    • label: float; min: 0.0, mean: 0.5, max: 1.0
  • Samples:
    • sentence_0: "Natalia M' a regardé ." | sentence_1: "Natalia a regardé et attend que je lui donne l' épée ." | label: 0.5
    • sentence_0: "And he sounded sincere ." | sentence_1: "He sounded sincere.He was sounding sincere in his words ." | label: 0.0
    • sentence_0: "There 's a small zoo area where you can see snakes , lizards , birds of prey , wolves , hyenas , foxes , and various desert cats , including cheetahs and leopards ." | sentence_1: "The zoo is home to some endangered desert animals ." | label: 0.5
  • Loss: CosineSimilarityLoss with these parameters (a training sketch follows below):
    {
        "loss_fct": "torch.nn.modules.loss.MSELoss"
    }
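
As a minimal sketch, a dataset with this column layout and CosineSimilarityLoss, which fits the cosine similarity of the two sentence embeddings to the float label with the MSE loss listed above, could be wired up with SentenceTransformerTrainer as follows; the two rows are placeholders for the 100,000 training pairs.

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")

# Two placeholder rows with the (sentence_0, sentence_1, label) columns described above
train_dataset = Dataset.from_dict({
    "sentence_0": ["And he sounded sincere .", "Natalia M' a regardé ."],
    "sentence_1": ["He sounded sincere .", "Natalia a regardé et attend que je lui donne l' épée ."],
    "label": [0.0, 0.5],
})

# Cosine similarity between the two embeddings is regressed onto the label via MSE
loss = CosineSimilarityLoss(model)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()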
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 4
  • fp16: True
  • multi_dataset_batch_sampler: round_robin
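
The non-default values above can be expressed as SentenceTransformerTrainingArguments and passed to the trainer sketched earlier; the output_dir below is a hypothetical path, not taken from this card.

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned-snli-MiniLM-L12-v2-100k-en-fr",  # hypothetical output path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=4,
    fp16=True,
    multi_dataset_batch_sampler="round_robin",
)
# Then: SentenceTransformerTrainer(model=model, args=args, train_dataset=..., loss=loss)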

All Hyperparameters

Click to expand
  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step Training Loss snli-dev_spearman_max
0.08 500 0.2008 0.0433
0.16 1000 0.1757 0.1024
0.24 1500 0.1732 0.1503
0.32 2000 0.1685 0.2168
0.4 2500 0.1702 0.2206
0.48 3000 0.1676 0.2117
0.56 3500 0.1637 0.2624
0.64 4000 0.1636 0.2169
0.72 4500 0.1608 0.0051
0.8 5000 0.1601 0.2236
0.88 5500 0.1597 0.2471
0.96 6000 0.1596 0.2934
1.0 6250 - 0.2905
1.04 6500 0.1602 0.3001
1.12 7000 0.1571 0.3116
1.2 7500 0.1588 0.3145
1.28 8000 0.1562 0.3304
1.3600 8500 0.1548 0.3376
1.44 9000 0.156 0.3359
1.52 9500 0.1552 0.3194
1.6 10000 0.153 0.3474
1.6800 10500 0.1529 0.3220
1.76 11000 0.1518 0.3255
1.8400 11500 0.1499 0.3332
1.92 12000 0.1524 0.3521
2.0 12500 0.1512 0.3425
2.08 13000 0.1514 0.3462
2.16 13500 0.1516 0.3414
2.24 14000 0.1532 0.3453
2.32 14500 0.1459 0.3699
2.4 15000 0.1524 0.3576
2.48 15500 0.1506 0.3418
2.56 16000 0.1488 0.3559
2.64 16500 0.1486 0.3597
2.7200 17000 0.1469 0.3552
2.8 17500 0.1448 0.3459
2.88 18000 0.1458 0.3503
2.96 18500 0.1468 0.3647
3.0 18750 - 0.3611
3.04 19000 0.1472 0.3741
3.12 19500 0.1457 0.3603
3.2 20000 0.147 0.3576
3.2800 20500 0.1451 0.3663
3.36 21000 0.1438 0.3734
3.44 21500 0.1471 0.3698
3.52 22000 0.1462 0.3646
3.6 22500 0.1436 0.3740
3.68 23000 0.1441 0.3696
3.76 23500 0.1423 0.3636
3.84 24000 0.1411 0.3713
3.92 24500 0.1438 0.3706
4.0 25000 0.1421 0.3729

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.1.1
  • Transformers: 4.44.2
  • PyTorch: 2.4.1+cu121
  • Accelerate: 0.34.2
  • Datasets: 3.0.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}