ModernBERT Embed base Legal Matryoshka

This is a sentence-transformers model finetuned from nomic-ai/modernbert-embed-base on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: nomic-ai/modernbert-embed-base
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json
  • Language: en
  • License: apache-2.0
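
These properties can be checked directly on the loaded model; a minimal sketch using standard SentenceTransformer accessors (the repository id is the one used in the Usage section below):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("ritesh-07/modernbert-embed-base-legal-matryoshka-2")
print(model.get_max_seq_length())                # 8192
print(model.get_sentence_embedding_dimension())  # 768
print(model.similarity_fn_name)                  # cosine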

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
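
The listed pipeline (ModernBERT encoder, mean pooling over non-padding tokens, then L2 normalization) can also be reproduced with the transformers library directly; a minimal sketch, assuming the transformer weights load with AutoModel as is typical for Sentence Transformers checkpoints:

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "ritesh-07/modernbert-embed-base-legal-matryoshka-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)

batch = tokenizer(
    ["What is subject to evaluation by the agency?"],
    padding=True, truncation=True, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 768)

# Mean pooling over non-padding tokens, then L2 normalization,
# mirroring the Pooling and Normalize modules listed above.
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 768])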

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("ritesh-07/modernbert-embed-base-legal-matryoshka-2")
# Run inference
sentences = [
    'Either way, the protégé firm’s project is subject to evaluation by the agency, and that project is \nassessed against the same evaluation criteria used to evaluate projects submitted by offerors \ngenerally.  As Plaintiffs’ counsel aptly stated during Oral Argument, the Solicitations’ terms offer \n“a distinction without a difference.”  Oral Arg. Tr. at 28:23–24.',
    'What is subject to evaluation by the agency?',
    'What does the court reject?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
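
Because the model was trained with a Matryoshka objective (see Training Details), embeddings can also be truncated to 512, 256, 128, or 64 dimensions, trading some retrieval quality (see Evaluation) for smaller indexes. Continuing from the snippet above, a minimal sketch using the truncate_dim argument:

# Re-load the model so it outputs 256-dimensional embeddings instead of 768
model_256 = SentenceTransformer(
    "ritesh-07/modernbert-embed-base-legal-matryoshka-2",
    truncate_dim=256,
)
embeddings_256 = model_256.encode(sentences)
print(embeddings_256.shape)
# [3, 256]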

Evaluation

Metrics

The five tables below report retrieval quality when the embeddings are truncated to each Matryoshka dimension (768, 512, 256, 128, and 64), matching the dim_* columns in the Training Logs.

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.5518
cosine_accuracy@3 0.5997
cosine_accuracy@5 0.6909
cosine_accuracy@10 0.7558
cosine_precision@1 0.5518
cosine_precision@3 0.5188
cosine_precision@5 0.3991
cosine_precision@10 0.2338
cosine_recall@1 0.1995
cosine_recall@3 0.5135
cosine_recall@5 0.6388
cosine_recall@10 0.7459
cosine_ndcg@10 0.6552
cosine_mrr@10 0.599
cosine_map@100 0.6391

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.5332
cosine_accuracy@3 0.5734
cosine_accuracy@5 0.6615
cosine_accuracy@10 0.7465
cosine_precision@1 0.5332
cosine_precision@3 0.4992
cosine_precision@5 0.3845
cosine_precision@10 0.2312
cosine_recall@1 0.1908
cosine_recall@3 0.4915
cosine_recall@5 0.6137
cosine_recall@10 0.7351
cosine_ndcg@10 0.6384
cosine_mrr@10 0.5798
cosine_map@100 0.6199

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.5162
cosine_accuracy@3 0.5487
cosine_accuracy@5 0.6306
cosine_accuracy@10 0.7017
cosine_precision@1 0.5162
cosine_precision@3 0.4812
cosine_precision@5 0.3675
cosine_precision@10 0.2181
cosine_recall@1 0.1835
cosine_recall@3 0.4713
cosine_recall@5 0.5862
cosine_recall@10 0.69
cosine_ndcg@10 0.6056
cosine_mrr@10 0.5548
cosine_map@100 0.5942

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.4297
cosine_accuracy@3 0.4606
cosine_accuracy@5 0.5564
cosine_accuracy@10 0.6569
cosine_precision@1 0.4297
cosine_precision@3 0.4065
cosine_precision@5 0.3221
cosine_precision@10 0.2017
cosine_recall@1 0.1491
cosine_recall@3 0.3934
cosine_recall@5 0.5098
cosine_recall@10 0.6349
cosine_ndcg@10 0.5347
cosine_mrr@10 0.4757
cosine_map@100 0.5206

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.3153
cosine_accuracy@3 0.3493
cosine_accuracy@5 0.4173
cosine_accuracy@10 0.5131
cosine_precision@1 0.3153
cosine_precision@3 0.3009
cosine_precision@5 0.2396
cosine_precision@10 0.157
cosine_recall@1 0.1127
cosine_recall@3 0.2952
cosine_recall@5 0.3794
cosine_recall@10 0.4929
cosine_ndcg@10 0.4074
cosine_mrr@10 0.3558
cosine_map@100 0.3996
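
These metric names follow the output format of Sentence Transformers' InformationRetrievalEvaluator. A minimal sketch of running such an evaluation on your own data (the queries, corpus, and relevance judgments below are illustrative placeholders, not the evaluation set behind the numbers above):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("ritesh-07/modernbert-embed-base-legal-matryoshka-2")

# Illustrative data: ids mapped to query text, passage text, and relevant passage ids
queries = {"q1": "What is subject to evaluation by the agency?"}
corpus = {"d1": "Either way, the protégé firm’s project is subject to evaluation by the agency ..."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_768")
results = evaluator(model)
print(results)  # includes keys such as "dim_768_cosine_ndcg@10"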

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 5,822 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:
    • positive: string; min 26 tokens, mean 97.05 tokens, max 160 tokens
    • anchor: string; min 8 tokens, mean 16.68 tokens, max 46 tokens
  • Samples (positive / anchor pairs):
    • positive: Martinez v. State. We explained that, in United States v. Vayner, 769 F.3d 125 (2d Cir. 2014), the Second Circuit had determined that Federal Rule of Evidence 901 “is satisfied if sufficient proof has been introduced so that a reasonable juror could find in favor of authenticity or identification.” Sublet, 442 Md. at 666, 113 A.3d at 715 (quoting Vayner,
      anchor: What Federal Rule of Evidence did the Second Circuit interpret in United States v. Vayner?
    • positive: was not a party, but which contained similar allegations to her complaint here.4 The seven-paragraph “Argument” section of defendant’s motion was divided equally between the two grounds, with the first paragraph quoting the statute, and the next three paragraphs arguing the first ground, and the following three paragraphs arguing the second ground. With respect to
      anchor: How is the 'Argument' section of the defendant's motion divided?
    • positive: 20 El derecho aplicable en el caso de epígrafe se remite al Código Civil de Puerto Rico de 1930, puesto que, la presentación de la Demanda y los hechos que dan base a esta tuvieron su lugar antes de la aprobación del nuevo Código Civil de Puerto Rico, Ley 55-2020, según enmendado. KLAN202300916 14 cumplimiento de los contratos, y no debemos relevar a una parte del
      anchor: ¿Cuál es el número del documento judicial mencionado en el extracto?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
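
A minimal sketch of how this loss configuration can be constructed with the Sentence Transformers loss classes (the training script itself is not part of this card):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# Wrap MultipleNegativesRankingLoss so the loss is applied at every Matryoshka dimension
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,
)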
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: False
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
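
A hedged sketch of expressing these settings with SentenceTransformerTrainingArguments (the output_dir and save_strategy values are assumptions, the latter so that load_best_model_at_end works with epoch-based evaluation; all other values are taken from the list above):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="modernbert-embed-base-legal-matryoshka-2",  # assumed output path
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed; required when load_best_model_at_end=True
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=False,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)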

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: False
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

All dim_* columns report cosine_ndcg@10.

Epoch   Step  Training Loss  dim_768  dim_512  dim_256  dim_128  dim_64
0.8791  10    5.6072         -        -        -        -        -
1.0     12    -              0.5880   0.5784   0.5408   0.4667   0.3408
1.7033  20    2.5041         -        -        -        -        -
2.0     24    -              0.6403   0.6249   0.5903   0.5162   0.3884
2.5275  30    1.8714         -        -        -        -        -
3.0     36    -              0.6550   0.6347   0.6034   0.5320   0.4023
3.3516  40    1.524          -        -        -        -        -
4.0     48    -              0.6552   0.6384   0.6056   0.5347   0.4074
  • The saved checkpoint corresponds to the epoch 4.0 (step 48) row, whose scores match the evaluation metrics reported above.

Framework Versions

  • Python: 3.11.13
  • Sentence Transformers: 4.1.0
  • Transformers: 4.53.2
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.8.1
  • Datasets: 4.0.0
  • Tokenizers: 0.21.2
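
For a closer reproduction of this environment, the library versions above can be pinned at install time, for example:

pip install "sentence-transformers==4.1.0" "transformers==4.53.2" "datasets==4.0.0" "accelerate==1.8.1" "tokenizers==0.21.2"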

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}