metadata
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5822
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: nomic-ai/modernbert-embed-base
widget:
- source_sentence: |-
members and the partners may not assume that all lawyers
associated with the firm will inevitably conform to the rules.
Subdivision (c) expresses a general principle of personal
responsibility for acts of another. See also rule 4-8.4(a).
Subdivision (c)(2) defines the duty of a partner or other lawyer
having comparable managerial authority in a law firm, as well as a
sentences:
- On what date did the CIA locate and release the document in part?
- >-
Can members and partners assume that all lawyers will conform to the
rules?
- >-
Where is the statement about Senetas's belief regarding DR's technology
located?
- source_sentence: >-
9 Galbally Dep. Tr. 56:20-22; 58:2-10 (“FDA approval to me was one of the
most
important aspects as to why I invested, and the fact that it was likely to
be approved at
some stage during 2017”); 98:7-9.
10 Galbally Dep. Tr. 20:11-24.
Senetas Corporation, Ltd. v. DeepRadiology Corporation
C.A. No. 2019-0170-PWG
July 30, 2019
4
sentences:
- What does the Agency offer to the potential requester?
- >-
On what date was the document titled 'Senetas Corporation, Ltd. v.
DeepRadiology Corporation C.A. No. 2019-0170-PWG' created?
- ¿Qué no logró establecer la parte apelada?
- source_sentence: |-
relacionados los tres señalamientos de error, los discutiremos de
forma conjunta.
KLAN202300916
18
Es la contención de la parte apelante que, al ser titular del
material audiovisual en controversia, ostenta todo el derecho de
desplegarlo en la Internet y en cualquier otro medio bajo la Ley de
Derecho de Autor federal (Copyright Act). Arguye que, el reclamo de
sentences:
- Which exhibit did the image displayed by the prosecutor come from?
- >-
¿Qué parte sostiene tener el derecho de desplegar el material
audiovisual?
- >-
Where in the Defendant's Reply can information about the 'hourly rate
build up' be found?
- source_sentence: >-
jurisdiction under subsection (b) in his reply brief. Generally, “[p]oints
not argued” in an appellant’s
initial brief “are forfeited and shall not be raised in the reply brief.”
Ill. S. Ct. R. 341(h)(7) (eff. Oct. 1,
2020).
9The statement of jurisdiction in defendant’s brief stated that the notice
was filed on May 17, 2022,
but this appears to be a typo.
- 8 -
sentences:
- >-
On what date did the statement of jurisdiction in defendant’s brief
claim the notice was filed?
- What do agencies use to make source selection decisions?
- >-
During which part of the proceeding is the plaintiffs' suggestion
referenced?
- source_sentence: >-
No. 11-445, ECF No. 52-1; id. Ex. B at 1, No. 11-445, ECF No. 52-1. On
December 8, 2009, the
plaintiff limited the scope of this request by notifying the CIA that it
could “limit [its] search for
requests submitted by Michael Ravnitzky to only requests submitted in 2006
and 2009” and that
it could “limit [its] search to the last four years in which requests were
received from [each]
sentences:
- How is the document listed in the Vaughn index?
- Whose requests did the CIA specifically limit its search to?
- >-
What court case is referenced alongside the American Immigration
Council?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: ModernBERT Embed base Legal Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.5703245749613601
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.624420401854714
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6924265842349304
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7743431221020093
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5703245749613601
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.5455950540958269
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.41452859350850085
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.24018547140649152
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.20556414219474498
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5334878928387429
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6562339000515199
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7582431736218445
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6719357607925999
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6179423959176661
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6566999443914384
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.5564142194744977
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6197836166924265
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6970633693972179
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7573415765069552
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5564142194744977
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.5358062854198866
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.41051004636785166
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.23693972179289027
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.20066975785677482
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.526275115919629
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6527563111798043
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7465224111282844
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.661632178488043
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6061688869262281
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6465145932511888
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.5409582689335394
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5765069551777434
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6537867078825348
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7279752704791345
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5409582689335394
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.508500772797527
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.3839258114374034
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.226275115919629
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.19410097887686759
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.4988408037094281
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6115404430705821
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7134209170530655
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6311397137767786
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5805567822182967
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6192535685391065
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.46986089644513135
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5146831530139103
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5857805255023184
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6646058732612056
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.46986089644513135
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.4487377640391551
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.34188562596599686
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.2054095826893354
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.16911385883565172
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.4416537867078825
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5471406491499228
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6505667181865017
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5655614028820953
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5130118250288265
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5535347867731277
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.3678516228748068
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.401854714064915
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.4775888717156105
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.5564142194744977
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.3678516228748068
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3498196805770222
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.2751159196290572
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.171097372488408
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.13111798042246264
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.3422205048943843
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.43469860896445134
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.5414734672849046
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4576624400779507
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.40745626947327096
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.4494322513495018
name: Cosine Map@100
# ModernBERT Embed base Legal Matryoshka
This is a sentence-transformers model finetuned from nomic-ai/modernbert-embed-base on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description
- Model Type: Sentence Transformer
- Base model: nomic-ai/modernbert-embed-base
- Maximum Sequence Length: 8192 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Training Dataset: json
- Language: en
- License: apache-2.0
### Model Sources

- Documentation: https://www.sbert.net
- Repository: https://github.com/UKPLab/sentence-transformers
- Hugging Face: https://huggingface.co/models?library=sentence-transformers

### Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
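The mean-pooling and normalization stages above can be sketched in plain numpy (an illustrative approximation of what the `Pooling` and `Normalize` modules do, not the library's actual implementation; the token embeddings and mask below are made up):

```python
import numpy as np

def mean_pool_and_normalize(token_embeddings, attention_mask):
    """Masked mean pooling over tokens, then L2 normalization,
    mirroring Pooling(mean_tokens=True) followed by Normalize()."""
    mask = attention_mask[:, :, None].astype(float)   # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(axis=1)    # (batch, dim)
    counts = mask.sum(axis=1).clip(min=1e-9)          # avoid div-by-zero on empty rows
    mean = summed / counts
    return mean / np.linalg.norm(mean, axis=1, keepdims=True)

# toy example: batch of 2 sequences, 4 tokens each, 8-dim token embeddings
rng = np.random.default_rng(0)
emb = rng.normal(size=(2, 4, 8))
mask = np.array([[1, 1, 1, 0], [1, 1, 0, 0]])  # padding positions masked out
sent = mean_pool_and_normalize(emb, mask)
print(sent.shape)  # (2, 8)
```

Masking before averaging matters: padding tokens must not dilute the sentence vector, and the final L2 normalization is what makes cosine similarity equivalent to a dot product.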
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("PhilLel/modernbert-embed-base-legal-matryoshka-2")
# Run inference
sentences = [
    'No. 11-445, ECF No. 52-1; id. Ex. B at 1, No. 11-445, ECF No. 52-1. On December 8, 2009, the \nplaintiff limited the scope of this request by notifying the CIA that it could “limit [its] search for \nrequests submitted by Michael Ravnitzky to only requests submitted in 2006 and 2009” and that \nit could “limit [its] search to the last four years in which requests were received from [each]',
    'Whose requests did the CIA specifically limit its search to?',
    'How is the document listed in the Vaughn index?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
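Because the model was trained with MatryoshkaLoss over dimensions 768/512/256/128/64, embeddings can be truncated to a smaller prefix and renormalized with modest quality loss. Sentence Transformers exposes this via the `truncate_dim` constructor argument; the underlying operation is just the following (numpy sketch, with a random vector standing in for a real model output):

```python
import numpy as np

def truncate_embedding(emb, dim):
    """Keep the first `dim` Matryoshka dimensions and renormalize,
    so cosine similarity stays meaningful at the smaller size."""
    small = emb[..., :dim]
    return small / np.linalg.norm(small, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
full = rng.normal(size=(3, 768))
full /= np.linalg.norm(full, axis=1, keepdims=True)  # stand-in for model.encode output

small = truncate_embedding(full, 256)
print(small.shape)  # (3, 256)

# Equivalent, done by the library at load time:
# model = SentenceTransformer("PhilLel/modernbert-embed-base-legal-matryoshka-2",
#                             truncate_dim=256)
```

Smaller dimensions trade retrieval quality for index size and speed; the per-dimension evaluation tables below quantify that trade-off for this model.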
## Evaluation

### Metrics

#### Information Retrieval (dim_768)

| Metric | Value |
|:-------|:------|
| cosine_accuracy@1 | 0.5703 |
| cosine_accuracy@3 | 0.6244 |
| cosine_accuracy@5 | 0.6924 |
| cosine_accuracy@10 | 0.7743 |
| cosine_precision@1 | 0.5703 |
| cosine_precision@3 | 0.5456 |
| cosine_precision@5 | 0.4145 |
| cosine_precision@10 | 0.2402 |
| cosine_recall@1 | 0.2056 |
| cosine_recall@3 | 0.5335 |
| cosine_recall@5 | 0.6562 |
| cosine_recall@10 | 0.7582 |
| cosine_ndcg@10 | 0.6719 |
| cosine_mrr@10 | 0.6179 |
| cosine_map@100 | 0.6567 |
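As a rough illustration of what these metrics measure: accuracy@k asks whether any relevant document appears in the top k results, precision@k is the fraction of the top k that is relevant, and recall@k is the fraction of all relevant documents that appear in the top k. A toy sketch with invented data:

```python
def accuracy_at_k(ranked, relevant, k):
    """1.0 if any relevant document appears in the top k, else 0.0."""
    return float(any(doc in relevant for doc in ranked[:k]))

def precision_at_k(ranked, relevant, k):
    """Fraction of the top k results that are relevant."""
    return sum(doc in relevant for doc in ranked[:k]) / k

def recall_at_k(ranked, relevant, k):
    """Fraction of all relevant documents found in the top k."""
    return sum(doc in relevant for doc in ranked[:k]) / len(relevant)

ranked = ["d3", "d1", "d7", "d2", "d9"]   # documents sorted by cosine similarity
relevant = {"d1", "d2", "d4"}             # ground-truth relevant set

print(accuracy_at_k(ranked, relevant, 3))   # 1.0 (d1 is in the top 3)
print(precision_at_k(ranked, relevant, 3))  # 0.333... (1 of the top 3 is relevant)
print(recall_at_k(ranked, relevant, 5))     # 0.666... (2 of 3 relevant docs found)
```

This also explains the pattern in the tables: recall@k rises with k (more of the relevant set is retrieved) while precision@k falls (the top k fills with non-relevant documents).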
#### Information Retrieval (dim_512)

| Metric | Value |
|:-------|:------|
| cosine_accuracy@1 | 0.5564 |
| cosine_accuracy@3 | 0.6198 |
| cosine_accuracy@5 | 0.6971 |
| cosine_accuracy@10 | 0.7573 |
| cosine_precision@1 | 0.5564 |
| cosine_precision@3 | 0.5358 |
| cosine_precision@5 | 0.4105 |
| cosine_precision@10 | 0.2369 |
| cosine_recall@1 | 0.2007 |
| cosine_recall@3 | 0.5263 |
| cosine_recall@5 | 0.6528 |
| cosine_recall@10 | 0.7465 |
| cosine_ndcg@10 | 0.6616 |
| cosine_mrr@10 | 0.6062 |
| cosine_map@100 | 0.6465 |
#### Information Retrieval (dim_256)

| Metric | Value |
|:-------|:------|
| cosine_accuracy@1 | 0.541 |
| cosine_accuracy@3 | 0.5765 |
| cosine_accuracy@5 | 0.6538 |
| cosine_accuracy@10 | 0.728 |
| cosine_precision@1 | 0.541 |
| cosine_precision@3 | 0.5085 |
| cosine_precision@5 | 0.3839 |
| cosine_precision@10 | 0.2263 |
| cosine_recall@1 | 0.1941 |
| cosine_recall@3 | 0.4988 |
| cosine_recall@5 | 0.6115 |
| cosine_recall@10 | 0.7134 |
| cosine_ndcg@10 | 0.6311 |
| cosine_mrr@10 | 0.5806 |
| cosine_map@100 | 0.6193 |
#### Information Retrieval (dim_128)

| Metric | Value |
|:-------|:------|
| cosine_accuracy@1 | 0.4699 |
| cosine_accuracy@3 | 0.5147 |
| cosine_accuracy@5 | 0.5858 |
| cosine_accuracy@10 | 0.6646 |
| cosine_precision@1 | 0.4699 |
| cosine_precision@3 | 0.4487 |
| cosine_precision@5 | 0.3419 |
| cosine_precision@10 | 0.2054 |
| cosine_recall@1 | 0.1691 |
| cosine_recall@3 | 0.4417 |
| cosine_recall@5 | 0.5471 |
| cosine_recall@10 | 0.6506 |
| cosine_ndcg@10 | 0.5656 |
| cosine_mrr@10 | 0.513 |
| cosine_map@100 | 0.5535 |
#### Information Retrieval (dim_64)

| Metric | Value |
|:-------|:------|
| cosine_accuracy@1 | 0.3679 |
| cosine_accuracy@3 | 0.4019 |
| cosine_accuracy@5 | 0.4776 |
| cosine_accuracy@10 | 0.5564 |
| cosine_precision@1 | 0.3679 |
| cosine_precision@3 | 0.3498 |
| cosine_precision@5 | 0.2751 |
| cosine_precision@10 | 0.1711 |
| cosine_recall@1 | 0.1311 |
| cosine_recall@3 | 0.3422 |
| cosine_recall@5 | 0.4347 |
| cosine_recall@10 | 0.5415 |
| cosine_ndcg@10 | 0.4577 |
| cosine_mrr@10 | 0.4075 |
| cosine_map@100 | 0.4494 |
## Training Details

### Training Dataset

#### json

- Dataset: json
- Size: 5,822 training samples
- Columns: `positive` and `anchor`
- Approximate statistics based on the first 1000 samples:

  | | positive | anchor |
  |:--|:--|:--|
  | type | string | string |
  | details | min: 28 tokens, mean: 96.98 tokens, max: 157 tokens | min: 8 tokens, mean: 16.79 tokens, max: 41 tokens |

- Samples:

  | positive | anchor |
  |:--|:--|
  | After the bench conference concluded, the following exchange occurred between the prosecutor and Mr. Zimmerman: [PROSECUTOR:] Did you watch this video in preparation? [MR. ZIMMERMAN:] Yes, I did. [PROSECUTOR:] Okay. And after seeing that video[,] was that a true and accurate depiction of the events that occurred that day? [MR. ZIMMERMAN:] Yes. | What was Mr. Zimmerman's response when asked if he watched the video in preparation? |
  | those guidelines still left a significant amount of ambiguity about “precisely what records [were] being requested.” Id. (internal quotation marks omitted). Notably, although the plaintiff limited the date range and number of reports requested, the plaintiff’s request would still place an unreasonable search burden for two primary reasons. First, the plaintiff’s guideline asking for | What aspect of the plaintiff's request is mentioned as limited? |
  | motion without prejudice and permit him to do the same. See Prop. of the People, Inc., 330 F. Supp. 3d at 390 (denying the parties’ motions without prejudice because the agency failed to submit sufficient information justifying its FOIA withholdings and permitting both parties to file renewed motions). Thus, it is hereby ORDERED that Defendant’s Motion for Summary Judgment, ECF | What were the parties allowed to do after their motions were denied? |
- Loss: MatryoshkaLoss with these parameters:

  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [768, 512, 256, 128, 64],
      "matryoshka_weights": [1, 1, 1, 1, 1],
      "n_dims_per_step": -1
  }
  ```
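Conceptually, MatryoshkaLoss re-applies the inner MultipleNegativesRankingLoss (in-batch negatives: cross-entropy over the cosine similarity matrix, with the matching positive on the diagonal) to each truncated prefix of the embeddings and sums the weighted results. A simplified numpy sketch, not the library implementation (`scale=20` mirrors the sentence-transformers default, and the random arrays stand in for encoder outputs):

```python
import numpy as np

def mnr_loss(anchors, positives, scale=20.0):
    """In-batch-negatives cross-entropy: anchor i should rank positive i
    above every other positive in the batch."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)                 # (batch, batch) scaled cosine scores
    m = scores.max(axis=1, keepdims=True)      # log-sum-exp stabilization
    log_probs = scores - (m + np.log(np.exp(scores - m).sum(axis=1, keepdims=True)))
    return -np.mean(np.diag(log_probs))        # diagonal entries are the true pairs

def matryoshka_loss(anchors, positives, dims=(768, 512, 256, 128, 64),
                    weights=(1, 1, 1, 1, 1)):
    """Weighted sum of the base loss over truncated embedding prefixes."""
    return sum(w * mnr_loss(anchors[:, :d], positives[:, :d])
               for d, w in zip(dims, weights))

rng = np.random.default_rng(0)
a = rng.normal(size=(8, 768))
loss_aligned = matryoshka_loss(a, a)                      # identical pairs
loss_random = matryoshka_loss(a, rng.normal(size=(8, 768)))
print(loss_aligned < loss_random)  # True
```

Because every prefix is penalized during training, the leading dimensions are forced to carry most of the ranking signal, which is what makes post-hoc truncation work.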
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
### Training Logs

| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 1.0 | 6 | - | 0.5702 | 0.5637 | 0.5165 | 0.4642 | 0.3672 |
| 1.7033 | 10 | 107.719 | - | - | - | - | - |
| 2.0 | 12 | - | 0.6308 | 0.6204 | 0.5816 | 0.5030 | 0.3945 |
| 3.0 | 18 | - | 0.6403 | 0.6286 | 0.5892 | 0.5124 | 0.3973 |
| 3.3516 | 20 | 58.188 | 0.6406 | 0.6285 | 0.5906 | 0.5135 | 0.3979 |
| 1.0 | 6 | - | 0.6590 | 0.6518 | 0.6151 | 0.5451 | 0.4307 |
| 1.7033 | 10 | 49.076 | - | - | - | - | - |
| 2.0 | 12 | - | 0.6696 | 0.6602 | 0.6247 | 0.5612 | 0.4497 |
| **3.0** | **18** | **-** | **0.6719** | **0.6616** | **0.6311** | **0.5656** | **0.4577** |
| 3.3516 | 20 | 36.707 | 0.6719 | 0.6616 | 0.6311 | 0.5656 | 0.4577 |

- The bold row denotes the saved checkpoint.
## Framework Versions
- Python: 3.10.12
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.7.0+cu126
- Accelerate: 1.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss

```bibtex
@misc{kusupati2024matryoshka,
    title = {Matryoshka Representation Learning},
    author = {Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year = {2024},
    eprint = {2205.13147},
    archivePrefix = {arXiv},
    primaryClass = {cs.LG}
}
```

#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title = {Efficient Natural Language Response Suggestion for Smart Reply},
    author = {Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year = {2017},
    eprint = {1705.00652},
    archivePrefix = {arXiv},
    primaryClass = {cs.CL}
}
```