For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our GitHub repository.

Usage

📌 Tip: For NV-Embed-V2, Transformers versions newer than 4.47.0 may degrade performance, because model_type=bidir_mistral in config.json is no longer supported. We therefore recommend pinning Transformers to 4.47.0.
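
A minimal environment setup might look like the line below. The transformers pin is the documented requirement above; the sentence-transformers floor is our assumption based on SparseEncoder only being available in recent (v5.0+) releases, and mteb is needed for the evaluation snippet:

pip install "transformers==4.47.0" "sentence-transformers>=5.0" mteb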

Sentence Transformers Usage

You can evaluate this model, loaded with Sentence Transformers, on SciFact using the following code snippet:

import mteb
from sentence_transformers import SparseEncoder

# Load the CSR sparse encoder; trust_remote_code is required for the custom NV-Embed backbone
model = SparseEncoder(
    "Y-Research-Group/CSR-NV_Embed_v2-Retrieval-SciFACT",
    trust_remote_code=True
)
model.prompts = {
    "SciFact-query": "Instruct: Given a scientific claim, retrieve documents that support or refute the claim\nQuery:"
}
task = mteb.get_tasks(tasks=["SciFact"])
evaluation = mteb.MTEB(tasks=task)
evaluation.run(
    model,
    eval_splits=["test"],
    output_folder="./results/SciFact",
    show_progress_bar=True,
    encode_kwargs={"convert_to_sparse_tensor": False, "batch_size": 8},
)  # MTEB doesn't support sparse tensors yet, so we convert to dense tensors
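
Beyond MTEB evaluation, the model can also be used for ad-hoc retrieval. The sketch below relies on the standard SparseEncoder API from Sentence Transformers (encode and similarity); the example claim and documents are illustrative only, not from the SciFact dataset:

from sentence_transformers import SparseEncoder

model = SparseEncoder(
    "Y-Research-Group/CSR-NV_Embed_v2-Retrieval-SciFACT",
    trust_remote_code=True
)

# Queries use the instruction prefix shown above; documents are encoded without a prompt.
query_prompt = "Instruct: Given a scientific claim, retrieve documents that support or refute the claim\nQuery:"
queries = ["High-dose vitamin C reduces the duration of the common cold."]  # hypothetical claim
documents = [
    "A randomized trial found no significant effect of vitamin C on cold duration.",
    "Sparse coding yields adaptive representations for retrieval.",
]

query_emb = model.encode(queries, prompt=query_prompt)
doc_emb = model.encode(documents)

# Dot-product similarity between sparse embeddings; higher scores mean more relevant documents.
scores = model.similarity(query_emb, doc_emb)
print(scores)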

Citation

@inproceedings{wenbeyond,
  title={Beyond Matryoshka: Revisiting Sparse Coding for Adaptive Representation},
  author={Wen, Tiansheng and Wang, Yifei and Zeng, Zequn and Peng, Zhong and Su, Yudi and Liu, Xinyang and Chen, Bo and Liu, Hongwei and Jegelka, Stefanie and You, Chenyu},
  booktitle={Forty-second International Conference on Machine Learning},
  year={2025}
}