INFO: The model is being continuously updated.
This model is a multilingual-e5-base model fine-tuned for semantic textual similarity.
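For semantic textual similarity, two sentences are encoded into vectors and scored by cosine similarity. The sketch below shows the scoring step on toy vectors; the model id in the comment is a placeholder, since the card does not state it, and note that E5-family models conventionally expect a "query: "/"passage: " prefix on inputs.

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy embeddings standing in for model outputs. In practice you would obtain
# them with sentence-transformers, e.g. (placeholder model id):
#   model = SentenceTransformer("<this-model-id>")
#   emb = model.encode(["query: Der Himmel ist blau", "query: The sky is blue"])
a = np.array([0.2, 0.7, 0.1])
b = np.array([0.2, 0.7, 0.1])
c = np.array([-0.7, 0.2, 0.0])

print(cosine_similarity(a, b))  # identical vectors → 1.0
print(cosine_similarity(a, c))  # orthogonal vectors → 0.0
```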
Model Training
The model was fine-tuned on the German subsets of paraphrase and semantic textual similarity datasets. The training procedure can be divided into two stages:
- training on paraphrase datasets with Multiple Negatives Ranking Loss
- training on semantic textual similarity datasets with Cosine Similarity Loss
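The first stage's Multiple Negatives Ranking Loss treats every other positive in a batch as a negative for a given anchor and applies softmax cross-entropy over the similarity matrix. This is a minimal numpy sketch of that objective (the actual training presumably used the sentence-transformers implementation; the `scale` value is an assumption matching that library's default):

```python
import numpy as np

def mnr_loss(anchors: np.ndarray, positives: np.ndarray, scale: float = 20.0) -> float:
    """Multiple Negatives Ranking Loss over a batch of (anchor, positive) pairs.

    Every other positive in the batch acts as an in-batch negative; the loss is
    softmax cross-entropy with the true pair on the diagonal.
    """
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = scale * (a @ p.T)                   # (batch, batch) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

rng = np.random.default_rng(0)
anchors = rng.normal(size=(8, 16))
aligned = anchors + 0.01 * rng.normal(size=(8, 16))  # positives close to their anchors
shuffled = aligned[::-1]                             # positives paired with wrong anchors

print(mnr_loss(anchors, aligned) < mnr_loss(anchors, shuffled))  # True
```

Correctly paired batches yield a much lower loss than mismatched ones, which is exactly the signal the loss optimizes.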
Results
The model achieves the following results:
- 0.920 on the stsb validation split
- 0.904 on the stsb test split
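The card does not name the metric; for STS benchmarks such scores are conventionally the Spearman rank correlation between predicted cosine similarities and gold similarity scores. A minimal, tie-free sketch of that computation:

```python
import numpy as np

def spearman(x, y) -> float:
    """Spearman rank correlation (no tie handling; sufficient for toy data)."""
    def rank(a):
        order = np.argsort(a)
        r = np.empty(len(a))
        r[order] = np.arange(len(a))
        return r
    rx, ry = rank(np.asarray(x, float)), rank(np.asarray(y, float))
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

# Hypothetical gold STS scores vs. model cosine similarities
gold = [0.1, 0.9, 0.5, 0.7]
pred = [0.05, 0.95, 0.40, 0.80]
print(round(spearman(gold, pred), 3))  # → 1.0 (identical ranking)
```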