jp-parallel-gloss predicts the similarity of Japanese-to-English glosses (definitions). The model was fine-tuned on a dataset of over 550,000 parallel/non-parallel gloss pairs from the JMDict database, plus antonym/synonym pairs from WordNet. The base model is cross-encoder/ms-marco-MiniLM-L-6-v2.

This model is intended to be used as a CrossEncoder with a sigmoid activation function:

from sentence_transformers import CrossEncoder
from torch import nn

# Load the cross-encoder; the sigmoid activation maps raw logits to scores in [0, 1]
model = CrossEncoder("nphach/jp-parallel-gloss", default_activation_function=nn.Sigmoid())

# Score a single pair of English glosses for similarity
similarity = model.predict(['translation', 'meaning of a word in another language'])
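
predict also accepts a list of gloss pairs and returns one score per pair, which is convenient for ranking several candidate glosses against a reference. A minimal sketch, with illustrative glosses and variable names (not taken from the model card):

from sentence_transformers import CrossEncoder
from torch import nn

model = CrossEncoder("nphach/jp-parallel-gloss", default_activation_function=nn.Sigmoid())

# Illustrative example: compare one reference gloss against several candidates
reference = "meaning of a word in another language"
candidates = ["translation", "interpretation", "railway station"]

# predict returns one similarity score (0-1 after the sigmoid) per pair
scores = model.predict([(reference, candidate) for candidate in candidates])
for candidate, score in zip(candidates, scores):
    print(f"{candidate}: {score:.3f}")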

See its application in Kotoba Tag.

Model size: 22.7M parameters (F32, Safetensors)