Impulse2000/multilingual-e5-large-instruct-GGUF

This model was converted to GGUF format from intfloat/multilingual-e5-large-instruct using llama.cpp's convert_hf_to_gguf.py script. Refer to the original model card for more details on the model.

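For reference, the conversion can be reproduced with a short script. The following is a minimal sketch, assuming a local clone of llama.cpp (with its Python requirements installed) and huggingface_hub available; the output file name and the q8_0 output type are illustrative choices, not a record of the exact command used for this repository.

```python
# Minimal sketch of reproducing the GGUF conversion.
# Assumptions: llama.cpp is cloned at ./llama.cpp and its Python
# requirements are installed; huggingface_hub is installed.
import subprocess
from huggingface_hub import snapshot_download

# Download the original checkpoint from the Hugging Face Hub.
model_dir = snapshot_download("intfloat/multilingual-e5-large-instruct")

# Invoke llama.cpp's converter. The output file name and the
# q8_0 --outtype (use f16 for a 16-bit file) are illustrative.
subprocess.run(
    [
        "python",
        "llama.cpp/convert_hf_to_gguf.py",
        model_dir,
        "--outfile", "multilingual-e5-large-instruct-q8_0.gguf",
        "--outtype", "q8_0",
    ],
    check=True,
)
```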
Format: GGUF
Model size: 559M params
Architecture: bert
Available quantizations: 8-bit, 16-bit
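The quantized files can be used for embedding generation with, for example, llama-cpp-python. The sketch below is an assumption-heavy example: the GGUF file name is hypothetical (substitute the actual file from this repository), and the instruction-prefixed query format follows the original model card's guidance for E5 instruct models.

```python
# Minimal sketch: computing embeddings from a GGUF file with
# llama-cpp-python. The file name below is a hypothetical example;
# use the actual file name from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="multilingual-e5-large-instruct-q8_0.gguf",
    embedding=True,  # enable embedding mode
)

# Per the original model card, queries carry a task instruction,
# while documents are embedded as plain text.
task = "Given a web search query, retrieve relevant passages that answer the query"
query = f"Instruct: {task}\nQuery: how much protein should a female eat"

query_vec = llm.embed(query)  # embedding vector for the query
doc_vec = llm.embed("Protein needs vary with age, weight, and activity level.")
```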
