# thoddnn/colqwen2.5-v0.2-mlx-8bit-test

The model `thoddnn/colqwen2.5-v0.2-mlx-8bit-test` was converted to MLX format from `thoddnn/colqwen2.5-v0.2-mlx` using mlx-lm version 0.0.3.

## Use with mlx

```bash
pip install mlx-embeddings
```
```python
from mlx_embeddings import load, generate
import mlx.core as mx

# Load the model together with its processor
model, processor = load("thoddnn/colqwen2.5-v0.2-mlx-8bit-test")

# For text embeddings
output = generate(model, processor, texts=["I like grapes", "I like fruits"])
embeddings = output.text_embeds  # Normalized embeddings

# Compute dot product between normalized embeddings
similarity_matrix = mx.matmul(embeddings, embeddings.T)

print("Similarity matrix between texts:")
print(similarity_matrix)
```
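Because the embeddings are L2-normalized, the dot product above is the cosine similarity, so the diagonal of the matrix (each text compared with itself) is 1. A minimal NumPy sketch, using made-up stand-in vectors rather than real model output, illustrates the same computation:

```python
import numpy as np

# Hypothetical embeddings (2 texts, 4 dimensions) standing in for model output
raw = np.array([[1.0, 2.0, 3.0, 4.0],
                [2.0, 1.0, 0.0, 1.0]])

# L2-normalize each row so dot products become cosine similarities
embeddings = raw / np.linalg.norm(raw, axis=1, keepdims=True)

# Same operation as mx.matmul(embeddings, embeddings.T) in the MLX snippet
similarity_matrix = embeddings @ embeddings.T

print(similarity_matrix)  # diagonal entries are 1.0 (each text vs. itself)
```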

Model size: 1.54B params (Safetensors; FP16 and U32 tensors)