Local Installation Video and Testing - Step by Step
#1 opened by fahdmirzac
Hi,
Kudos on producing such a sublime model. I did a local installation and testing video:
https://youtu.be/tMZSo21cIPs?si=SkGjJglyclwE7_jz
Thanks and regards,
Fahd
This may not be seen, but it would be great to see some Gemma3 models tuned for text-embedding benchmarks (e.g., the MTEB Leaderboard). In most of my LLM work I use embedding models like the Qwen3-Embedding series, but there are currently very few high-quality alternatives.
Thanks for the release :)
Hi @fahdmirzac,
Thanks for your interest and great suggestion! We're actively evaluating possible directions for fine-tuning, including for embedding use cases. Your input helps guide priorities — much appreciated!
For some odd reason I am getting stuck here:

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    temperature=0.7,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,
)
Anyone else having this issue?
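Not the original poster, but a note that may help: generate is often not actually hung, just slow, since sampled decoding of 256 tokens on CPU can take minutes for a multi-billion-parameter model. Passing a transformers TextStreamer via generate's streamer argument prints tokens as they are produced, which tells you whether decoding is progressing. To confirm a genuine hang, one option is to wrap the call with a timeout. Below is a minimal sketch of that pattern using only the standard library; fake_generate is a hypothetical placeholder standing in for the real model.generate(**inputs, ...) call.

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def run_with_timeout(fn, timeout_s):
    """Run fn() in a worker thread; raise RuntimeError if it does not
    finish within timeout_s seconds."""
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn)
    try:
        return future.result(timeout=timeout_s)
    except FutureTimeout:
        raise RuntimeError(f"call did not finish within {timeout_s}s")
    finally:
        # Do not block waiting on a possibly hung worker thread.
        # (A truly stuck thread is abandoned; this is a diagnostic
        # sketch, not a way to cancel generation.)
        pool.shutdown(wait=False)

# Hypothetical stand-in for: model.generate(**inputs, max_new_tokens=256, ...)
def fake_generate():
    time.sleep(0.1)  # pretend to decode for a moment
    return [101, 42, 102]

print(run_with_timeout(fake_generate, timeout_s=5))  # → [101, 42, 102]
```

If the timeout fires even with a generous limit, the call is likely blocked rather than slow, and it is worth checking that inputs are on the same device as the model and that an attention_mask was passed alongside the input IDs.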