Great job, Google! Now we need...
#2
by
yousef1727
- opened
For future development, we need embedding models optimized for RAG tasks, starting with something like gemma-3-270M-embedding.
Expanding the lineup with additional sizes — 270M, 500M, 700M, and beyond — would offer flexibility for different workloads and hardware constraints.
If Gemma had a broader range of sizes and task-specific variants, it could greatly enhance adoption and unlock new use cases.
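To illustrate the kind of workflow a small embedding model would enable, here's a minimal RAG-style retrieval sketch. The embedding function below is a toy bag-of-words stand-in (the suggested gemma-3-270M-embedding model doesn't exist yet); in practice you'd swap in a real embedding model, and the rest of the retrieval logic stays the same.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for an embedding model: hash words into a fixed-size
    bag-of-words vector, L2-normalised. A real pipeline would replace this
    with calls to an actual embedding model."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        word = word.strip(".,!?")
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# A tiny document store with pre-computed embeddings.
docs = [
    "Gemma is a family of lightweight open models from Google.",
    "RAG retrieves relevant documents before generating an answer.",
    "Embeddings map text to dense vectors for similarity search.",
]
doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the top-k documents by cosine similarity to the query."""
    q = embed(query)
    scores = doc_vecs @ q  # dot product == cosine, since vectors are unit-normalised
    top = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in top]

print(retrieve("dense vector similarity search with embeddings"))
```

The point of the smaller sizes suggested above is exactly this loop: embedding every document in a large store is the expensive step, so a 270M-class model could make on-device or CPU-only RAG practical where a multi-billion-parameter embedder wouldn't be.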
Hi @yousef1727 ,
Thanks for your interest and the great suggestion! We're actively evaluating possible directions for embedding models and RAG-related tasks with smaller models. Your input helps guide priorities as future Gemma models evolve, and it's much appreciated.
Thanks.