Post 3665 — Google releases Gemma 4. ✨ Gemma 4 introduces 4 models: E2B, E4B, 26B-A4B, and 31B. The multimodal reasoning models are under Apache 2.0. Run E2B and E4B on ~6 GB RAM, and on phones. Run 26B-A4B and 31B on ~18 GB. GGUFs: https://huggingface.co/collections/unsloth/gemma-4 Guide: https://unsloth.ai/docs/models/gemma-4
Article — Alyah ⭐️: Toward Robust Evaluation of Emirati Dialect Capabilities in Arabic LLMs (Jan 27)
Post 5212 — We collaborated with Hugging Face to enable you to train MoE models 12× faster with 35% less VRAM via our new Triton kernels (no accuracy loss). 🤗 Train gpt-oss locally on 12.8 GB VRAM with our free notebooks: https://unsloth.ai/docs/new/faster-moe