Real-Time On-Device AI Agent with Polaris-4B: Run It Yourself, No Cloud, No Cost
We just deployed a real-time on-device AI agent using the Polaris-4B-Preview model, one of the top-performing open LLMs under 6B parameters on Hugging Face.
What's remarkable?
This model runs entirely on a mobile device, without cloud, and without any manual optimization. It was built using ZETIC.MLange, and the best part?
It's fully automated, free to use, and anyone can do it.
You don't need to write deployment code, tweak backends, or touch device-specific SDKs. Just upload your model, and ZETIC.MLange handles the rest.
About the Model
- Model: Polaris-4B-Preview
- Size: ~4B parameters
- Ranking: Top 3 on Hugging Face LLM Leaderboard (<6B)
- Inference: token-incremental (streaming) generation supported
- Modifications: none; stock weights, automatically optimized for mobile
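The "token-incremental inference" noted above is what makes on-device chat feel real-time: the runtime yields each token as soon as it is decoded instead of waiting for the full completion. Here is a minimal, self-contained sketch of that decode loop; the names (`generate_stream`, `toy_decode`) are illustrative stand-ins, not ZETIC.MLange's API or Polaris's actual decoder.

```python
from typing import Callable, Iterator, List

def generate_stream(prompt_tokens: List[int],
                    decode_step: Callable[[List[int]], int],
                    max_new_tokens: int = 32,
                    eos_token: int = 0) -> Iterator[int]:
    """Yield one token at a time from an autoregressive decode loop."""
    context = list(prompt_tokens)
    for _ in range(max_new_tokens):
        next_token = decode_step(context)   # one forward pass on-device
        if next_token == eos_token:
            break
        context.append(next_token)
        yield next_token                    # the UI can render this immediately

# Toy decode_step: echoes an incrementing token id (stands in for the model).
def toy_decode(context: List[int]) -> int:
    return context[-1] + 1

print(list(generate_stream([1, 2, 3], toy_decode, max_new_tokens=4)))
# -> [4, 5, 6, 7]
```

Because each token is yielded as soon as it exists, time-to-first-token, not total generation time, is what the user perceives as latency.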
What ZETIC.MLange Does
ZETIC.MLange is a fully automated deployment framework for on-device AI, built for AI engineers who want to focus on models, not infrastructure.
Here's what it does in minutes:
- Analyzes model structure
- Converts to a mobile-optimized format (e.g., GGUF, ONNX)
- Generates a runnable runtime environment with pre/post-processing
- Targets real mobile hardware (CPU, GPU, NPU, including Qualcomm, MediaTek, and Apple)
- Gives you a downloadable SDK or mobile app component, ready to run
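The conversion step above is where most of the size savings come from: mobile-optimized formats such as GGUF store weights as blocks of low-bit integer codes plus per-block scales rather than full-precision floats. As a rough illustration of the idea (a toy symmetric per-block int8 quantizer, not ZETIC.MLange's actual converter), consider:

```python
from typing import List, Tuple

def quantize_block(weights: List[float], bits: int = 8) -> Tuple[List[int], float]:
    """Symmetric per-block quantization: integer codes plus one float scale."""
    qmax = 2 ** (bits - 1) - 1                      # 127 for int8
    scale = max(abs(w) for w in weights) / qmax or 1.0
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize_block(codes: List[int], scale: float) -> List[float]:
    """Reconstruct approximate float weights from codes and scale."""
    return [c * scale for c in codes]

block = [0.12, -0.5, 0.33, 0.02]
codes, scale = quantize_block(block)
restored = dequantize_block(codes, scale)
max_err = max(abs(a - b) for a, b in zip(block, restored))
print(max_err < scale)   # reconstruction error stays within one quant step
```

Storing one byte per weight instead of four (plus a small scale per block) is roughly a 4x size reduction, which is what makes a ~4B-parameter model practical on a phone.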
And yes, this is available now, for free, at https://mlange.zetic.ai
For AI Engineers Like You
If you want to:
- Test LLMs directly on-device
- Run models offline with no network latency
- Avoid cloud GPU costs
- Deploy to mobile without writing app-side inference code
Then this is your moment. You can do exactly what we did, using your own models, all in a few clicks.
Start here: https://mlange.zetic.ai
Want to try Polaris-4B in your own app? Email [email protected], or just visit https://mlange.zetic.ai; it's open and free!
Great work @Chancy, @Zhihui, @tobiaslee!