# LocalBrain Gemma 2B Model
This is a Gemma 2B model optimized for mobile devices, designed for the LocalBrain app.
## Model Details
- Model: Google Gemma 2B
- Format: TensorFlow Lite
- Size: ~2.5GB
- Purpose: Offline AI chat on mobile devices
- App: LocalBrain - Private AI Assistant
## Usage
This model is specifically packaged for the LocalBrain mobile app, which provides:
- 100% offline AI conversations
- Complete privacy (data never leaves device)
- Fast local processing
- No API calls required
## Download

The model file `brain-model.bin` can be downloaded directly for use in the LocalBrain app.
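At ~2.5 GB, a partial or interrupted download is easy to mistake for a complete one. The sketch below is a minimal, stdlib-only sanity check for the downloaded file; the `2 * 1024**3` lower bound is an assumption derived from the "~2.5GB" figure in this card, not an exact published size.

```python
import os

# Assumed lower bound based on the "~2.5GB" size stated in this model card.
EXPECTED_MIN_BYTES = 2 * 1024**3

def looks_complete(path: str, min_bytes: int = EXPECTED_MIN_BYTES) -> bool:
    """Cheap sanity check that a download finished: the file exists and is
    at least min_bytes long. Not a substitute for a published checksum."""
    return os.path.isfile(path) and os.path.getsize(path) >= min_bytes

if __name__ == "__main__":
    # "brain-model.bin" is the filename given in this card.
    print(looks_complete("brain-model.bin"))
```

If the repository publishes a SHA-256 checksum, verifying it with `hashlib.sha256` is a stronger check than a size bound.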
## License
This model is released under the Apache 2.0 license, following Google's Gemma licensing terms.