sarvam-m GGUF Models
Model Generation Details
This model was generated using llama.cpp at commit `f5cd27b7`.
Ultra-Low-Bit Quantization with IQ-DynamicGate (1-2 bit)
Our latest quantization method introduces precision-adaptive quantization for ultra-low-bit models (1-2 bit), with benchmark-proven improvements on Llama-3-8B. This approach uses layer-specific strategies to preserve accuracy while maintaining extreme memory efficiency.
Benchmark Context
All tests conducted on Llama-3-8B-Instruct using:
- Standard perplexity evaluation pipeline
- 2048-token context window
- Same prompt set across all quantizations
Method
- Dynamic Precision Allocation (sketched in the code below):
  - First/Last 25% of layers → IQ4_XS (selected layers)
  - Middle 50% of layers → IQ2_XXS/IQ3_S (increases efficiency)
- Critical Component Protection:
  - Embeddings/output layers use Q5_K
  - Reduces error propagation by 38% vs. standard 1-2 bit quantization
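The allocation rule above can be summarized as a mapping from layer position (and tensor role) to a quant type. The following is a conceptual Python sketch of that policy, not the actual llama.cpp implementation; the function name is made up for illustration, and the middle layers are shown with IQ2_XXS only (the real method may also pick IQ3_S for some of them).

```python
def dynamicgate_quant_for_tensor(name: str, layer_idx: int, n_layers: int) -> str:
    """Illustrative DynamicGate-style precision allocation (conceptual sketch)."""
    # Critical component protection: embeddings and output head get Q5_K.
    if name in ("token_embd.weight", "output.weight"):
        return "Q5_K"
    # Position of this layer in the stack, from 0.0 (first) to 1.0 (last).
    position = layer_idx / max(n_layers - 1, 1)
    # First/last 25% of layers keep more bits; the middle 50% is squeezed harder.
    if position < 0.25 or position > 0.75:
        return "IQ4_XS"
    return "IQ2_XXS"

# Example: print the plan for a 32-layer model such as Llama-3-8B.
for i in range(32):
    print(i, dynamicgate_quant_for_tensor(f"blk.{i}.attn_q.weight", i, 32))
```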
Quantization Performance Comparison (Llama-3-8B)
Quantization | Standard PPL | DynamicGate PPL | Δ PPL | Std Size | DG Size | Δ Size | Std Speed | DG Speed |
---|---|---|---|---|---|---|---|---|
IQ2_XXS | 11.30 | 9.84 | -12.9% | 2.5G | 2.6G | +0.1G | 234s | 246s |
IQ2_XS | 11.72 | 11.63 | -0.8% | 2.7G | 2.8G | +0.1G | 242s | 246s |
IQ2_S | 14.31 | 9.02 | -36.9% | 2.7G | 2.9G | +0.2G | 238s | 244s |
IQ1_M | 27.46 | 15.41 | -43.9% | 2.2G | 2.5G | +0.3G | 206s | 212s |
IQ1_S | 53.07 | 32.00 | -39.7% | 2.1G | 2.4G | +0.3G | 184s | 209s |
Key:
- PPL = Perplexity (lower is better)
- Δ PPL = Percentage change from standard to DynamicGate
- Speed = Inference time (CPU avx2, 2048-token context)
- Size differences reflect mixed quantization overhead
Key Improvements:
- IQ1_M shows a massive 43.9% perplexity reduction (27.46 → 15.41)
- IQ2_S cuts perplexity by 36.9% while adding only 0.2GB
- IQ1_S maintains 39.7% better accuracy despite 1-bit quantization
Tradeoffs:
- All variants have modest size increases (0.1-0.3GB)
- Inference speeds remain comparable (<5% difference)
When to Use These Models
📌 Fitting models into GPU VRAM
✔ Memory-constrained deployments
✔ CPU and edge devices where 1-2 bit errors can be tolerated (see the sketch below)
✔ Research into ultra-low-bit quantization
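For the CPU/edge case, a GGUF file from this repo can be loaded with the llama-cpp-python bindings. A minimal sketch, assuming you have downloaded one of the quantized files listed further down; the `n_ctx` and `n_threads` values are illustrative choices, not requirements:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a small quant for memory-constrained CPU/edge hardware.
llm = Llama(
    model_path="sarvam-m-iq3_xs.gguf",  # swap in whichever quantized file you downloaded
    n_ctx=2048,    # same context length used in the benchmarks above
    n_threads=4,   # tune to your CPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain 2-bit quantization in one paragraph."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```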
Choosing the Right Model Format
Selecting the correct model format depends on your hardware capabilities and memory constraints.
BF16 (Brain Float 16) - Use if BF16 acceleration is available
- A 16-bit floating-point format designed for faster computation while retaining good precision.
- Provides similar dynamic range as FP32 but with lower memory usage.
- Recommended if your hardware supports BF16 acceleration (check your device's specs).
- Ideal for high-performance inference with reduced memory footprint compared to FP32.
📌 Use BF16 if:
✔ Your hardware has native BF16 support (e.g., newer GPUs, TPUs); a quick check is sketched below.
✔ You want higher precision while saving memory.
✔ You plan to requantize the model into another format.
📌 Avoid BF16 if:
❌ Your hardware does not support BF16 (it may fall back to FP32 and run slower).
❌ You need compatibility with older devices that lack BF16 optimization.
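If you are unsure about BF16 support, PyTorch can report it for CUDA GPUs. A minimal sketch (other backends expose this differently, and CPUs with AVX512-BF16/AMX need a separate check):

```python
import torch

# Rough capability check before choosing the BF16 files (CUDA GPUs only).
if torch.cuda.is_available() and torch.cuda.is_bf16_supported():
    print("Native BF16 available: the BF16 variants are a good fit.")
else:
    print("No native BF16 detected: prefer F16 or a quantized variant.")
```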
F16 (Float 16) - More widely supported than BF16
- A 16-bit floating-point format with high precision but a smaller range of values than BF16.
- Works on most devices with FP16 acceleration support (including many GPUs and some CPUs).
- Slightly lower numerical precision than BF16 but generally sufficient for inference.
📌 Use F16 if:
✔ Your hardware supports FP16 but not BF16.
✔ You need a balance between speed, memory usage, and accuracy.
✔ You are running on a GPU or another device optimized for FP16 computations.
📌 Avoid F16 if:
❌ Your device lacks native FP16 support (it may run slower than expected).
❌ You have memory limitations.
Quantized Models (Q4_K, Q6_K, Q8, etc.) - For CPU & Low-VRAM Inference
Quantization reduces model size and memory usage while maintaining as much accuracy as possible.
- Lower-bit models (Q4_K) β Best for minimal memory usage, may have lower precision.
- Higher-bit models (Q6_K, Q8_0) β Better accuracy, requires more memory.
📌 Use Quantized Models if:
✔ You are running inference on a CPU and need an optimized model.
✔ Your device has low VRAM and cannot load full-precision models (see the VRAM check sketched below).
✔ You want to reduce memory footprint while keeping reasonable accuracy.
📌 Avoid Quantized Models if:
❌ You need maximum accuracy (full-precision models are better for this).
❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16).
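As a rough, non-authoritative guide, you can compare free VRAM against the approximate sizes of the quantized files before picking a level. The thresholds below are illustrative assumptions for a ~24B-parameter model, not official recommendations:

```python
import torch

def suggest_quant() -> str:
    """Very rough heuristic: match the quant level to free VRAM (CUDA only)."""
    if not torch.cuda.is_available():
        return "Q4_K"  # CPU-only boxes usually want the smaller quants
    free_bytes, _total_bytes = torch.cuda.mem_get_info()
    free_gb = free_bytes / 1e9
    if free_gb > 30:
        return "Q8_0"  # room for the most accurate quantized variant
    if free_gb > 22:
        return "Q6_K"
    return "Q4_K"

print("Suggested quant:", suggest_quant())
```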
Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)
These models are optimized for extreme memory efficiency, making them ideal for low-power devices or large-scale deployments where memory is a critical constraint.
IQ3_XS: Ultra-low-bit quantization (3-bit) with extreme memory efficiency.
- Use case: Best for ultra-low-memory devices where even Q4_K is too large.
- Trade-off: Lower accuracy compared to higher-bit quantizations.
IQ3_S: Small block size for maximum memory efficiency.
- Use case: Best for low-memory devices where IQ3_XS is too aggressive.
IQ3_M: Medium block size for better accuracy than IQ3_S.
- Use case: Suitable for low-memory devices where IQ3_S is too limiting.
Q4_K: 4-bit quantization with block-wise optimization for better accuracy.
- Use case: Best for low-memory devices where Q6_K is too large.
Q4_0: Pure 4-bit quantization, optimized for ARM devices.
- Use case: Best for ARM-based devices or low-memory environments.
Summary Table: Model Format Selection
Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
---|---|---|---|---|
BF16 | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory |
F16 | High | High | FP16-supported devices | GPU inference when BF16 isn't available |
Q4_K | Medium-Low | Low | CPU or low-VRAM devices | Best for memory-constrained environments |
Q6_K | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized |
Q8_0 | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |
IQ3_XS | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency at reduced accuracy |
Q4_0 | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices |
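Once you have picked a format from the table, you can fetch just that file from this repo with `huggingface_hub` instead of cloning everything. A minimal sketch; substitute whichever filename from the next section you actually want:

```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file from the repo.
path = hf_hub_download(
    repo_id="Mungert/sarvam-m-GGUF",
    filename="sarvam-m-q4_k.gguf",  # pick the variant that matches your hardware
)
print("Saved to:", path)
```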
Included Files & Details
sarvam-m-bf16.gguf
- Model weights preserved in BF16.
- Use this if you want to requantize the model into a different format.
- Best if your device supports BF16 acceleration.
sarvam-m-f16.gguf
- Model weights stored in F16.
- Use if your device supports FP16, especially if BF16 is not available.
sarvam-m-bf16-q8_0.gguf
- Output & embeddings remain in BF16.
- All other layers quantized to Q8_0.
- Use if your device supports BF16 and you want a quantized version.
sarvam-m-f16-q8_0.gguf
- Output & embeddings remain in F16.
- All other layers quantized to Q8_0.
sarvam-m-q4_k.gguf
- Output & embeddings quantized to Q8_0.
- All other layers quantized to Q4_K.
- Good for CPU inference with limited memory.
sarvam-m-q4_k_s.gguf
- Smallest Q4_K variant, using less memory at the cost of accuracy.
- Best for very low-memory setups.
sarvam-m-q6_k.gguf
- Output & embeddings quantized to Q8_0.
- All other layers quantized to Q6_K.
sarvam-m-q8_0.gguf
- Fully Q8 quantized model for better accuracy.
- Requires more memory but offers higher precision.
sarvam-m-iq3_xs.gguf
- IQ3_XS quantization, optimized for extreme memory efficiency.
- Best for ultra-low-memory devices.
sarvam-m-iq3_m.gguf
- IQ3_M quantization, offering a medium block size for better accuracy.
- Suitable for low-memory devices.
sarvam-m-q4_0.gguf
- Pure Q4_0 quantization, optimized for ARM devices.
- Best for low-memory environments.
- Prefer IQ4_NL for better accuracy.
If you find these models useful
Please click "Like" if you find this useful!
Help me test my AI-Powered Network Monitor Assistant with quantum-ready security checks:
Free Network Monitor
How to test:
Choose an AI assistant type:
- TurboLLM (GPT-4o-mini)
- HugLLM (Hugging Face open-source)
- TestLLM (Experimental CPU-only)
What I'm Testing
I'm pushing the limits of small open-source models for AI network monitoring, specifically:
- Function calling against live network services
- How small can a model go while still handling:
- Automated Nmap scans
- Quantum-readiness checks
- Network Monitoring tasks
TestLLM - Current experimental model (llama.cpp on 2 CPU threads):
- Zero-configuration setup
- 30s load time (slow inference but no API costs)
- Help wanted! If you're into edge-device AI, let's collaborate!
Other Assistants
TurboLLM - Uses gpt-4o-mini for:
- Creating custom cmd processors to run .net code on Free Network Monitor Agents
- Real-time network diagnostics and monitoring
- Security Audits
- Penetration testing (Nmap/Metasploit)
- Get more tokens by logging in or downloading our Free Network Monitor Agent with integrated AI Assistant
HugLLM - Latest open-source models:
- Runs on the Hugging Face Inference API
Example commands you could test:
- "Give me info on my website's SSL certificate"
- "Check if my server is using quantum-safe encryption for communication"
- "Run a comprehensive security audit on my server"
- "Create a cmd processor to .. (whatever you want)" Note: you need to install a Free Network Monitor Agent to run the .net code. This is a very flexible and powerful feature. Use with caution!
Sarvam-M
Model Information
sarvam-m is a multilingual, hybrid-reasoning, text-only language model built on Mistral-Small. This post-trained version delivers exceptional improvements over the base model:
- +20% average improvement on Indian language benchmarks
- +21.6% enhancement on math benchmarks
- +17.6% boost on programming benchmarks
Performance gains are even more impressive at the intersection of Indian languages and mathematics, with an outstanding +86% improvement in romanized Indian language GSM-8K benchmarks.
Learn more about sarvam-m in our detailed blog post.
Key Features
Hybrid Thinking Mode: A single versatile model supporting both "think" and "non-think" modes. Use the think mode for complex logical reasoning, mathematical problems, and coding tasks, or switch to non-think mode for efficient, general-purpose conversation.
Advanced Indic Skills: Specifically post-trained on Indian languages alongside English, embodying a character that authentically reflects and emphasizes Indian cultural values.
Superior Reasoning Capabilities: Outperforms most similarly-sized models on coding and math benchmarks, demonstrating exceptional reasoning abilities.
Seamless Chatting Experience: Full support for both Indic scripts and romanized versions of Indian languages, providing a smooth and accessible multilingual conversation experience.
Quickstart
The following code snippet demonstrates how to use sarvam-m using Transformers.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "sarvamai/sarvam-m"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

# prepare the model input
prompt = "Who are you and what is your purpose on this planet?"
messages = [{"role": "user", "content": prompt}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    enable_thinking=True,  # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(**model_inputs, max_new_tokens=8192)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]) :].tolist()
output_text = tokenizer.decode(output_ids)

if "</think>" in output_text:
    reasoning_content = output_text.split("</think>")[0].rstrip("\n")
    content = output_text.split("</think>")[-1].lstrip("\n").rstrip("</s>")
else:
    reasoning_content = ""
    content = output_text.rstrip("</s>")

print("reasoning content:", reasoning_content)
print("content:", content)
```
For thinking mode, we recommend `temperature=0.5`; for no-think mode, `temperature=0.2`.
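These sampling settings can be passed straight into the `generate` call from the quickstart above. A minimal sketch, assuming sampling is enabled with `do_sample=True` so that the temperature actually takes effect:

```python
# Thinking mode (enable_thinking=True in apply_chat_template above)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=8192,
    do_sample=True,
    temperature=0.5,
)

# Non-think mode (enable_thinking=False): use the lower temperature instead
# generated_ids = model.generate(
#     **model_inputs, max_new_tokens=8192, do_sample=True, temperature=0.2
# )
```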
With Sarvam APIs
```python
from openai import OpenAI

base_url = "https://api.sarvam.ai/v1"
model_name = "sarvam-m"
api_key = "Your-API-Key"  # get it from https://dashboard.sarvam.ai/

client = OpenAI(
    base_url=base_url,
    api_key=api_key,
).with_options(max_retries=1)

messages = [
    {"role": "system", "content": "You're a helpful AI assistant"},
    {"role": "user", "content": "Explain quantum computing in simple terms"},
]

response1 = client.chat.completions.create(
    model=model_name,
    messages=messages,
    reasoning_effort="medium",  # Enable thinking mode. `None` for disable.
    max_completion_tokens=4096,
)
print("First response:", response1.choices[0].message.content)

# Building messages for the second turn (using previous response as context)
messages.extend(
    [
        {
            "role": "assistant",
            "content": response1.choices[0].message.content,
        },
        {"role": "user", "content": "Can you give an analogy for superposition?"},
    ]
)

response2 = client.chat.completions.create(
    model=model_name,
    messages=messages,
    reasoning_effort="medium",
    max_completion_tokens=8192,
)
print("Follow-up response:", response2.choices[0].message.content)
```
Refer to the API docs here: Sarvam Chat Completions API docs.
`reasoning_effort` can take three possible values, `low`, `medium`, and `high`, to be consistent with the OpenAI API spec. Setting any of the three values simply enables the thinking mode of sarvam-m.
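To turn thinking off for a particular request, pass `reasoning_effort=None` (as the comment in the snippet above notes). A minimal sketch reusing the same `client` and `model_name`:

```python
# Non-think request: reasoning_effort=None disables the thinking mode.
response = client.chat.completions.create(
    model=model_name,
    messages=[{"role": "user", "content": "Give me a one-line fun fact."}],
    reasoning_effort=None,
    max_completion_tokens=256,
)
print(response.choices[0].message.content)
```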
vLLM Deployment
For easy deployment, we can use `vllm>=0.8.5` and create an OpenAI-compatible API endpoint with `vllm serve sarvamai/sarvam-m`.
If you want to use vLLM with Python, you can do the following.
```python
from openai import OpenAI

# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

models = client.models.list()
model = models.data[0].id

messages = [{"role": "user", "content": "Why is 42 the best number?"}]

# By default, thinking mode is enabled.
# If you want to disable thinking, add:
# extra_body={"chat_template_kwargs": {"enable_thinking": False}}
response = client.chat.completions.create(model=model, messages=messages)
output_text = response.choices[0].message.content

if "</think>" in output_text:
    reasoning_content = output_text.split("</think>")[0].rstrip("\n")
    content = output_text.split("</think>")[-1].lstrip("\n")
else:
    reasoning_content = ""
    content = output_text

print("reasoning content:", reasoning_content)
print("content:", content)

# For the next round, add the model's response directly as assistant turn.
messages.append(
    {"role": "assistant", "content": output_text}
)
```
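A possible second turn, using the `extra_body` switch mentioned in the comments above to disable thinking for the follow-up request. This is a sketch continuing from the `client`, `model`, and `messages` defined in the snippet:

```python
# Follow-up turn with thinking disabled via chat_template_kwargs.
messages.append({"role": "user", "content": "Now answer in one sentence."})

response2 = client.chat.completions.create(
    model=model,
    messages=messages,
    extra_body={"chat_template_kwargs": {"enable_thinking": False}},
)
print("Follow-up response:", response2.choices[0].message.content)
```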
Base model: mistralai/Mistral-Small-3.1-24B-Base-2503