munish0838 committed bc8b071 (verified) · Parent(s): e1a1d62

Upload README.md with huggingface_hub

Files changed (1): README.md added (+104 lines)

---
library_name: transformers
language:
- bn
- en
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/sarvam-1-GGUF

This is a quantized version of [sarvamai/sarvam-1](https://huggingface.co/sarvamai/sarvam-1) created using llama.cpp.
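
The GGUF files in this repo can be loaded with any llama.cpp-compatible runtime. Below is a minimal sketch using the `llama-cpp-python` bindings; the filename glob (`*Q4_K_M.gguf`) is an assumption, so check the repo's file listing for the quantization variants actually published.

```python
# Minimal sketch: run a GGUF quant of Sarvam-1 via llama-cpp-python.
# Assumption: a Q4_K_M quant exists in this repo; adjust the filename glob
# to whichever quantization level you actually want to download.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/sarvam-1-GGUF",
    filename="*Q4_K_M.gguf",  # hypothetical pattern; see the repo's file list
    n_ctx=8192,               # matches the model's 8,192-token context window
)

# Sarvam-1 is a text-completion model, so prompt it with plain text.
out = llm("कर्नाटक की राजधानी है:", max_tokens=8)  # "The capital of Karnataka is:"
print(out["choices"][0]["text"])
```
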
# Original Model Card

# Sarvam-1

Sarvam-1 is a 2-billion-parameter language model specifically optimized for Indian languages. It provides best-in-class performance in 10 Indic languages (bn, gu, hi, kn, ml, mr, or, pa, ta, te) when compared with popular models like Gemma-2-2B and Llama-3.2-3B, and it remains competitive with much larger models such as Llama-3.1-8B in these languages. More details can be found in our [release blog](https://www.sarvam.ai/blogs/sarvam-1).

The model was trained with the [NVIDIA NeMo™ Framework](https://github.com/NVIDIA/NeMo) on the Yotta Shakti Cloud using HGX H100 systems.

*Note: This is a text-completion model. It is meant to be finetuned on downstream tasks and cannot be used directly as a chat or instruction-following model.*

## Key Features

- **Optimized for 10 Indian Languages**: Built from the ground up to support major Indian languages alongside English
- **Superior Token Efficiency**: Achieves fertility rates of 1.4-2.1 across all supported languages, 2-4x more efficient than existing multilingual models (a quick way to check fertility yourself is sketched after this list)
- **High-Quality Training Data**: Trained on a curated corpus of ~4 trillion tokens, including 2 trillion high-quality Indic tokens
- **Efficient Inference**: 4-6x faster inference compared to larger models while matching or exceeding their performance on Indic language tasks
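
Token fertility here means the average number of tokens the tokenizer produces per word. A minimal sketch for checking it on your own text, assuming whitespace-separated words as the denominator (the corpus and word segmentation behind the reported 1.4-2.1 range are not specified here):

```python
# Rough token-fertility check: tokens per whitespace-separated word.
# This is an illustrative approximation, not the exact methodology behind
# the reported 1.4-2.1 figures.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sarvamai/sarvam-1")

samples = {
    "hi": "भारत एक विशाल और विविधतापूर्ण देश है।",  # "India is a large and diverse country."
    "en": "India is a large and diverse country.",
}

for lang, sentence in samples.items():
    n_tokens = len(tokenizer(sentence, add_special_tokens=False)["input_ids"])
    n_words = len(sentence.split())
    print(f"{lang}: fertility ≈ {n_tokens / n_words:.2f}")
```
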
## Model Architecture

- Hidden size: 2,048
- Intermediate size: 11,008
- Number of attention heads: 16
- Number of hidden layers: 28
- Number of key-value heads: 8
- Maximum position embeddings: 8,192
- Activation function: SwiGLU
- Positional embeddings: Rotary (RoPE) with theta = 10,000
- Training: Grouped-query attention and bfloat16 mixed precision
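
These values map onto standard Llama-style `transformers` config fields; a short way to confirm them from the published config is sketched below (the attribute names assume a Llama-compatible config class and are listed here as an assumption):

```python
# Print the architecture hyperparameters straight from the model config.
# The attribute names are the standard Llama-style fields in transformers;
# getattr() with a default is used in case this checkpoint names them differently.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("sarvamai/sarvam-1")

for field in (
    "hidden_size",
    "intermediate_size",
    "num_attention_heads",
    "num_hidden_layers",
    "num_key_value_heads",
    "max_position_embeddings",
    "rope_theta",
):
    print(f"{field}: {getattr(config, field, 'n/a')}")
```
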
## Performance

### Translated Academic Benchmarks (Zero-shot)

- MMLU: 38.22
- ARC-Challenge: 46.71
- TriviaQA: 86.11
- BoolQ: 62.59

### IndicGenBench (One-shot)

- Flores English-to-Indic translation: 46.81 chrF++
- CrossSum: 20.88 chrF++
- XORQA: 26.47 F1
- XQUAD: 41.58 F1
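
For context, chrF++ (as used for the Flores and CrossSum numbers above) is a character n-gram F-score that also counts word bigrams. A minimal sketch of how such a score is typically computed with `sacrebleu`; the hypothesis/reference pair is a toy example, not IndicGenBench data or the official evaluation harness:

```python
# Illustrative chrF++ computation with sacrebleu (word_order=2 gives chrF++).
# The sentences below are toy examples, not IndicGenBench data.
from sacrebleu.metrics import CHRF

hypotheses = ["बेंगलुरु कर्नाटक की राजधानी है।"]    # system output
references = [["कर्नाटक की राजधानी बेंगलुरु है।"]]  # one reference per hypothesis

chrf_pp = CHRF(word_order=2)  # character n-grams plus word bigrams = chrF++
print(chrf_pp.corpus_score(hypotheses, references))
```
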
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("sarvamai/sarvam-1")
tokenizer = AutoTokenizer.from_pretrained("sarvamai/sarvam-1")

# Example usage: a Hindi completion prompt, "The capital of Karnataka is:"
text = "कर्नाटक की राजधानी है:"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
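
Because Sarvam-1 is a base text-completion model, downstream use generally means fine-tuning it first. A minimal sketch with the standard `transformers` `Trainer` follows; the two-example dataset and all hyperparameters are placeholders for illustration, not a recommended recipe:

```python
# Minimal causal-LM fine-tuning sketch with the transformers Trainer.
# The tiny dataset and hyperparameters are placeholders, not a recipe.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "sarvamai/sarvam-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Assumption: if the tokenizer ships without a pad token, reuse EOS for padding.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

raw = Dataset.from_dict({
    "text": [
        "प्रश्न: भारत की राजधानी क्या है? उत्तर: नई दिल्ली",  # "Q: What is the capital of India? A: New Delhi"
        "प्रश्न: गंगा किस देश में बहती है? उत्तर: भारत",  # "Q: In which country does the Ganga flow? A: India"
    ]
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="sarvam-1-finetuned",
        per_device_train_batch_size=1,
        num_train_epochs=1,
        learning_rate=2e-5,
        bf16=True,  # matches the bfloat16 training precision noted above; needs supporting hardware
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```
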
## Training Details

- Training Infrastructure: Yotta's Shakti cluster
- Hardware: 1,024 GPUs
- Training Duration: 5 days
- Framework: NVIDIA NeMo

## License

Sarvam non-commercial license: see the [LICENSE](LICENSE.md) file.

## Acknowledgements

- NVIDIA: for support with the NeMo codebase
- Yotta: for access to the Shakti GPU cluster
- AI4Bharat: for their academic partnership and expertise in Indian language technologies