morriszms committed on
Commit 7a97c07 · verified · 1 Parent(s): 3cb21b9

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ sarvam-m-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ sarvam-m-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ sarvam-m-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ sarvam-m-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ sarvam-m-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ sarvam-m-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ sarvam-m-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ sarvam-m-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ sarvam-m-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ sarvam-m-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ sarvam-m-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ sarvam-m-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,160 @@
+ ---
+ library_name: transformers
+ license: apache-2.0
+ language:
+ - en
+ - bn
+ - hi
+ - kn
+ - gu
+ - mr
+ - ml
+ - or
+ - pa
+ - ta
+ - te
+ base_model: sarvamai/sarvam-m
+ base_model_relation: finetune
+ tags:
+ - TensorBlock
+ - GGUF
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+
+ [![Website](https://img.shields.io/badge/Website-tensorblock.co-blue?logo=google-chrome&logoColor=white)](https://tensorblock.co)
+ [![Twitter](https://img.shields.io/twitter/follow/tensorblock_aoi?style=social)](https://twitter.com/tensorblock_aoi)
+ [![Discord](https://img.shields.io/badge/Discord-Join%20Us-5865F2?logo=discord&logoColor=white)](https://discord.gg/Ej5NmeHFf2)
+ [![GitHub](https://img.shields.io/badge/GitHub-TensorBlock-black?logo=github&logoColor=white)](https://github.com/TensorBlock)
+ [![Telegram](https://img.shields.io/badge/Telegram-Group-blue?logo=telegram)](https://t.me/TensorBlock)
+
+
+ ## sarvamai/sarvam-m - GGUF
+
+ <div style="text-align: left; margin: 20px 0;">
+ <a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+ Join our Discord to learn more about what we're building ↗
+ </a>
+ </div>
+
+ This repo contains GGUF format model files for [sarvamai/sarvam-m](https://huggingface.co/sarvamai/sarvam-m).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277).
+
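+ If you want to reproduce that environment, the sketch below shows one way to build llama.cpp at the referenced commit. It assumes a standard CMake toolchain and is meant as an illustration, not the project's official build instructions.
+
+ ```shell
+ # Illustrative only: clone llama.cpp and check out the commit noted above.
+ git clone https://github.com/ggml-org/llama.cpp
+ cd llama.cpp
+ git checkout 73e53dc834c0a2336cd104473af6897197b96277
+ # CPU-only build of the CLI tools; add backend-specific flags as needed.
+ cmake -B build
+ cmake --build build --config Release
+ ```
+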
+ ## Our projects
+ <table border="1" cellspacing="0" cellpadding="10">
+ <tr>
+ <th colspan="2" style="font-size: 25px;">Forge</th>
+ </tr>
+ <tr>
+ <th colspan="2">
+ <img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
+ </th>
+ </tr>
+ <tr>
+ <th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
+ </tr>
+ <tr>
+ <th colspan="2">
+ <a href="https://github.com/TensorBlock/forge" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">🚀 Try it now! 🚀</a>
+ </th>
+ </tr>
+
+ <tr>
+ <th style="font-size: 25px;">Awesome MCP Servers</th>
+ <th style="font-size: 25px;">TensorBlock Studio</th>
+ </tr>
+ <tr>
+ <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
+ <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
+ </tr>
+ <tr>
+ <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
+ <th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
+ </tr>
+ <tr>
+ <th>
+ <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ <th>
+ <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ </tr>
+ </table>
+
+ ## Prompt template
+
+ ```
+ <s>[SYSTEM_PROMPT]Think deeply before answering the user's question. Do the thinking inside <think>...</think> tags.
+
+ {system_prompt}[/SYSTEM_PROMPT][INST]{prompt}[/INST]<think>
+ ```
+
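+ For illustration only, here is what a rendered prompt could look like with a hypothetical system prompt and user question substituted for the `{system_prompt}` and `{prompt}` placeholders:
+
+ ```
+ <s>[SYSTEM_PROMPT]Think deeply before answering the user's question. Do the thinking inside <think>...</think> tags.
+
+ You are a helpful assistant.[/SYSTEM_PROMPT][INST]What is the capital city of Karnataka?[/INST]<think>
+ ```
+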
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [sarvam-m-Q2_K.gguf](https://huggingface.co/tensorblock/sarvamai_sarvam-m-GGUF/blob/main/sarvam-m-Q2_K.gguf) | Q2_K | 8.890 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [sarvam-m-Q3_K_S.gguf](https://huggingface.co/tensorblock/sarvamai_sarvam-m-GGUF/blob/main/sarvam-m-Q3_K_S.gguf) | Q3_K_S | 10.400 GB | very small, high quality loss |
+ | [sarvam-m-Q3_K_M.gguf](https://huggingface.co/tensorblock/sarvamai_sarvam-m-GGUF/blob/main/sarvam-m-Q3_K_M.gguf) | Q3_K_M | 11.474 GB | very small, high quality loss |
+ | [sarvam-m-Q3_K_L.gguf](https://huggingface.co/tensorblock/sarvamai_sarvam-m-GGUF/blob/main/sarvam-m-Q3_K_L.gguf) | Q3_K_L | 12.401 GB | small, substantial quality loss |
+ | [sarvam-m-Q4_0.gguf](https://huggingface.co/tensorblock/sarvamai_sarvam-m-GGUF/blob/main/sarvam-m-Q4_0.gguf) | Q4_0 | 13.442 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [sarvam-m-Q4_K_S.gguf](https://huggingface.co/tensorblock/sarvamai_sarvam-m-GGUF/blob/main/sarvam-m-Q4_K_S.gguf) | Q4_K_S | 13.549 GB | small, greater quality loss |
+ | [sarvam-m-Q4_K_M.gguf](https://huggingface.co/tensorblock/sarvamai_sarvam-m-GGUF/blob/main/sarvam-m-Q4_K_M.gguf) | Q4_K_M | 14.334 GB | medium, balanced quality - recommended |
+ | [sarvam-m-Q5_0.gguf](https://huggingface.co/tensorblock/sarvamai_sarvam-m-GGUF/blob/main/sarvam-m-Q5_0.gguf) | Q5_0 | 16.304 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [sarvam-m-Q5_K_S.gguf](https://huggingface.co/tensorblock/sarvamai_sarvam-m-GGUF/blob/main/sarvam-m-Q5_K_S.gguf) | Q5_K_S | 16.304 GB | large, low quality loss - recommended |
+ | [sarvam-m-Q5_K_M.gguf](https://huggingface.co/tensorblock/sarvamai_sarvam-m-GGUF/blob/main/sarvam-m-Q5_K_M.gguf) | Q5_K_M | 16.764 GB | large, very low quality loss - recommended |
+ | [sarvam-m-Q6_K.gguf](https://huggingface.co/tensorblock/sarvamai_sarvam-m-GGUF/blob/main/sarvam-m-Q6_K.gguf) | Q6_K | 19.346 GB | very large, extremely low quality loss |
+ | [sarvam-m-Q8_0.gguf](https://huggingface.co/tensorblock/sarvamai_sarvam-m-GGUF/blob/main/sarvam-m-Q8_0.gguf) | Q8_0 | 25.055 GB | very large, extremely low quality loss - not recommended |
+
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/sarvamai_sarvam-m-GGUF --include "sarvam-m-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/sarvamai_sarvam-m-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
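+
+ Once a file is downloaded, it can be loaded with any llama.cpp-compatible runtime. A minimal sketch, assuming the llama.cpp build from the note above and the `MY_LOCAL_DIR` download path used in the commands here (binary names and flags can vary between llama.cpp versions):
+
+ ```shell
+ # Illustrative only: start an interactive session with a downloaded GGUF file.
+ ./build/bin/llama-cli -m MY_LOCAL_DIR/sarvam-m-Q4_K_M.gguf -c 4096
+ ```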
sarvam-m-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dea0ae72a67e602a3edd33c488113fd8938735afdcbb60517b7bf8819f9674b7
+ size 8890326880
sarvam-m-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:78c2e8d9353e3c85acefbc5e267b0bd066b7202dd6fbc00e6b39c8d4cf9231f1
+ size 12400762720
sarvam-m-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9142f0757969c3375f461372b7a8d8259adb277555917eda57e3479df55b00a0
+ size 11474083680
sarvam-m-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c9485fe938955fade7f0707af9511a0ca13f0fa44c3dd009b7394521f51a08e0
+ size 10400276320
sarvam-m-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:07829fed1998add8ab25a5ad86d46251cbdb43e1df41fc01b3aa102ffaf34c5a
+ size 13441802080
sarvam-m-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:33aeff800db7fd49bcaa286ce62164bc4d62456ec66a6ee043b595232013dffe
+ size 14333910880
sarvam-m-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:efed55b94bbf549663c83aa34591808dcefea0f6362a022ededb687bb4459b34
+ size 13549281120
sarvam-m-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:88ef6dd4fb677797df8bf6f9d27b98cdf872cb35aed2eb207207eb26e701319d
+ size 16304414560
sarvam-m-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b179f8d2d6c65f9a9284fbc47be155daee5a6b23ec51a2ed16d3b98d5f43b708
+ size 16763985760
sarvam-m-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:28cf0f5f0f616d54f74dc78ce831a122a5da05a82a50b70d533259375c6c47dd
+ size 16304414560
sarvam-m-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a2ae1694ba44764391f0a63a2cc806a056ecc1408d43c14c66039a108281b1cd
+ size 19345940320
sarvam-m-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ae13037238ef16dc4980e3665c34b7b3bc5b45e1055fc244f57745185e00db16
+ size 25054781280