morriszms committed
Commit 46eb520 · verified · 1 Parent(s): 4160694

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-CoderX-7B-v0.5-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-CoderX-7B-v0.5-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-CoderX-7B-v0.5-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-CoderX-7B-v0.5-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-CoderX-7B-v0.5-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-CoderX-7B-v0.5-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-CoderX-7B-v0.5-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-CoderX-7B-v0.5-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-CoderX-7B-v0.5-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-CoderX-7B-v0.5-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-CoderX-7B-v0.5-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-CoderX-7B-v0.5-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-CoderX-7B-v0.5-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:edfac219ad7a9e6b950daead088c8631a9879edccd0d09aaff498f4eab7aced4
+ size 3015940672
Qwen2.5-CoderX-7B-v0.5-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aee106a4cdd72374d6c69479829ad97a7c715289427964cfb9c1bc6024297115
+ size 4088459840
Qwen2.5-CoderX-7B-v0.5-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a783b1fc2149a42fdeba4e5affe89ed954600d02e441d38389673bdd53cdf1c9
+ size 3808391744
Qwen2.5-CoderX-7B-v0.5-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e6b885ca142abebdeff9e52d6c93d0ae845824a11dcb4ec78637b0ba0bedde94
+ size 3492368960
Qwen2.5-CoderX-7B-v0.5-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:298d82320ae2d12da3d3beac2f612e4183c4f42ed1c202392cd88c34e3fe0f6c
+ size 4431391296
Qwen2.5-CoderX-7B-v0.5-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8f5fa17f2f6b248840fede2b4822356abfc0a5795af172a7426146212980dfae
+ size 4683074112
Qwen2.5-CoderX-7B-v0.5-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3a54649bda3dccd6cfe47658ce9449ece8111959d731fab003db6a27323f69a3
+ size 4457769536
Qwen2.5-CoderX-7B-v0.5-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:89d9d96e65aab13141c2c3efa76bed84d553b2c5f2dd14b4dbd6e0d3b0c67b8f
+ size 5315177024
Qwen2.5-CoderX-7B-v0.5-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ffc446b52bb2bae0476d56e465cd6d1a263988d1e7fefcba286c35bad081554b
+ size 5444831808
Qwen2.5-CoderX-7B-v0.5-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dae4caa8f88ff9e8206772f1cd27e588adf88c868bc80b7e346a2a58a7adb2d6
+ size 5315177024
Qwen2.5-CoderX-7B-v0.5-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:15640c8baa553021340298ce1a6cbcacd9ae372a2e74be427f9016a4c6af6d8c
+ size 6254199360
Qwen2.5-CoderX-7B-v0.5-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4deaed7f01e370de6c1857895f293a6dff78719c0b30941faa066802f0c388b3
+ size 8098525760
README.md ADDED
@@ -0,0 +1,162 @@
+ ---
+ base_model: oscar128372/Qwen2.5-CoderX-7B-v0.5
+ tags:
+ - code-generation
+ - text-generation
+ - instruction-following
+ - fine-tuned
+ - qwen2
+ - unsloth
+ - transformers
+ - trl
+ - sft
+ - python
+ - physics-simulation
+ - algorithm-design
+ - TensorBlock
+ - GGUF
+ license: apache-2.0
+ language:
+ - en
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+
+ [![Website](https://img.shields.io/badge/Website-tensorblock.co-blue?logo=google-chrome&logoColor=white)](https://tensorblock.co)
+ [![Twitter](https://img.shields.io/twitter/follow/tensorblock_aoi?style=social)](https://twitter.com/tensorblock_aoi)
+ [![Discord](https://img.shields.io/badge/Discord-Join%20Us-5865F2?logo=discord&logoColor=white)](https://discord.gg/Ej5NmeHFf2)
+ [![GitHub](https://img.shields.io/badge/GitHub-TensorBlock-black?logo=github&logoColor=white)](https://github.com/TensorBlock)
+ [![Telegram](https://img.shields.io/badge/Telegram-Group-blue?logo=telegram)](https://t.me/TensorBlock)
+
+ ## oscar128372/Qwen2.5-CoderX-7B-v0.5 - GGUF
+
+ <div style="text-align: left; margin: 20px 0;">
+ <a href="https://discord.com/invite/Ej5NmeHFf2" style="display: inline-block; padding: 10px 20px; background-color: #5865F2; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+ Join our Discord to learn more about what we're building ↗
+ </a>
+ </div>
+
+ This repo contains GGUF format model files for [oscar128372/Qwen2.5-CoderX-7B-v0.5](https://huggingface.co/oscar128372/Qwen2.5-CoderX-7B-v0.5).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b5753](https://github.com/ggml-org/llama.cpp/commit/73e53dc834c0a2336cd104473af6897197b96277).
+
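+ For a quick local test, any of the files below can be run directly with llama.cpp's `llama-cli` (a minimal sketch; the chosen quant, prompt, and token count are illustrative):
+
+ ```shell
+ # Load the Q4_K_M quant and generate up to 256 tokens from a single prompt
+ llama-cli -m ./Qwen2.5-CoderX-7B-v0.5-Q4_K_M.gguf \
+   -p "Write a Python function that checks whether a string is a palindrome." \
+   -n 256
+ ```
+
+ Recent llama.cpp builds can also pick up the chat template embedded in the GGUF metadata, which matches the prompt template shown further below.
+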
+ ## Our projects
+ <table border="1" cellspacing="0" cellpadding="10">
+ <tr>
+ <th colspan="2" style="font-size: 25px;">Forge</th>
+ </tr>
+ <tr>
+ <th colspan="2">
+ <img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
+ </th>
+ </tr>
+ <tr>
+ <th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
+ </tr>
+ <tr>
+ <th colspan="2">
+ <a href="https://github.com/TensorBlock/forge" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">🚀 Try it now! 🚀</a>
+ </th>
+ </tr>
+
+ <tr>
+ <th style="font-size: 25px;">Awesome MCP Servers</th>
+ <th style="font-size: 25px;">TensorBlock Studio</th>
+ </tr>
+ <tr>
+ <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
+ <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
+ </tr>
+ <tr>
+ <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
+ <th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
+ </tr>
+ <tr>
+ <th>
+ <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ <th>
+ <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ </tr>
+ </table>
+
+ ## Prompt template
+
+ ```
+ <|im_start|>system
+ {system_prompt}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant
+ ```
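+
+ For example, with the placeholders filled in, the exact text the model receives before generating its reply looks like this (the system prompt and user message are illustrative):
+
+ ```
+ <|im_start|>system
+ You are an expert Python programming assistant.<|im_end|>
+ <|im_start|>user
+ Write a function that computes the nth Fibonacci number.<|im_end|>
+ <|im_start|>assistant
+ ```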
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [Qwen2.5-CoderX-7B-v0.5-Q2_K.gguf](https://huggingface.co/tensorblock/oscar128372_Qwen2.5-CoderX-7B-v0.5-GGUF/blob/main/Qwen2.5-CoderX-7B-v0.5-Q2_K.gguf) | Q2_K | 3.016 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [Qwen2.5-CoderX-7B-v0.5-Q3_K_S.gguf](https://huggingface.co/tensorblock/oscar128372_Qwen2.5-CoderX-7B-v0.5-GGUF/blob/main/Qwen2.5-CoderX-7B-v0.5-Q3_K_S.gguf) | Q3_K_S | 3.492 GB | very small, high quality loss |
+ | [Qwen2.5-CoderX-7B-v0.5-Q3_K_M.gguf](https://huggingface.co/tensorblock/oscar128372_Qwen2.5-CoderX-7B-v0.5-GGUF/blob/main/Qwen2.5-CoderX-7B-v0.5-Q3_K_M.gguf) | Q3_K_M | 3.808 GB | very small, high quality loss |
+ | [Qwen2.5-CoderX-7B-v0.5-Q3_K_L.gguf](https://huggingface.co/tensorblock/oscar128372_Qwen2.5-CoderX-7B-v0.5-GGUF/blob/main/Qwen2.5-CoderX-7B-v0.5-Q3_K_L.gguf) | Q3_K_L | 4.088 GB | small, substantial quality loss |
+ | [Qwen2.5-CoderX-7B-v0.5-Q4_0.gguf](https://huggingface.co/tensorblock/oscar128372_Qwen2.5-CoderX-7B-v0.5-GGUF/blob/main/Qwen2.5-CoderX-7B-v0.5-Q4_0.gguf) | Q4_0 | 4.431 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [Qwen2.5-CoderX-7B-v0.5-Q4_K_S.gguf](https://huggingface.co/tensorblock/oscar128372_Qwen2.5-CoderX-7B-v0.5-GGUF/blob/main/Qwen2.5-CoderX-7B-v0.5-Q4_K_S.gguf) | Q4_K_S | 4.458 GB | small, greater quality loss |
+ | [Qwen2.5-CoderX-7B-v0.5-Q4_K_M.gguf](https://huggingface.co/tensorblock/oscar128372_Qwen2.5-CoderX-7B-v0.5-GGUF/blob/main/Qwen2.5-CoderX-7B-v0.5-Q4_K_M.gguf) | Q4_K_M | 4.683 GB | medium, balanced quality - recommended |
+ | [Qwen2.5-CoderX-7B-v0.5-Q5_0.gguf](https://huggingface.co/tensorblock/oscar128372_Qwen2.5-CoderX-7B-v0.5-GGUF/blob/main/Qwen2.5-CoderX-7B-v0.5-Q5_0.gguf) | Q5_0 | 5.315 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [Qwen2.5-CoderX-7B-v0.5-Q5_K_S.gguf](https://huggingface.co/tensorblock/oscar128372_Qwen2.5-CoderX-7B-v0.5-GGUF/blob/main/Qwen2.5-CoderX-7B-v0.5-Q5_K_S.gguf) | Q5_K_S | 5.315 GB | large, low quality loss - recommended |
+ | [Qwen2.5-CoderX-7B-v0.5-Q5_K_M.gguf](https://huggingface.co/tensorblock/oscar128372_Qwen2.5-CoderX-7B-v0.5-GGUF/blob/main/Qwen2.5-CoderX-7B-v0.5-Q5_K_M.gguf) | Q5_K_M | 5.445 GB | large, very low quality loss - recommended |
+ | [Qwen2.5-CoderX-7B-v0.5-Q6_K.gguf](https://huggingface.co/tensorblock/oscar128372_Qwen2.5-CoderX-7B-v0.5-GGUF/blob/main/Qwen2.5-CoderX-7B-v0.5-Q6_K.gguf) | Q6_K | 6.254 GB | very large, extremely low quality loss |
+ | [Qwen2.5-CoderX-7B-v0.5-Q8_0.gguf](https://huggingface.co/tensorblock/oscar128372_Qwen2.5-CoderX-7B-v0.5-GGUF/blob/main/Qwen2.5-CoderX-7B-v0.5-Q8_0.gguf) | Q8_0 | 8.099 GB | very large, extremely low quality loss - not recommended |
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/oscar128372_Qwen2.5-CoderX-7B-v0.5-GGUF --include "Qwen2.5-CoderX-7B-v0.5-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/oscar128372_Qwen2.5-CoderX-7B-v0.5-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
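+
+ Each `.gguf` file in this repo is tracked with Git LFS, so its SHA-256 checksum is recorded in the pointer files added by this commit. After downloading, you can optionally verify file integrity; a minimal sketch using the Q2_K checksum from above (assumes GNU coreutils `sha256sum`):
+
+ ```shell
+ # The hash is the Git LFS oid recorded for Qwen2.5-CoderX-7B-v0.5-Q2_K.gguf
+ echo "edfac219ad7a9e6b950daead088c8631a9879edccd0d09aaff498f4eab7aced4  MY_LOCAL_DIR/Qwen2.5-CoderX-7B-v0.5-Q2_K.gguf" | sha256sum --check
+ ```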