jaketrock committed
Commit · 5ff1fa5
1 Parent(s): ad13800

modify scripts

Files changed:
- .gitignore +1 -0
- README_QUANTIZATION.md +63 -0
- pt.jinja +102 -0
- quantize_models.sh +71 -0
- tokenizer_config.json +1 -1
.gitignore ADDED
@@ -0,0 +1 @@
.DS_Store
README_QUANTIZATION.md ADDED
@@ -0,0 +1,63 @@
# Model Quantization with llama.cpp

This README explains how to use the `quantize_models.sh` script to create quantized versions of your GGUF models.

## Prerequisites

- llama.cpp must be cloned and built in this directory (see the build sketch below)
- You need a base GGUF model (default is `osmosis-mcp-4B-BF16.gguf`)

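The exact build steps depend on your platform, but a minimal sketch of one way to satisfy the llama.cpp prerequisite is below. It assumes `git`, `cmake`, and Python 3 are available; the `pip install` step is only needed for the HF-to-GGUF conversion performed by `quantize_models.sh`.

```bash
# Clone llama.cpp into this directory and build it with CMake
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# Python dependencies for convert_hf_to_gguf.py (used by quantize_models.sh)
pip install -r requirements.txt

# Build llama-cli, llama-quantize, llama-server, etc. into build/bin/
cmake -B build
cmake --build build --config Release -j
cd ..
```
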
## How to Use

1. Make sure your base model is in the current directory.
2. Make the script executable if needed (`chmod +x quantize_models.sh`), then run it:

```bash
./quantize_models.sh
```

## Supported Quantization Formats

The script will create the following quantized versions:

| Format | Description | Approximate Size |
|--------|-------------|------------------|
| Q4_K_S | 4-bit quantization, smaller size | ~29% of original |
| Q5_K_M | 5-bit quantization, medium size | ~34% of original |
| Q5_K_S | 5-bit quantization, smaller size | ~33% of original |
| Q6_K | 6-bit quantization, balanced quality and size | ~38% of original |
| IQ4_XS | Improved 4-bit non-linear quantization, extra small | ~27% of original |
| Q8_0 | 8-bit quantization, highest quality | ~50% of original |
| Q2_K | 2-bit quantization, extremely small | ~18% of original |
| Q3_K_L | 3-bit quantization, larger size | ~25% of original |
| Q3_K_M | 3-bit quantization, medium size | ~23% of original |
| Q3_K_S | 3-bit quantization, smaller size | ~21% of original |
| Q4_K_M | 4-bit quantization, medium size | ~28% of original |

## Customizing the Script

If you want to quantize a different base model, edit the `INPUT_MODEL` variable in the script:

```bash
# Input model file
INPUT_MODEL="your-model-file.gguf"
```

## Time and Space Requirements

- Quantization can take from several minutes to an hour depending on your hardware
- Make sure you have enough free disk space for all the output models
- The quantized outputs together take roughly 3x the size of the original model (the percentages in the table above sum to about 330%), so budget for that in addition to the original file

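As a quick pre-flight check, you can compare the base model's size against the free space on the current filesystem (a small sketch using standard tools; the filename matches the script default):

```bash
# Size of the base model and free space where the outputs will be written
du -h osmosis-mcp-4B-BF16.gguf
df -h .
```
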
## Using the Quantized Models

Each quantized model can be used with the llama.cpp tools:

```bash
llama.cpp/build/bin/llama-cli -m osmosis-mcp-4b.Q4_K_S.gguf -p "Your prompt here"
```

Choose a quantization format based on your needs:

- Smaller quantizations (Q2_K, Q3_K_S) for limited hardware resources
- Medium quantizations (Q4_K_M, Q5_K_S) for a balance of quality and size
- Larger quantizations (Q6_K, Q8_0) for the highest quality, given sufficient hardware
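
The same build also includes `llama-server`, which exposes the model over an OpenAI-compatible HTTP API. A minimal sketch for serving one of the quantized files locally (the port choice is arbitrary):

```bash
# Serve the Q5_K_M quantization on http://localhost:8080
llama.cpp/build/bin/llama-server -m osmosis-mcp-4b.Q5_K_M.gguf --port 8080
```
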
pt.jinja ADDED
@@ -0,0 +1,102 @@
{%- if tools %}
    {{- '<|im_start|>system\n' }}
    {%- if messages[0].role == 'system' %}
        {{- messages[0].content + '\n\n' }}
    {%- endif %}
    {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
    {%- for tool in tools %}
        {{- "\n" }}
        {{- tool | tojson }}
    {%- endfor %}
    {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
    {%- if messages[0].role == 'system' %}
        {{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
    {%- endif %}
{%- endif %}

{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}

{#— scan backward without using reverse filter —#}
{%- for i in range(messages|length - 1, -1, -1) %}
    {%- set message = messages[i] %}
    {%- set index = i %}
    {%- set tool_start = "<tool_response>" %}
    {%- set tool_start_length = tool_start|length %}
    {%- set start_of_message = message.content[:tool_start_length] %}
    {%- set tool_end = "</tool_response>" %}
    {%- set tool_end_length = tool_end|length %}
    {%- set start_pos = (message.content|length) - tool_end_length %}
    {%- if start_pos < 0 %}
        {%- set start_pos = 0 %}
    {%- endif %}
    {%- set end_of_message = message.content[start_pos:] %}
    {%- if ns.multi_step_tool and message.role == "user" and not (start_of_message == tool_start and end_of_message == tool_end) %}
        {%- set ns.multi_step_tool = false %}
        {%- set ns.last_query_index = index %}
    {%- endif %}
{%- endfor %}

{%- for message in messages %}
    {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
        {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
    {%- elif message.role == "assistant" %}
        {%- set content = message.content %}
        {%- set reasoning_content = '' %}
        {%- if message.reasoning_content is defined and message.reasoning_content is not none %}
            {%- set reasoning_content = message.reasoning_content %}
        {%- else %}
            {%- if '</think>' in message.content %}
                {%- set content = (message.content.split('</think>')|last).lstrip('\n') %}
                {%- set reasoning_content = (message.content.split('</think>')|first).rstrip('\n') %}
                {%- set reasoning_content = (reasoning_content.split('<think>')|last).lstrip('\n') %}
            {%- endif %}
        {%- endif %}
        {%- if loop.index0 > ns.last_query_index %}
            {%- if loop.last or (not loop.last and reasoning_content) %}
                {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
            {%- else %}
                {{- '<|im_start|>' + message.role + '\n' + content }}
            {%- endif %}
        {%- else %}
            {{- '<|im_start|>' + message.role + '\n' + content }}
        {%- endif %}
        {%- if message.tool_calls %}
            {%- for tool_call in message.tool_calls %}
                {%- if (loop.first and content) or (not loop.first) %}
                    {{- '\n' }}
                {%- endif %}
                {%- if tool_call.function %}
                    {%- set tool_call = tool_call.function %}
                {%- endif %}
                {{- '<tool_call>\n{"name": "' }}
                {{- tool_call.name }}
                {{- '", "arguments": ' }}
                {%- if tool_call.arguments is string %}
                    {{- tool_call.arguments }}
                {%- else %}
                    {{- tool_call.arguments | tojson }}
                {%- endif %}
                {{- '}\n</tool_call>' }}
            {%- endfor %}
        {%- endif %}
        {{- '<|im_end|>\n' }}
    {%- elif message.role == "tool" %}
        {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
            {{- '<|im_start|>user' }}
        {%- endif %}
        {{- '\n<tool_response>\n' }}
        {{- message.content }}
        {{- '\n</tool_response>' }}
        {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
            {{- '<|im_end|>\n' }}
        {%- endif %}
    {%- endif %}
{%- endfor %}

{%- if add_generation_prompt %}
    {{- '<|im_start|>assistant\n' }}
    {%- if enable_thinking is defined and enable_thinking is false %}
        {{- '<think>\n\n</think>\n\n' }}
    {%- endif %}
{%- endif %}
quantize_models.sh ADDED
@@ -0,0 +1,71 @@
#!/bin/bash

cd llama.cpp
python3 convert_hf_to_gguf.py ../../osmosis-mcp-4b --outtype bf16
cd ..

# Input model file
INPUT_MODEL="osmosis-mcp-4B-BF16.gguf"

# Define quantization formats to generate
QUANT_FORMATS=(
    "Q4_K_S"
    "Q5_K_M"
    "Q5_K_S"
    "Q6_K"
    "IQ4_XS"
    "Q8_0"
    "Q2_K"
    "Q3_K_L"
    "Q3_K_M"
    "Q3_K_S"
    "Q4_K_M"
)

# Check if input model exists
if [ ! -f "$INPUT_MODEL" ]; then
    echo "Error: Input model file $INPUT_MODEL not found."
    exit 1
fi

# Path to llama-quantize tool
QUANTIZE_TOOL="llama.cpp/build/bin/llama-quantize"

# Check if quantize tool exists
if [ ! -f "$QUANTIZE_TOOL" ]; then
    echo "Error: Quantize tool not found at $QUANTIZE_TOOL"
    exit 1
fi

# Process each quantization format
for format in "${QUANT_FORMATS[@]}"; do
    echo "------------------------------------------------------"
    echo "Starting quantization: $format"
    echo "------------------------------------------------------"

    # Define output filename with the exact format requested
    OUTPUT_MODEL="osmosis-mcp-4b.${format}.gguf"

    # Check if output model already exists
    if [ -f "$OUTPUT_MODEL" ]; then
        echo "Model $OUTPUT_MODEL already exists. Skipping..."
        continue
    fi

    # Run quantization
    echo "Quantizing to $format..."
    "$QUANTIZE_TOOL" "$INPUT_MODEL" "$OUTPUT_MODEL" "$format"

    # Check if quantization was successful
    if [ $? -eq 0 ]; then
        echo "Successfully created $OUTPUT_MODEL"
    else
        echo "Failed to create $OUTPUT_MODEL"
    fi

    echo ""
done

echo "All quantizations completed!"
echo "Generated models:"
ls -lah osmosis-mcp-4b.*.gguf
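
After the script finishes, it can be worth checking how much quality each format gives up relative to the BF16 baseline. A minimal sketch using llama.cpp's perplexity tool is below; it assumes the build produced `llama-perplexity` and that you have a plain-text evaluation file on hand (the `wiki.test.raw` name is only an example):

```bash
# Lower perplexity is better; compare a quantized model against the BF16 original
llama.cpp/build/bin/llama-perplexity -m osmosis-mcp-4b.Q4_K_S.gguf -f wiki.test.raw
llama.cpp/build/bin/llama-perplexity -m osmosis-mcp-4B-BF16.gguf -f wiki.test.raw
```
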
tokenizer_config.json CHANGED
@@ -227,7 +227,7 @@
      "<|video_pad|>"
    ],
    "bos_token": null,
-
"chat_template": "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0].role == 'system' %}\n {{- messages[0].content + '\\n\\n' }}\n {%- endif %}\n {{- \"# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0].role == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0].content + '<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}\n{%- for
+
"chat_template": "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0].role == 'system' %}\n {{- messages[0].content + '\\n\\n' }}\n {%- endif %}\n {{- \"# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0].role == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0].content + '<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n \n{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}\n \n{#— scan backward without using reverse filter —#}\n{%- for i in range(messages|length - 1, -1, -1) %}\n {%- set message = messages[i] %}\n {%- set index = i %}\n {%- set tool_start = \"<tool_response>\" %}\n {%- set tool_start_length = tool_start|length %}\n {%- set start_of_message = message.content[:tool_start_length] %}\n {%- set tool_end = \"</tool_response>\" %}\n {%- set tool_end_length = tool_end|length %}\n {%- set start_pos = (message.content|length) - tool_end_length %}\n {%- if start_pos < 0 %}\n {%- set start_pos = 0 %}\n {%- endif %}\n {%- set end_of_message = message.content[start_pos:] %}\n {%- if ns.multi_step_tool and message.role == \"user\" and not (start_of_message == tool_start and end_of_message == tool_end) %}\n {%- set ns.multi_step_tool = false %}\n {%- set ns.last_query_index = index %}\n {%- endif %}\n{%- endfor %}\n{%- for message in messages %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) %}\n {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {%- set content = message.content %}\n {%- set reasoning_content = '' %}\n {%- if message.reasoning_content is defined and message.reasoning_content is not none %}\n {%- set reasoning_content = message.reasoning_content %}\n {%- else %}\n {%- if '</think>' in message.content %}\n {%- set content = (message.content.split('</think>')|last).lstrip('\\n') %}\n {%- set reasoning_content = (message.content.split('</think>')|first).rstrip('\\n') %}\n {%- set reasoning_content = (reasoning_content.split('<think>')|last).lstrip('\\n') %}\n {%- endif %}\n {%- endif %}\n {%- if loop.index0 > ns.last_query_index %}\n {%- if loop.last or (not loop.last and reasoning_content) %}\n {{- '<|im_start|>' + message.role + '\\n<think>\\n' + reasoning_content.strip('\\n') + '\\n</think>\\n\\n' + content.lstrip('\\n') }}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- else %}\n {{- '<|im_start|>' + message.role + '\\n' + content }}\n {%- endif %}\n {%- if message.tool_calls %}\n {%- for tool_call in message.tool_calls %}\n {%- if (loop.first and content) or (not loop.first) %}\n {{- '\\n' }}\n {%- endif %}\n {%- if tool_call.function %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {%- if tool_call.arguments is string %}\n {{- tool_call.arguments }}\n {%- else %}\n {{- tool_call.arguments | tojson }}\n {%- endif %}\n {{- '}\\n</tool_call>' }}\n {%- 
endfor %}\n {%- endif %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if loop.first or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- message.content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n {%- if enable_thinking is defined and enable_thinking is false %}\n {{- '<think>\\n\\n</think>\\n\\n' }}\n {%- endif %}\n{%- endif %}",
    "clean_up_tokenization_spaces": false,
    "eos_token": "<|im_end|>",
    "errors": "replace",