Upload README.md with huggingface_hub
README.md
CHANGED
@@ -15,6 +15,22 @@ library_name: gguf
This is a collection of GGUF quantized versions of [pravdin/merged-Gensyn-Qwen2.5-1.5B-Instruct-deepseek-ai-DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/pravdin/merged-Gensyn-Qwen2.5-1.5B-Instruct-deepseek-ai-DeepSeek-R1-Distill-Qwen-1.5B).
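
To actually run one of these files, the sketch below shows one minimal path using `huggingface_hub` and `llama-cpp-python`. The repo id suffix and the GGUF filename are placeholders (not taken from this repo), so substitute the real values from this repository's file list.

```python
# Minimal usage sketch. Assumptions: llama-cpp-python is installed, and the
# repo id / filename below are placeholders for the actual GGUF files here.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="pravdin/merged-Gensyn-Qwen2.5-1.5B-Instruct-deepseek-ai-DeepSeek-R1-Distill-Qwen-1.5B-GGUF",  # placeholder
    filename="model.Q4_K_M.gguf",  # placeholder; pick a file from the Files tab
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Q: What does GGUF stand for?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```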

## 🌳 Model Tree

This model was created by merging the following models:

```
pravdin/merged-Gensyn-Qwen2.5-1.5B-Instruct-deepseek-ai-DeepSeek-R1-Distill-Qwen-1.5B
├── Merge Method: dare_ties
├── Gensyn/Qwen2.5-1.5B-Instruct
├── deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
├── density: 0.6
└── weight: 0.5
```

**Merge Method**: DARE_TIES - an advanced merging technique that reduces interference between the merged models.
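
For intuition, the toy snippet below illustrates the DARE-TIES idea in isolation: randomly drop-and-rescale each model's delta from the base (with the density above), then merge with a per-parameter sign consensus and the listed weights. It is a simplified sketch, not the mergekit implementation that produced this model, and the tensors are stand-ins for real weights.

```python
# Toy illustration of DARE-TIES (simplified; not the actual mergekit code).
import torch

def dare_sparsify(delta: torch.Tensor, density: float) -> torch.Tensor:
    # Drop-and-rescale: keep roughly `density` of the entries and rescale the
    # survivors so the expected delta is unchanged.
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

def ties_merge(deltas: list, weights: list) -> torch.Tensor:
    # Keep only entries that agree with the elected (majority) sign, then
    # average the agreeing contributions.
    stacked = torch.stack([w * d for w, d in zip(weights, deltas)])
    elected_sign = torch.sign(stacked.sum(dim=0))
    agree = torch.sign(stacked) == elected_sign
    return (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)

# Toy tensors standing in for one weight matrix of each source model.
base = torch.zeros(4, 4)
instruct = base + 0.1 * torch.randn(4, 4)  # stands in for Gensyn/Qwen2.5-1.5B-Instruct
distill = base + 0.1 * torch.randn(4, 4)   # stands in for DeepSeek-R1-Distill-Qwen-1.5B

deltas = [dare_sparsify(m - base, density=0.6) for m in (instruct, distill)]
merged = base + ties_merge(deltas, weights=[0.5, 0.5])
print(merged)
```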

## 📊 Available Quantization Formats

This repository contains multiple quantization formats optimized for different use cases: