---
license: llama3.1
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
- meta-llama/Llama-3.1-8B-Instruct
---

This model aims to retain the reasoning capabilities of <a href="https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B">DeepSeek-R1-Distill-Llama-8B</a> while aligning more closely with the original <a href="https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct">Llama 3.1 model</a> on which it is based.

As this model derives from Llama 3.1, the <a href="https://www.llama.com/llama3_1/license/">Llama 3.1 Community License Agreement</a> applies.

Use the [DeepSeek Chat Prompt Template](https://docs.unsloth.ai/basics/tutorial-how-to-run-deepseek-r1-on-your-own-local-device#deepseek-chat-template) with this model.

## 8B Safetensors BF16 format:
Use with [transformers](https://huggingface.co/docs/transformers/en/index) as you would Llama 3.1, but apply the [DeepSeek Chat Prompt Template](https://docs.unsloth.ai/basics/tutorial-how-to-run-deepseek-r1-on-your-own-local-device#deepseek-chat-template) as with the original DeepSeek-R1-Distill-Llama models.

Use model id ___BlueBeck/LlamaAligned-DeepSeekR1-Distill-8b___

[Or download files from here](https://huggingface.co/BlueBeck/LlamaAligned-DeepSeekR1-Distill-8b/tree/main)
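
As a sketch, the chat template can also be built by hand. The special-token strings below are an assumption based on the DeepSeek-R1-Distill tokenizer config (they are not documented in this card); in practice, `tokenizer.apply_chat_template` from transformers constructs this for you.

```python
# Minimal sketch of the DeepSeek chat prompt format. Token strings are an
# assumption from the DeepSeek-R1-Distill tokenizer config; prefer
# tokenizer.apply_chat_template in real code.

BOS = "<｜begin▁of▁sentence｜>"  # note: full-width bars, not ASCII "|"
USER = "<｜User｜>"
ASSISTANT = "<｜Assistant｜>"

def build_prompt(user_message: str, system_prompt: str = "") -> str:
    """Format a single-turn prompt, ending with the assistant tag so the
    model generates the reply (including its <think> reasoning block)."""
    # The optional system prompt sits between BOS and the first user turn.
    return f"{BOS}{system_prompt}{USER}{user_message}{ASSISTANT}"

print(build_prompt("How many r's are in 'strawberry'?"))
```

The resulting string can be tokenized and passed to the model as a plain completion prompt.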

## 8B GGUF Quantised versions:

Use these with [Llama.cpp](https://github.com/ggerganov/llama.cpp), [LM Studio](https://lmstudio.ai/) or [Kobold.cpp](https://github.com/LostRuins/koboldcpp).
Thanks to [mradermacher](https://huggingface.co/mradermacher) for converting these from the [safetensors](https://huggingface.co/BlueBeck/LlamaAligned-DeepSeekR1-Distill-8b/tree/main) format.

| Filename | Type       | Size      | Quality     |
| -------- | ---------- | --------- | ----------- |
| [LlamaAligned-DeepSeekR1-Distill-8b-Q4_K_M.gguf](https://huggingface.co/BlueBeck/LlamaAligned-DeepSeekR1-Distill-8b/resolve/quants/LlamaAligned-DeepSeekR1-Distill-8b.Q4_K_M.gguf?download=true) | Q4_K_M | 4.92GB | OK quality, default. |
| [LlamaAligned-DeepSeekR1-Distill-8b-Q8_0.gguf](https://huggingface.co/BlueBeck/LlamaAligned-DeepSeekR1-Distill-8b/resolve/quants/LlamaAligned-DeepSeekR1-Distill-8b.Q8_0.gguf?download=true) | Q8_0 | 8.54GB | Best quality quantised version. |
| [LlamaAligned-DeepSeekR1-Distill-8b-Q6_K.gguf](https://huggingface.co/BlueBeck/LlamaAligned-DeepSeekR1-Distill-8b/resolve/quants/LlamaAligned-DeepSeekR1-Distill-8b.Q6_K.gguf?download=true) | Q6_K | 6.6GB | High quality. |
| [LlamaAligned-DeepSeekR1-Distill-8b-Q5_K_M.gguf](https://huggingface.co/BlueBeck/LlamaAligned-DeepSeekR1-Distill-8b/resolve/quants/LlamaAligned-DeepSeekR1-Distill-8b.Q5_K_M.gguf?download=true) | Q5_K_M | 5.73GB | Good quality. |
| [LlamaAligned-DeepSeekR1-Distill-8b-Q3_K_S.gguf](https://huggingface.co/BlueBeck/LlamaAligned-DeepSeekR1-Distill-8b/resolve/quants/LlamaAligned-DeepSeekR1-Distill-8b.Q3_K_S.gguf?download=true) | Q3_K_S | 3.66GB | Lower quality. |
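
As a sketch, a downloaded quant can be run from the command line with llama.cpp (recent builds name the CLI binary `llama-cli`; the local path, prompt, and sampling settings below are assumptions, so adjust them to your setup):

```shell
# Sketch: run the Q4_K_M quant with llama.cpp's llama-cli.
# -n 512: generate up to 512 tokens; --temp 0.6: a common choice for
# R1-style reasoning models; -ngl 99: offload all layers to GPU if
# llama.cpp was built with GPU support.
./llama-cli -m ./LlamaAligned-DeepSeekR1-Distill-8b.Q4_K_M.gguf \
  -p "<｜User｜>Explain quicksort briefly.<｜Assistant｜>" \
  -n 512 --temp 0.6 -ngl 99
```

LM Studio and Kobold.cpp can instead load the `.gguf` file directly through their UIs.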


## 70B Safetensors BF16 format:
Use with [transformers](https://huggingface.co/docs/transformers/en/index) as you would Llama 3.3, but apply the [DeepSeek Chat Prompt Template](https://docs.unsloth.ai/basics/tutorial-how-to-run-deepseek-r1-on-your-own-local-device#deepseek-chat-template) as with the original DeepSeek-R1-Distill-Llama models.

Use model id ___BlueBeck/LlamaAligned-DeepSeekR1-Distill-70b___

[Or download files from here](https://huggingface.co/BlueBeck/LlamaAligned-DeepSeekR1-Distill-70b/tree/main)

## 70B GGUF Quantised versions:

Use these with [Llama.cpp](https://github.com/ggerganov/llama.cpp), [LM Studio](https://lmstudio.ai/) or [Kobold.cpp](https://github.com/LostRuins/koboldcpp).
Thanks to [mradermacher](https://huggingface.co/mradermacher) for converting these from the [safetensors](https://huggingface.co/BlueBeck/LlamaAligned-DeepSeekR1-Distill-70b/tree/main) format.

| Filename | Type       | Size      | Quality     |
| -------- | ---------- | --------- | ----------- |
| [LlamaAligned-DeepSeekR1-Distill-70b-Q4_K_M.gguf](https://huggingface.co/BlueBeck/LlamaAligned-DeepSeekR1-Distill-70b/resolve/quants/LlamaAligned-DeepSeekR1-Distill-70b.Q4_K_M.gguf?download=true) | Q4_K_M | 42.5GB | OK quality, default. |
| LlamaAligned-DeepSeekR1-Distill-70b-Q8_0.gguf [part1](https://huggingface.co/BlueBeck/LlamaAligned-DeepSeekR1-Distill-70b/resolve/quants/LlamaAligned-DeepSeekR1-Distill-70b.Q8_0.gguf.part1of2?download=true) [part2](https://huggingface.co/BlueBeck/LlamaAligned-DeepSeekR1-Distill-70b/resolve/quants/LlamaAligned-DeepSeekR1-Distill-70b.Q8_0.gguf.part2of2?download=true) | Q8_0 | 75.0GB | Best quality quantised version. |
| [LlamaAligned-DeepSeekR1-Distill-70b-Q3_K_S.gguf](https://huggingface.co/BlueBeck/LlamaAligned-DeepSeekR1-Distill-70b/resolve/quants/LlamaAligned-DeepSeekR1-Distill-70b.Q3_K_S.gguf?download=true) | Q3_K_S | 30.9GB | Lower quality. |
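
The Q8_0 70B quant is split into two parts. Assuming the parts are plain byte-level splits (the convention for mradermacher's multi-part uploads), they can be joined back into a single GGUF file before loading:

```shell
# Concatenate the two split parts into one loadable GGUF file.
# This assumes both .partNof2 files sit in the current directory.
cat LlamaAligned-DeepSeekR1-Distill-70b.Q8_0.gguf.part1of2 \
    LlamaAligned-DeepSeekR1-Distill-70b.Q8_0.gguf.part2of2 \
    > LlamaAligned-DeepSeekR1-Distill-70b.Q8_0.gguf
```

The combined file should then load in Llama.cpp, LM Studio, or Kobold.cpp like any single-file GGUF.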