---
library_name: Transformers
tags:
- transformers
- fine-tuned
- language-modeling
- direct-preference-optimization
- TensorBlock
- GGUF
datasets:
- Intel/orca_dpo_pairs
license: apache-2.0
base_model: RatanRohith/NeuralPizza-7B-V0.2
---

<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;">
            Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
        </p>
    </div>
</div>

## RatanRohith/NeuralPizza-7B-V0.2 - GGUF

This repo contains GGUF format model files for [RatanRohith/NeuralPizza-7B-V0.2](https://huggingface.co/RatanRohith/NeuralPizza-7B-V0.2).

The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
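
As a quick sanity check that a downloaded quant loads with a compatible llama.cpp build, you can run it with the `llama-cli` binary. This is a minimal sketch, not part of this repo: the build path, chosen quant, prompt, and token count are all illustrative and depend on how you built llama.cpp.

```shell
# Sketch only: assumes llama.cpp (>= b4242) has been built locally, e.g.
#   cmake -B build && cmake --build build --config Release
# and that a quant such as NeuralPizza-7B-V0.2-Q4_K_M.gguf is in the current directory.
./build/bin/llama-cli \
  -m ./NeuralPizza-7B-V0.2-Q4_K_M.gguf \
  -p "Explain GGUF quantization in one short paragraph." \
  -n 256
```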

## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
  <th style="font-size: 25px;">Awesome MCP Servers</th>
  <th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
  <tr>
    <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
    <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
  </tr>
  <tr>
    <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
    <th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
  </tr>
<tr>
  <th>
    <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
      display: inline-block;
      padding: 8px 16px;
      background-color: #FF7F50;
      color: white;
      text-decoration: none;
      border-radius: 6px;
      font-weight: bold;
      font-family: sans-serif;
    ">πŸ‘€ See what we built πŸ‘€</a>
  </th>
  <th>
    <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
      display: inline-block;
      padding: 8px 16px;
      background-color: #FF7F50;
      color: white;
      text-decoration: none;
      border-radius: 6px;
      font-weight: bold;
      font-family: sans-serif;
    ">πŸ‘€ See what we built πŸ‘€</a>
  </th>
</tr>
</table>

## Prompt template

```

```

## Model file specification

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [NeuralPizza-7B-V0.2-Q2_K.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.2-GGUF/blob/main/NeuralPizza-7B-V0.2-Q2_K.gguf) | Q2_K | 2.719 GB | smallest, significant quality loss - not recommended for most purposes |
| [NeuralPizza-7B-V0.2-Q3_K_S.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.2-GGUF/blob/main/NeuralPizza-7B-V0.2-Q3_K_S.gguf) | Q3_K_S | 3.165 GB | very small, high quality loss |
| [NeuralPizza-7B-V0.2-Q3_K_M.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.2-GGUF/blob/main/NeuralPizza-7B-V0.2-Q3_K_M.gguf) | Q3_K_M | 3.519 GB | very small, high quality loss |
| [NeuralPizza-7B-V0.2-Q3_K_L.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.2-GGUF/blob/main/NeuralPizza-7B-V0.2-Q3_K_L.gguf) | Q3_K_L | 3.822 GB | small, substantial quality loss |
| [NeuralPizza-7B-V0.2-Q4_0.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.2-GGUF/blob/main/NeuralPizza-7B-V0.2-Q4_0.gguf) | Q4_0 | 4.109 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [NeuralPizza-7B-V0.2-Q4_K_S.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.2-GGUF/blob/main/NeuralPizza-7B-V0.2-Q4_K_S.gguf) | Q4_K_S | 4.140 GB | small, greater quality loss |
| [NeuralPizza-7B-V0.2-Q4_K_M.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.2-GGUF/blob/main/NeuralPizza-7B-V0.2-Q4_K_M.gguf) | Q4_K_M | 4.368 GB | medium, balanced quality - recommended |
| [NeuralPizza-7B-V0.2-Q5_0.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.2-GGUF/blob/main/NeuralPizza-7B-V0.2-Q5_0.gguf) | Q5_0 | 4.998 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [NeuralPizza-7B-V0.2-Q5_K_S.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.2-GGUF/blob/main/NeuralPizza-7B-V0.2-Q5_K_S.gguf) | Q5_K_S | 4.998 GB | large, low quality loss - recommended |
| [NeuralPizza-7B-V0.2-Q5_K_M.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.2-GGUF/blob/main/NeuralPizza-7B-V0.2-Q5_K_M.gguf) | Q5_K_M | 5.131 GB | large, very low quality loss - recommended |
| [NeuralPizza-7B-V0.2-Q6_K.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.2-GGUF/blob/main/NeuralPizza-7B-V0.2-Q6_K.gguf) | Q6_K | 5.942 GB | very large, extremely low quality loss |
| [NeuralPizza-7B-V0.2-Q8_0.gguf](https://huggingface.co/tensorblock/NeuralPizza-7B-V0.2-GGUF/blob/main/NeuralPizza-7B-V0.2-Q8_0.gguf) | Q8_0 | 7.696 GB | very large, extremely low quality loss - not recommended |


## Downloading instruction

### Command line

First, install the Hugging Face CLI:

```shell
pip install -U "huggingface_hub[cli]"
```

Then, download an individual model file to a local directory:

```shell
huggingface-cli download tensorblock/NeuralPizza-7B-V0.2-GGUF --include "NeuralPizza-7B-V0.2-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```

If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:

```shell
huggingface-cli download tensorblock/NeuralPizza-7B-V0.2-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
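
Once a quant is downloaded, one option for local use is llama.cpp's bundled OpenAI-compatible HTTP server. The sketch below is illustrative: the binary path, chosen quant, context size, and port are assumptions, and `MY_LOCAL_DIR` mirrors the download commands above.

```shell
# Sketch only: serves the downloaded quant over an OpenAI-compatible HTTP API.
./build/bin/llama-server \
  -m MY_LOCAL_DIR/NeuralPizza-7B-V0.2-Q4_K_M.gguf \
  -c 4096 \
  --port 8080
```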