Update README.md
README.md CHANGED

@@ -10,7 +10,6 @@ tags:
 - imatrix
 - mistral
 - merge
-- nsfw
 inference: false
 datasets:
 - ResplendentAI/Alpaca_NSFW_Shuffled
@@ -126,7 +125,7 @@ This repository hosts GGUF-Imatrix quantizations for [ResplendentAI/Sinerva_7B](
 ```
 Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)
 ```
-
+Quants:
 ```python
 quantization_options = [
 "Q4_K_M", "IQ4_XS", "Q5_K_M", "Q5_K_S", "Q6_K",
@@ -134,6 +133,8 @@ To be uploaded:
 ]
 ```
 
+If you want anything that's not here or another model, feel free to request.
+
 **This is experimental.**
 
 For imatrix data generation, kalomaze's `groups_merged.txt` with added roleplay chats was used, you can find it [here](https://huggingface.co/Lewdiculous/Datura_7B-GGUF-Imatrix/blob/main/imatrix-with-rp-format-data.txt).
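For context, the `Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)` pipeline in the README can be sketched as a small driver that emits one llama.cpp command per stage. This is only an illustration: the tool names (`convert.py`, `imatrix`, `quantize`) are the standard llama.cpp utilities, but the exact paths, flags, and file names are assumptions, not the commands actually used for this repository.

```python
# Sketch of the Base -> GGUF(F16) -> Imatrix-Data(F16) -> GGUF(Imatrix-Quants)
# pipeline with llama.cpp tools. File names and tool paths are assumptions.
quantization_options = [
    "Q4_K_M", "IQ4_XS", "Q5_K_M", "Q5_K_S", "Q6_K",
]

def pipeline_commands(model_dir="Sinerva_7B",
                      f16="Sinerva_7B-F16.gguf",
                      imatrix_data="imatrix-with-rp-format-data.txt",
                      imatrix_out="imatrix.dat"):
    """Return the shell commands for each pipeline stage, in order."""
    cmds = [
        # 1. Base HF model -> F16 GGUF
        f"python convert.py {model_dir} --outtype f16 --outfile {f16}",
        # 2. F16 GGUF + calibration text -> importance matrix
        f"./imatrix -m {f16} -f {imatrix_data} -o {imatrix_out}",
    ]
    # 3. F16 GGUF + imatrix -> one quantized GGUF per requested option
    cmds += [
        f"./quantize --imatrix {imatrix_out} {f16} Sinerva_7B-{q}.gguf {q}"
        for q in quantization_options
    ]
    return cmds

for cmd in pipeline_commands():
    print(cmd)
```

The imatrix is computed once from the F16 model and the calibration text, then reused for every quant level, which is why the loop only varies the target type.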