Initial GGML model commit
---
inference: false
license: other
model_type: llama
---

<!-- header start -->
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
<!-- header end -->

# Meta's LLaMA 7b GGML

These files are GGML format model files for [Meta's LLaMA 7b](https://ai.meta.com/blog/large-language-model-llama-meta-ai).

GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with full GPU acceleration out of the box. Especially good for storytelling.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with GPU acceleration via the c_transformers backend.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI. Supports full GPU acceleration on macOS. Also supports Windows, without GPU acceleration.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Requires extra steps to enable GPU acceleration via the llama.cpp backend.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with LangChain support and an OpenAI-compatible AI server (see the sketch just after this list).
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with an OpenAI-compatible API server.
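
To make the Python routes above concrete, here is a minimal, untested sketch of loading one of these GGML files with ctransformers. The file name comes from the Provided Files table below; `gpu_layers=32` is just an example value:

```
# Minimal sketch (untested): load a GGML file from this repo with ctransformers.
# Assumes `pip install ctransformers` (use ctransformers[cuda] for GPU offload).
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/LLaMA-7b-GGML",
    model_file="llama-7b.ggmlv3.q4_K_M.bin",  # any file from the table below
    model_type="llama",  # matches model_type in the metadata header above
    gpu_layers=32,       # example value; use 0 for CPU-only inference
)

print(llm("Write a story about llamas", max_new_tokens=128))
```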

These files were quantised using hardware kindly provided by [Latitude.sh](https://www.latitude.sh/accelerate).

## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options](https://huggingface.co/TheBloke/LLaMA-7b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/LLaMA-7b-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/huggyllama/llama-7b)

## Prompt template: None

```
{prompt}
```

<!-- compatibility_ggml start -->
## Compatibility

### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`

These are guaranteed to be compatible with any UIs, tools and libraries released since late May. They may be phased out soon, as they are largely superseded by the new k-quant methods.

### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`

These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.

They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers, rustformers and most others. For compatibility with other tools and libraries, please check their documentation.

## Explanation of the new k-quant methods
<details>
<summary>Click to see details</summary>

The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.

Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_ggml end -->

## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| llama-7b.ggmlv3.q2_K.bin | q2_K | 2 | 2.80 GB | 5.30 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| llama-7b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 3.55 GB | 6.05 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| llama-7b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 3.23 GB | 5.73 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| llama-7b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 2.90 GB | 5.40 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
| llama-7b.ggmlv3.q4_0.bin | q4_0 | 4 | 3.79 GB | 6.29 GB | Original quant method, 4-bit. |
| llama-7b.ggmlv3.q4_1.bin | q4_1 | 4 | 4.21 GB | 6.71 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
| llama-7b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 4.05 GB | 6.55 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K. |
| llama-7b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 3.79 GB | 6.29 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
| llama-7b.ggmlv3.q5_0.bin | q5_0 | 5 | 4.63 GB | 7.13 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| llama-7b.ggmlv3.q5_1.bin | q5_1 | 5 | 5.06 GB | 7.56 GB | Original quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
| llama-7b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 4.77 GB | 7.27 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K. |
| llama-7b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 4.63 GB | 7.13 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |
| llama-7b.ggmlv3.q6_K.bin | q6_K | 6 | 5.53 GB | 8.03 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization. |
| llama-7b.ggmlv3.q8_0.bin | q8_0 | 8 | 7.16 GB | 9.66 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
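
Incidentally, every "Max RAM required" figure above is the file size plus roughly 2.5 GB of overhead. As a rule-of-thumb sketch (the 2.5 GB constant is simply read off this table, not a documented guarantee):

```
# Rough RAM estimate for full-CPU inference of these GGML files.
# Observation from the table above: max RAM ~ file size + ~2.5 GB.
def estimate_ram_gb(file_size_gb: float, overhead_gb: float = 2.5) -> float:
    return file_size_gb + overhead_gb

print(estimate_ram_gb(3.79))  # q4_0: 6.29, matching the table row
```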
I use the following command line; adjust for your tastes and needs:

```
./main -t 10 -ngl 32 -m llama-7b.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
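
To drive the same model from Python instead of the CLI, a roughly equivalent call through llama-cpp-python would look something like this (an untested sketch; the parameters mirror the flags above):

```
# Untested sketch: the llama.cpp invocation above, via llama-cpp-python.
# Assumes `pip install llama-cpp-python` and the .bin file downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-7b.ggmlv3.q4_0.bin",
    n_ctx=2048,       # -c 2048
    n_threads=10,     # -t 10: set to your physical core count
    n_gpu_layers=32,  # -ngl 32: requires a GPU-enabled build
)

out = llm(
    "### Instruction: Write a story about llamas\n### Response:",
    max_tokens=256,      # the CLI's -n -1 generates until end-of-sequence; capped here
    temperature=0.7,     # --temp 0.7
    repeat_penalty=1.1,  # --repeat_penalty 1.1
)
print(out["choices"][0]["text"])
```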

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.

**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex, Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost, Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius, Imad Khwaja, Pierre Kircher, terasurfer, Asp the Wyvern, John Villwock, theTransient, zynix, Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.

Thank you to all my generous patrons and donaters!

<!-- footer end -->

# Original model card: Meta's LLaMA 7b

This contains the weights for the LLaMA-7b model. This model is under a non-commercial license (see the LICENSE file).
You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) but either lost your copy of the weights or ran into trouble converting them to the Transformers format.