TheBloke committed on
Commit c3d8042
1 Parent(s): 98c2124

Upload new GPTQs with varied parameters

Files changed (1)
  1. README.md +93 -36
README.md CHANGED
@@ -1,6 +1,7 @@
---
inference: false
license: other
tags:
- llama
- pytorch
@@ -14,7 +15,7 @@ tags:
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
- <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
@@ -24,48 +25,84 @@ tags:

# Elinas' Chronos 33B GPTQ

- These files are GPTQ 4bit model files for [Elinas' Chronos 33B](https://huggingface.co/elinas/chronos-33b).

- It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

## Repositories available

- * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/chronos-33b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/chronos-33b-GGML)
- * [Elinas' original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/elinas/chronos-33b)

- ## Prompt template

```
- ### Instruction:
- Your instruction or question here.
### Response:
```

- ## How to easily download and use this model in text-generation-webui

- Please make sure you're using the latest version of text-generation-webui

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/chronos-33b-GPTQ`.
3. Click **Download**.
- 4. The model will start downloading, and once finished it will be automatically loaded.
- 5. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
- 6. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!

## How to use this GPTQ model from Python code

First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:

- `pip install auto-gptq`

Then try the following example code:

```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
- import argparse

model_name_or_path = "TheBloke/chronos-33b-GPTQ"
model_basename = "chronos-33b-GPTQ-4bit--1g.act.order"
@@ -75,13 +112,33 @@ use_triton = False
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
- model_basename=model_basename,
use_safetensors=True,
- trust_remote_code=True,
device="cuda:0",
use_triton=use_triton,
quantize_config=None)

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
@@ -93,10 +150,6 @@ print(tokenizer.decode(output[0]))
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

- prompt = "Tell me about AI"
- prompt_template=f'''### Human: {prompt}
- ### Assistant:'''
-
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
@@ -111,26 +164,18 @@ pipe = pipeline(
print(pipe(prompt_template)[0]['generated_text'])
```

- ## Provided files
-
- **chronos-33b-GPTQ-4bit--1g.act.order.safetensors**
-
- This will work with AutoGPTQ and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.

- It was created without group_size to lower VRAM requirements, and with --act-order (desc_act) to boost inference accuracy as much as possible.

- * `chronos-33b-GPTQ-4bit--1g.act.order.safetensors`
- * Works with AutoGPTQ in CUDA or Triton modes.
- * Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode.
- * Works with text-generation-webui, including one-click-installers.
- * Parameters: Groupsize = -1. Act Order / desc_act = True.

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

- [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)

## Thanks, and how to contribute.
@@ -145,9 +190,9 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

- **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

- **Patreon special mentions**: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.

Thank you to all my generous patrons and donaters!
@@ -155,9 +200,10 @@ Thank you to all my generous patrons and donaters!

# Original model card: Elinas' Chronos 33B

# chronos-33b

- This is the fp16 PyTorch / HF version of **chronos-33b**

This model is primarily focused on chat, roleplay, and storywriting, but can accomplish other tasks such as simple reasoning and coding.
@@ -170,6 +216,17 @@ Your instruction or question here.
### Response:
```

# LLaMA Model Card

## Model details
 
---
inference: false
license: other
+ model_type: llama
tags:
- llama
- pytorch
 
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 
# Elinas' Chronos 33B GPTQ

+ These files are GPTQ model files for [Elinas' Chronos 33B](https://huggingface.co/elinas/chronos-33b).

+ Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
+
+ These models were quantised using hardware kindly provided by [Latitude.sh](https://www.latitude.sh/accelerate).

## Repositories available

+ * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/chronos-33b-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/chronos-33b-GGML)
+ * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/elinas/chronos-33b)

+ ## Prompt template: Alpaca

```
+ Below is an instruction that describes a task. Write a response that appropriately completes the request.
+
+ ### Instruction: {prompt}
+
### Response:
```
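
For illustration, the template above can be filled in from Python along these lines; `build_prompt` is our own sketch, not a helper shipped with the model:

```python
# Minimal sketch of filling the Alpaca template shown above.
# build_prompt is illustrative; it is not part of the model repo.
def build_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction: {instruction}\n\n"
        "### Response:\n"
    )

print(build_prompt("Tell me about AI"))
```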

+ ## Provided files
+
+ Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
+
+ Each separate quant is in a different branch. See below for instructions on fetching from different branches.
+
+ | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
+ | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
+ | main | 4 | None | True | 16.94 GB | True | GPTQ-for-LLaMa | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
+ | gptq-4bit-32g-actorder_True | 4 | 32 | True | 19.44 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
+ | gptq-4bit-64g-actorder_True | 4 | 64 | True | 18.18 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+ | gptq-4bit-128g-actorder_True | 4 | 128 | True | 17.55 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+ | gptq-8bit--1g-actorder_True | 8 | None | True | 32.99 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
+ | gptq-8bit-128g-actorder_False | 8 | 128 | False | 33.73 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
+ | gptq-3bit--1g-actorder_True | 3 | None | True | 12.92 GB | False | AutoGPTQ | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
+ | gptq-3bit-128g-actorder_False | 3 | 128 | False | 13.51 GB | False | AutoGPTQ | 3-bit, with group size 128g but no act-order. Slightly higher VRAM requirements than 3-bit None. |
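
If you'd rather read this list programmatically, the branches can be enumerated with `huggingface_hub`; a minimal sketch, assuming `huggingface_hub` is installed (this snippet is ours, not part of the model card's own code):

```python
# Sketch: enumerate the quant branches of this repo via the Hugging Face Hub API.
# Assumes `pip install huggingface_hub`.
from huggingface_hub import list_repo_refs

refs = list_repo_refs("TheBloke/chronos-33b-GPTQ")
for branch in refs.branches:
    print(branch.name)  # e.g. main, gptq-4bit-32g-actorder_True, ...
```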

+ ## How to download from branches
+
+ - In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/chronos-33b-GPTQ:gptq-4bit-32g-actorder_True`
+ - With Git, you can clone a branch with:
+ ```
+ git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/chronos-33b-GPTQ
+ ```
+ - In Python Transformers code, the branch is the `revision` parameter; see below.
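
Another option, not covered in the card itself, is to fetch a branch with `huggingface_hub`; a minimal sketch, with the branch name taken as an example from the table above:

```python
# Sketch: download one quant branch with huggingface_hub's snapshot_download.
# Assumes `pip install huggingface_hub`; the branch is an example from the table.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="TheBloke/chronos-33b-GPTQ",
    revision="gptq-4bit-32g-actorder_True",
)
print(local_path)  # local cache directory containing that branch's files
```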

+ ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
+
+ Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

+ It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/chronos-33b-GPTQ`.
+ - To download from a specific branch, enter for example `TheBloke/chronos-33b-GPTQ:gptq-4bit-32g-actorder_True`
+ - See Provided Files above for the list of branches for each option.
3. Click **Download**.
+ 4. The model will start downloading. Once it's finished it will say "Done".
+ 5. In the top left, click the refresh icon next to **Model**.
+ 6. In the **Model** dropdown, choose the model you just downloaded: `chronos-33b-GPTQ`
+ 7. The model will automatically load, and is now ready for use!
+ 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
* Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
+ 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!

## How to use this GPTQ model from Python code

First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:

+ `GITHUB_ACTIONS=true pip install auto-gptq`

Then try the following example code:

```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name_or_path = "TheBloke/chronos-33b-GPTQ"
model_basename = "chronos-33b-GPTQ-4bit--1g.act.order"
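# model_basename is the .safetensors filename (minus the extension) of the quant
# in use; it may differ per branch, so adjust it if you fetch another branch.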
 
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
+ model_basename=model_basename,
use_safetensors=True,
+ trust_remote_code=False,
device="cuda:0",
use_triton=use_triton,
quantize_config=None)

+ """
+ To download from a specific branch, use the revision parameter, as in this example:
+
+ model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
+ revision="gptq-4bit-32g-actorder_True",
+ model_basename=model_basename,
+ use_safetensors=True,
+ trust_remote_code=False,
+ device="cuda:0",
+ quantize_config=None)
+ """
+
+ prompt = "Tell me about AI"
+ prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
+
+ ### Instruction: {prompt}
+
+ ### Response:
+ '''
+
print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
 
# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
"text-generation",
 
print(pipe(prompt_template)[0]['generated_text'])
```

+ ## Compatibility

+ The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.

+ ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
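
If you are unsure which parameters a downloaded branch was quantised with, you can inspect its `quantize_config.json` (the same file the webui reads); a minimal sketch, assuming `huggingface_hub` is installed:

```python
# Sketch: fetch and print a branch's quantize_config.json to check bits,
# group_size and desc_act. Assumes `pip install huggingface_hub`.
import json
from huggingface_hub import hf_hub_download

cfg_path = hf_hub_download(
    "TheBloke/chronos-33b-GPTQ",
    "quantize_config.json",
    revision="gptq-4bit-32g-actorder_True",  # example branch from the table
)
with open(cfg_path) as f:
    print(json.load(f))
```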
 
<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

+ [TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.
 
 
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

+ **Special thanks to**: Luke from CarbonQuill, Aemon Algiz.

+ **Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex, Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost, Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius, Imad Khwaja, Pierre Kircher, terasurfer, Asp the Wyvern, John Villwock, theTransient, zynix, Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.

Thank you to all my generous patrons and donaters!
 
 

# Original model card: Elinas' Chronos 33B

# chronos-33b

+ This is the fp16 PyTorch / HF version of **chronos-33b** - if you need another version, GGML and GPTQ versions are linked below.

This model is primarily focused on chat, roleplay, and storywriting, but can accomplish other tasks such as simple reasoning and coding.
 
 
### Response:
```

+ [GGML Version provided by @TheBloke](https://huggingface.co/TheBloke/chronos-33b-GGML)
+
+ [4bit GPTQ Version provided by @TheBloke](https://huggingface.co/TheBloke/chronos-33b-GPTQ)
+
+ <!--**Support My Development of New Models**
+ <a href='https://ko-fi.com/Q5Q6MB734' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Support Development' /></a>-->
+
+ ---
+ license: other
+ ---
# LLaMA Model Card

## Model details