
Platypus-30B merged with kaiokendev's 33b SuperHOT 8k LoRA, quantized to 4-bit.

It was created with GPTQ-for-LLaMa using group size 32 and act-order enabled, to keep perplexity as close as possible to the FP16 model.

I highly suggest using ExLlama to avoid VRAM issues.

Use the following settings, where max_seq_len is the context length (see the loading sketch below):

- If max_seq_len = 4096, set compress_pos_emb = 2
- If max_seq_len = 8192, set compress_pos_emb = 4
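
As a rough illustration, here is a minimal loading sketch using the turboderp/exllama Python code (the model directory, file names, and prompt below are placeholders; adjust them to your local copy):

```python
# Minimal sketch, assuming the turboderp/exllama repo is on the Python path and the
# quantized weights live in a local folder (the path below is a placeholder).
from model import ExLlama, ExLlamaCache, ExLlamaConfig
from tokenizer import ExLlamaTokenizer
from generator import ExLlamaGenerator

model_dir = "./Platypus-30B-SuperHOT-8K-4bit-32g"      # placeholder local path
config = ExLlamaConfig(f"{model_dir}/config.json")
config.model_path = f"{model_dir}/model.safetensors"   # adjust to the actual file name

# Scale the rotary position embeddings to match the chosen context length:
#   max_seq_len = 4096 -> compress_pos_emb = 2
#   max_seq_len = 8192 -> compress_pos_emb = 4
config.max_seq_len = 8192
config.compress_pos_emb = 4.0

model = ExLlama(config)
tokenizer = ExLlamaTokenizer(f"{model_dir}/tokenizer.model")
cache = ExLlamaCache(model)
generator = ExLlamaGenerator(model, tokenizer, cache)

print(generator.generate_simple("Below is an instruction that describes a task.", max_new_tokens=64))
```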

If you have two 24 GB GPUs, use the following split to avoid out-of-memory errors at 8192 context:

`gpu_split: 9,21`
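
Continuing the sketch above, the split can be applied before building the model (assuming exllama's `set_auto_map` helper, which corresponds to the `gpu_split` field in text-generation-webui):

```python
# Limit per-GPU allocation so the 8192-token cache fits: roughly 9 GB on GPU 0
# and 21 GB on GPU 1 (values in GB, same meaning as gpu_split above).
config.set_auto_map("9,21")   # assumed exllama helper; apply before ExLlama(config)
model = ExLlama(config)
```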
