Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

BigWeave-v27-95b - GGUF
- Model creator: https://huggingface.co/llmixer/
- Original model: https://huggingface.co/llmixer/BigWeave-v27-95b/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [BigWeave-v27-95b.Q2_K.gguf](https://huggingface.co/RichardErkhov/llmixer_-_BigWeave-v27-95b-gguf/blob/main/BigWeave-v27-95b.Q2_K.gguf) | Q2_K | 32.98GB |
| [BigWeave-v27-95b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/llmixer_-_BigWeave-v27-95b-gguf/blob/main/BigWeave-v27-95b.IQ3_XS.gguf) | IQ3_XS | 36.69GB |
| [BigWeave-v27-95b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/llmixer_-_BigWeave-v27-95b-gguf/tree/main/) | IQ3_S | 38.78GB |
| [BigWeave-v27-95b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/llmixer_-_BigWeave-v27-95b-gguf/tree/main/) | Q3_K_S | 38.66GB |
| [BigWeave-v27-95b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/llmixer_-_BigWeave-v27-95b-gguf/tree/main/) | IQ3_M | 40.12GB |
| [BigWeave-v27-95b.Q3_K.gguf](https://huggingface.co/RichardErkhov/llmixer_-_BigWeave-v27-95b-gguf/tree/main/) | Q3_K | 43.16GB |
| [BigWeave-v27-95b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/llmixer_-_BigWeave-v27-95b-gguf/tree/main/) | Q3_K_M | 43.16GB |
| [BigWeave-v27-95b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/llmixer_-_BigWeave-v27-95b-gguf/tree/main/) | Q3_K_L | 47.01GB |
| [BigWeave-v27-95b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/llmixer_-_BigWeave-v27-95b-gguf/tree/main/) | IQ4_XS | 48.37GB |
| [BigWeave-v27-95b.Q4_0.gguf](https://huggingface.co/RichardErkhov/llmixer_-_BigWeave-v27-95b-gguf/tree/main/) | Q4_0 | 50.55GB |
| [BigWeave-v27-95b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/llmixer_-_BigWeave-v27-95b-gguf/tree/main/) | IQ4_NL | 51.04GB |
| [BigWeave-v27-95b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/llmixer_-_BigWeave-v27-95b-gguf/tree/main/) | Q4_K_S | 50.94GB |
| [BigWeave-v27-95b.Q4_K.gguf](https://huggingface.co/RichardErkhov/llmixer_-_BigWeave-v27-95b-gguf/tree/main/) | Q4_K | 53.82GB |
| [BigWeave-v27-95b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/llmixer_-_BigWeave-v27-95b-gguf/tree/main/) | Q4_K_M | 53.82GB |
| [BigWeave-v27-95b.Q4_1.gguf](https://huggingface.co/RichardErkhov/llmixer_-_BigWeave-v27-95b-gguf/tree/main/) | Q4_1 | 56.14GB |
| [BigWeave-v27-95b.Q5_0.gguf](https://huggingface.co/RichardErkhov/llmixer_-_BigWeave-v27-95b-gguf/tree/main/) | Q5_0 | 61.74GB |
| [BigWeave-v27-95b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/llmixer_-_BigWeave-v27-95b-gguf/tree/main/) | Q5_K_S | 61.74GB |
| [BigWeave-v27-95b.Q5_K.gguf](https://huggingface.co/RichardErkhov/llmixer_-_BigWeave-v27-95b-gguf/tree/main/) | Q5_K | 63.42GB |
| [BigWeave-v27-95b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/llmixer_-_BigWeave-v27-95b-gguf/tree/main/) | Q5_K_M | 63.42GB |
| [BigWeave-v27-95b.Q5_1.gguf](https://huggingface.co/RichardErkhov/llmixer_-_BigWeave-v27-95b-gguf/tree/main/) | Q5_1 | 67.33GB |
| [BigWeave-v27-95b.Q6_K.gguf](https://huggingface.co/RichardErkhov/llmixer_-_BigWeave-v27-95b-gguf/tree/main/) | Q6_K | 73.62GB |
| [BigWeave-v27-95b.Q8_0.gguf](https://huggingface.co/RichardErkhov/llmixer_-_BigWeave-v27-95b-gguf/tree/main/) | Q8_0 | 95.35GB |

Original model description:
---
base_model:
- 152334H/miqu-1-70b-sf
license: unknown
language:
- en
pipeline_tag: text-generation
tags:
- merge
- frankenmerge
- 95b
---

# BigWeave v27 95b

The BigWeave models aim to experimentally identify merge settings that increase model performance. The version number merely tracks various attempts and is not a quality indicator. Only results demonstrating good performance are retained and shared.

# Prompting Format

ChatML, Mistral, Vicuna.

# Merge process

This is a self-merge of 152334H/miqu-1-70b-sf. The 30 most important layers (according to exl2 measurements) are duplicated with 50% overlap.

Merge configuration:
```
slices:
- sources:
  - model: 152334H/miqu-1-70b-sf
    layer_range: [0,40]
- sources:
  - model: 152334H/miqu-1-70b-sf
    layer_range: [34,45] # dup 34-44
- sources:
  - model: 152334H/miqu-1-70b-sf
    layer_range: [40,52]
- sources:
  - model: 152334H/miqu-1-70b-sf
    layer_range: [51,53] # dup 51-52
- sources:
  - model: 152334H/miqu-1-70b-sf
    layer_range: [52,55]
- sources:
  - model: 152334H/miqu-1-70b-sf
    layer_range: [54,56] # dup 54-55
- sources:
  - model: 152334H/miqu-1-70b-sf
    layer_range: [55,59]
- sources:
  - model: 152334H/miqu-1-70b-sf
    layer_range: [58,60] # dup 58-59
- sources:
  - model: 152334H/miqu-1-70b-sf
    layer_range: [59,72]
- sources:
  - model: 152334H/miqu-1-70b-sf
    layer_range: [64,79] # dup 64-78
- sources:
  - model: 152334H/miqu-1-70b-sf
    layer_range: [72,80]
merge_method: passthrough
dtype: float16
```
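As a quick sanity check (not part of the original card), the slice ranges in the configuration can be summed to get the depth of the merged model; the 80-layer base depth is an assumption taken from the Llama-2-70b architecture that miqu-1-70b uses.

```python
# Layer arithmetic for the passthrough self-merge above.
# Each (start, end) pair is a half-open layer_range copied from the config.
slices = [
    (0, 40), (34, 45), (40, 52), (51, 53), (52, 55),
    (54, 56), (55, 59), (58, 60), (59, 72), (64, 79), (72, 80),
]

total = sum(end - start for start, end in slices)
base_layers = 80  # assumed depth of the 70b base model
duplicated = total - base_layers

print(f"merged depth: {total} layers, {duplicated} duplicated")
# -> merged depth: 112 layers, 32 duplicated
```

With passthrough merging, no weights are averaged; the slices are simply stacked, so the merged model's depth is exactly the sum of the range lengths.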