Quantization made by Richard Erkhov.

Github | Discord | Request more models

BigWeave-v27-95b - GGUF

Original model description:

base_model:
  - 152334H/miqu-1-70b-sf
license: unknown
language:
  - en
pipeline_tag: text-generation
tags:
  - merge
  - frankenmerge
  - 95b

BigWeave v27 95b

The BigWeave models aim to experimentally identify merge settings for increasing model performance. The version number merely tracks various attempts and is not a quality indicator. Only results demonstrating good performance are retained and shared.

Prompting Format

ChatML, Mistral, and Vicuna prompt formats all work; a ChatML example is shown below.
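
For reference, a ChatML prompt looks like the following (the system message is illustrative and can be changed or omitted):

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```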

Merge process

This is a self-merge of 152334H/miqu-1-70b-sf. The 30 most important layers (according to exl2 measurements) are duplicated with 50% overlap.

Merge configuration:

slices:
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [0,40]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [34,45] # dup 34-44
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [40,52]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [51,53] # dup 51-52
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [52,55]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [54,56] # dup 54-55
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [55,59]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [58,60] # dup 58-59
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [59,72]
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [64,79] # dup 64-78
  - sources:
    - model: 152334H/miqu-1-70b-sf
      layer_range: [72,80]
merge_method: passthrough
dtype: float16
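
As a sketch, a merge like this can be reproduced with mergekit (assuming the configuration above is saved as config.yml; exact flags vary across mergekit versions):

```sh
pip install mergekit
# Writes the merged model to ./BigWeave-v27-95b (output path is illustrative)
mergekit-yaml config.yml ./BigWeave-v27-95b --copy-tokenizer --lazy-unpickle
```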

Model size: 96.4B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
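
A minimal sketch of running one of these quants with llama.cpp (the GGUF file name is illustrative; the CLI binary is llama-cli in recent llama.cpp builds, main in older ones):

```sh
# -e expands the \n escapes in the prompt into real newlines (ChatML formatting)
./llama-cli -m BigWeave-v27-95b.Q4_K_M.gguf -c 4096 -n 256 \
  -e -p "<|im_start|>user\nWrite a haiku about llamas.<|im_end|>\n<|im_start|>assistant\n"
```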
