QuantFactory/Stella-mistral-nemo-12B-GGUF

This is a quantized version of nbeerbower/Stella-mistral-nemo-12B created using llama.cpp.
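
To load one of the GGUF files locally, you can pair huggingface_hub with llama-cpp-python. This is a minimal sketch, not an official usage snippet: the exact GGUF filename below is an assumption and should be replaced with one of the files actually published in this repository.

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quantized file from the repo (the filename is assumed;
# check the repository file list for the exact name and quant level).
model_path = hf_hub_download(
    repo_id="QuantFactory/Stella-mistral-nemo-12B-GGUF",
    filename="Stella-mistral-nemo-12B.Q4_K_M.gguf",
)

# Load the model with the llama.cpp bindings and run a short chat completion.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a model merge is."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])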

Original Model Card


base_model:

  • flammenai/Mahou-1.3-mistral-nemo-12B
  • Gryphe/Pantheon-RP-1.5-12b-Nemo
  • VAGOsolutions/SauerkrautLM-Nemo-12b-Instruct
  • nbeerbower/mistral-nemo-wissenschaft-12B
  • nbeerbower/HolyNemo-12B
  • nbeerbower/mistral-nemo-gutenberg-12B-v2
  • intervitens/mini-magnum-12b-v1.1
  • NeverSleep/Lumimaid-v0.2-12B
  • nbeerbower/mistral-nemo-bophades-12B

library_name: transformers

tags:
  • mergekit
  • merge


Stella-mistral-nemo-12B

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged with the Model Stock merge method, using nbeerbower/mistral-nemo-gutenberg-12B-v2 as the base.

Models Merged

The following models were included in the merge:

  • flammenai/Mahou-1.3-mistral-nemo-12B
  • Gryphe/Pantheon-RP-1.5-12b-Nemo
  • intervitens/mini-magnum-12b-v1.1
  • NeverSleep/Lumimaid-v0.2-12B
  • VAGOsolutions/SauerkrautLM-Nemo-12b-Instruct
  • nbeerbower/HolyNemo-12B
  • nbeerbower/mistral-nemo-wissenschaft-12B
  • nbeerbower/mistral-nemo-bophades-12B

Configuration

The following YAML configuration was used to produce this model:

models:
    - model: flammenai/Mahou-1.3-mistral-nemo-12B
    - model: Gryphe/Pantheon-RP-1.5-12b-Nemo
    - model: intervitens/mini-magnum-12b-v1.1
    - model: NeverSleep/Lumimaid-v0.2-12B
    - model: VAGOsolutions/SauerkrautLM-Nemo-12b-Instruct
    - model: nbeerbower/HolyNemo-12B
    - model: nbeerbower/mistral-nemo-wissenschaft-12B
    - model: nbeerbower/mistral-nemo-bophades-12B
merge_method: model_stock
base_model: nbeerbower/mistral-nemo-gutenberg-12B-v2
dtype: bfloat16
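
To reproduce the merge, the configuration above can be passed to mergekit's command-line entry point. This is a minimal sketch, assuming mergekit is installed and the YAML is saved as merge-config.yaml; the output directory name is illustrative.

import subprocess

# Run mergekit on the configuration shown above
# (the config filename and output directory are assumptions).
subprocess.run(
    ["mergekit-yaml", "merge-config.yaml", "./Stella-mistral-nemo-12B"],
    check=True,
)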

GGUF quantizations are provided at 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit precision.