Quantization made by Richard Erkhov.

Github · Discord · Request more models

MagnusIntellectus-12B-v1 - GGUF

Original model description:

tags:
- merge
- mergekit
- lazymergekit
- UsernameJustAnother/Nemo-12B-Marlin-v5
- anthracite-org/magnum-12b-v2
base_model:
- UsernameJustAnother/Nemo-12B-Marlin-v5
- anthracite-org/magnum-12b-v2
model-index:
- name: MagnusIntellectus-12B-v1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 44.21
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=GalrionSoftworks/MagnusIntellectus-12B-v1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 33.26
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=GalrionSoftworks/MagnusIntellectus-12B-v1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 5.14
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=GalrionSoftworks/MagnusIntellectus-12B-v1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 4.59
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=GalrionSoftworks/MagnusIntellectus-12B-v1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 15.18
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=GalrionSoftworks/MagnusIntellectus-12B-v1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 26.9
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=GalrionSoftworks/MagnusIntellectus-12B-v1
      name: Open LLM Leaderboard
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers

MagnusIntellectus-12B-v1


How pleasant, the rocks appear to have made a decent conglomerate. A-.

MagnusIntellectus is a merge of the following models using LazyMergekit:

* UsernameJustAnother/Nemo-12B-Marlin-v5
* anthracite-org/magnum-12b-v2

🧩 Configuration

models:
  - model: UsernameJustAnother/Nemo-12B-Marlin-v5
    parameters:
      density: 0.4
      weight: 0.70
  - model: anthracite-org/magnum-12b-v2
    parameters:
      density: 0.6
      weight: 0.30

merge_method: ties
base_model: UsernameJustAnother/Nemo-12B-Marlin-v5
parameters:
  normalize: true
dtype: bfloat16
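Under the TIES method, `weight` sets how strongly each model contributes to the merged parameters, and `density` is the fraction of each model's parameter deltas kept before the sign-consensus step. To reproduce the merge locally, this configuration can be passed to mergekit's `mergekit-yaml` command. A minimal sketch, assuming the config above is saved as `magnus.yaml` (the filename, output path, and flags are illustrative and may vary by mergekit version):

!pip install -qU mergekit

mergekit-yaml magnus.yaml ./MagnusIntellectus-12B-v1 --cuda --copy-tokenizer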

πŸ’» Usage

!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "GalrionSoftworks/MagnusIntellectus-12B-v1"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Render the chat messages into a single prompt string using the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Build a text-generation pipeline, sharding the model across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample up to 256 new tokens with temperature/top-k/top-p sampling.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
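Because this repository ships GGUF quantizations, the model can also be run without transformers, for example through llama-cpp-python. A minimal sketch, assuming you have already downloaded one of the quantized files (the filename below is hypothetical; use whichever bit-width you fetched):

from llama_cpp import Llama

# Load a local GGUF file; n_ctx sets the context window.
llm = Llama(model_path="MagnusIntellectus-12B-v1.Q4_K_M.gguf", n_ctx=4096)

# llama-cpp-python typically picks up the chat template stored in the GGUF metadata.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a large language model?"}],
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])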

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=GalrionSoftworks/MagnusIntellectus-12B-v1

| Metric              | Value |
|---------------------|------:|
| Avg.                | 21.55 |
| IFEval (0-Shot)     | 44.21 |
| BBH (3-Shot)        | 33.26 |
| MATH Lvl 5 (4-Shot) |  5.14 |
| GPQA (0-shot)       |  4.59 |
| MuSR (0-shot)       | 15.18 |
| MMLU-PRO (5-shot)   | 26.90 |

Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I otherwise could.

GGUF quantizations are provided in 2-, 3-, 4-, 5-, 6-, and 8-bit variants (12.2B parameters, llama architecture).
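To fetch a specific quant programmatically, huggingface_hub's `hf_hub_download` can be used. A sketch with hypothetical repo and file names (substitute the actual ones from this repository's file list):

from huggingface_hub import hf_hub_download

# Both repo_id and filename are placeholders; check the repo's file list for the real names.
path = hf_hub_download(
    repo_id="RichardErkhov/MagnusIntellectus-12B-v1-gguf",
    filename="MagnusIntellectus-12B-v1.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF file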