
Quantization made by Richard Erkhov.

Github

Discord

Request more models

Nous-Hermes-2-MoE-2x34B - GGUF

Original model description:

```yaml
license: apache-2.0
model-index:
- name: Nous-Hermes-2-MoE-2x34B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 66.64
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibndias/Nous-Hermes-2-MoE-2x34B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 85.73
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibndias/Nous-Hermes-2-MoE-2x34B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 76.49
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibndias/Nous-Hermes-2-MoE-2x34B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 58.08
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibndias/Nous-Hermes-2-MoE-2x34B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 83.35
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibndias/Nous-Hermes-2-MoE-2x34B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 69.52
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibndias/Nous-Hermes-2-MoE-2x34B
      name: Open LLM Leaderboard
```

This is an experimental Mixture of Experts (MoE) model built from Nous Hermes 2 Yi 34B.

The base model is Yi-34B.

All credit belongs to NousResearch for the fine-tuned Yi model, 01-AI for the base Yi model, and Charles O. Goddard for mergekit.
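
For reference, below is a minimal sketch of loading the original (unquantized) merge with the transformers library. The repo id comes from the leaderboard links in this card; the dtype, device, and prompt format are assumptions rather than details from the model card, and the full 2x34B model requires substantial GPU memory.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id taken from the leaderboard links in this card; all loading and
# generation settings here are illustrative assumptions.
model_id = "ibndias/Nous-Hermes-2-MoE-2x34B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; halves memory vs. fp32
    device_map="auto",           # shard layers across available GPUs
)

# Nous Hermes 2 models use ChatML-style prompts (assumption carried over
# from the upstream Nous Hermes 2 Yi 34B card).
prompt = (
    "<|im_start|>user\nWhat is a Mixture of Experts model?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```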

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibndias/Nous-Hermes-2-MoE-2x34B

| Metric | Value |
|---|---|
| Avg. | 73.30 |
| AI2 Reasoning Challenge (25-Shot) | 66.64 |
| HellaSwag (10-Shot) | 85.73 |
| MMLU (5-Shot) | 76.49 |
| TruthfulQA (0-shot) | 58.08 |
| Winogrande (5-shot) | 83.35 |
| GSM8k (5-shot) | 69.52 |
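
The leaderboard average is just the arithmetic mean of the six benchmark scores above, which is easy to verify:

```python
# Verify the Open LLM Leaderboard average from the six scores in the table.
scores = {
    "ARC (25-shot)": 66.64,
    "HellaSwag (10-shot)": 85.73,
    "MMLU (5-shot)": 76.49,
    "TruthfulQA (0-shot)": 58.08,
    "Winogrande (5-shot)": 83.35,
    "GSM8k (5-shot)": 69.52,
}
avg = sum(scores.values()) / len(scores)
print(f"{avg:.2f}")  # -> 73.30
```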
GGUF

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit.
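
To run one of these GGUF files locally, here is a minimal sketch using llama-cpp-python. The repo id and filename are illustrative guesses following the quantizer's usual naming scheme; check the repository's file listing for the exact name of the quantization level you want.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Hypothetical repo id and filename -- substitute the actual GGUF file
# for the quantization level you downloaded.
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/ibndias_-_Nous-Hermes-2-MoE-2x34B-gguf",
    filename="Nous-Hermes-2-MoE-2x34B.Q4_K_M.gguf",
)

llm = Llama(
    model_path=gguf_path,
    n_ctx=4096,       # context window; adjust to your memory budget
    n_gpu_layers=-1,  # offload all layers to GPU if VRAM allows
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain GGUF quantization."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```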
