
Quantized GGUF model Hermes-2-Pro-Mistral-7B-Mistral-7B-Instruct-v0.3-linear-merge

This model has been quantized using the llama-quantize tool from llama.cpp.
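For reference, quantization with llama.cpp amounts to running the llama-quantize binary on an f16 GGUF export of the merged model. The sketch below is a minimal example in Python; the file names are placeholders and the binary is assumed to be built and on your PATH, so adjust the paths and the target quantization type (e.g. Q4_K_M) to match your setup.

import subprocess

# Minimal sketch: call llama.cpp's llama-quantize on an f16 GGUF export.
# File names are placeholders; pick the quant type you actually want to produce.
subprocess.run(
    [
        "llama-quantize",                                 # llama.cpp quantization binary
        "hermes2pro-mistral-instruct-merge-f16.gguf",     # unquantized f16 GGUF (placeholder)
        "hermes2pro-mistral-instruct-merge-Q4_K_M.gguf",  # quantized output (placeholder)
        "Q4_K_M",                                         # target type, e.g. Q4_K_M, Q5_K_M, Q8_0
    ],
    check=True,
)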

Hermes-2-Pro-Mistral-7B-Mistral-7B-Instruct-v0.3-linear-merge

Hermes-2-Pro-Mistral-7B-Mistral-7B-Instruct-v0.3-linear-merge is a merge of the following models using mergekit:

  • NousResearch/Hermes-2-Pro-Mistral-7B
  • mistralai/Mistral-7B-Instruct-v0.3

🧩 Merge Configuration

merge_method: linear
base_model: mistralai/Mistral-7B-Instruct-v0.3
models:
  - model: NousResearch/Hermes-2-Pro-Mistral-7B
    parameters:
      weight: 0.3
  - model: mistralai/Mistral-7B-Instruct-v0.3
    parameters:
      weight: 0.7
parameters:
  normalize: true
dtype: float16
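Conceptually, a linear merge is a weighted average of the parent models' parameters, and normalize: true rescales the weights to sum to 1 before averaging. The snippet below is a simplified illustration of that idea on in-memory state dicts using PyTorch; it is not mergekit's implementation. To reproduce the actual merge, save the configuration above to a YAML file and run mergekit's mergekit-yaml command on it.

import torch

def linear_merge(state_dicts, weights, normalize=True):
    # Weighted average of matching tensors across model state dicts.
    # Simplified illustration only: mergekit additionally handles sharded
    # checkpoints, tokenizer alignment, dtype handling, etc.
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = {}
    for name in state_dicts[0]:
        acc = sum(w * sd[name].to(torch.float32) for w, sd in zip(weights, state_dicts))
        merged[name] = acc.to(torch.float16)  # dtype: float16, as in the config above
    return merged

# With the weights from this configuration (hypothetical pre-loaded state dicts):
# merged_sd = linear_merge([hermes_sd, mistral_instruct_sd], weights=[0.3, 0.7])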

Model Description

The Hermes-2-Pro-Mistral-7B-Mistral-7B-Instruct-v0.3-linear-merge combines the conversational capabilities of Hermes 2 Pro with the instruction-following behavior of Mistral-7B-Instruct-v0.3. The merge aims to produce a model that understands prompts and generates contextually relevant responses while retaining solid performance on common natural language processing tasks.

Hermes 2 Pro is an upgraded version of the original Nous Hermes 2, featuring a refined dataset and improved function calling capabilities. It excels in generating structured outputs, making it particularly useful for applications requiring precise data formatting, such as JSON responses. The Mistral-7B-Instruct model, on the other hand, is designed to follow instructions effectively, making it a strong candidate for tasks that require adherence to user prompts.

Use Cases

This merged model is well-suited for a variety of applications, including but not limited to:

  • Conversational agents and chatbots
  • Function calling and structured data generation
  • Instruction-based tasks and question answering
  • Creative writing and storytelling

Model Features

  • Enhanced Conversational Abilities: The model leverages the conversational strengths of Hermes 2 Pro, allowing for engaging and context-aware dialogues.
  • Instruction Following: With the integration of Mistral-7B-Instruct, the model can effectively follow user instructions, making it ideal for task-oriented applications.
  • Function Calling and JSON Outputs: The model supports advanced function calling and can generate structured JSON outputs, facilitating integration with various applications and APIs.
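As a usage reference, the quantized GGUF files can be loaded with any llama.cpp-compatible runtime. The sketch below uses the llama-cpp-python bindings with a placeholder filename for one of the quantized files in this repository; the context size and prompts are assumptions to adjust for your hardware and use case.

from llama_cpp import Llama

# Load one of the quantized GGUF files (placeholder filename; use the quant level you downloaded).
llm = Llama(
    model_path="hermes2pro-mistral-instruct-merge-Q4_K_M.gguf",
    n_ctx=4096,  # context window; lower this if memory is tight
)

# Simple instruction-following chat completion.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain in two sentences what a linear model merge does."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])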

Evaluation Results

The performance of the parent models provides a solid foundation for the merged model. Here are some evaluation metrics from the original models:

Hermes 2 Pro

  • Function Calling Accuracy: 91%
  • JSON Mode Accuracy: 84%

Mistral-7B-Instruct

While specific evaluation metrics for Mistral-7B-Instruct were not available, it is known for its strong instruction-following capabilities, which contribute to the overall performance of the merged model.

Limitations

Despite the strengths of the merged model, it may inherit some limitations from its parent models. Potential issues include:

  • Biases: The model may reflect biases present in the training data of both parent models, which could affect the fairness and neutrality of its outputs.
  • Contextual Understanding: While the model excels in many areas, there may still be challenges in understanding highly nuanced or ambiguous prompts.

In summary, the Hermes-2-Pro-Mistral-7B-Mistral-7B-Instruct-v0.3-linear-merge represents a powerful tool for a wide range of NLP tasks, combining the best features of its parent models while also carrying forward some of their limitations.

GGUF

  • Model size: 7.24B params
  • Architecture: llama
  • Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit
