# Pinecone-Sage-24b


## 🌲 Pinecone Series

The Pinecone Series is a collection of thoughtfully crafted model merges, combining the strengths of my personal favourite models. Each version is curated to excel in roleplay, general knowledge, intelligence, and rich creative writing, while preserving the unique capabilities of its underlying models.

| Version | Params | Strengths |
|---|---|---|
| Pinecone-Rune | 12B | Fast, lightweight, surprisingly capable for its size |
| Pinecone-Sage | 24B | Balanced speed and performance, rich prose and RP |
| Pinecone-Titan | 70B | Rich prose, better long-context capability, top-tier roleplay & knowledge |

Recommended ST (SillyTavern) preset for RP:

## ☕ Support My Work

If you like my work, consider buying me a coffee to support future merges, GPU time, and experiments.

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the DARE TIES merge method, with darkc0de/XortronCriminalComputingConfig as the base.
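For intuition, here is a minimal toy sketch of how a DARE-TIES update combines per-model deltas on a single tensor. It is illustrative only, a simplified drop-and-rescale plus sign-election scheme, and does not reproduce mergekit's exact implementation; the `density` and `weight` arguments correspond to the per-model parameters in the configuration below.

```python
# Toy sketch of DARE-TIES on a single tensor (illustrative only; mergekit's
# real implementation differs in detail).
import torch

def dare(delta: torch.Tensor, density: float) -> torch.Tensor:
    # DARE: randomly drop (1 - density) of the delta entries and rescale
    # the survivors by 1/density so the expected delta is preserved.
    mask = torch.rand_like(delta) < density
    return delta * mask / density

def dare_ties(base: torch.Tensor,
              finetuned: list[torch.Tensor],
              densities: list[float],
              weights: list[float]) -> torch.Tensor:
    # Delta of each fine-tune from the shared base, sparsified by DARE
    # and scaled by its merge weight.
    deltas = [dare(ft - base, d) * w
              for ft, d, w in zip(finetuned, densities, weights)]
    stacked = torch.stack(deltas)
    # TIES-style sign election: keep only delta entries whose sign agrees
    # with the sign of the weighted sum, then combine the survivors.
    elected_sign = stacked.sum(dim=0).sign()
    agree = stacked.sign() == elected_sign
    merged_delta = (stacked * agree).sum(dim=0)
    return base + merged_delta
```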

### Models Merged

The following models were included in the merge:

* Entropicengine/DarkTriad-24b
* Entropicengine/Trifecta-Max-24b

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: darkc0de/XortronCriminalComputingConfig
chat_template: auto
merge_method: dare_ties
modules:
  default:
    slices:
    - sources:
      - layer_range: [0, 40]
        model: Entropicengine/DarkTriad-24b
        parameters:
          density: 0.5
          weight: 0.3
      - layer_range: [0, 40]
        model: darkc0de/XortronCriminalComputingConfig
        parameters:
          density: 0.8
          weight: 0.8
      - layer_range: [0, 40]
        model: Entropicengine/Trifecta-Max-24b
        parameters:
          density: 0.5
          weight: 0.1
out_dtype: bfloat16
tokenizer: {}
```
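A minimal usage sketch with Hugging Face Transformers, assuming the merged weights are published under the repo id below and that your hardware can host a 24B model in bf16:

```python
# Minimal usage sketch: load the merged model and generate a short reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Entropicengine/Pinecone-sage-24b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's out_dtype
    device_map="auto",
)

prompt = "Write a short scene set in a pine forest at dusk."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```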
