Palmyra Mini Thinking B - MLX BF16

Model Description

This is a bfloat16 precision version of the palmyra-mini-thinking-b model, optimized for Apple Silicon using the MLX framework. This model is based on the Qwen2 architecture and represents an advanced iteration of the thinking model series, featuring improved reasoning capabilities and a refined chat template using the ChatML format.

Quick Start

Installation

pip install mlx-lm

Usage

from mlx_lm import load, generate

# Load the model
model, tokenizer = load("Writer/palmyra-mini-thinking-b-MLX-BF16")

# Generate text
prompt = "<|im_start|>user\nSolve this step by step: What is 20% of 350?<|im_end|>\n<|im_start|>assistant\n"
response = generate(model, tokenizer, prompt=prompt, verbose=True, max_tokens=512)
print(response)
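
The tokenizer returned by mlx_lm wraps the underlying Hugging Face tokenizer, so you can also render the ChatML prompt from a messages list instead of writing the markers by hand. A minimal sketch, assuming the bundled chat_template.jinja is picked up automatically:

from mlx_lm import load, generate

model, tokenizer = load("Writer/palmyra-mini-thinking-b-MLX-BF16")

# Render the ChatML prompt from structured messages
messages = [{"role": "user", "content": "Solve this step by step: What is 20% of 350?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, max_tokens=512)
print(response)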

Technical Specifications

Model Architecture

  • Model Type: qwen2 (Qwen2 Architecture)
  • Architecture: Qwen2ForCausalLM
  • Parameters: ~1.7 billion
  • Precision: bfloat16
  • Specialization: Advanced reasoning and thinking tasks

Core Parameters

Parameter           Value
Hidden Size         1,536
Intermediate Size   8,960
Number of Layers    28
Attention Heads     12
Key-Value Heads     2
Head Dimension      128
Vocabulary Size     151,936
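
These figures imply grouped-query attention: the 12 query heads share 2 key-value heads, i.e. 6 queries per KV head. A quick sanity check of the implied projection widths (illustrative arithmetic, not code from this repository):

# Grouped-query attention shapes implied by the table above
hidden_size, n_heads, n_kv_heads, head_dim = 1536, 12, 2, 128

q_width = n_heads * head_dim      # 1536: query projection output width
kv_width = n_kv_heads * head_dim  # 256: key/value projection output width (each)
print(n_heads // n_kv_heads)      # 6 query heads per KV head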

Attention Mechanism

  • Attention Type: Full attention across all 28 layers
  • Max Position Embeddings: 131,072 tokens
  • Attention Dropout: 0.0
  • Sliding Window: Not used
  • Max Window Layers: 28

RoPE (Rotary Position Embedding) Configuration

  • RoPE Theta: 1,000,000.0 (enhanced for longer contexts)
  • RoPE Scaling: None
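
For context, standard RoPE derives each rotary dimension's frequency from this base; raising the base from the common 10,000 to 1,000,000 slows the rotation of high-index dimensions, which is the usual way to stretch usable context length. A sketch of the standard frequency schedule (reference formula, not this repository's code):

# theta_i = base ** (-2i / head_dim) for each rotary dimension pair
base, head_dim = 1_000_000.0, 128
freqs = [base ** (-2 * i / head_dim) for i in range(head_dim // 2)]
print(freqs[0], freqs[-1])  # 1.0 down to ~1.2e-6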

Advanced Features

  • Extended Context: Enhanced RoPE theta for better long-context performance
  • Improved Tokenizer: Qwen2Tokenizer with 151,936 vocabulary size
  • ChatML Format: Uses standard ChatML conversation format
  • Word Embeddings: Tied embeddings for efficiency

File Structure

palmyra-mini-thinking-b/MLX/palmyra-mini-thinking-b-MLX-bf16/
├── config.json                    # Model configuration
├── model.safetensors              # Model weights (2.9GB)
├── model.safetensors.index.json   # Model sharding index
├── tokenizer.json                 # Tokenizer configuration
├── tokenizer_config.json          # Tokenizer settings
├── special_tokens_map.json        # Special tokens mapping
├── chat_template.jinja            # ChatML template
├── generation_config.json         # Generation parameters
├── added_tokens.json              # Additional tokens
├── merges.txt                     # BPE merges
└── vocab.json                     # Vocabulary mapping

Performance Characteristics

Hardware Requirements

  • Platform: Apple Silicon (M1, M2, M3, M4 series)
  • Memory: ~2.9GB for model weights
  • Recommended RAM: 10GB+ for optimal performance
  • Precision: Full bfloat16 precision
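
The ~2.9GB figure follows from the precision: bfloat16 stores each parameter in two bytes. A back-of-the-envelope check, using the ~1.54B parameter count reported in the safetensors metadata:

params = 1.54e9              # parameter count from the safetensors metadata
print(params * 2 / 1024**3)  # ~2.87 GiB, matching the ~2.9GB weights file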

Layer Configuration

All 28 layers use the full attention mechanism specified in the layer_types configuration, providing consistent attention patterns optimized for reasoning tasks.

Training Details

Tokenizer

  • Type: Qwen2Tokenizer with 151,936 vocabulary size
  • Special Tokens:
    • EOS Token ID: 151643 (<|endoftext|>)
    • Pad Token ID: 151643 (<|endoftext|>)
    • IM Start: 151644 (<|im_start|>)
    • IM End: 151645 (<|im_end|>)
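
These IDs can be confirmed against the shipped tokenizer; a quick check, assuming the upstream Writer/palmyra-mini-thinking-b repository ships the same tokenizer files:

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Writer/palmyra-mini-thinking-b")
for token in ["<|endoftext|>", "<|im_start|>", "<|im_end|>"]:
    print(token, tok.convert_tokens_to_ids(token))
# Expected: 151643, 151644, 151645, per the list above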

Model Configuration

  • Hidden Activation: SiLU (Swish)
  • Normalization: RMSNorm (ε = 1e-06)
  • Initializer Range: 0.02
  • Attention Dropout: 0.0
  • Word Embeddings: Tied (for efficiency)
  • Use Cache: True (optimized for inference)
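
For reference, RMSNorm scales each hidden vector by its inverse root-mean-square, with no mean subtraction and no bias. A minimal PyTorch sketch matching the ε above (a reference implementation, not this repository's code):

import torch

def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Normalize by the RMS over the last dimension, then apply the learned gain
    return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + eps) * weight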

Chat Template

The model uses the standard ChatML format for conversations:

<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
{assistant_response}<|im_end|>

Generation Configuration

  • BOS Token ID: 151643
  • EOS Token ID: 151643
  • Transformers Version: 4.53.0

Usage Examples

Reasoning Task

prompt = """<|im_start|>system
<|im_end|>
<|im_start|>user
A train travels 180 miles in 3 hours. If it maintains the same speed, how far will it travel in 7 hours?<|im_end|>
<|im_start|>assistant
"""

response = generate(model, tokenizer, prompt=prompt, max_tokens=300)

Problem Solving

prompt = """<|im_start|>system
You are a helpful assistant that thinks step by step.<|im_end|>
<|im_start|>user
Explain the process of photosynthesis in plants.<|im_end|>
<|im_start|>assistant
"""

response = generate(model, tokenizer, prompt=prompt, max_tokens=400)

Multi-turn Conversation

prompt = """<|im_start|>system
<|im_end|>
<|im_start|>user
What is the capital of France?<|im_end|>
<|im_start|>assistant
The capital of France is Paris.<|im_end|>
<|im_start|>user
What is the population of that city?<|im_end|>
<|im_start|>assistant
"""

response = generate(model, tokenizer, prompt=prompt, max_tokens=200)

Known Limitations

  1. Platform Dependency: Optimized specifically for Apple Silicon; may not run on other platforms
  2. Memory Requirements: Requires significant memory due to full precision weights
  3. Context Management: While supporting long contexts, performance may vary with very long sequences
  4. Format Dependency: Optimized for ChatML format; other formats may not work as well

Compatibility

  • MLX-LM: Requires a recent version with Qwen2 support
  • Apple Silicon: M1, M2, M3, M4 series processors
  • macOS: Compatible with recent macOS versions supporting MLX
  • Transformers: Version 4.53.0+

License

Apache 2.0

Original model card below:


Palmyra-mini-thinking-b

Model Description

  • Language(s) (NLP): English
  • License: Apache-2.0
  • Finetuned from model: Qwen/Qwen2.5-1.5B
  • Context window: 131,072 tokens
  • Parameters: 1.7 billion

Introduction

Palmyra-mini-thinking-b represents a significant step forward in generative AI, demonstrating exceptional capabilities in complex reasoning and problem-solving domains. This model excels in mathematical and programming challenges, showcasing a robust understanding of abstract concepts and logical structures. Its performance is not just a measure of its power but a testament to its specialized training, which has honed its ability to tackle tasks that demand deep, multi-step thinking.

Mathematical Prowess

The model's mathematical abilities are particularly noteworthy. It achieves an impressive score of 0.925 on the AMC23 benchmark, indicating a strong grasp of advanced high school mathematics. This is complemented by its performance on MATH500, where it scores 0.882, demonstrating proficiency across a wide range of mathematical problems. The model also shows strength in competitive mathematics, scoring 0.6 on AIME24 (pass@1, avg-of-1) and 0.5733 on OlympiadBench (extractive_match). These scores highlight the model's capacity for sophisticated mathematical reasoning, making it a powerful tool for both educational and research applications.

Excellence in Competitive Programming

Beyond mathematics, Palmyra-mini-thinking-b demonstrates strong performance in the competitive programming arena. Its score of 0.6343 on the Codeforces (pass_rate) benchmark underscores its ability to understand complex algorithmic problems and generate correct, efficient code. This capability suggests the model is well-suited for tasks involving code generation, debugging, and algorithmic design, making it a valuable asset for software developers and computer science researchers.

Benchmark Scores (sampling params: temperature 0.6, top_p 0.95)

Pass@1 (avg-of-64)

Benchmark   Pass@1 (avg-of-64)   Majority@64
AIME24      59.43%               71.67%
AIME25      49.69%               60.00%
GPQA        42.01%               47.22%
HMMT25      27.86%               30.00%
HLE          5.22%               N/A
MMLU-PRO    55.49%               60.60%
MATH500     93.80%               95.40%
LCB         34.51%               N/A

LCB here is LiveCodeBench version v6_2408_2505

Pass@1 (avg-of-1)

Benchmark                                                          Score (%)
GSM8K (strict-match)                                               42.68%
Minerva Math (exact match)                                          7.08%
MMLU-PRO (exact match)                                             29.26%
MATH (Hendrycks)                                                    0.16%
IFEval (inst_level_loose_acc)                                      32.97%
MathQA (acc)                                                       30.45%
HumanEval (pass@1)                                                  7.32%
BBH (get-answer)(exact match)                                      28.80%
MBPP                                                               16.80%
GPQA (diamond, pass@1: 8 samples)                                  39.58%
AIME24 (pass@1)(avg-of-1)                                          60.00%
AIME25 (pass@1)(avg-of-1)                                          50.00%
Livecodebench-codegen (livecodebench/code_generation_lite v4_v5)   28.73%
AMC23                                                              92.50%
MATH500                                                            88.20%
Minerva                                                            29.41%
Olympiadbench (extractive_match)                                   57.33%
Codecontests (pass_rate)                                           20.18%
Codeforces (pass_rate)                                             63.43%
Taco (pass_rate)                                                   34.56%
APPS (all_levels)                                                   5.84%
HMMT (Feb 2025) (extractive_match)                                 23.33%
Average                                                            35.94%

Use with transformers

You can run conversational inference using the Transformers Auto classes with the generate() function. Here's an example:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Writer/palmyra-mini-thinking-b"

tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    attn_implementation="flash_attention_2",
)

messages = [
    {
        "role": "user",
        "content": "You have a 3-liter jug and a 5-liter jug. How can you measure exactly 4 liters of water?"
    }
]

input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

gen_conf = {
    "max_new_tokens": 256,
    "eos_token_id": tokenizer.eos_token_id,
    "temperature": 0.3,
    "top_p": 0.9,
}

with torch.inference_mode():
    output_id = model.generate(input_ids, **gen_conf)

output_text = tokenizer.decode(output_id[0][input_ids.shape[1] :])

print(output_text)

Running with vLLM

# Start an OpenAI-compatible server
vllm serve Writer/palmyra-mini-thinking-b

# Then, from a separate shell, send a chat completion request
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Writer/palmyra-mini-thinking-b",
    "messages": [
      {
        "role": "user",
        "content": "You have a 3-liter jug and a 5-liter jug. How can you measure exactly 4 liters of water?"
      }
    ],
    "max_tokens": 8000,
    "temperature": 0.2
  }'
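
Because the server exposes an OpenAI-compatible API, the same request can also be sent with the official openai Python client. A sketch assuming the default local endpoint (the api_key value is a placeholder; vLLM ignores it unless authentication is configured):

from openai import OpenAI

# Point the client at the local vLLM server
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="Writer/palmyra-mini-thinking-b",
    messages=[
        {
            "role": "user",
            "content": "You have a 3-liter jug and a 5-liter jug. How can you measure exactly 4 liters of water?"
        }
    ],
    max_tokens=8000,
    temperature=0.2,
)
print(resp.choices[0].message.content)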

Ethical Considerations

As with any language model, there is a potential for generating biased or inaccurate information. Users should be aware of these limitations and use the model responsibly.

Footnotes

  • Base model: This model builds on NVIDIA's OpenReasoning-Nemotron-1.5B (https://huggingface.co/nvidia/OpenReasoning-Nemotron-1.5B).
  • Evaluation methodology:
    • Pass@1 (avg-of-1): computed using lm_eval and lighteval.
    • Pass@1 (avg-of-64) and Majority@64: computed using nemoskills.

Citation and Related Information

To cite this model:

@misc{Palmyra-mini-thinking-b,
  author = {Writer Engineering team},
  title = {{Palmyra-mini: A powerful LLM designed for math and coding}},
  howpublished = {\url{https://dev.writer.com}},
  year = 2025,
  month = Sep 
}

Contact [email protected]
