|
|
--- |
|
|
license: apache-2.0 |
|
|
base_model: |
|
|
- Writer/palmyra-mini-thinking-a |
|
|
tags: |
|
|
- mlx |
|
|
- qwen2 |
|
|
- palmyra |
|
|
- thinking |
|
|
- reasoning |
|
|
--- |
|
|
|
|
|
# Palmyra Mini Thinking A - MLX BF16 |
|
|
|
|
|
## Model Description |
|
|
|
|
|
This is a bfloat16 precision version of the [palmyra-mini-thinking-a model](https://huggingface.co/Writer/palmyra-mini-thinking-a), optimized for Apple Silicon using the MLX framework. This model is based on the Qwen2 architecture and is specifically designed for reasoning tasks with explicit thinking capabilities through special `<think>` and `</think>` tokens. |
|
|
|
|
|
## Quick Start |
|
|
|
|
|
### Installation |
|
|
|
|
|
```bash |
|
|
pip install mlx-lm |
|
|
``` |
|
|
|
|
|
### Usage |
|
|
|
|
|
```python |
|
|
from mlx_lm import load, generate |
|
|
|
|
|
# Load the model |
|
|
model, tokenizer = load("path/to/palmyra-mini-thinking-a/MLX")  # point this at your local MLX directory
|
|
|
|
|
# Generate text with thinking (the chat template automatically appends <think>)
messages = [{"role": "user", "content": "Solve this step by step: What is 15% of 240?"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, verbose=True, max_tokens=512)
|
|
print(response) |
|
|
``` |
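
Since the reasoning trace is emitted between `<think>` and `</think>`, you often want to strip it and keep only the final answer. A minimal sketch, assuming `response` is the string returned by `generate()` above and that the model closed its reasoning with `</think>`:

```python
def split_thinking(response: str) -> tuple[str, str]:
    """Return (reasoning, answer); reasoning is empty if no tag is found."""
    marker = "</think>"
    if marker in response:
        reasoning, _, answer = response.partition(marker)
        return reasoning.replace("<think>", "").strip(), answer.strip()
    return "", response.strip()

reasoning, answer = split_thinking(response)
print("Answer:", answer)
```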
|
|
|
|
|
## Technical Specifications |
|
|
|
|
|
### Model Architecture |
|
|
- **Model Type**: `qwen2` (Qwen2 Architecture) |
|
|
- **Architecture**: `Qwen2ForCausalLM` |
|
|
- **Parameters**: ~1.7 billion parameters |
|
|
- **Precision**: bfloat16 |
|
|
- **Specialization**: Reasoning and thinking tasks |
|
|
|
|
|
### Core Parameters |
|
|
| Parameter | Value | |
|
|
|-----------|-------| |
|
|
| Hidden Size | 1,536 | |
|
|
| Intermediate Size | 8,960 | |
|
|
| Number of Layers | 28 | |
|
|
| Attention Heads | 12 | |
|
|
| Key-Value Heads | 2 | |
|
|
| Head Dimension | 128 | |
|
|
| Vocabulary Size | 151,665 | |
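
These numbers are internally consistent: 12 heads × 128 head dim = 1,536 hidden size, and the 12:2 ratio of query to key-value heads means the model uses grouped-query attention. A quick sanity check of the values in the table (illustrative only, not part of the model API):

```python
hidden_size = 1536
num_heads = 12
num_kv_heads = 2
head_dim = 128

assert num_heads * head_dim == hidden_size   # 12 * 128 = 1,536
assert num_heads % num_kv_heads == 0         # grouped-query attention
print(f"{num_heads // num_kv_heads} query heads per KV head")  # 6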
|
|
|
|
|
### Attention Mechanism |
|
|
- **Attention Type**: Full attention across all 28 layers |
|
|
- **Max Position Embeddings**: 131,072 tokens |
|
|
- **Attention Dropout**: 0.0 |
|
|
- **Sliding Window**: Not used |
|
|
- **Max Window Layers**: 21 |
|
|
|
|
|
### RoPE (Rotary Position Embedding) Configuration |
|
|
- **RoPE Theta**: 10,000 |
|
|
- **RoPE Scaling**: None |
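
With θ = 10,000 and no scaling, the rotary frequencies follow the standard RoPE formula. A sketch of the usual parameterization (illustrative; the actual values live inside the attention implementation):

```python
theta = 10_000.0
head_dim = 128

# Standard RoPE inverse frequencies: theta^(-2i/d) for each rotated pair.
inv_freq = [theta ** (-2 * i / head_dim) for i in range(head_dim // 2)]
print(inv_freq[0], inv_freq[-1])  # 1.0 down to ~1e-4
```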
|
|
|
|
|
### Thinking Capabilities |
|
|
- **Thinking Tokens**: `<think>` (151648) and `</think>` (151649) |
|
|
- **Reasoning Mode**: Explicit step-by-step reasoning |
|
|
- **Chat Template**: Automatically adds `<think>` tag for generation prompts |
|
|
|
|
|
### File Structure |
|
|
``` |
|
|
palmyra-mini-thinking-a/MLX/ |
|
|
├── config.json # Model configuration |
|
|
├── model.safetensors # Model weights (3.3GB) |
|
|
├── model.safetensors.index.json # Model sharding index |
|
|
├── tokenizer.json # Tokenizer configuration |
|
|
├── tokenizer_config.json # Tokenizer settings |
|
|
├── special_tokens_map.json # Special tokens mapping |
|
|
├── chat_template.jinja # Chat template with thinking |
|
|
└── README.md # Model documentation |
|
|
``` |
|
|
|
|
|
## Performance Characteristics |
|
|
|
|
|
### Hardware Requirements |
|
|
- **Platform**: Apple Silicon (M1, M2, M3, M4 series) |
|
|
- **Memory**: ~3.3GB for model weights (a quick estimate is sketched after this list)
|
|
- **Recommended RAM**: 12GB+ for optimal performance |
|
|
- **Precision**: Full bfloat16 precision |
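
The weight footprint follows directly from the parameter count at two bytes per bfloat16 value:

```python
params = 1.7e9        # ~1.7 billion parameters
bytes_per_param = 2   # bfloat16 = 16 bits
print(f"~{params * bytes_per_param / 2**30:.1f} GiB")  # ~3.2 GiB, matching the ~3.3GB file
```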
|
|
|
|
|
### Layer Configuration |
|
|
All 28 layers use full attention mechanism as specified in the `layer_types` configuration, providing consistent attention patterns across the entire model depth. |
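
You can confirm this from the shipped configuration. A minimal sketch, assuming a local copy of the `config.json` listed under File Structure:

```python
import json

# Path is illustrative; point it at your local copy of the model.
with open("palmyra-mini-thinking-a/MLX/config.json") as f:
    config = json.load(f)

layer_types = config.get("layer_types", [])
print(len(layer_types), set(layer_types))  # expect 28 entries, all full attention
```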
|
|
|
|
|
## Training Details |
|
|
|
|
|
### Tokenizer |
|
|
- **Type**: LlamaTokenizerFast with 151,665 vocabulary size |
|
|
- **Special Tokens** (verifiable with the sketch after this list):
  - BOS Token ID: 151646 (`<|begin▁of▁sentence|>`)
  - EOS Token ID: 151643 (`<|end▁of▁sentence|>`)
  - Pad Token ID: 151643 (`<|end▁of▁sentence|>`)
  - Think Start: 151648 (`<think>`)
  - Think End: 151649 (`</think>`)
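
A minimal check of the thinking-token IDs, assuming the `tokenizer` loaded in the Quick Start section (works with both the MLX and Hugging Face tokenizers):

```python
for token in ["<think>", "</think>"]:
    print(token, tokenizer.convert_tokens_to_ids(token))
# Expected: <think> 151648, </think> 151649
```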
|
|
|
|
|
### Model Configuration |
|
|
- **Hidden Activation**: SiLU (Swish) |
|
|
- **Normalization**: RMSNorm (ε = 1e-06) |
|
|
- **Initializer Range**: 0.02 |
|
|
- **Attention Dropout**: 0.0 |
|
|
- **Word Embeddings**: Not tied |
|
|
- **Use Cache**: False (as shipped in `config.json`; see Known Limitations)
|
|
|
|
|
### Chat Template |
|
|
The model uses a specialized chat template that automatically initiates thinking mode: |
|
|
- User messages are wrapped with the `<|User|>` token
- Assistant turns are prefixed with `<|Assistant|><think>\n`, so generation automatically starts inside a thinking block
|
|
- Tool calling support with `<tool_call>` and `</tool_call>` tokens |
|
|
- Vision and multimodal tokens included |
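
To inspect the rendered template directly, a short sketch assuming the `tokenizer` from the Quick Start section:

```python
messages = [{"role": "user", "content": "What is 2 + 2?"}]
rendered = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(rendered)  # should end with <|Assistant|><think>\n, priming thinking mode
```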
|
|
|
|
|
## Usage Examples |
|
|
|
|
|
### Reasoning Task |
|
|
```python |
|
|
prompt = """ |
|
|
A train travels 120 miles in 2 hours. If it maintains the same speed, how far will it travel in 5 hours? |
|
|
<|Assistant|><think> |
|
|
""" |
|
|
|
|
|
response = generate(model, tokenizer, prompt=prompt, max_tokens=300) |
|
|
``` |
|
|
|
|
|
### Problem Solving |
|
|
```python |
|
|
prompt = """ |
|
|
Explain why the sky appears blue during the day. |
|
|
<|Assistant|><think> |
|
|
""" |
|
|
|
|
|
response = generate(model, tokenizer, prompt=prompt, max_tokens=400) |
|
|
``` |
|
|
|
|
|
## Known Limitations |
|
|
|
|
|
1. **Platform Dependency**: Optimized specifically for Apple Silicon; may not run on other platforms |
|
|
2. **Memory Requirements**: Requires significant memory due to full precision weights |
|
|
3. **Thinking Overhead**: Explicit thinking may increase response length and generation time |
|
|
4. **Cache Disabled**: The shipped config sets `use_cache: false`, which can slow autoregressive decoding (a workaround is sketched below)
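
If you load the checkpoint with Transformers, you can override the shipped cache setting at load time. A minimal sketch, relying on `from_pretrained` forwarding unused keyword arguments to the config:

```python
from transformers import AutoModelForCausalLM

# Override the shipped `use_cache: false` to re-enable the KV cache.
model = AutoModelForCausalLM.from_pretrained(
    "Writer/palmyra-mini-thinking-a",
    use_cache=True,
)
```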
|
|
|
|
|
## Compatibility |
|
|
|
|
|
- **MLX-LM**: Requires recent version with Qwen2 support |
|
|
- **Apple Silicon**: M1, M2, M3, M4 series processors |
|
|
- **macOS**: Compatible with recent macOS versions supporting MLX |
|
|
- **Transformers**: Version 4.52.4+ |
|
|
|
|
|
## License |
|
|
|
|
|
Apache 2.0 |
|
|
|
|
|
#### Original model card below: |
|
|
|
|
|
------ |
|
|
|
|
|
|
|
|
<div align="center"> |
|
|
<h1>Palmyra-mini-thinking-a</h1> |
|
|
|
|
|
</div> |
|
|
|
|
|
### Model Description |
|
|
|
|
|
- **Language(s) (NLP):** English |
|
|
- **License:** Apache-2.0 |
|
|
- **Finetuned from model:** Qwen/Qwen2.5-1.5B |
|
|
- **Context window:** 131,072 tokens |
|
|
- **Parameters:** 1.7 billion |
|
|
|
|
|
|
|
|
## Model Details |
|
|
|
|
|
The palmyra-mini-thinking-a model demonstrates exceptional performance in advanced mathematical reasoning and competitive programming. Its capabilities are highlighted by an outstanding score of 0.886 on the MATH500 benchmark, showcasing a robust ability to solve complex mathematical problems, and its strength in quantitative challenges is confirmed by a score of 0.8287 on gsm8k (strict-match), demonstrating proficiency in multi-step arithmetic reasoning. The model further proves its aptitude for high-level problem-solving with a score of 0.8 on AMC23 and 0.5481 on OlympiadBench (extractive_match), a suite of olympiad-level math and science problems. In the coding domain, it achieves 0.5631 on Codeforces (pass_rate), indicating competence in generating correct solutions for competitive programming challenges.
|
|
|
|
|
## Benchmark Performance |
|
|
|
|
|
This section provides a detailed breakdown of the palmyra-mini-thinking-a model's performance across a standardized set of industry benchmarks. The data is presented in its original order from the source evaluation. |
|
|
|
|
|
| Benchmark | Score | |
|
|
|:-----------------------------------------------------------------|---------:| |
|
|
| gsm8k (strict-match) | 0.8287 | |
|
|
| minerva_math(exact_match) | 0.3842 | |
|
|
| mmlu_pro(exact_match) | 0.2748 | |
|
|
| hendrycks_math | 0.0054 | |
|
|
| ifeval (inst_level_loose_acc) | 0.3657 | |
|
|
| mathqa (acc) | 0.4171 | |
|
|
| humaneval (pass@1) | 0.2378 | |
|
|
| BBH (get-answer)(exact_match) | 0.462 | |
|
|
| mbpp | 0.304 | |
|
|
| leaderboard_musr (acc_norm)                                       |   0.3413 |
|
|
| gpqa lighteval gpqa diamond_pass@1:8_samples | 0.3826 | |
|
|
| AIME24(pass@1)(avg-of-1) | 0.4333 | |
|
|
| AIME25(pass@1)(avg-of-1) | 0.3667 | |
|
|
| Livecodebench-codegen (livecodebench/code_generation_lite v4_v5) | 0.1784 | |
|
|
| AMC23 | 0.8 | |
|
|
| MATH500 | 0.886 | |
|
|
| Minerva | 0.3493 | |
|
|
| Olympiadbench (extractive_match) | 0.5481 | |
|
|
| Codecontests (pass_rate) | 0.1778 | |
|
|
| Codeforces (pass_rate) | 0.5631 | |
|
|
| Taco (pass_rate) | 0.3083 | |
|
|
| APPS (all_levels) | 0.0447 | |
|
|
| HMMT23 (extractive_match) | 0.1 | |
|
|
| Average | 0.380839 | |
|
|
|
|
|
|
|
|
|
|
|
### Use with transformers |
|
|
|
|
|
You can run conversational inference using the Transformers Auto classes with the `generate()` function. Here's an example: |
|
|
|
|
|
```py |
|
|
import torch |
|
|
from transformers import AutoTokenizer, AutoModelForCausalLM |
|
|
|
|
|
model_id = "Writer/palmyra-mini-thinking-a" |
|
|
|
|
|
tokenizer = AutoTokenizer.from_pretrained(model_id) |
|
|
|
|
|
model = AutoModelForCausalLM.from_pretrained( |
|
|
model_id, |
|
|
torch_dtype=torch.float16, |
|
|
device_map="auto", |
|
|
attn_implementation="flash_attention_2", |
|
|
) |
|
|
|
|
|
messages = [ |
|
|
{ |
|
|
"role": "user", |
|
|
"content": "You have a 3-liter jug and a 5-liter jug. How can you measure exactly 4 liters of water?" |
|
|
} |
|
|
]
|
|
|
|
|
input_ids = tokenizer.apply_chat_template( |
|
|
messages, tokenize=True, add_generation_prompt=True, return_tensors="pt" |
|
|
) |
|
|
|
|
|
gen_conf = { |
|
|
"max_new_tokens": 256, |
|
|
"eos_token_id": tokenizer.eos_token_id, |
|
|
"temperature": 0.3, |
|
|
"top_p": 0.9, |
|
|
} |
|
|
|
|
|
with torch.inference_mode(): |
|
|
output_id = model.generate(input_ids, **gen_conf) |
|
|
|
|
|
output_text = tokenizer.decode(output_id[0][input_ids.shape[1]:])
|
|
|
|
|
print(output_text) |
|
|
``` |
|
|
|
|
|
## Running with vLLM |
|
|
```bash
|
|
vllm serve Writer/palmyra-mini-thinking-a |
|
|
``` |
|
|
```bash
|
|
curl -X POST http://localhost:8000/v1/chat/completions \ |
|
|
-H "Content-Type: application/json" \ |
|
|
-d '{ |
|
|
"model": "Writer/palmyra-mini-thinking-a", |
|
|
"messages": [ |
|
|
{ |
|
|
"role": "user", |
|
|
"content": "You have a 3-liter jug and a 5-liter jug. How can you measure exactly 4 liters of water?" |
|
|
} |
|
|
], |
|
|
"max_tokens": 8000, |
|
|
"temperature": 0.2 |
|
|
}' |
|
|
``` |
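
You can also call the server from Python with any OpenAI-compatible client. A sketch assuming `pip install openai` and the default local port used above:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="Writer/palmyra-mini-thinking-a",
    messages=[{"role": "user", "content": "What is 15% of 240?"}],
    max_tokens=1024,
    temperature=0.2,
)
print(resp.choices[0].message.content)
```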
|
|
|
|
|
|
|
|
## Ethical Considerations |
|
|
|
|
|
As with any language model, there is a potential for generating biased or inaccurate information. Users should be aware of these limitations and use the model responsibly. |
|
|
|
|
|
### Citation and Related Information |
|
|
|
|
|
To cite this model: |
|
|
``` |
|
|
@misc{Palmyra-mini-thinking-a, |
|
|
author = {Writer Engineering team}, |
|
|
title = {{Palmyra-mini: A powerful LLM designed for math and coding}}, |
|
|
howpublished = {\url{https://dev.writer.com}}, |
|
|
year = 2025, |
|
|
month = sep
|
|
} |
|
|
``` |
|
|
Contact [email protected] |