---
license: other
license_name: motif-license
license_link: LICENSE
language:
  - en
---

## Introduction

We announce Motif 2.6B, a 2.6-billion-parameter language model trained from scratch on AMD Instinct™ MI250X GPUs. Motif 2.6B marks our first step toward building helpful, reliable AI aligned with human values.

With this first release, we aim for Motif 2.6B to achieve performance comparable to well-known open-source models such as Phi, Llama, and Qwen, particularly those in the 7B–9B parameter range. A detailed technical report will follow; here, we present our initial evaluation results.

## Evaluation

When models are released, their accompanying technical reports or papers often present benchmark results based on evaluation settings chosen by the developers. This is a common and understandable practice, but it complicates comparisons across organizations: the same model can yield different scores under different evaluation conditions, and those conditions are not always fully disclosed. This lack of standardization makes it difficult for the open-source community to interpret and trust reported results. We therefore cite the official numbers reported by each model's developers in their respective publications.

To illustrate how much evaluation scores can vary across reports, we provide concrete examples of benchmark score differences for major models in the Evaluation Appendix.

### Comparison to Mistral

The benchmarks and metrics used are identical to those in the Mistral 7B technical report.

| Benchmark | Metric | Mistral 7B | Motif 2.6B | Improvement |
|---|---|---|---|---|
| MMLU | 5-shot | 60.1 | 57.93 | -3.61% |
| HellaSwag | 0-shot | 81.3 | 61.35 | -24.54% |
| WinoGrande | 0-shot | 75.3 | 59.91 | -20.44% |
| PIQA | 0-shot | 83.0 | 75.95 | -8.49% |
| ARC-e | 0-shot | 80.0 | 87.21 | +9.01% |
| ARC-c | 0-shot | 55.5 | 74.2 | +33.69% |
| NQ | 5-shot | 28.8 | 11.14 | -61.32% |
| TriviaQA | 5-shot | 69.9 | 54.97 | -21.36% |
| HumanEval | 0-shot | 30.5 | 68.3 | +123.93% |
| MBPP | 3-shot | 47.5 | 60.3 | +26.95% |
| MATH | 4-shot, maj@4 | 13.1 | 39.2\* | +199.24% |
| GSM8K | 8-shot, maj@8 | 52.2 | 77.71 | +48.87% |
| Average | | | | +33.01% |

\*: We report the 4-shot score instead of the 4-shot, maj@4 score.
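
For context, the maj@K metric above (maj@4 on MATH, maj@8 on GSM8K) scores a problem by sampling K completions and taking the majority final answer. A minimal sketch of that selection step, with hypothetical sample values:

```python
from collections import Counter

def majority_vote(answers: list[str]) -> str:
    """Return the most common final answer among K sampled completions."""
    return Counter(answers).most_common(1)[0][0]

# Hypothetical maj@4 example: four sampled final answers for one problem.
samples = ["12", "12", "15", "12"]
print(majority_vote(samples))  # "12" wins the vote, so the problem counts
                               # as correct iff the reference answer is 12.
```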

### Comparison to Llama

#### Llama 3

The benchmarks and metrics used are identical to those in the Llama 3 technical report.

| Benchmark | Metric | Llama 3 8B | Motif 2.6B | Improvement |
|---|---|---|---|---|
| MMLU | 5-shot | 69.4 | 57.93 | -16.53% |
| MMLU | 0-shot, CoT | 73.0 | 55.9 | -23.42% |
| MMLU-Pro | 5-shot, CoT | 48.3 | - | - |
| IFEval | - | 80.4 | 74.02 | -7.94% |
| HumanEval | 0-shot | 72.6 | 68.3 | -5.92% |
| MBPP | 0-shot | 72.8 | 57.93 | -20.43% |
| GSM8K | 8-shot, CoT | 84.5 | 77.71 | -8.04% |
| MATH | 0-shot, CoT | 51.9 | 49.68 | -4.28% |
| ARC Challenge | 0-shot | 83.4 | 74.2 | -11.03% |
| GPQA | 0-shot, CoT | 32.8 | 18.53 | -43.51% |
| Average | | | | -15.68% |

#### Llama 3.2

The benchmarks and metrics used are identical to those in the Llama 3.2 official blog.

| Benchmark | Metric | Llama 3.2 1B | Llama 3.2 3B | Motif 2.6B | Improvement (over 1B) | Improvement (over 3B) |
|---|---|---|---|---|---|---|
| MMLU | 0-shot | 49.3 | 63.4 | 57.6 | +16.75% | -9.21% |
| Open-rewrite eval\* | 0-shot, rougeL | 41.6 | 40.1 | - | - | - |
| TLDR9+\* | test, 1-shot, rougeL | 16.8 | 19.0 | - | - | - |
| IFEval | - | 59.5 | 77.4 | 74.02 | +24.40% | -4.37% |
| GSM8K | 8-shot, CoT | 44.4 | 77.7 | 74.9 | +68.69% | -3.60% |
| MATH | 0-shot, CoT | 30.6 | 48.0 | 49.68 | +62.35% | +3.50% |
| ARC Challenge | 0-shot | 59.4 | 78.5 | 74.2 | +24.92% | -5.48% |
| GPQA | 0-shot | 27.2 | 32.8 | 25.45 | -6.43% | -22.41% |
| HellaSwag | 0-shot | 41.2 | 69.8 | 61.35 | +48.91% | -12.11% |
| Average | | | | | +39.42% | -3.83% |

\*: We were unable to find an evaluation framework for these benchmarks.
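
For reference, the Improvement columns in the tables above are consistent with the relative change of Motif 2.6B over each baseline, i.e. (Motif - baseline) / baseline x 100. A minimal sketch reproducing a few entries, with scores copied verbatim from the tables:

```python
def improvement(baseline: float, motif: float) -> float:
    """Relative change of Motif 2.6B over a baseline score, in percent."""
    return (motif - baseline) / baseline * 100

# Scores copied verbatim from the tables above.
print(f"{improvement(60.1, 57.93):+.2f}%")  # MMLU vs. Mistral 7B   -> -3.61%
print(f"{improvement(30.5, 68.3):+.2f}%")   # HumanEval vs. Mistral -> +123.93%
print(f"{improvement(48.0, 49.68):+.2f}%")  # MATH vs. Llama 3.2 3B -> +3.50%
```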