Hi community,
A few days back, I posted about my ongoing research on building reasoning Mamba models and received great insights from the community.
Today, I am announcing an update to the model weights. With the newer checkpoints, the Falcon3 Mamba R1 model now outperforms much larger transformer-based LLMs (including Gemini) on the Formal Logic subset of MMLU, scoring 60% on what is considered one of the tougher subsets of the benchmark.
I would greatly appreciate your insights and suggestions on this new checkpoint.
Model Repo: hanzla/Falcon3-Mamba-R1-v0
Chat space: hanzla/Falcon3MambaReasoner
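
If you want to try the checkpoint locally, here is a minimal sketch of loading it with transformers. This assumes the repo follows the standard Hugging Face format and ships a chat template; the prompt, dtype, and generation settings are illustrative only, not the exact setup used for the MMLU evaluation:

```python
# Minimal sketch: load the updated checkpoint and ask a formal-logic question.
# Assumes the repo is compatible with AutoModelForCausalLM and has a chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "hanzla/Falcon3-Mamba-R1-v0"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; adjust for your hardware
    device_map="auto",
)

messages = [
    {
        "role": "user",
        "content": (
            "Is the following argument valid? "
            "All A are B; some B are C; therefore some A are C."
        ),
    }
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```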