@ThomasTheMaker if you make an issue on the repo, I'll look into it!
Jordan Legg PRO


@ThomasTheMaker it's just the raw attention and transformer architecture in Go, designed for serverless. Performance will definitely be lower than ggml and llama.cpp since it isn't GPU-accelerated, but if you're into CPU-only edge AI, this is the first, only, and best way to compute attention.
Quantization can definitely be supported as it's just a math model!

We built this library at takara.ai to bring attention mechanisms and transformer layers to Go, in a form that's lightweight, clean, and dependency-free.
We're proud to say that every part of this project reflects what we set out to do.
- Pure Go β no external dependencies, built entirely on the Go standard library
- Core support for DotProductAttention and MultiHeadAttention
- Full transformer layers with LayerNorm, feed-forward networks, and residual connections
- Designed for edge, embedded, and real-time environments where simplicity and performance matter
Thank you to everyone who has supported this so far; the stars, forks, and feedback mean a lot.


No abstracts, just bullet points.
Start your day here: https://tldr.takara.ai
This is a pretty big update for sure. The models have improved significantly which is great for everyone involved, especially the end user. Those datasets look very promising as well!
Sounds interesting, I'll check it out!
This is a really interesting post. I've been looking at the DeepSeek models for sure. This shows a pretty nice improvement; would love to see some example changes!
Very cool

A little over two weeks ago, @aldigobbler and I set out to create the largest multimodal SVG dataset ever. We succeeded, and when I was in Munich, Germany, I took it one step further and made an entire app with it!
We fine-tuned Mistral Small, built a Next.js application, and blew some minds, taking 3rd place out of over 100 hackers. So cool!
If you want to see the dataset, please see below.
takara-ai/fudeno-instruct-4M
