What a beautiful monster...


After much fiddling with knobs like the Luddite that I am, I've finally succeeded in converting MistralAI's Mistral Nemo Instruct 2407 to F32 ONNX format. Tests are currently underway; in the meantime, I was able to use Kaggle to get fairly basic benchmarks of the base model. As soon as the ONNX model finishes uploading to Kaggle, I should be able to benchmark this monstrosity. If it shows well, I'll be releasing F16, Q8, Q4, and yes, even a Q2, purely for dev and research purposes.
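For anyone who wants to try something similar, here's a rough sketch of how an export plus a Q8-style pass can be done with Hugging Face Optimum and ONNX Runtime's dynamic quantizer. This is illustrative only, not necessarily the exact steps I ran, and the output folder names are just placeholders:

```python
# pip install "optimum[onnxruntime]" transformers
# Illustrative sketch only: roughly how a Mistral Nemo -> ONNX export and an
# INT8 (Q8-style) dynamic quantization pass can be done. Paths below are
# placeholders, not my actual pipeline.
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer
from onnxruntime.quantization import quantize_dynamic, QuantType

model_id = "mistralai/Mistral-Nemo-Instruct-2407"
out_dir = "mistral-nemo-instruct-2407-onnx-f32"  # placeholder output folder

# Export the PyTorch checkpoint to an ONNX graph (weights stay FP32 by default)
model = ORTModelForCausalLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model.save_pretrained(out_dir)
tokenizer.save_pretrained(out_dir)

# Dynamic quantization of the exported graph to INT8 weights (a Q8 analogue).
# "model.onnx" is the filename Optimum typically writes.
quantize_dynamic(
    model_input=f"{out_dir}/model.onnx",
    model_output=f"{out_dir}/model-int8.onnx",
    weight_type=QuantType.QInt8,
    use_external_data_format=True,  # the F32 graph is well over 2 GB
)
```

The F32 graph of a 12B-parameter model is far past ONNX's 2 GB protobuf limit, so the weights land in an external data file next to model.onnx; that's also why the quantization call needs the external-data flag.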

Assuming success with this, I'll be setting my sights on an ONNX version of Mamba 2, to see if I can retain the Mamba 2 architecture's strengths, including how well it holds up under quantization, and put it to work toward my overall project goals.

I want to thank everyone here and online for their help, encouragement, and assistance while I've worked on this. To my family and friends: I'm not dead (lol). To everyone out there who shares my passion for exploration and has no fear of "failing forward": thank you for putting up with my absolute newb questions. Hopefully this F32 conversion, even if it's somewhat problematic, shows that with grit and desire you can teach yourself how to do anything you want.

Love to everyone out there.
-Ryan

Oh! And I don't know why I feel it's important to share, but I'm doing EVERYTHING on my Dell G15 5535 and free online resources.

That's it. I have a single old-ass CUDA GPU in this thing and a LOT of patience. Don't let your hardware stop you. Get an old dual-Xeon Dell OptiPlex and start fiddling!

