Ok, but actually it's pretty usable. I get acceptable speed in Stable Diffusion with DirectML. There's also "Amuse", which is made for AMD and works great. And if I need to run language models, Vulkan brings the speed.
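For the Vulkan route, a minimal sketch of what this can look like, assuming the llama-cpp-python bindings were built with llama.cpp's Vulkan backend (the model path and prompt here are just placeholders):

```python
# Sketch: run a small GGUF model through llama.cpp's Vulkan backend via the
# llama-cpp-python bindings. Assumes the package was installed with Vulkan
# enabled, e.g.: CMAKE_ARGS="-DGGML_VULKAN=on" pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="models/model-q4_k_m.gguf",  # placeholder path to a GGUF file
    n_gpu_layers=-1,   # offload all layers to the Vulkan device
    n_ctx=4096,        # context window
)

out = llm("Explain DirectML in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```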
I wonder, is there a tutorial on how to run it on AMD in DirectML mode? I tried once and it didn't work ... 1.3B was the maximum I could run, and even that gave me errors in ComfyUI.
CPU mode worked, but it was too slow.
I have a Ryzen 7735HS, which can use system RAM as VRAM (8 GB).
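Not a full tutorial, but as a starting point: ComfyUI can be launched with the `--directml` flag, which relies on the `torch-directml` package. A quick sanity check like the sketch below (just an assumption about a reasonable first step, not a guaranteed fix for the ComfyUI errors) can confirm whether DirectML actually sees the Radeon iGPU before debugging anything else:

```python
# Sanity-check sketch: verify that torch-directml (the package behind
# ComfyUI's --directml flag) can see the AMD iGPU and run a computation on it.
# Install with: pip install torch-directml
import torch
import torch_directml

print("DirectML devices:", torch_directml.device_count())
print("Device 0:", torch_directml.device_name(0))

dml = torch_directml.device()            # first DirectML adapter
x = torch.randn(1024, 1024, device=dml)  # allocate on the iGPU
y = x @ x                                # matmul runs through DirectML
print("Matmul OK:", y.shape)
```

If that runs but ComfyUI still fails, the 8 GB of shared VRAM is the usual suspect; launching ComfyUI with its low-VRAM options is worth trying before giving up on anything larger than 1.3B.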