license: apache-2.0
https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503
Q6_K_XL: Q6_K weights, with the output and embedding tensors left untouched (not quantized)
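A minimal sketch of how a quant like this could be produced with llama.cpp's conversion and `llama-quantize` tools, assuming an F16 GGUF of the base model is made first and that "untouched" means the output and token-embedding tensors stay at F16 (all filenames and the chosen tensor types are illustrative, not the exact recipe used here):

```sh
# Convert the HF checkpoint to an F16 GGUF (paths are hypothetical)
python convert_hf_to_gguf.py ./Mistral-Small-3.1-24B-Instruct-2503 \
  --outfile Mistral-Small-3.1-24B-Instruct-2503-F16.gguf --outtype f16

# Quantize to Q6_K while keeping output and embedding tensors unquantized
./llama-quantize \
  --output-tensor-type f16 \
  --token-embedding-type f16 \
  Mistral-Small-3.1-24B-Instruct-2503-F16.gguf \
  Mistral-Small-3.1-24B-Instruct-2503-Q6_K_XL.gguf \
  Q6_K
```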
Fits a 24K context on a 24 GiB GPU
llama.cpp does not yet support exporting the vision components, so this quant is text-only
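As a rough, text-only illustration of the 24K-context claim, a llama.cpp invocation might look like the following (the filename and the `-ngl` offload count are assumptions; adjust to your hardware):

```sh
# Run the quant with a 24K-token context, offloading all layers to the GPU
./llama-cli \
  -m Mistral-Small-3.1-24B-Instruct-2503-Q6_K_XL.gguf \
  -c 24576 \
  -ngl 99 \
  -p "Summarize what Q6_K quantization does in one paragraph."
```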