roleplaiapp/Mistral-Small-24B-Instruct-2501-IQ3_M-GGUF

Repo: roleplaiapp/Mistral-Small-24B-Instruct-2501-IQ3_M-GGUF
Original Model: Mistral-Small-24B-Instruct-2501
Quantized File: Mistral-Small-24B-Instruct-2501-IQ3_M.gguf
Quantization: GGUF
Quantization Method: IQ3_M

Overview

This is a GGUF IQ3_M quantized version of Mistral-Small-24B-Instruct-2501.
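One way to try the quantized file locally is with llama.cpp's `llama-cli` after downloading the GGUF from this repo. This is a sketch, not an official recommendation: it assumes you have llama.cpp built and the `huggingface_hub` CLI installed, and the context/token flags are illustrative defaults.

```shell
# Download the quantized file from this repo (~10 GB at IQ3_M)
huggingface-cli download roleplaiapp/Mistral-Small-24B-Instruct-2501-IQ3_M-GGUF \
  Mistral-Small-24B-Instruct-2501-IQ3_M.gguf --local-dir .

# Run an interactive chat session with llama.cpp
# -c sets the context window; -n caps tokens generated per turn (example values)
llama-cli -m Mistral-Small-24B-Instruct-2501-IQ3_M.gguf \
  -c 4096 -n 256 -cnv
```

Any GGUF-compatible runtime (e.g. llama-cpp-python, LM Studio, or Ollama with a Modelfile) should also be able to load the same file.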

Quantization By

I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ RolePlai.

Model size: 23.6B params
Architecture: llama