Spacewars 24B v1.00a
Released by spacewars123
Quant by FrenzyBiscuit

AWQ Details
- The model was quantized to INT4 using GEMM kernels.
- Zero-point (asymmetric) quantization
- Group size of 64
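As a rough illustration of what these settings mean, the sketch below implements zero-point INT4 quantization with a group size of 64 in plain NumPy. This is not the AWQ/AutoAWQ code path (which uses activation-aware scaling and packed GEMM kernels); it only shows the per-group asymmetric round-trip, and all function names here are illustrative.

```python
import numpy as np

def quantize_group(w, bits=4):
    # Asymmetric (zero-point) quantization of one group of weights.
    qmax = 2 ** bits - 1                      # 15 for INT4
    wmin, wmax = float(w.min()), float(w.max())
    scale = (wmax - wmin) / qmax
    zero = np.round(-wmin / scale)            # zero-point maps wmin -> 0
    q = np.clip(np.round(w / scale + zero), 0, qmax).astype(np.uint8)
    return q, scale, zero

def dequantize_group(q, scale, zero):
    # Recover an approximation of the original FP weights.
    return (q.astype(np.float32) - zero) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(128).astype(np.float32)

# Group size 64: each chunk of 64 weights gets its own scale and zero-point,
# which bounds the quantization error within every group.
groups = [quantize_group(g) for g in w.reshape(-1, 64)]
w_hat = np.concatenate([dequantize_group(*g) for g in groups])
err = float(np.abs(w - w_hat).max())
```

Smaller groups mean more scale/zero-point metadata but lower per-group error; 64 is a common middle ground for 4-bit weight quantization.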
- Downloads last month: 25
Model tree for FrenzyBiscuit/Space-Wars-24B-v1.00a-AWQ
- Base model: mistralai/Mistral-Small-24B-Base-2501
- Finetuned from: spacewars123/Space-Wars-24B-v1.00a