roleplaiapp/internlm3-8b-instruct-Q8_0-GGUF
Repo: roleplaiapp/internlm3-8b-instruct-Q8_0-GGUF
Original Model: internlm3-8b-instruct
Organization: internlm
Quantized File: internlm3-8b-instruct-q8_0.gguf
Quantization: GGUF
Quantization Method: Q8_0
Use Imatrix: False
Split Model: False
Overview
This is a GGUF Q8_0 quantized version of internlm3-8b-instruct.
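As a usage sketch (not part of the original card), the file can be run with any llama.cpp-based runtime. Below is a minimal example using llama-cpp-python; `Llama.from_pretrained` fetches the GGUF file from the Hub, and the prompt and generation settings are illustrative only:

```python
# Minimal sketch: load and run this Q8_0 GGUF with llama-cpp-python
# (pip install llama-cpp-python huggingface_hub). Prompt and settings
# below are illustrative assumptions, not from the original card.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="roleplaiapp/internlm3-8b-instruct-Q8_0-GGUF",
    filename="internlm3-8b-instruct-q8_0.gguf",
    n_ctx=4096,  # context window; adjust to available RAM
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain GGUF quantization."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```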
Quantization By
I often have idle A100 GPUs while building, testing, and training the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.
Andrew Webby @ RolePlai