Configuration Parsing
Warning: in config.json, "quantization_config.bits" must be an integer.
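The warning above can be reproduced with a small, hypothetical validation sketch: the `check_bits` helper below is illustrative (not part of any real parser), assuming the parser rejects non-integer values for `quantization_config.bits` even though EXL2 quants are commonly described with fractional bits-per-weight such as 6.75 bpw.

```python
import json

def check_bits(config_text: str) -> list[str]:
    """Return parser-style warnings for the quantization config.

    Hypothetical check mirroring the warning above: the
    "quantization_config.bits" field in config.json must be an
    integer, so a fractional bpw value like 6.75 would trigger it.
    """
    config = json.loads(config_text)
    warnings = []
    bits = config.get("quantization_config", {}).get("bits")
    if bits is not None and not isinstance(bits, int):
        warnings.append('"quantization_config.bits" must be an integer')
    return warnings

# A config with fractional bits produces the warning;
# an integer value passes cleanly.
print(check_bits('{"quantization_config": {"bits": 6.75}}'))
print(check_bits('{"quantization_config": {"bits": 8}}'))
```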
6.75bpw-h8-exl2 quant of DavidAU's L3.1-RP-Hero-BigTalker-8B
Link to original model and creator: https://huggingface.co/DavidAU/L3.1-RP-Hero-BigTalker-8B-GGUF
Model tree for James2313123/L3.1-RP-Hero-BigTalker-8B_6.75bpw-h8-exl2
Base model: DavidAU/L3.1-RP-Hero-BigTalker-8B