A 4.5bpw, hb8 EXL2 quant of https://huggingface.co/TareksTesting/MO-MODEL-Fused-V0.6-LLaMa-70B
measurement.json file included.
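As a rough guide to what "4.5bpw" implies for memory use, the sketch below estimates the on-disk/in-VRAM size of the quantized weights from the parameter count and bits per weight. This is a back-of-envelope calculation, not an official figure: it ignores the hb8 (8-bit head) layer, embeddings, KV cache, and runtime overhead, and the 70B parameter count is taken from the model name.

```python
def estimated_weight_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate size of quantized weights in gigabytes.

    n_params: total parameter count (e.g. 70e9 for a 70B model).
    bits_per_weight: average bits per weight of the quant (e.g. 4.5).
    """
    # bits -> bytes (divide by 8), bytes -> GB (divide by 1e9)
    return n_params * bits_per_weight / 8 / 1e9

# For this 4.5bpw quant of a 70B model:
print(f"{estimated_weight_gb(70e9, 4.5):.1f} GB")  # ~39.4 GB of weights
```

Actual VRAM needed will be somewhat higher once context (KV cache) and activation buffers are accounted for.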
Model tree for zerofata/Aurora-Borealis-LLaMa-70B-4.5bpw-hb8-exl2
Base model: Tarek07/Aurora-Borealis-LLaMa-70B