Original model: Sakura-13B-LNovel-v0.11pre1

4-bit AWQ quantization. Untested; not recommended for general use.

Quantization was performed in an environment without flash_attn installed.
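
For reference, a quantization run along these lines can be reproduced with AutoAWQ. This is only a sketch: the source repository id, output path, and quantization settings below are assumptions, not the exact configuration used for this checkpoint.

```python
# Minimal sketch of a 4-bit AWQ quantization pass with AutoAWQ.
# Repo id, output path, and quant_config values are assumptions.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "SakuraLLM/Sakura-13B-LNovel-v0.11pre1"  # assumed source repo
quant_path = "Sakura-13B-LNovel-v0.11pre1-AWQ"        # local output directory
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# flash_attn is not required by this path: nothing here imports it.
model = AutoAWQForCausalLM.from_pretrained(model_path, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```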

Intended for Intel XPU testing; this quantized model may not be suitable for every setup.
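
A rough sketch of the intended Intel XPU usage is shown below. The repository id, prompt, and generation settings are assumptions, and whether the 4-bit AWQ kernels actually run on XPU depends on the local PyTorch / intel_extension_for_pytorch stack; this is not guaranteed by this card.

```python
# Rough sketch of loading the quantized model for Intel XPU testing.
# Assumes an XPU-enabled PyTorch build with intel_extension_for_pytorch installed;
# AWQ kernel availability on XPU is an assumption, not a promise of this card.
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401  (enables the "xpu" device)
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Sakura-13B-LNovel-v0.11pre1-AWQ"  # assumed repo/path of this quantized model
device = "xpu" if torch.xpu.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,
    trust_remote_code=True,
).to(device)

# Illustrative prompt only; see the original model card for the expected prompt format.
inputs = tokenizer("こんにちは", return_tensors="pt").to(device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```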
