Disclaimer:

This model is reproduced based on the paper VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models (GitHub, arXiv).

The model itself is sourced from a community release.

It is intended only for experimental purposes.

Users are responsible for any consequences arising from the use of this model.
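
For reference, below is a minimal sketch of how a VPTQ-quantized checkpoint such as this one is typically loaded, following the usage pattern documented in the VPTQ GitHub repository. It assumes the `vptq` package is installed (`pip install vptq`) and that a CUDA-capable GPU with sufficient memory is available; the prompt text is an arbitrary placeholder.

```python
# Minimal sketch: loading this VPTQ checkpoint for inference.
# Assumes `pip install vptq` and a CUDA-capable GPU.
import transformers
import vptq

model_id = "VPTQ-community/Meta-Llama-3.3-70B-Instruct-v8-k65536-256-woft"

tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
model = vptq.AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Run a short generation as a smoke test.
inputs = tokenizer("Explain vector quantization in one sentence.",
                   return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```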

Model size: 8.83B params (Safetensors)
Tensor types: BF16, I32, I16
