Llama-3.3-70B-Instruct-abliterated-FP8-Dynamic

This is an FP8-quantized version of thisnick/Llama-3.3-70B-Instruct-abliterated, produced with dynamic (per-tensor, runtime-scaled) FP8 quantization.
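As a rough illustration of what dynamic FP8 quantization does, the sketch below simulates a per-tensor quantize/dequantize round trip in the E4M3 format (4 exponent bits, 3 mantissa bits, max finite value 448). This is a minimal educational model of the scheme, not the actual kernel used to produce these weights; the function name and the simplified rounding (normal-range only, no NaN encoding) are illustrative assumptions.

```python
import math

FP8_E4M3_MAX = 448.0  # largest finite value representable in E4M3


def quant_dequant_e4m3(values):
    """Simulate per-tensor dynamic FP8 (E4M3) quantization:
    pick a scale so the max magnitude fills the FP8 range,
    then round each scaled value to the nearest representable
    E4M3 number and scale back. Illustrative sketch only."""
    amax = max(abs(v) for v in values) or 1.0
    scale = amax / FP8_E4M3_MAX  # dynamic per-tensor scale
    out = []
    for v in values:
        x = v / scale
        if x == 0.0:
            out.append(0.0)
            continue
        sign = math.copysign(1.0, x)
        x = min(abs(x), FP8_E4M3_MAX)
        # 3 mantissa bits -> 8 steps per binade in the normal range
        e = max(math.floor(math.log2(x)), -6)  # E4M3 min normal exponent
        step = 2.0 ** (e - 3)                  # spacing at this magnitude
        out.append(sign * round(x / step) * step * scale)
    return out
```

The round trip preserves zero and the tensor's maximum magnitude exactly, while other values pick up a relative error of at most about one part in sixteen, which is the trade-off that lets 70B parameters fit in roughly half the memory of BF16.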

Model size: 70.6B params
Tensor types: BF16, F8_E4M3
Format: Safetensors