
Custom GGUF quants for failspy/llama-3-70B-Instruct-abliterated-GGUF.

IQ4_SR is optimal (8k context) for 36 GB of VRAM when an integrated GPU (IGP) handles the OS display.

IQ4_MR is optimal for the same configuration when using MMQ kernels and 8-bit KV-cache quantization.

Without an IGP, IQ4_XSR is the one for you.
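As an illustration, a llama.cpp invocation matching the IQ4_MR recommendation above (8k context, 8-bit KV cache) might look like the sketch below. The model filename, prompt, and `-ngl` layer count are placeholders, not values from this card; check the flags against your llama.cpp build, since option names have changed between versions.

```shell
# Hypothetical llama.cpp run for the IQ4_MR setup described above:
# 8k context with the KV cache quantized to 8 bits (q8_0).
# Model path, -ngl value, and prompt are placeholders for your setup.
./llama-cli \
  -m llama-3-70B-Instruct-abliterated.IQ4_MR.gguf \
  -c 8192 \
  -ngl 99 \
  -fa \
  --cache-type-k q8_0 \
  --cache-type-v q8_0 \
  -p "Hello"
```

Note that llama.cpp generally requires flash attention (`-fa`) to be enabled before the V cache can be quantized; without it, only the K cache type takes effect.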

Format: GGUF
Model size: 70.6B params
Architecture: llama
