bin4ry_stheno_8B_v1_gguf

These are quantizations of banelingz/bin4ry_stheno_8B_v1.

Format: GGUF
Model size: 8.03B params
Architecture: llama
Quantizations available: 4-bit, 6-bit
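
GGUF files like these are typically run locally with llama.cpp. A minimal sketch of downloading and running one of the quantizations, assuming a hypothetical filename (the actual filenames in this repo may differ; check the repo's file list):

```shell
# Download one quantized file from the repo.
# The filename below is hypothetical -- substitute the real one.
huggingface-cli download banelingz/bin4ry_stheno_8B_v1_gguf \
  bin4ry_stheno_8B_v1.Q4_K_M.gguf --local-dir .

# Run it with llama.cpp's CLI: -m selects the model file,
# -p gives a prompt, -n caps the number of generated tokens.
llama-cli -m bin4ry_stheno_8B_v1.Q4_K_M.gguf -p "Hello" -n 64
```

The 4-bit file trades some quality for a smaller memory footprint; the 6-bit file is larger but closer to the original model's output.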

