Abliterated using a Householder transformation.
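The card doesn't include the ablation code, so here is a minimal sketch of what a Householder-based abliteration step could look like, assuming a refusal direction has already been extracted (e.g., as the difference of mean activations on harmful vs. harmless prompts). The function name and the usage lines are illustrative, not the actual procedure used for this model.

```python
import torch

def householder_abliterate(weight: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Apply a Householder reflection to a weight matrix:
    W' = (I - 2 v v^T) W, where v is the unit refusal direction.

    Unlike projection-based abliteration (W' = W - v v^T W), the
    Householder variant reflects the refusal component rather than
    zeroing it; the reflection matrix is orthogonal, so norms are
    preserved.
    """
    v = refusal_dir / refusal_dir.norm()              # normalize to unit length
    # Compute (I - 2 v v^T) W without materializing the identity matrix.
    return weight - 2.0 * torch.outer(v, v @ weight)

# Illustrative usage on one output projection (layer names are hypothetical):
# layer.self_attn.o_proj.weight.data = householder_abliterate(
#     layer.self_attn.o_proj.weight.data, refusal_dir
# )
```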

In testing, the model produced no refusals on benchmarks of dangerous questions.

I ran a zero-shot HellaSwag benchmark with lm-eval-harness; the model scored acc_norm = 0.7601 vs. 0.7758 for the base model.
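A run like this can be reproduced with lm-eval-harness's Python API, roughly as below. The repo id is a placeholder, and batch size and dtype are assumptions; substitute this model's actual id and your hardware settings.

```python
import lm_eval

# Zero-shot HellaSwag, matching the setup described above.
# "ORG/MODEL" is a placeholder for the actual Hugging Face repo id.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ORG/MODEL,dtype=bfloat16",
    tasks=["hellaswag"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"]["hellaswag"])  # includes acc and acc_norm
```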

Model size: 30.5B params · Tensor type: BF16 · Format: Safetensors