This model was abliterated using a Householder transformation.
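The card doesn't include the ablation code, so here is a minimal sketch of the idea: reflect each affected weight matrix across the hyperplane orthogonal to a refusal direction via the Householder matrix H = I - 2vv^T. The function name, the random stand-in weights, and the way the direction is obtained below are all illustrative assumptions, not the author's actual pipeline.

```python
import torch

def householder_ablate(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Apply H @ weight, where H = I - 2 v v^T is the Householder
    reflection across the hyperplane orthogonal to `direction`."""
    v = direction / direction.norm()
    # Avoid materializing H: H @ W = W - 2 * v * (v^T W)
    return weight - 2.0 * torch.outer(v, v @ weight)

# Illustrative usage on a random matrix standing in for a projection weight.
d_model = 8
W = torch.randn(d_model, d_model)
v = torch.randn(d_model)  # stand-in for an extracted refusal direction
W_new = householder_ablate(W, v)
```

Unlike a plain orthogonal projection (W - vv^T W), a Householder reflection is an orthogonal transform, so it preserves activation norms while flipping the component along the refusal direction.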
In testing, the model produced no refusals on benchmark prompts involving dangerous questions.
I ran a zero-shot HellaSwag benchmark with lm-eval-harness; the abliterated model scored 0.7601 acc_norm versus 0.7758 for the base model.
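For anyone wanting to reproduce the run, a sketch using lm-eval-harness's Python API is below; the exact invocation isn't given in the card, and the `pretrained` path is a placeholder, not the actual repo id.

```python
# Sketch of a zero-shot HellaSwag evaluation with lm-eval-harness
# (pip install lm-eval); the model path is a placeholder.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=path/to/model",  # placeholder repo id
    tasks=["hellaswag"],
    num_fewshot=0,  # zero-shot, matching the reported run
)
print(results["results"]["hellaswag"])  # includes acc and acc_norm
```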