This was an experiment. I computed the weight delta between mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated and meta-llama/Llama-3.1-8B-Instruct and applied it to the common layers of ICTNLP/Llama-3.1-8B-Omni.

The intention was to see whether the Omni model could inherit the abliterated behavior. The result (this model) is coherent, but not fully uncensored; the most likely reason is how the Omni model was trained.
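The exact merge script is not included in this card; the following is a minimal sketch of the delta arithmetic described above, assuming all three checkpoints can be materialized as plain state dicts. The loading helper, file names, and the handling of the Omni checkpoint (which may require its repo's own loading code rather than `AutoModelForCausalLM`) are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM

def load_sd(repo_id):
    # Assumption: the checkpoint loads through transformers; the Omni model
    # may instead need the loading code from its own repository.
    model = AutoModelForCausalLM.from_pretrained(
        repo_id, torch_dtype=torch.bfloat16, trust_remote_code=True
    )
    return model.state_dict()

base = load_sd("meta-llama/Llama-3.1-8B-Instruct")
ablit = load_sd("mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated")
omni = load_sd("ICTNLP/Llama-3.1-8B-Omni")

merged = {}
for name, weight in omni.items():
    if name in base and name in ablit and base[name].shape == weight.shape:
        # Shared backbone layer: add the abliteration delta.
        merged[name] = weight + (ablit[name] - base[name])
    else:
        # Omni-specific tensor (e.g. speech components): keep unchanged.
        merged[name] = weight

# Hypothetical output path; in practice the result would be re-saved
# in the Omni model's original format.
torch.save(merged, "llama-3.1-8b-omni-abliterated.pt")
```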

Safetensors · Model size: 9.11B params · Tensor type: BF16