Quantized using the default exllamav3 (0.0.3) quantization process.
- Original model: https://huggingface.co/summykai/gemma3-27b-abliterated-dpo
- exllamav3: https://github.com/turboderp-org/exllamav3
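If you want to fetch the quantized weights locally before loading them with an EXL3-capable backend, a minimal sketch using `huggingface_hub` is shown below. The repo id matches this card; the local directory is an arbitrary example path.

```python
# Minimal sketch: download the EXL3 quant from the Hub with huggingface_hub.
# The repo id matches this card; local_dir is an illustrative example path.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="MetaphoricalCode/gemma3-27b-abliterated-dpo-exl3-6bpw-hb6",
    local_dir="models/gemma3-27b-abliterated-dpo-exl3-6bpw-hb6",
)
print(f"Quantized model downloaded to: {local_path}")
```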
Uploaded finetuned model
- Developed by: Summykai
- License: apache-2.0
- Finetuned from model: mlabonne/gemma-3-27b-it-abliterated

This Gemma 3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
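For reference, DPO finetuning with Unsloth and TRL typically looks roughly like the sketch below. This is not the author's training script: the dataset name, LoRA rank, and hyperparameters are placeholders, it assumes a preference dataset with prompt/chosen/rejected columns, and newer Unsloth releases may prefer `FastModel` for Gemma 3.

```python
# Rough sketch of DPO training with Unsloth + TRL (not the author's actual script).
# Dataset name, LoRA rank, and hyperparameters are illustrative placeholders.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import DPOConfig, DPOTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mlabonne/gemma-3-27b-it-abliterated",
    max_seq_length=2048,
    load_in_4bit=True,
)
# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(model, r=16, lora_alpha=16)

# Expects a preference dataset with "prompt", "chosen" and "rejected" columns.
dataset = load_dataset("your/preference-dataset", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="outputs", beta=0.1, per_device_train_batch_size=1),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```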
Model tree for MetaphoricalCode/gemma3-27b-abliterated-dpo-exl3-6bpw-hb6:
- Base model: google/gemma-3-27b-pt
- Finetuned: google/gemma-3-27b-it
- Finetuned: mlabonne/gemma-3-27b-it-abliterated
- Finetuned: summykai/gemma3-27b-abliterated-dpo