
## Overview
A collection of GGUF quantized builds of vmlu-llm-7B-Uncensored, offering a range of performance-quality tradeoffs.
## Model Variants
| Variant | Use Case | Download |
|---|---|---|
| Q2_K | Basic text completion tasks | 📥 |
| Q3_K_M | Memory-efficient quality operations | 📥 |
| Q4_K_S | Balanced performance and quality (smaller file) | 📥 |
| Q4_K_M | Balanced performance and quality (higher quality) | 📥 |
| Q5_K_S | Enhanced quality text generation (smaller file) | 📥 |
| Q5_K_M | Enhanced quality text generation (higher quality) | 📥 |
| Q6_K | Superior quality outputs | 📥 |
| Q8_0 | Maximum quality, production-grade results | 📥 |
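To help choose a variant, the memory cost of each quantization level can be estimated from its bits-per-weight. The figures and helper names below (`APPROX_BITS_PER_WEIGHT`, `pick_variant`) are illustrative assumptions based on commonly cited numbers for k-quants, not measurements of these exact files:

```python
# Rough size estimates for a 7B-parameter model at each quantization level.
# Bits-per-weight values are approximate community figures, not measured
# from this repository's files.
APPROX_BITS_PER_WEIGHT = {
    "Q2_K": 2.6, "Q3_K_M": 3.9, "Q4_K_S": 4.5, "Q4_K_M": 4.9,
    "Q5_K_S": 5.5, "Q5_K_M": 5.7, "Q6_K": 6.6, "Q8_0": 8.5,
}
N_PARAMS = 7e9  # 7B parameters


def approx_file_size_gb(variant: str) -> float:
    """Estimate the GGUF file size in GB for a 7B model at this quant level."""
    return N_PARAMS * APPROX_BITS_PER_WEIGHT[variant] / 8 / 1e9


def pick_variant(ram_budget_gb: float) -> str:
    """Pick the highest-quality variant whose weights fit the RAM budget."""
    fitting = [v for v in APPROX_BITS_PER_WEIGHT
               if approx_file_size_gb(v) <= ram_budget_gb]
    # The dict is ordered from lowest to highest quality, so take the last fit.
    return fitting[-1]


print(f"Q4_K_M ~= {approx_file_size_gb('Q4_K_M'):.1f} GB")
print(pick_variant(6.0))
```

Note that actual memory use at inference time is higher than the raw file size, since the KV cache and activation buffers also need room.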
## Contributors
Developed with ❤️ by BlossomsAI

Star ⭐ this repo if you find it valuable!
## Model tree for BlossomsAI/vmlu-llm-7B-Uncensored-GGUF
- Base model: vtrungnhan9/vmlu-llm
- Finetuned: BlossomsAI/vmlu-llm-7B-Uncensored