Melvin56/UIGEN-T2-7B-GGUF

Original model: Tesslate/UIGEN-T2-7B

Llama.cpp build: 5219 (7d3af70b)

I created all of these quants with an importance matrix (imatrix) computed from this dataset.
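For reference, the workflow is roughly the two-step process below: a minimal sketch using the `llama-imatrix` and `llama-quantize` tools from the llama.cpp build noted above. The F16 source file, calibration file name, and quant type are placeholders, not the exact values used for this repo.

```sh
# 1) Compute an importance matrix from a calibration text file
#    (calibration.txt stands in for the dataset linked above).
./llama-imatrix -m UIGEN-T2-7B-F16.gguf -f calibration.txt -o imatrix.dat

# 2) Quantize using the imatrix (Q4_K_M shown as an example type).
./llama-quantize --imatrix imatrix.dat UIGEN-T2-7B-F16.gguf \
    UIGEN-T2-7B-Q4_K_M.gguf Q4_K_M
```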


|          | CPU (AVX2) | CPU (ARM NEON) | Metal | cuBLAS | rocBLAS | SYCL | CLBlast | Vulkan | Kompute |
|----------|------------|----------------|-------|--------|---------|------|---------|--------|---------|
| K-quants | βœ… | βœ… | βœ… | βœ… | βœ… | βœ… | βœ… 🐒⁡ | βœ… 🐒⁡ | ❌ |
| I-quants | βœ… 🐒⁴ | βœ… 🐒⁴ | βœ… 🐒⁴ | βœ… | βœ… | PartialΒΉ | ❌ | ❌ | ❌ |
βœ…: feature works
🚫: feature does not work
❓: unknown, please contribute if you can test it yourself
🐒: feature is slow
ΒΉ: IQ3_S and IQ1_S, see #5886
Β²: Only with -ngl 0
Β³: Inference is 50% slower
⁴: Slower than K-quants of comparable size
⁡: Slower than cuBLAS/rocBLAS on similar cards
⁢: Only q8_0 and iq4_nl
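As a concrete example of the `-ngl` offload flag the footnotes refer to, a typical run with a GPU-enabled llama.cpp build might look like the sketch below (the GGUF filename and layer count are placeholders):

```sh
# Offload all layers to the GPU on a CUDA/Metal/Vulkan build;
# use -ngl 0 to keep inference on the CPU instead.
./llama-cli -m UIGEN-T2-7B-Q4_K_M.gguf -ngl 99 -cnv
```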
Format: GGUF
Model size: 7.62B params
Architecture: qwen2

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
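To fetch a single quant rather than cloning the whole repository, something like the following should work (the `*Q4_K_M*` filename pattern is an assumption; check the repo's file list for the exact names):

```sh
# Download only the Q4_K_M files from this repo into the current directory.
huggingface-cli download Melvin56/UIGEN-T2-7B-GGUF \
    --include "*Q4_K_M*" --local-dir .
```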


Model tree for Melvin56/UIGEN-T2-7B-GGUF:
Qwen/Qwen2.5-7B (base model) β†’ Tesslate/UIGEN-T2-7B β†’ this model (one of 5 quantized versions)