QuantStack/HunyuanImage-2.1-Refiner-GGUF
README.md exists but content is empty.
Downloads last month: 210
Model size: 15B params
Architecture: hyvid
Format: GGUF
Available quantizations:

Bits    Quant    File size
2-bit   Q2_K     5.67 GB
3-bit   Q3_K_S   7.11 GB
3-bit   Q3_K_M   7.17 GB
4-bit   Q4_K_S   9.06 GB
4-bit   Q4_0     9.05 GB
4-bit   Q4_1     9.97 GB
4-bit   Q4_K_M   9.17 GB
5-bit   Q5_K_S   10.9 GB
5-bit   Q5_0     10.9 GB
5-bit   Q5_1     11.8 GB
5-bit   Q5_K_M   10.9 GB
6-bit   Q6_K     12.8 GB
8-bit   Q8_0     16.4 GB
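
The snippet below is a minimal download sketch and not part of the original card: it assumes the quantization type appears in each .gguf filename (check the Files and versions tab for the exact names) and uses huggingface_hub's snapshot_download with a filename pattern so only the chosen variant is fetched.

```python
from huggingface_hub import snapshot_download

# Download only the Q4_K_M variant (~9.17 GB) instead of every quantization.
# The "*Q4_K_M*.gguf" pattern is an assumption about how the files are named;
# adjust it to match the actual filenames in the repository.
local_dir = snapshot_download(
    repo_id="QuantStack/HunyuanImage-2.1-Refiner-GGUF",
    allow_patterns=["*Q4_K_M*.gguf"],
)
print(local_dir)  # local folder containing the downloaded .gguf file
```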
Inference Providers: this model isn't deployed by any Inference Provider.
Model tree for QuantStack/HunyuanImage-2.1-Refiner-GGUF
Base model: tencent/HunyuanImage-2.1
Quantized: 4 models, including this one
Collection including QuantStack/HunyuanImage-2.1-Refiner-GGUF:
HunyuanImage2.1 GGUFs (collection, 3 items, updated 1 day ago)