avinashhm/Qwen3-8B-4bit-SINQ
1 like · PyTorch · qwen3
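The repository ships standard Hugging Face tokenizer files (tokenizer.json, tokenizer_config.json, vocab.json, merges.txt) alongside the quantized checkpoint. Below is a minimal sketch, assuming huggingface_hub and transformers are installed, of fetching the repo and loading the tokenizer; the 4-bit weights in qmodel.pt presumably require the SINQ tooling used to produce them, which this file listing does not document, so only the download and tokenizer steps are shown.

# Minimal sketch (assumption: huggingface_hub and transformers are installed).
from huggingface_hub import snapshot_download
from transformers import AutoTokenizer

repo_id = "avinashhm/Qwen3-8B-4bit-SINQ"

# Download every file in the listing below into the local cache and return its path.
local_dir = snapshot_download(repo_id)
print("files cached at:", local_dir)

# The tokenizer files follow the standard Hugging Face layout, so AutoTokenizer
# can load them straight from the repo id (or from local_dir).
tokenizer = AutoTokenizer.from_pretrained(repo_id)
print(tokenizer("Hello from Qwen3")["input_ids"])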
Files and versions (branch: main) · 2 community discussions
Qwen3-8B-4bit-SINQ · 8.69 GB · 1 contributor · 2 commits
Latest commit: avinashhm, "Push quantized Qwen3-8B 4-bit SINQ model with README", d2627ab (verified), 12 days ago
All 13 files were added in that same commit, 12 days ago.

File                        Size       Scan
.gitattributes              1.57 kB    Safe
README.md                   1.38 kB
added_tokens.json           707 Bytes  Safe
chat_template.jinja         4.17 kB    Safe
config.json                 1.54 kB    Safe
merges.txt                  1.67 MB    Safe
pytorch_model.bin           2.49 GB    Pickle (Xet)
    Detected pickle imports (3): torch._utils._rebuild_tensor_v2, torch.BFloat16Storage, collections.OrderedDict
qmodel.pt                   6.18 GB    Pickle (Xet)
    Detected pickle imports (8): torch.Size, torch.HalfStorage, collections.OrderedDict, torch.bfloat16, torch._utils._rebuild_tensor_v2, torch.uint8, torch.BFloat16Storage, torch.ByteStorage
quantization_config.json    195 Bytes
special_tokens_map.json     613 Bytes  Safe
tokenizer.json              11.4 MB    Safe (Xet)
tokenizer_config.json       5.4 kB     Safe
vocab.json                  2.78 MB    Safe
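Both pytorch_model.bin and qmodel.pt are pickle-based files, which is why the scanner lists their pickle imports above instead of marking them Safe. Below is a minimal sketch, assuming the files have already been downloaded locally, of opening them with PyTorch's weights_only mode, which refuses to unpickle anything outside an allowlist of container, storage, and dtype globals; this is an illustrative safety check, not the loader the author used to produce or consume the checkpoint.

# Sketch: open the pickle checkpoints without executing arbitrary unpickled code.
# weights_only=True restricts unpickling to an allowlist (OrderedDict, tensor
# rebuild helpers, torch storages, torch.Size, dtypes), which should cover the
# imports flagged above. Paths are assumed to point at locally downloaded copies.
import torch

for name in ("pytorch_model.bin", "qmodel.pt"):
    try:
        obj = torch.load(name, map_location="cpu", weights_only=True)
    except Exception as err:
        # Anything outside the allowlist (or a missing file) raises here instead
        # of being executed.
        print(f"{name}: could not load ({err})")
        continue
    if isinstance(obj, dict):
        print(f"{name}: {len(obj)} top-level entries")
        for key, value in list(obj.items())[:3]:
            desc = tuple(value.shape) if hasattr(value, "shape") else type(value).__name__
            print(f"  {key}: {desc}")
    else:
        print(f"{name}: loaded object of type {type(obj).__name__}")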