MediaTek-Research / Breeze2-3B-8W16A-Instruct-mobile-npu
Files and versions (branch: main)
2 contributors · 8 commits in history
Latest commit: MXLouis, "Add modelOutputQuantScale for using temperature" (1212ff4, verified, 15 days ago)
| File | Size | Last commit | Updated |
|---|---|---|---|
| .gitattributes | 1.98 kB | Upload folder using huggingface_hub | about 2 months ago |
| BreezeTinyInstruct_v0.1_sym8W_sym16A_Overall_14layer_128t1024c_0_extracted.dla | 3.03 MB | Upload folder using huggingface_hub | about 2 months ago |
| BreezeTinyInstruct_v0.1_sym8W_sym16A_Overall_14layer_128t1024c_1_extracted.dla | 3.06 MB | Upload folder using huggingface_hub | about 2 months ago |
| BreezeTinyInstruct_v0.1_sym8W_sym16A_Overall_14layer_1t1024c_0_extracted.dla | 1.61 MB | Upload folder using huggingface_hub | about 2 months ago |
| BreezeTinyInstruct_v0.1_sym8W_sym16A_Overall_14layer_1t1024c_1_extracted.dla | 1.63 MB | Upload folder using huggingface_hub | about 2 months ago |
| added_tokens.yaml | 9.76 kB | Upload folder using huggingface_hub | about 2 months ago |
| config_breezetiny_3b_instruct.yaml | 3.01 kB | Add modelOutputQuantScale for using temperature | 15 days ago |
| embedding_int16.bin | 788 MB | Upload embedding_int16.bin | about 2 months ago |
| shared_weights_0.bin | 1.41 GB | Upload folder using huggingface_hub | about 2 months ago |
| shared_weights_1.bin | 1.81 GB | Upload folder using huggingface_hub | about 2 months ago |
| tokenizer.tiktoken | 2.18 MB | Upload folder using huggingface_hub | about 2 months ago |
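
The commit messages above show the files were uploaded with huggingface_hub, so the same library can retrieve them. Below is a minimal sketch, assuming a Python environment with huggingface_hub installed; the local_dir value is an arbitrary example, not something defined by this repository.

```python
# Minimal sketch: download the files listed above with huggingface_hub,
# the library named in the commit messages.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="MediaTek-Research/Breeze2-3B-8W16A-Instruct-mobile-npu",
    revision="main",               # branch shown in the listing
    local_dir="./breeze2-3b-npu",  # example destination directory (assumption)
)
print(f"Repository snapshot downloaded to: {local_path}")
```

Note that the large binaries (embedding_int16.bin, shared_weights_0.bin, shared_weights_1.bin) alone total roughly 4 GB, so a full snapshot needs several gigabytes of free disk space.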