# MaoyueOUO/mistral-nemo-minitron-8b-instruct-GGUF

Tags: GGUF · conversational
License: apache-2.0
## Model details

- Format: GGUF
- Model size: 8.41B params
- Architecture: llama
- Chat template: included
- Downloads last month: 2
## Quantized files

| Bits  | Quant  | Size    |
|-------|--------|---------|
| 2-bit | Q2_K   | 3.33 GB |
| 3-bit | Q3_K_S | 3.83 GB |
| 3-bit | Q3_K_M | 4.21 GB |
| 3-bit | Q3_K_L | 4.54 GB |
| 4-bit | Q4_K_S | 4.91 GB |
| 4-bit | Q4_0   | 4.88 GB |
| 4-bit | Q4_1   | 5.37 GB |
| 4-bit | Q4_K_M | 5.15 GB |
| 5-bit | Q5_K_S | 5.86 GB |
| 5-bit | Q5_0   | 5.86 GB |
| 5-bit | Q5_1   | 6.36 GB |
| 5-bit | Q5_K_M | 6 GB    |
| 6-bit | Q6_K   | 6.91 GB |
| 8-bit | Q8_0   | 8.95 GB |
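
A minimal sketch of how one of these quants could be downloaded and run with `huggingface_hub` and `llama-cpp-python`. The exact GGUF filename below is an assumption for illustration; check the repo's Files and versions tab for the real name and pick the quant that fits your memory budget.

```python
# Sketch only: download a quant from this repo and run a chat completion.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Filename is assumed, not confirmed by the repo listing above.
model_path = hf_hub_download(
    repo_id="MaoyueOUO/mistral-nemo-minitron-8b-instruct-GGUF",
    filename="mistral-nemo-minitron-8b-instruct-Q4_K_M.gguf",  # hypothetical name
)

# Load the GGUF; the embedded chat template is used for chat-style prompts.
llm = Llama(model_path=model_path, n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give a one-line summary of GGUF."}],
)
print(out["choices"][0]["message"]["content"])
```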
## Collection

Included in the **GGUF** collection by MaoyueOUO ("Some GGUF files converted by me") · 5 items · Updated May 24