root4k/Dolphin-Mistral-24B-Venice-q4_0-G32
Text Generation · MLX · Safetensors · mistral · conversational · 4-bit precision
License: apache-2.0
README.md exists but content is empty.
Downloads last month: 311
Model size (Safetensors): 4.42B params
Tensor types: BF16 · U32
Chat template: included
Inference Providers: this model isn't deployed by any Inference Provider.
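Since no hosted Inference Provider serves this model and the README is empty, the natural path is running it locally. Below is a minimal sketch of loading the MLX weights and chatting with them; it assumes the standard mlx-lm `load`/`generate` API and the bundled chat template, since the card itself gives no usage instructions.

```python
# Minimal sketch, assuming the mlx-lm package; the model card provides no usage code.
from mlx_lm import load, generate

# Download and load the 4-bit MLX weights from the Hub.
model, tokenizer = load("root4k/Dolphin-Mistral-24B-Venice-q4_0-G32")

# The repo ships a chat template, so format the prompt as a conversation.
messages = [{"role": "user", "content": "Give me a one-sentence summary of MLX."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Generate a completion locally (MLX targets Apple-silicon hardware).
response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)
```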
Model tree for root4k/Dolphin-Mistral-24B-Venice-q4_0-G32
Base model: cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition
Quantized: 22 models, including this one
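The repository name suggests 4-bit weights with a quantization group size of 32, derived from the base model above. The card does not say how the conversion was performed; the following is only a sketch of how an equivalent MLX quantization could be produced with mlx-lm's `convert` utility, with the tool choice and parameters inferred from the "q4_0-G32" naming rather than stated in the card.

```python
# Sketch only: the card does not document how these weights were produced.
# q_bits/q_group_size mirror the "q4_0-G32" repo name (4-bit, group size 32),
# which is an inference, not a documented fact.
from mlx_lm import convert

convert(
    hf_path="cognitivecomputations/Dolphin-Mistral-24B-Venice-Edition",  # base model
    mlx_path="Dolphin-Mistral-24B-Venice-q4_0-G32",                      # local output dir
    quantize=True,
    q_bits=4,         # 4-bit weight quantization
    q_group_size=32,  # 32-element quantization groups
)
```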
Collection including root4k/Dolphin-Mistral-24B-Venice-q4_0-G32:
Dolphin-Mistral-24B-Venice-G32 (4 items · updated 28 days ago)