Bielik-4.5B-v3.0-Instruct-MLX-8bit
This model was converted to MLX format from SpeakLeash's Bielik-4.5B-v3.0-Instruct.
DISCLAIMER: Be aware that quantized models can show reduced response quality and may hallucinate!
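For reference, a conversion like this one can be reproduced with the `mlx_lm.convert` tool. This is a hypothetical sketch, not the maintainers' exact command; the output path and flag values are assumptions:

```shell
# Hypothetical re-creation of an 8-bit MLX conversion (not the exact
# command used for this repository). Requires Apple silicon and mlx-lm.
pip install mlx-lm
python -m mlx_lm.convert \
    --hf-path speakleash/Bielik-4.5B-v3.0-Instruct \
    --mlx-path Bielik-4.5B-v3.0-Instruct-MLX-8bit \
    -q --q-bits 8
```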
Use with mlx
```
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("speakleash/Bielik-4.5B-v3.0-Instruct-MLX-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
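Since Bielik is an instruct-tuned model, prompts generally work better when wrapped with the model's chat template. A minimal sketch, assuming the tokenizer ships a chat template (the example prompt is illustrative; running this downloads the model and requires Apple silicon):

```python
# Chat-style usage sketch for an instruct model with mlx-lm.
from mlx_lm import load, generate

model, tokenizer = load("speakleash/Bielik-4.5B-v3.0-Instruct-MLX-8bit")

# Wrap the user message with the model's chat template so the
# instruct formatting (special tokens, roles) is applied.
messages = [{"role": "user", "content": "Napisz krótki wiersz o jesieni."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```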
Model description:
- Developed by: SpeakLeash & ACK Cyfronet AGH
- Language: Polish
- Model type: causal decoder-only
- Quant from: Bielik-4.5B-v3.0-Instruct
- Finetuned from: Bielik-4.5B-v3
- License: Apache 2.0 and Terms of Use
Responsible for model quantization
- Remigiusz Kinas (SpeakLeash) - team leadership, conceptualization, calibration data preparation, process creation, and quantized model delivery.
Contact Us
If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our SpeakLeash Discord.