---
language:
- pl
license: apache-2.0
library_name: transformers
tags:
- finetuned
- gguf
inference: false
pipeline_tag: text-generation
base_model: speakleash/Bielik-1.5B-v3.0-Instruct
---
# Bielik-1.5B-v3.0-Instruct-MLX-8bit
This model was converted to MLX format from [SpeakLeash](https://speakleash.org/)'s [Bielik-1.5B-v3.0-Instruct](https://huggingface.co/speakleash/Bielik-1.5B-v3.0-Instruct).
**DISCLAIMER: Be aware that quantized models may show reduced response quality and an increased tendency to hallucinate!**
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("speakleash/Bielik-1.5B-v3.0-Instruct-MLX-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
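Since this is an instruction-tuned model, it typically responds better to chat-formatted prompts than to raw text. A sketch of this, assuming the tokenizer returned by `load` exposes the standard Hugging Face `apply_chat_template` API (as mlx-lm's tokenizer wrapper does) and using a hypothetical Polish question as the prompt:

```python
from mlx_lm import load, generate

model, tokenizer = load("speakleash/Bielik-1.5B-v3.0-Instruct-MLX-8bit")

# Wrap the user prompt in the chat structure the instruct model was trained on.
messages = [{"role": "user", "content": "Kim był Mikołaj Kopernik?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```

Note that running this requires Apple silicon hardware, as MLX targets Apple's unified-memory GPUs.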
### Model description:
* **Developed by:** [SpeakLeash](https://speakleash.org/) & [ACK Cyfronet AGH](https://www.cyfronet.pl/)
* **Language:** Polish
* **Model type:** causal decoder-only
* **Quant from:** [Bielik-1.5B-v3.0-Instruct](https://huggingface.co/speakleash/Bielik-1.5B-v3.0-Instruct)
* **Finetuned from:** [Bielik-1.5B-v3](https://huggingface.co/speakleash/Bielik-1.5B-v3)
* **License:** Apache 2.0 and [Terms of Use](https://bielik.ai/terms/)
### Responsible for model quantization
* [Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/) (SpeakLeash) - team leadership, conceptualization, calibration data preparation, process creation, and quantized model delivery.
## Contact Us
If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/CPBxPce4).