---
language:
- en
license: other
tags:
- causal-lm
- mlx
datasets:
- HuggingFaceH4/ultrachat_200k
- allenai/ultrafeedback_binarized_cleaned
- meta-math/MetaMathQA
- WizardLM/WizardLM_evol_instruct_V2_196k
- openchat/openchat_sharegpt4_dataset
- LDJnr/Capybara
- Intel/orca_dpo_pairs
- hkust-nlp/deita-10k-v0
- Anthropic/hh-rlhf
- glaiveai/glaive-function-calling-v2
extra_gated_fields:
  Name: text
  Email: text
  Country: text
  Organization or Affiliation: text
  I ALLOW Stability AI to email me about new model releases: checkbox
---
# voxmenthe/stablelm-2-12b-chat-mlx-4bit

This model was converted to MLX format from [`stabilityai/stablelm-2-12b-chat`](https://huggingface.co/stabilityai/stablelm-2-12b-chat) using mlx-lm version **0.8.0**.
Refer to the [original model card](https://huggingface.co/stabilityai/stablelm-2-12b-chat) for more details on the model.
## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("voxmenthe/stablelm-2-12b-chat-mlx-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
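Because this is a chat-tuned model, formatting the prompt with the model's chat template generally produces better responses than passing a raw string. Below is a minimal sketch, assuming the chat template was carried over during conversion and the loaded tokenizer exposes the standard `apply_chat_template` method; the message content and `max_tokens` value are illustrative.

```python
from mlx_lm import load, generate

model, tokenizer = load("voxmenthe/stablelm-2-12b-chat-mlx-4bit")

# Build a single-turn conversation and render it with the chat template
# (assumes the template shipped with the original model was preserved).
messages = [{"role": "user", "content": "Write a haiku about the ocean."}]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Generate a completion from the templated prompt.
response = generate(model, tokenizer, prompt=prompt, verbose=True, max_tokens=256)
```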