---
base_model:
- OddTheGreat/Comet_12B_V.4
base_model_relation: quantized
pipeline_tag: image-text-to-text
tags:
- chat
- mlx
- apple
- 8bit
- multimodal
language:
- en
- ru
library_name: mlx
---

# Comet-12B V4 8-bit MLX (Uncensored Gemma-3 12B)

This is a merge of the following pre-trained language models:

- [`ReadyArt/The-Omega-Directive-Gemma3-12B-v1.0`](https://huggingface.co/ReadyArt/The-Omega-Directive-Gemma3-12B-v1.0)
- [`TheDrummer/Fallen-Gemma3-12B-v1`](https://huggingface.co/TheDrummer/Fallen-Gemma3-12B-v1)
- [`Delta-Vector/Pascal-12B`](https://huggingface.co/Delta-Vector/Pascal-12B)
- [`soob3123/amoral-gemma3-12B-v2`](https://huggingface.co/soob3123/amoral-gemma3-12B-v2)

The goal of this merge was to create a good, all-purpose, **uncensored** model without excessive positive bias.

This model was converted to MLX format from [`OddTheGreat/Comet_12B_V.4`](https://huggingface.co/OddTheGreat/Comet_12B_V.4) using mlx-vlm version **0.1.23**.

Refer to the [original model card](https://huggingface.co/OddTheGreat/Comet_12B_V.4) for more details on the model.

## Use with mlx

```bash
pip install -U mlx-vlm
```

```bash
python -m mlx_vlm.generate --model TheCluster/Comet-12B-v4-mlx-8bit --max-tokens 512 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
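
For use from Python rather than the CLI, the sketch below follows the common mlx-vlm generation flow (`load`, `apply_chat_template`, `generate`). The image path is a placeholder, and exact argument names may differ slightly between mlx-vlm versions.

```python
# Minimal sketch of Python usage with mlx-vlm; the image path below is a
# placeholder, and signatures may vary slightly across mlx-vlm versions.
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "TheCluster/Comet-12B-v4-mlx-8bit"

# Load the quantized model, its processor, and its config
model, processor = load(model_path)
config = load_config(model_path)

# One or more image paths (or URLs) plus a text prompt
image = ["path/to/image.jpg"]
prompt = "Describe this image."

# Wrap the prompt in the model's chat template
formatted_prompt = apply_chat_template(processor, config, prompt, num_images=len(image))

# Run generation and print the result
output = generate(model, processor, formatted_prompt, image, verbose=False)
print(output)
```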