---
license: other
license_name: cc-by-nc-4.0
license_link: LICENSE
language:
- en
- de
base_model:
- Gryphe/MythoMax-L2-13b
pipeline_tag: text2text-generation
---

# 🧠 MythoMax-L2-13B - GGUF FP16 (Unquantized)

This is a **GGUF-converted, float16 version** of [Gryphe's MythoMax-L2-13B](https://huggingface.co/Gryphe/MythoMax-L2-13b), intended for **local inference at full quality** on high-VRAM GPUs.

🎙️ **Converted & shared by:** [Sandra Weidmann](https://huggingface.co/py-sandy)
🛠️ **Tested with:** RTX 3090, `text-generation-webui` + `llama.cpp`
🔗 **Original model:** [`Gryphe/MythoMax-L2-13B`](https://huggingface.co/Gryphe/MythoMax-L2-13b)

---

## ✨ Why this model?

This model was converted to **preserve full precision (float16)** for use in:

- 🧠 fine-tuned instruction tasks
- 🎭 roleplay and creative writing
- 💬 emotionally nuanced dialogue
- 🧪 experimentation with full-context outputs (4096+ tokens)

---

## 📦 Model Details

| Property | Value |
|--------------------|-------------------------------|
| Format | GGUF |
| Precision | float16 (f16) |
| Context Size | 4096 |
| Tensor Count | 363 |
| File Size | ~26.0 GB |
| Original Format | Transformers (`.bin`) |
| Converted Using | `convert_hf_to_gguf.py` |

---

## 🧰 Usage (with `llama.cpp`)

```bash
./main -m mythomax-l2-13b-f16.gguf -c 4096 -n 512 --color
```

Or via `text-generation-webui`:

- Backend: `llama.cpp`
- Load model: `mythomax-l2-13b-f16.gguf`
- Set context: 4096+

---

## 💙 Notes

This GGUF build is shared for non-commercial, experimental, and educational use. Full credit goes to the original model author, Gryphe.

If this version helped you, consider giving it a ⭐ and sharing feedback.

Sandra ✨
py-sandy · https://samedia.app/dev
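
---

## 🔍 Sanity-checking the download

Large f16 files occasionally arrive truncated or corrupted. Every GGUF file begins with the 4-byte ASCII magic `GGUF`, followed by a little-endian `uint32` format version, so a quick header read catches most broken downloads. A minimal sketch (the helper name and filename are illustrative):

```python
import struct

def read_gguf_version(path):
    """Read the first 8 bytes of a GGUF file: 4-byte magic + uint32 version.

    Raises ValueError if the magic bytes are wrong, which usually means the
    file is not a GGUF container or the download was corrupted.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file (magic={magic!r})")
        # GGUF stores the format version as a little-endian unsigned 32-bit int
        (version,) = struct.unpack("<I", f.read(4))
    return version

# Example (path is an assumption):
# read_gguf_version("mythomax-l2-13b-f16.gguf")
```

A full integrity check would also compare the file size or a published checksum, but the magic/version read is enough to reject obviously bad files before loading.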